ionice 

After more testing, ionice is out. It works at the block-device level, and the VM doesn't HAVE block devices: it uses vzfs, a shared copy-on-write filesystem, so all ionice operations fail. The ones that looked like they worked were just having their errors fed into /dev/null.
ionice 

The rolling on-host backup was giving errors about "ioprio_set: Operation not permitted". Apparently this is because setting Idle priority is a root-only action. (The backup should be running as root, but that's a different problem.) I did some research and found that Idle priority is actually a bit of a bad thing, because it can cause deadlocks: an Idle process can hold a resource and never get the chance to release it, because a non-Idle process contending for the disk keeps pushing the Idle one back.
I've changed all the processes to "best effort" with lowest priority in that class. For one, this doesn't require root, and for another, it avoids the potential deadlock issue.
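The change amounts to one ionice invocation per job. A minimal sketch, where `sleep 30` is just a stand-in for the real backup command:

```shell
# Run the job in the best-effort class (-c 2) at the lowest
# priority within that class (-n 7). Unlike the idle class
# (-c 3), this needs no root and sidesteps the deadlock issue.
# 'sleep 30' stands in for the actual backup command.
ionice -c 2 -n 7 sleep 30 &
pid=$!

# Ask the kernel what scheduling class the job actually got.
prio=$(ionice -p "$pid")
echo "$prio"    # e.g. "best-effort: prio 7"
kill "$pid"
```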
ionice, DS VPS 

After logging in this morning to a warning that the DS VPS was highly loaded again, due to running the web stats generation, I looked into back-porting ionice to Etch and instead found which package it was already in. In Etch it is in schedutils, while in Lenny it is in util-linux.

I've now set the web stats generation, the on-host rolling backup, and the off-host backup to run at "Idle" I/O priority. This should keep load down, or at the least, keep the user-facing services responsive.
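Moving a job into the idle class is a single flag. A sketch, with `sleep 30` standing in for the stats/backup command (note that on older kernels, like Etch's, -c 3 required root; recent kernels relaxed this):

```shell
# Run a job in the idle I/O class (-c 3): it only gets disk
# time when no other process wants it. 'sleep 30' stands in
# for the web stats / backup command.
ionice -c 3 sleep 30 &
pid=$!

# Confirm the class the kernel recorded for the job.
idle_prio=$(ionice -p "$pid")
echo "$idle_prio"    # e.g. "idle"
kill "$pid"
```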
Duplicity 

The Duplicity backup run for the DS VPS has completed. 49GB of space is being used on the S3 storage, for 51GB of content, compressed, encrypted and signed. Clearly a lot of our content is already in compressed formats, such as JPEG, AVI and so on.

The interesting part that I couldn't find information about is how much metadata Duplicity needs to store locally about the backup. During the backup run, the working folder climbed above 2GB. On completion, the "sigtar" file was gzipped and ended up at 1.2GB. This works out at approximately 2% of the size of the total compressed data. I consider that a suitable cost for having off-host incremental backups without a local duplicate copy of the backup data.

Also, something that made obvious sense once I realized it: nice duplicity. It just makes everything happier. While duplicity isn't CPU intensive, it seemed to help with the system load. ionice would have been better, but Debian Etch doesn't have this.
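A sketch of the approach, using `sleep 30` as a stand-in for the actual duplicity invocation:

```shell
# Start the backup at the lowest CPU priority (niceness 19);
# 'sleep 30' stands in for the real duplicity command.
nice -n 19 sleep 30 &
pid=$!

# Verify the niceness the kernel recorded for the job.
nice_val=$(ps -o ni= -p "$pid" | tr -d ' ')
echo "niceness: $nice_val"
kill "$pid"
```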
Duplicity 

A note regarding Duplicity and failed full backups: it noticed the failed backup, reported which file and block it had got up to, and picked up from there.