Wednesday, June 01, 2011

IMAP server folder backup script

Here's the backup script we use to back up our users' IMAP directories from the mail server to another server.  It makes use of "rdiff-backup" and lets us keep 27 weeks of snapshots without using up a ton of disk space.

We process the folders on an account-by-account basis, which reduces the number of files that rdiff-backup has to keep track of in a single run and lowers its memory requirements.  The script uses a brute-force method of looking for a "subscriptions" file, which appears in the root of the Maildir for each user.  If your installation doesn't have that file, you may need to search for a different marker such as "dovecot.index" or ".Junk", as in the sketch below.
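
For example, on an installation that doesn't write a "subscriptions" file, a variant like this could locate the root of each account's Maildir instead.  The marker file and the -maxdepth value are assumptions that depend on your directory layout:

# Hypothetical variant: key off "dovecot.index" at the root of each
# Maildir; adjust -maxdepth to match a /var/vmail/<domain>/<user> layout.
find /var/vmail -maxdepth 3 -name dovecot.index -printf '%h\n'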

The third line of the "DIRS=" statement is a one-line Perl snippet that randomizes the list of directories.  For a backup that runs each day, processing the directories in random order gives a better chance that every directory eventually gets backed up - even if some directory occasionally causes the script to break.  If the script always went in alphabetical order and always broke at the mid-point, the directories toward the end of the alphabet would never be backed up.  If you don't want that optimization, simply replace that stage of the pipeline with $SORT and the directories will be processed in alphabetical order, as shown below.
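
For example, the alphabetical variant of the pipeline from the script below would look like this:

# process directories in alphabetical order instead of shuffling
DIRS=`$FIND "$BASE" -maxdepth 3 -name subscriptions | \
    $GREP "^$BASE" | $SED "s:^${BASE}::" | $SED 's:subscriptions$::' | \
    $SORT`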

BKPHOST & BKPBASE control where the files get backed up to.  The BASE argument tells the script where to find the IMAP folders on the current server.

Note 1: In order to back up to a remote system like this, it works best if you set up SSH keys and a non-admin / non-root account with limited access on the destination server.
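
A minimal sketch of that setup; the "vmailbkp" account name is an assumption, but forcing the key to run only "rdiff-backup --server" is a common way to limit what it can do on the backup host:

# On the mail server: generate a dedicated, passphrase-less key.
ssh-keygen -t rsa -f ~/.ssh/id_rsa_backup -N ""

# On the backup host: create a non-root "vmailbkp" account, then restrict
# the key in its ~/.ssh/authorized_keys so it can only start rdiff-backup:
# command="rdiff-backup --server",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA... backup@mailserver

# Back on the mail server, point SSH at that key in ~/.ssh/config:
# Host backup-host.example.com
#     User vmailbkp
#     IdentityFile ~/.ssh/id_rsa_backup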

Note 2: "rdiff-backup" works best over LAN networks.  If the transfer is aborted due to a link going down, then it will back out completely from the transaction and the folder will not actually be backed up.  If the link is too unreliable, this means that rdiff-backup might never accomplish anything at all.  One workaround is to rdiff-backup to a local directory and then rsync (with --partial) to a remote host over the unreliable link.

#!/bin/bash

FIND=/usr/bin/find
GREP=/bin/grep
RM=/bin/rm
SED=/bin/sed
SORT=/bin/sort

# source directory 
BASE="/var/vmail/"

# destination
BKPHOST=backup-host.example.com
BKPBASE="/backup/mail/vmail/"

echo ""
echo "Backup $BASE to $BKPHOST"
echo "Started at: " `date`
echo ""

# since RHEL5/CentOS5 don't have "sort -R" option to randomize,
# use the following example
# echo -e "2\n1\n3\n5\n4" | perl -MList::Util -e 'print List::Util::shuffle <>'

# build the list of per-account Maildir directories (relative to $BASE),
# shuffled into random order
DIRS=`$FIND "$BASE" -maxdepth 3 -name subscriptions | \
    $GREP "^$BASE" | $SED "s:^${BASE}::" | $SED 's:subscriptions$::' | \
    perl -MList::Util -e 'print List::Util::shuffle <>'`

# keep track of directories processed so far (debug purposes)
DCNT=0

for DIR in ${DIRS}
do
    echo ""
    echo "`date` - Backup: $DIR"

    # mirror this account's Maildir to the backup host
    rdiff-backup -v3 --print-statistics --create-full-path \
        ${BASE}${DIR} ${BKPHOST}::${BKPBASE}${DIR}
    # prune snapshots older than 27 weeks on the destination
    rdiff-backup -v3 --force --remove-older-than 27W \
        ${BKPHOST}::${BKPBASE}${DIR}

    # the following is debug code, to stop the script after N directories
    DCNT=$(($DCNT+1))
    #echo "DCNT: $DCNT"
    #if [[ $DCNT -ge 10 ]]; then exit 0; fi
done

echo ""
echo "Backup finished at:" `date`
echo ""


Performance for rdiff-backup is limited by the speed of the disks, then the CPU, and possibly the SSH overhead.  On older 2 GHz Opteron servers, I see throughput of 4-8 Mbps over a gigabit LAN.  Not that great for an initial backup, but Maildir folders have thousands of individual files.  Since most of the files are mail messages (text), they compress well with SSH compression.  Later runs go much faster, since only the changes get transferred from the host to the backup server.
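
Depending on the rdiff-backup version, SSH compression may already be enabled by its default remote schema; if not, it can be turned on per host in ~/.ssh/config, or forced on the command line.  Both forms below are sketches, not taken from the script above:

# In ~/.ssh/config on the mail server:
# Host backup-host.example.com
#     Compression yes

# Or spell out the transport explicitly:
rdiff-backup -v3 --remote-schema 'ssh -C %s rdiff-backup --server' \
    ${BASE}${DIR} ${BKPHOST}::${BKPBASE}${DIR}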

A directory of about 810 MB and 105,000 files took just under 11 minutes for an initial backup - an effective rate of roughly 1.2 megabytes per second, or about 4.4 gigabytes per hour.  The SSH compression sped things up quite a bit: the network interface never went much above 5-6 megabits per second, so the wire carried noticeably fewer bytes than the data actually being backed up.
