Time Machine vs. ZFS + rsync
Update: I actually got the fslogger thing at the end of this entry working so I can do incremental backups. Not really a product yet but it isn’t hard to do. Here is the super rough version of it.
I can’t stand inefficiency. Time Machine is fundamentally a very inefficient mechanism for backing up large files that change. So bad, in fact, that products like Parallels and VMware disable backups of your disk images. Here is the basic algorithm:
1) Get the list of files that have changed since the last backup
2) Create new directory in backup store
3) Copy any file that has changed since the last backup
4) For anything that has not changed, create hard links in the new backup pointing at the files (or even whole directories) in the last backup
Step 1 is pretty efficient for Time Machine as it keeps hooks into the filesystem to track those changes as they occur. Step 2 is obviously easy. Step 3 is a doozy: if you change 1 byte in a VMware image, it will copy the several gigs over to the backup store. Not a great result from such a small change, and it quickly consumes your disk, flushing valuable older backups out of the system. Step 4 is also very efficient because hard links are trivial to create and use virtually no space, though Apple did have to make special changes to HFS+ to allow hard-linking directories so Time Machine could be more efficient.
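Step 4 can be reproduced from the shell with nothing more than `ln` (tools like `rsync --link-dest` or GNU `cp -al` automate the same trick across whole trees). A minimal sketch with made-up directory names; `stat -c` is the GNU form, macOS uses `stat -f %l`:

```shell
# backup.0 is yesterday's backup; backup.1 is the new one being built.
mkdir -p backup.0 backup.1
echo "unchanged since yesterday" > backup.0/notes.txt

# Step 4: instead of copying an unchanged file, hard-link it.
ln backup.0/notes.txt backup.1/notes.txt

# Both entries now share one inode, so the second "copy" costs no data blocks.
stat -c %h backup.1/notes.txt   # link count: 2
```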
The obvious big problem here is that if a file changes at all, you need to copy the whole thing to your backup device. Not that viable over the internet, or even WiFi, for really big files that are updated often, like VM images. You might have wondered why Apple is considering integrating ZFS directly into Mac OS X; now you know why. ZFS lets you do something very special: create a snapshot of a whole filesystem, essentially a copy of that filesystem at a particular point in time. It does this at the block level, without copying whole files when they change. This amazing capability is the key to a more efficient way to back up your system with multi-level snapshots.
Enter rsync. Rsync has been around for a long time. It is used by system administrators everywhere to efficiently update files in one location with files from another location, even over the internet. It does this by comparing files at the block level and sending only the diffs needed to update the other end. With the right command line options you can essentially make one filesystem a carbon copy of another. Combining the two, you can build a backup solution that is much better than most out there:
1) Rsync your current filesystem to a ZFS filesystem — remote or attached storage
2) Take a snapshot of the resulting filesystem to forever capture its state
Those are the two steps. Nothing more. Here is the script that I use to backup my Macbook Air to my server at home:
time rsync -av --delete sam 192.168.1.90:/Volumes/zdisk/macbookair
ssh 192.168.1.90 sudo zfs snapshot zdisk/macbookair@`date "+%s"`
This results in a set of filesystems that looks like this:
zdisk/macbookair 14.9G 898G 14.6G /Volumes/zdisk/macbookair
zdisk/macbookair@1225350709 125M - 14.6G -
zdisk/macbookair@1225351248 117M - 14.6G -
zdisk/macbookair@1225418584 21.7M - 14.6G -
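The suffix on each snapshot name above is the Unix epoch time produced by `date "+%s"` in the backup script, so you can always turn one back into a readable date. GNU `date` shown here; BSD/macOS `date` uses `-r SECONDS` instead:

```shell
# Snapshot names are Unix epoch seconds; convert one back to a date.
date -u -d @1225350709 +%F   # GNU date; prints 2008-10-30
```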
This obviously isn’t as awesome as using Time Machine to recover my files: I don’t have a great UI, I have to run a script, and I generally have to know more about the system than a Time Machine user. However… I can update a VM and back it up without sending gigs of data over the internet, rather than having no backup at all.
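Even without a UI, recovery is just a file copy: ZFS exposes every snapshot read-only under the hidden `.zfs/snapshot` directory at the filesystem's mountpoint. A sketch against the pool above (the file path is made up for illustration):

```shell
# Browse a point-in-time copy of the backup; snapshots are read-only.
ls /Volumes/zdisk/macbookair/.zfs/snapshot/1225350709/
# Restore a single (hypothetical) file by copying it back out.
cp /Volumes/zdisk/macbookair/.zfs/snapshot/1225350709/sam/notes.txt ~/
```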
The only downside is that even an empty backup still takes about 8 minutes as rsync walks all my files. The next step would be to integrate fslogger into the solution and only look at the files that are known to have changed.