Herr Bischoff


Backups on macOS: Time Machine vs. Borg

For the longest time, I used Apple’s own Time Machine backup solution. After all, it’s built right into macOS and promises seamless integration and “set it and forget it” operation. In practice, however, it didn’t work like that at all for me.

Probably because I was not backing up to a locally attached hard drive but to an SMB share on my home server. This is a supported use case, provided Samba is set up correctly. In my case it is; I have verified this with the Samba team. The same issues occurred when backing up via AFP, back when that was a thing.

Time Machine breaks seemingly at random, refusing to back up to the specified share. Backups run great for a few weeks, months, sometimes even for up to a year. Then suddenly, it breaks. Logs reveal very little, and repairing the sparse bundle’s¹ file system doesn’t solve anything (because it’s not damaged, according to Disk Utility). The only reliable way to get the backup working again is to start an entirely new one. If you don’t have infinite space available, this usually means deleting the current backup in its entirety. Depending on how long the backup ran successfully, you lose months’ worth of backup data. Just hope you don’t need that one folder you don’t yet know you accidentally deleted a couple of months ago and can’t remember any more.

Obviously, this is an untenable situation, especially for a backup solution.

On many of my clients’ servers I use Borg for daily backups. It has been very reliable for years now. Every restore so far went without any issues, and the backup process itself never threw any errors. One time I even had to restore an entire file server (~3 TB worth of data) because three of four RAID-Z2 disks failed in sequence. It went great, considering the circumstances, so why not try using it for my local backups as well?

Fast-forward several years and I’m still using the Borg setup. It has not broken down a single time, I have successfully restored folders and files I managed to delete or mangle, and I’ve enjoyed even more space savings: Borg supports zstd compression on top of deduplication. More space means more backups, which in turn means a longer backup span. In fact, I have yet to hit the space limit that would force me to adjust the time after which backups are discarded. This was very different with Time Machine backups.
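To give a rough idea of what this looks like in practice, here is a minimal sketch of a daily backup run with zstd compression and a retention policy. The repository path and archive naming are placeholders, not my actual setup:

```shell
#!/bin/sh
# Hypothetical repository location -- adjust to your own setup.
REPO=/Volumes/backup/borg-repo

# Create a new archive of the home folder, with zstd compression
# layered on top of Borg's built-in deduplication.
borg create \
    --compression zstd,3 \
    --stats \
    "$REPO::{hostname}-{now:%Y-%m-%d}" \
    ~/

# Thin out old archives; this example retention policy keeps
# 7 daily, 4 weekly and 12 monthly backups.
borg prune \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 12 \
    "$REPO"
```

The `{hostname}` and `{now}` placeholders are expanded by Borg itself, so each run produces a uniquely named archive.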

Backups tend to progress more slowly, since Borg has to walk the entire file system tree it’s tasked with backing up. I have decided to save just my user folder, as that’s where my important data resides. Software can be downloaded and reinstalled, critical pieces I have archived offline, and global settings should be documented anyway. The backup can be fully encrypted, so you can use untrusted storage without worry.
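Setting up an encrypted repository is a one-time step. As a sketch (again with a placeholder path), initializing the repository with a passphrase-protected key and excluding a cache folder from the user-folder backup might look like this:

```shell
# One-time setup: create an encrypted repository. The key is stored
# inside the repository, protected by a passphrase you choose.
borg init --encryption=repokey-blake2 /Volumes/backup/borg-repo

# Back up only the user folder, skipping caches that can be
# regenerated. The exclude pattern is just an example.
borg create \
    --compression zstd,3 \
    --exclude "$HOME/Library/Caches" \
    /Volumes/backup/borg-repo::'{hostname}-{now}' \
    ~/
```

With `repokey` encryption the key lives inside the repository itself, so exporting a copy with `borg key export` and storing it somewhere safe is strongly advisable.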

Sure, restoring from a backup can be a hassle if the machine fails catastrophically, and if you opt for encryption you have to store the key somewhere safe. What you gain is a reliable, private, safe, long-term backup, no matter what happens.

For me, the trade-off is worth it many times over. There are many interesting use cases. You could, for example, set up a separate, smaller backup set that runs every hour, containing only the most critical data. Since the backup destination can be reached over SSH, you can even run those backups from wherever you are, as long as you have a working internet connection.
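An hourly backup of only the most critical folders over SSH could be sketched like this. Host, repository path, and folder selection are illustrative assumptions:

```shell
# Hypothetical hourly backup of critical data to a remote
# repository over SSH. Server, path and folders are placeholders.
borg create \
    --compression zstd,3 \
    ssh://user@homeserver.example.com/./borg-critical::'critical-{now}' \
    ~/Documents \
    ~/Projects

# Keep the hourly set small: one day of hourlies, one week of dailies.
borg prune \
    --keep-hourly 24 \
    --keep-daily 7 \
    ssh://user@homeserver.example.com/./borg-critical
```

On macOS, a launchd agent with a `StartInterval` of 3600 seconds is a natural way to schedule such a script; on the server side, nothing beyond SSH access and a Borg installation is required.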

If you’re reading this and are interested in trying this out for yourself, write to me and I will be happy to share configuration tips. With enough interest, I may even write this up as a How-to in a separate post.


  1. Time Machine creates disk images on non-native file systems to be able to hard-link files and save space. Sparse bundles are images that don’t allocate their entire capacity up front; instead, they grow with the data they contain, up to the given maximum size. ↩︎