Hey fellow nerds.

How do you handle large-ish volume backups between Linux hosts? About 1TB of data. Source has 100Mbps upload. Receiving end is Gbit.

Incrementals will be essential, since full backup is nearly a day's worth of upload.
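(Back-of-envelope check on "nearly a day", assuming decimal units and a fully saturated link:)

```shell
# 1 TB pushed over a 100 Mbps uplink, ignoring protocol overhead
bits=$((1000 * 1000 * 1000 * 1000 * 8))    # 1 TB in bits
seconds=$((bits / (100 * 1000 * 1000)))    # at 100 Mbps
echo "$((seconds / 3600)) hours"           # roughly 22 hours
```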

FOSS suggestions only at this stage. Backups are currently being done out of my control with Veeam, and I'm a bit nervous about the whole situation being out of my hands.

#linux #backups #nerdery

@lakeswimmer rsync with compression. How much speedup you will get depends on how much data is new. The expense of restarting on error will be painful, so use partials if link reliability is an issue.
@kauer Thank you - I'll make rsync work. It's what I've been using for the last few weeks since I started getting cold feet about the Veeam backup. But was wondering if there might be a better thing. Probably not!

@lakeswimmer What would be "better", in your view? It depends a lot on what you want to back up, how often, and where from/to. If you are backing up disk images, you really should be using something snapshotty (as the ZFS enthusiast suggested); if you are backing up files, especially Linux to Linux, it's hard to go past rsync: partials, hardlinking, lots of controls, exclusion lists, inclusion lists, compression...

If you need some kind of scaffolding to look at, arrange, select, restore from, document or index your backups, then rsync may be too low-level.
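(The hardlinking trick mentioned above is rsync's `--link-dest`: each run produces a full-looking snapshot, but unchanged files are hardlinked to the previous snapshot and cost no extra space. A local sketch, with made-up snapshot names:)

```shell
# throwaway local directories for the demo; real runs would point at
# your data directory and a dated snapshot tree on the backup host
src=$(mktemp -d) && dst=$(mktemp -d)
echo "unchanged" > "$src/keep.txt"

rsync -a "$src/" "$dst/2024-01-01/"                                # first full copy
rsync -a --link-dest="$dst/2024-01-01" "$src/" "$dst/2024-01-02/"  # incremental snapshot
```

Each dated directory is independently browsable and restorable with plain `cp`, which goes some way toward the "scaffolding" problem too.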