Hey fellow nerds.

How do you handle large-ish volume backups between Linux hosts? About 1TB of data. Source has 100Mbps upload. Receiving end is Gbit.

Incrementals will be essential, since full backup is nearly a day's worth of upload.
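(Back-of-envelope check on that "nearly a day" figure, treating 1 TB as 8,000,000 Mbit and ignoring protocol/encryption overhead:)

```shell
# 1 TB over a 100 Mbps uplink, best case:
# 1 TB = 1000 GB = 8,000,000 Mbit
seconds=$(( 8000000 / 100 ))       # 80000 seconds
echo "$(( seconds / 3600 )) hours" # about 22 hours
```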

FOSS suggestions only at this stage. Backups are currently done out of my control with Veeam, and I'm a bit nervous about the whole situation being out of my hands.

#linux #backups #nerdery

@lakeswimmer I use #rsnapshot. Been using it for years and it works great. I'm also one of the project maintainers so am biased.
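(For anyone unfamiliar with rsnapshot: it pulls over rsync/ssh from the receiving end, so after the first run only changed files cross the wire, and unchanged files are hard-linked between snapshots. A minimal rsnapshot.conf sketch — paths, retention counts, and hostnames are placeholders, and note the config file requires tabs, not spaces, between fields:)

```
# /etc/rsnapshot.conf (fields MUST be tab-separated)
snapshot_root	/srv/backups/

# keep 7 daily and 4 weekly snapshots
retain	daily	7
retain	weekly	4

# pull from the source host over ssh
backup	root@source.example.com:/home/	source/
```

Then run `rsnapshot daily` from cron on the backup host.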

In a brand new installation where everything uses #ZFS (and really, everything *should* use ZFS - if it's not properly supported in your OS, the OS is faulty and you should use a different one) these days I'd use zfs send/recv, wrapped in #sanoid / #syncoid - https://github.com/jimsalterjrs/sanoid

@lakeswimmer unfortunately my future is a mixed ZFS/other stuff one, at least for a few years, and I'm not aware of any tool that combines the niftiness of both sanoid/syncoid and rsnapshot, so I'm writing my own.
@DrHyde Snapshots are an interesting idea. But moving those snapshots offsite is pretty significant when it comes to bandwidth, yes?
@lakeswimmer snapshots are effectively only storing a diff. `zfs send` sends a complete copy, but once you've got the first one over you can send subsequent updates with `zfs send -i` for incrementals: https://openzfs.github.io/openzfs-docs/man/master/8/zfs-send.8.html#i
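(Roughly like this - pool and dataset names here are placeholders, and in practice syncoid automates the snapshot bookkeeping for you:)

```shell
# One-time full seed (the slow ~22-hour transfer):
zfs snapshot tank/data@base
zfs send tank/data@base | ssh backuphost zfs recv backup/data

# Later runs only ship the blocks changed since the last snapshot:
zfs snapshot tank/data@today
zfs send -i tank/data@base tank/data@today | ssh backuphost zfs recv backup/data
```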