@shur3d, if you are able to use #ZFS on the machines to be backed up too, you can use its snapshot replication for this. While there are many tools for this, I’ve been using #sanoid + #syncoid for this and it works very well.
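For anyone curious what that looks like in practice, here's a minimal sketch of a sanoid policy; the dataset name, template name, and retention numbers are placeholders for illustration, not anyone's actual config:

```ini
# /etc/sanoid/sanoid.conf -- minimal sketch; "tank/data" is a placeholder
[tank/data]
        use_template = production
        recursive = yes

[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

Replication to a backup box is then a single syncoid invocation along the lines of `syncoid tank/data backupuser@backuphost:backup/data` (hostnames invented here), typically run from cron or a systemd timer.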

Wow, #sanoid went a bit crazy: over the past few months it's been creating, but not pruning, thousands and thousands of snapshots on my main server. That made getting a list of zfs snapshots impossibly slow, which led to many sanoid processes running at the same time, all trying to do the same thing, but only after each had created ever more snapshots.

I've intervened, and now it's cleaning up snapshots. For days already.
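The manual cleanup step could be sketched like this. The snapshot list below is fake sample data standing in for `zfs list -H -t snapshot -o name` (dataset names are made up, and the pattern assumes sanoid's `autosnap_` naming); on a real system each surviving name would be fed to `zfs destroy` via xargs, which is exactly why you want to triple-check the filter first:

```shell
# Fake sample data in place of: zfs list -H -t snapshot -o name
snapshots='tank/data@autosnap_2025-01-01_00:00:02_hourly
tank/data@autosnap_2025-06-15_00:00:02_daily
tank/data@autosnap_2025-11-01_00:00:02_hourly'

# Select autosnaps taken before June 2025. A lexicographic compare works
# because sanoid timestamps are zero-padded ISO dates.
stale=$(printf '%s\n' "$snapshots" | awk -F'@autosnap_' '$2 != "" && $2 < "2025-06"')
printf '%s\n' "$stale"
# On a real pool, the destructive step would be something like:
#   printf '%s\n' "$stale" | xargs -n1 zfs destroy
```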

A ZFS question: sanoid seems to be the preferred solution for automatic snapshots, or am I missing something?
#followerpower #zfs #sanoid

Heya #zfs peeps--Klara just published an article of mine which goes over the basics of observability and monitoring for #openzfs systems, including breadcrumbs leading you to a free-to-use web service and how to use it with #sanoid, if you're not feeling up to the challenge of running your own Nagios instance!

https://klarasystems.com/articles/openzfs-monitoring-and-observability-what-to-track-and-why-it-matters/

OpenZFS Monitoring and Observability

Learn how to monitor OpenZFS pool health, scrubs, snapshots, and free space using zpool status, Sanoid, and automated alerts.

Klara Systems
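If you do end up running Nagios (or something Nagios-compatible), sanoid itself can serve as the check plugin: as I recall, it ships `--monitor-health` and `--monitor-snapshots` flags that emit Nagios-style OK/WARNING/CRITICAL output. A command definition might look roughly like this (the binary path and command names are assumptions for illustration):

```
# Nagios command definitions (sketch)
define command {
    command_name    check_zfs_health
    command_line    /usr/sbin/sanoid --monitor-health
}

define command {
    command_name    check_zfs_snapshots
    command_line    /usr/sbin/sanoid --monitor-snapshots
}
```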

@ruari apparently so... I've heard Allan Jude talk about it at least!

And if you think zfs send/recv is good - take a look at Jim Salter's #Sanoid / #Syncoid, which orchestrate the process - absolute gold!

Made a minor update to my GitHub script to install #sanoid on #truenas scale.

Comments and bug reports welcome

https://github.com/furicle/Syncoid-Scale

Today I had cause to revisit my home lab setup, and that in turn caused me to take a new look at my backup configuration and validate that everything is still in order.

If anyone is curious about setting up 3-2-1 style backups with #ZFS #snapshots using #Sanoid and #Syncoid, perhaps my writeup may be of some service.

https://oxcrag.net/blog/2025/11/16/zfs-backup-strategy-with-sanoid-and-syncoid.html

ZFS Backup Strategy with Sanoid and Syncoid

In my previous post, I discussed how I’ve migrated VMs to new storage. This gave me cause to also take a look at my backup configuration, to ensure I can still come back from catastrophic events.

oxcrag.net
I just had to restore a folder from backup, and it turned out that my regular #ZFS snapshot service (Sanoid) was not running; I must've stopped the timer some time ago and forgotten about it. But the nightly backups were still running, so the actual lost work is not that much. Test your #backup!
PS: Thanks to @jimsalter and all the other contributors for the amazing tools #Sanoid and #Syncoid (which saved me this time)!

It's too bad that an SSD failed in my server. But, thanks to #ZFS and #sanoid, I am not concerned. The pool is redundant, and it syncs to two separate offsite systems every 30 minutes.

For good measure, I'm now also syncing to a different array in the same server.

The 2nd disk in the mirror shows no signs of trouble, but if it also decides it's had enough I'm confident downtime will be minimal.
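As a sketch, a 30-minute replication schedule is just one cron entry per target; the hostnames and dataset names here are invented, and `--no-sync-snap` tells syncoid to replicate the existing sanoid snapshots rather than create its own sync snapshot each run:

```
# /etc/cron.d/syncoid -- illustrative only
*/30 * * * * root syncoid --no-sync-snap tank/data backup@offsite1:tank/backup
*/30 * * * * root syncoid --no-sync-snap tank/data backup@offsite2:tank/backup
```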

@kolev I store data on servers backed by #Ubuntu #KVM hypervisors with #ZFS.

#Sanoid gives me hourly snapshots for 36 hours, daily snapshots for a month, and weekly snapshots for three months.
#Syncoid ensures the snapshots are replicated to a separate pool in my main machine, plus (over SSH) to a separate backup server.
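Expressed as a sanoid.conf template, that retention policy would look roughly like this; the template name is arbitrary and this is a sketch of the policy described above, not a literal config:

```ini
[template_workstation]
        hourly = 36
        daily = 30
        weekly = 12
        autosnap = yes
        autoprune = yes
```

The two Syncoid destinations are then just two scheduled runs pointing the same source dataset at the local pool and at the SSH target.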

Of course, if you run your Borg server as a VM on such a setup, you can easily keep secure backups of your actual workstations even if they run lesser file systems. I do something like this for the Macs in my family: the Time Machine server is one of the VMs that get the Sanoid+Syncoid treatment.