AIUI, ZFS really requires multiple drives to be effective.
You might gain a little value from extra checksums on file system blocks on a single drive, but if those checksums ever start failing on a hard drive, there's a high likelihood that most of the drive is about to fail completely.
I had researched ZFS a fair bit as I planned to build my own FreeBSD NAS around 3-4 drives in ZFS, but eventually decided to buy an off-the-shelf ZFS NAS from the TrueNAS people.
I think those are both true in general though I don't know ext4 well enough to compare in depth.
1) One of the fundamental ideas of ZFS is copy-on-write (COW). This makes it function a bit like a VCS, in that snapshots are nearly free: taking one sets a checkpoint, and from then until you destroy the snapshot, the file system only needs to store the blocks that have changed since.
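And the snapshot commands really are about that simple. A quick sketch (the pool/dataset name tank/home is a placeholder):

```shell
# Take a snapshot - instant, and takes almost no space at creation
zfs snapshot tank/home@before-upgrade

# Snapshots only grow as the live data diverges from them
zfs list -t snapshot

# Roll the dataset back to its most recent snapshot if needed
zfs rollback tank/home@before-upgrade
```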
2) ZFS supports several compression algorithms, all of which (including the default) work very well.
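It's a per-dataset property, e.g. (dataset name is a placeholder; lz4 is the usual low-overhead choice, and recent OpenZFS also offers zstd):

```shell
# Enable compression on a dataset; applies to newly written data
zfs set compression=lz4 tank/data

# See how well it's actually working
zfs get compressratio tank/data
```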
3) ZFS also has built-in "zfs send" and "zfs receive" commands for copying an entire ZFS filesystem (or snapshot) to new media with the same or a different drive layout, on the same system or over a network.
I've got limited experience with those, but it seems to me like they work well.
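The basic shape of it, at least as far as I've used it (pool/dataset names are placeholders):

```shell
# Snapshot first, then send that snapshot to a second pool on the same machine
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs receive backup/data

# Or pipe it over the network to another ZFS host
zfs send tank/data@migrate | ssh nas zfs receive tank/data
```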
Oh, forgot to say about the compression:
2.A.) I always think of compression and decompression as slowing things down. The reverse seems to be true: ZFS benchmarks I've looked at say that having compression integrated into the FS actually *speeds up* the file system, because it saves more than enough disk reads/writes to make up for the CPU overhead.
It can also do automatic deduplication if you like - more useful fall-out of the per-block checksums - but that's a bit too freaky for me.
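If you do want to try it, it's one property (dataset name is a placeholder):

```shell
# Per-dataset; the dedup table wants a LOT of RAM,
# which is part of why people find it freaky
zfs set dedup=on tank/data
```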
The other thing about ZFS that's a bit hard to explain, and frankly I don't know well enough to know if I'm explaining it right, is that it seems to integrate much more detailed knowledge of physical drives than most file systems.
It talks to SCSI or SATA at a very low level, uses SMART data from HDDs, and does slow background "scrubbing" of the drives over time to force the drive to notice and reallocate sectors that are starting to fail, etc.
I don't know all the details, but it seems like good stuff.
@CliftonR "It talks to SCSI or SATA at a very low level"
Imagine I plugged a SATA drive into a USB3 enclosure. Should I assume this will not happen the way ZFS hopes?
Ya, I was wondering that myself as I wrote it. It's another damn good question.
The answer is I really don't know how much it may affect that, or to what extent it can see "through" the USB3/SATA converter. If Google still worked properly it would be easier to find out.
I earlier mentioned Michael W. Lucas @mwl as a ZFS expert (which he is) and he seems like a nice guy, and you know, the good kind of tech weirdo.
So I am, with minor hesitation, tagging him in now to correct any misinformation I may be spreading about ZFS.
He might also find your base question interesting, what kind of file system is best to put on a single standalone drive being used as a system or data backup.
I've never seen that discussed much, though it's a great question to ask.
Single disk system? Set copies=2 for error correction.
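That is, roughly (pool name is a placeholder):

```shell
# Store every block twice; halves usable space,
# and only applies to newly written data
zfs set copies=2 tank
```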
ZFS snapshots are the most efficient of any filesystem thanks to copy-on-write.
ZFS is fine for backing up, but error correction applies to the data it gets. Send garbage, you'll have high-integrity garbage.
Compression trades CPU cycles for disk I/O. Most hosts today have more CPU than IOPS, so it's a fair trade.
@CliftonR @mcc if I may - I'm not a ZFS dev but I have done the odd bit of debugging & patch contributing over the last decade and a bit -
ZFS only talks to devices at the usual block level, i.e. read block/write block/discard block, so it will work fine with any fairly usual block device (SD card, HDD in USB enclosure, etc).
It verifies checksums every time a block is read, but a scrub (which reads every block) is only triggered if you explicitly request one (e.g. from a cron job).
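e.g. a monthly scrub from cron might look like this (pool name is a placeholder):

```shell
# root's crontab: scrub the pool at 03:00 on the 1st of each month
0 3 1 * * /sbin/zpool scrub tank
```

and "zpool status tank" will show scrub progress and any checksum errors it found.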