"Due to potential legal incompatibilities between the CDDL and GPL, despite both being OSI-approved free software licenses which comply with DFSG, ZFS development is not supported by the Linux kernel"
@mcc been that way for decades now
@whitequark I have a new hard drive I intend to use primarily for backup and I am currently considering BTRFS or ZFS for the Linux part instead of ext4 (because I hear they can do something like storing extra error-checking data to protect against physical disk corruption). In your view, if I intend to use mainline Debian indefinitely, will BTRFS, ZFS, both, or neither give me the least pain getting things working?
@whitequark A few people are commenting on BTRFS reliability problems, which is weird because I thought the whole point was to be "the more reliable fs". Debian's wiki links to this bewildering compatibility table that looks like a bunch of stuff I don't care about (the only thing I care about is reliability, and some of ZFS's auto-backup stuff sounded compelling), but the weird "mostly OK" line around defragmentation/autodefragmentation worries me a little https://btrfs.readthedocs.io/en/stable/Status.html

@mcc @whitequark I've been running btrfs on servers for years now, no filesystem bugs so far. (One issue arose when power was cut, leading to some data corruption, but that wasn't btrfs's fault)
@nogweii @whitequark i thought the entire point of a journaling fs was that cutting power doesn't lead to data corruption (unless the corruption was at the app level I suppose)
@mcc @nogweii @whitequark it doesn't *if the hardware upholds its end of the bargain*. No fs can protect against hardware that does not fulfill the guarantees it's supposed to provide, and the only corruption I've had in btrfs was indeed due to faulty hardware. Btrfs has self-validation features, so when faulty hardware breaks things, btrfs is noisier about it than many fses, and that leads to a perception that it is worse when it's just better at knowing what's broken.
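The self-validation being described here is essentially per-block checksumming: data is stored alongside a checksum, and a mismatch on read means the hardware silently altered the bits. A rough illustrative sketch in Python (btrfs actually keeps crc32c checksums in its metadata trees; the block-store dictionary and function names here are invented for illustration):

```python
import zlib

def write_block(store, block_id, data):
    # Store the data together with a checksum of its contents,
    # as a self-validating filesystem does on every write.
    store[block_id] = (zlib.crc32(data), data)

def read_block(store, block_id):
    # On read, recompute the checksum; a mismatch means the
    # hardware returned different bits than were written.
    checksum, data = store[block_id]
    if zlib.crc32(data) != checksum:
        raise IOError(f"checksum mismatch in block {block_id}")
    return data

disk = {}
write_block(disk, 0, b"important data")
assert read_block(disk, 0) == b"important data"

# Simulate faulty hardware flipping a bit: the corruption is
# detected loudly instead of being silently returned to the app.
disk[0] = (disk[0][0], b"importent data")
try:
    read_block(disk, 0)
except IOError as e:
    print(e)
```

This is why a checksumming filesystem "complains more" on bad hardware: a filesystem without checksums would have handed back the corrupted bytes without any error at all.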

@mcc @nogweii @whitequark btrfs isn't a journalling FS -- it's copy-on-write, which is subtly different.

The problem with unexpected power-off is when the hardware lies. btrfs requires that when the disk says that data's hit permanent storage, it really has. In some cases of buggy firmware, disks can pass a write barrier while the data's still only in cache. With a power-fail, that can lead to metadata corruption, because the FS has updated the superblock, pointing to an incomplete transaction.
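The ordering requirement being described can be sketched with an atomic-commit pattern in userspace Python: the data must be durable (fsync) *before* the pointer to it is updated. If the disk's firmware acknowledges the fsync while the data is still only in its volatile cache, a power cut breaks exactly this ordering. (The function and file names here are invented for illustration; this is not btrfs code.)

```python
import os
import tempfile

def commit(path, data):
    # Write new data to a fresh temp file, make it durable, and only
    # then atomically rename it into place. The rename plays the role
    # of the superblock update: it must never point at data that has
    # not actually reached permanent storage.
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, data)
        os.fsync(fd)  # write barrier: data must be durable FIRST
    finally:
        os.close(fd)
    os.replace(tmp, path)  # "metadata" now points at complete data
    dirfd = os.open(directory, os.O_RDONLY)
    try:
        os.fsync(dirfd)  # make the rename itself durable too
    finally:
        os.close(dirfd)

workdir = tempfile.mkdtemp()
state = os.path.join(workdir, "state")
commit(state, b"transaction 1")
commit(state, b"transaction 2")
```

A disk that passes the barrier with data still in cache makes the `os.fsync` here a lie, and after power loss the "superblock" (the directory entry) can point at an incomplete transaction.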

@darkling @nogweii @whitequark I see. But it seems like that would be no greater a problem for BTRFS than ext4.

"it's copy-on-write, which is subtly different"

Does it have different performance characteristics? Intuitively it seems like it must, but I can't really justify the idea it does more so than modern journaling/autodefrag

@mcc @nogweii @whitequark I don't know about performance. I can describe the algorithm.

Due to the way it works (the copy-on-write part), a lost write is going to effectively drop an entire page of metadata, rather than simply not updating an existing page. It *never* writes updated data in place, except for the superblocks, which have fixed locations. So the damage in the missed-write case is rather larger than with non-CoW FSes.
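The copy-on-write update described above can be sketched like this: pages are immutable once written, an update writes a whole new page elsewhere, and only the fixed-location superblock pointer is flipped to commit. A missed write therefore loses the entire new page rather than one stale field. (Illustrative sketch only; the dictionary-as-disk and names are invented, not btrfs internals.)

```python
# Simulated disk: page_id -> page contents. Pages are never
# rewritten in place; updates allocate fresh pages.
pages = {}
next_id = 0

def alloc(contents):
    # Write a page to a fresh on-disk location.
    global next_id
    pages[next_id] = contents
    next_id += 1
    return next_id - 1

def cow_update(page_id, key, value):
    # Copy-on-write: copy the old page, modify the copy,
    # and write it out as a brand-new page.
    new_page = dict(pages[page_id])
    new_page[key] = value
    return alloc(new_page)

root = alloc({"a": 1})
superblock = root  # fixed-location pointer to the current root

new_root = cow_update(root, "a", 2)
superblock = new_root  # flipping the pointer is the commit

assert pages[superblock]["a"] == 2
assert pages[root]["a"] == 1  # old page untouched: never updated in place
```

If the write of `new_root` is silently dropped but the superblock update lands, the pointer refers to a page that simply isn't there, which is why the damage is a whole missing metadata page rather than one un-updated field.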