So, size-wise, #btrfs with deduplication + zstd (level 3) compression ends up quite similar to what I had with #ZFS for my #OpenWrt / #Gluon / #Linux git worktrees. Deduplication and compression each cut the disk usage by about half. "compsize" says: "Referenced: 587G, Uncompressed: 281G, Disk usage: 127G".
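For anyone wanting to check the same numbers, roughly (just a sketch, the device and mountpoint are placeholders, not my actual setup):

    # mount with zstd level 3 compression
    mount -o compress=zstd:3 /dev/sdb1 /mnt/worktrees
    # report disk usage vs. uncompressed vs. referenced sizes,
    # staying on this one filesystem
    compsize -x /mnt/worktrees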
I had to use a slightly larger partition, though: 256 GiB w. btrfs vs. 192 GiB w. ZFS. And I had to copy + deduplicate w. duperemove in incremental steps, as btrfs unfortunately has no inband/online deduplication.
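The incremental steps looked roughly like this (paths and batch layout are just an illustration, not my exact commands):

    # copy one batch of worktrees at a time...
    rsync -a /old-zfs/worktrees/batch1/ /mnt/worktrees/batch1/
    # ...then deduplicate before copying the next batch;
    # the hash file is kept on a different filesystem
    duperemove -rd --hashfile=/var/cache/duperemove-worktrees.db /mnt/worktrees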
Also, the #duperemove hash file, an SQLite 3 database it seems, takes up about 10 GiB for this #btrfs partition in my case. So that needs to be added on top of the reported 127G disk usage / 256 GiB disk, I guess.
@T_X I quite like https://github.com/Zygo/bees - it's a bit more effort to get it running than a file-based dedup tool like duperemove, but in general it gives me quite a bit more savings.
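Getting it running looks roughly like this, if I remember my setup correctly (the UUID is a placeholder, and the config/service names are those of the beesd wrapper shipped in the repo):

    # per-filesystem config, named after the btrfs UUID
    # /etc/bees/77cf2a8d-aaaa-bbbb-cccc-123456789abc.conf:
    UUID=77cf2a8d-aaaa-bbbb-cccc-123456789abc
    DB_SIZE=$((1024*1024*1024))   # 1 GiB hash table
    # then start the daemon via the shipped systemd template
    systemctl enable --now beesd@77cf2a8d-aaaa-bbbb-cccc-123456789abc.service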
@youam I read about that briefly and it sounds very nice; the daemon mode in particular sounded interesting to me as a way to get something close to an inband deduplication experience / compromise. The reason I haven't tried it yet, though, is that I couldn't find it in the Debian Sid package repository. #duperemove, on the other hand, was readily available on Debian Sid.
I still need to figure out how/when to best run duperemove though.
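A simple nightly cron job might be enough, something like this (untested sketch, same placeholder paths as above):

    # /etc/cron.d/duperemove - nightly dedup pass over the worktrees
    30 3 * * * root duperemove -rd --hashfile=/var/cache/duperemove-worktrees.db /mnt/worktrees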
@T_X yup, that's most of the hassle with bees. I took a peek at packaging it, but didn't follow that up. I think I just didn't have the time to do it properly, but it shouldn't be that hard - or you just build it yourself and, if you need it on more systems, copy the single binary over. That's what I ended up doing.
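Building it is basically the usual (from memory - check the repo's README for the exact steps):

    git clone https://github.com/Zygo/bees
    cd bees
    make
    # the binary ends up under bin/; copy it where you need it
    cp bin/bees /usr/local/sbin/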