When I set up this Debian box with ZFS I tried setting up ZFS swap to see how it worked. There were warnings in the OpenZFS docs but I gave it a go anyway.

The system is configured with two fast NVMe drives connected to the CPU and mirrored for the root pool. Swap was created as part of the rpool.

It's a bad configuration. After a number of hours the load average consistently heads into the high teens or higher. Turning off swap makes it drop immediately to ~1.0 (typical for this box running a desktop and futzing around in browsers and shells).

As far as I can tell it's not possible to resize a ZFS partition in situ, so I added a swap partition on the third NVMe drive, the one formatted ext4 (south bridge, x4).

Not ideal but it is perfectly serviceable.
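For the record, the move was roughly this (a sketch only: the zvol name `rpool/swap` and the device `/dev/nvme2n1p2` are assumptions, not my exact layout):

```shell
# Stop swapping on the ZFS zvol (name assumed to be rpool/swap).
swapoff /dev/zvol/rpool/swap

# Create and enable swap on a partition of the third NVMe drive
# (device name is a placeholder; check lsblk for yours).
mkswap /dev/nvme2n1p2
swapon /dev/nvme2n1p2

# Persist it across reboots in /etc/fstab, ideally by UUID:
#   UUID=<uuid-from-blkid>  none  swap  sw  0  0
```

Once nothing references the old zvol it can be destroyed with `zfs destroy rpool/swap`.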

NVMe drives are so fast, would I really notice a seat-of-the-pants difference with the swap on partitions on the north-bridge-connected mirrored devices? Probably not.

Leaving sleeping dragons for now.

#Linux #Debian #ZFS #OpenZFS

Someone on Reddit launched a petition to relicense ZFS from the CDDL to the UPL.
This would make it compatible with the GPL and make it possible to upstream it into the Linux kernel.

There have been similar attempts in the past, and this one will likely fail too.

Anyway,
the Reddit post: https://www.reddit.com/r/zfs/comments/1t24hkj/relicense_zfs_petition/
The petition: https://www.change.org/p/re-license-zfs-to-upl

#zfs #OpenZFS #opensource #foss

Why does `zfs destroy -r $ds` sometimes fail with [EIO] when there are no actual I/O errors taking place? It seems like there is some unstated limit on the queue of deferred frees, and once you delete enough stuff, it just fails for a while. #OpenZFS #FreeBSD
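In case it helps anyone searching: this is how I convince myself there are no real I/O errors before blaming an internal limit (the pool name `tank` is a placeholder; `$ds` as above):

```shell
# Device health check: READ/WRITE/CKSUM counters should all be 0
# and no vdev should be DEGRADED or FAULTED.
zpool status -v tank

# The destroy that intermittently fails with EIO:
zfs destroy -r $ds

# Waiting a while and retrying eventually succeeds, which is what
# suggests some queue of deferred frees draining in the background.
```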

Current status: Backing up the #HardenedBSD #Radicle seed node.

I love you, #OpenZFS #ZFS. You make backing up and replicating storage incredibly easy.
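What makes it so easy is snapshot-based replication with `zfs send`/`zfs receive`. A minimal sketch (all pool, dataset, snapshot, and host names here are made up):

```shell
# Initial full replication of a dataset to another machine.
zfs snapshot zroot/seed@base
zfs send zroot/seed@base | ssh backup zfs receive -u tank/seed

# Later: send only the delta between two snapshots.
zfs snapshot zroot/seed@today
zfs send -i @base zroot/seed@today | ssh backup zfs receive -u tank/seed
```

The `-u` keeps the received dataset unmounted on the backup host.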

Not me, every time I set off a `zpool scrub` (with apologies to TLC):
🎵
Yes, I just want to scrub,
A scrub is a kind of check that gets love from me,
Countin' out the checksum of your best friend's drive,
Trying to verify at me

#FreeBSD #ZFS #OpenZFS #90s #RuinASong

Woke up this morning to new #Radicle seeders for the #HardenedBSD src and ports repos. This is encouraging to see.

There are a number of issues to fix:

  • STALE_CONNECTION_TIMEOUT should either be bumped or made configurable (or both).
  • Filesystem permissions get messed up on #OpenZFS #ZFS but not on UFS or tmpfs.
  • Bump the node.limits.fetchPackReceive maximum by default to higher than 500 MiB (exact value to be chosen later).
  • Improve cloning without seeding. Right now, a user needs to run `rad seed <RID>`, wait for both git and radicle to quiet down, then run `rad clone <RID>`. Otherwise, `rad clone <RID>` will fail.
  • There's something that #ZFS #OpenZFS does at mount time (I'm told it's processing deferred frees) that sometimes takes an incredibly long time (hours), and I'm still struggling to understand: why must this be done at mount? (And why mount and not pool import?) Why can't it continue to be deferred like it was during operations?

    Still waiting for `zfs mount -a` after an unplanned power outage, 90 minutes after restore.
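For now, the clone-without-seeding workaround from the list above amounts to something like this (the sleep is a crude stand-in for "quiet down"; there's no built-in done signal that I know of):

```shell
RID="<RID>"        # the repository ID you want (placeholder)

rad seed "$RID"
sleep 60           # let git and radicle quiet down; duration is a guess
rad clone "$RID"   # fails if run before seeding settles
```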