@bob_zim @hyc @oxyhyxo also, this review might benefit from more reviewers:
I need to try out the ssh bit
https://ftfl.ca/blog/2025-12-28-seamless-login-logout-with-a-zfs-encrypted-home-directory.html
Next NYC*BUG: Wednesday April 1st
What's Changed Since The Last Time I Came this Way - a talk that was supposed to be about OpenZFS, by Michael W Lucas
2026-04-01 @ 18:45 local (22:45 UTC) - Backroom of Brass Monkey 55 Little West 12th St
https://www.nycbug.org/
Hear how the newest ZFS book is going and what @mwl has planned.
Flyer: https://www.nycbug.org/media/2026-04-01_Lucas_Flyer.pdf
Progress on my slides for the NYC*BUG talk next Wednesday.
Yes, an April Fools' talk. On #openzfs, sort of.
I tend to keep too many #OpenZFS #ZFS boot environments. Case in point, my DNS server:
BE Active Mountpoint Space Created
15s-2026-01-21_01 NR / 30.4G 2026-01-21 12:53
default - - 735M 2020-03-22 05:06
master-2020-03-22_01 - - 948M 2020-03-22 09:23
master-2020-06-19_01 - - 1001M 2020-06-19 03:00
master-2020-10-20_01 - - 1017M 2020-10-20 09:11
master-2021-01-11_01 - - 1014M 2021-01-11 12:21
master-2021-04-05_01 - - 95.4M 2021-04-05 19:43
master-2021-06-25_01 - - 1.20G 2021-06-25 10:58
master-2021-08-31_01 - - 1.26G 2021-08-31 08:13
master-2021-10-07_01 - - 1.36G 2021-10-07 10:39
master-2022-03-25_01 - - 1.25G 2022-03-25 18:14
master-2023-01-22_01 - - 1.41G 2023-01-22 06:15
master-2023-06-02_01 - - 1.52G 2023-06-02 20:11
master-2023-08-27_01 - - 1.51G 2023-08-27 12:34
master-2023-11-09_01 - - 1.55G 2023-11-09 14:08
master-2024-03-29_01 - - 1.60G 2024-03-29 18:47
master-2024-10-01_01 - - 1.54G 2024-10-01 16:29
master-2024-11-30_01 - - 1.46G 2024-11-30 17:58
master-2025-08-06_01 - - 2.05G 2025-08-06 14:00
master-2025-12-27_01 - - 1.40G 2025-12-27 18:09
Still have the original boot environment created six years ago.
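For anyone similarly afflicted, stale boot environments can be pruned with bectl(8) on FreeBSD (assuming these BEs were created with bectl; beadm or zectl users would substitute accordingly). A minimal sketch:

```shell
# List boot environments with their space usage (the table above).
bectl list

# After confirming the BE is not active ("N" or "R" flags) and
# nothing depends on it, destroy it. The -o flag also destroys
# the origin snapshot the BE was cloned from, reclaiming space.
bectl destroy -o master-2020-03-22_01
```

Destroying a BE is irreversible, so check `bectl list -s` for dependent snapshots first.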
I've been tinkering with automatic unlock of encrypted ZFS datasets on login (at the console). That part works great, but when the user logs out, they get the following errors:
login[65695]: zfs_unmount failed for zroot/home/$homedir with: -1
login[65695]: unmount_dataset failed with: -1
Two questions:
- Where do I configure the unmounting of datasets on logout?
- Where do I look to figure out why it's not unmounting? The user has the ZFS mount permission delegated (and they can mount fine).
What am I getting wrong?
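Not knowing the poster's exact setup, one way to narrow this down is to reproduce the failure by hand as the same user, since login(1) only reports a generic -1. A sketch, assuming a FreeBSD system and the dataset path from the log ($homedir left as-is):

```shell
# Show which permissions are delegated on the dataset; unmounting
# requires the 'mount' permission, and unlock-on-login also needs
# key-handling permissions such as 'load-key'.
zfs allow zroot/home/$homedir

# Run the unmount manually as the user to get a real error
# message instead of login(1)'s generic -1.
zfs unmount zroot/home/$homedir

# A busy mountpoint is a common cause: processes lingering after
# logout keep files open under the home directory.
fstat -f /home/$homedir    # FreeBSD; lsof +D on Linux
```

"Device busy" from the manual unmount would point at lingering processes rather than a permissions problem.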
@ToshInMacc What was the typo? What OpenZFS document mentions the absolute need for minimum of 4 disks?
For #RAIDZ2, the #OpenZFS documentation says https://openzfs.github.io/openzfs-docs/Basic%20Concepts/VDEVs.html#what-are-the-different-types-of-vdevs ...
『... Requires at least 3 disks (5+ recommended), can tolerate two drive failures.』
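For reference, the documented three-disk floor for raidz2 looks like this (hypothetical device names, not from the thread):

```shell
# raidz2 at the documented 3-disk minimum: two parity disks plus
# one data disk, tolerating two drive failures. 5+ disks are
# recommended for a sensible capacity-to-parity ratio.
zpool create tank raidz2 da0 da1 da2
```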
The recording of the March 18th, 2026 #OpenZFS Production User Call is up:
We discussed debz, a ZFS-based system deployment and orchestration platform, #CachyOS, ZFSBootMenu, recovering a single file from a ZVOL, and more! (Pardon the poor quality audio while I was on the road. It got better!)
"Don't forget to slam those Like and Subscribe buttons."
You can support all Call For Testing efforts via BSD Fund: https://bsdfund.org

#OpenZFS no longer warns you when you create a pool with mismatched VDEVs, e.g.:
# zpool create daft raidz s2d0-WD-08NRX3 s2d1-NCC-1701D s2d2-TOSH-9268 s2d3-SEA-4N0M7 mirror s2d4-SAN-XL905 s2d5-GOOG-4C02
Once upon a time, you needed -f to force this, and such pools were full-on unrepairable. I wonder why it's allowed now? Seems unnecessarily risky.
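For anyone who wants to see what such a pool would look like without actually creating it, zpool create supports a dry run (disk names taken from the example above):

```shell
# -n: dry run; print the would-be layout of a pool mixing a
# raidz vdev with a mirror vdev, without creating anything.
zpool create -n daft \
    raidz  s2d0-WD-08NRX3 s2d1-NCC-1701D s2d2-TOSH-9268 s2d3-SEA-4N0M7 \
    mirror s2d4-SAN-XL905 s2d5-GOOG-4C02
```

On older releases the non-dry-run form of this command would refuse to proceed without -f because of the mismatched replication levels.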