Started #zpool shenanigans. I'm switching from RAIDZ1 to stripe storage and want to remove two old, smaller drives. It should end one of two ways:

1. Same storage size, less power consumption, colder NAS, faster storage, and two empty drive bays.

2. Lost data, fire, and explosions.

#homelab #NAS #DYNAS #zfs #RAID1 #Raidz1 #storage

RE: https://chaos.social/@schenklklopfer/115894643759324286

How slow can SSDs be?

These two in a #ZFS #zpool #Mirror: 6.2 MB/s.

Okay, these really are shot.

I had hoped to rescue something with a mirror plus copies=2, but no, these really belong on the e-waste pile...
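For context, a hedged sketch of what that rescue attempt looks like; the dataset name is my own invention. The catch is that copies= only applies to blocks written after the property is set, so it cannot retroactively add redundancy to data already sitting on a dying drive.

```shell
# Sketch only: "tank/important" is a made-up dataset name.
# copies=2 stores two copies of each NEW block, even on a single vdev,
# but existing blocks are not rewritten -- only new writes get the extra copy.
if command -v zfs >/dev/null 2>&1; then
    zfs set copies=2 tank/important
    zfs get copies tank/important
else
    echo "ZFS not installed; this block is only a sketch"
fi
```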

This is a success, I suppose =)
#zpool #zfs #homelab #Proxmox

@LordCaramac
You can practice your #zpool / #zfs commands by building a small experimental zpool using files as vdevs instead of physical devices.

https://openzfs.github.io/openzfs-docs/man/master/7/zpoolconcepts.7.html

zpoolconcepts.7 - OpenZFS documentation
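A minimal sandbox along those lines, assuming ZFS is installed and you run it as root; the pool name and file paths are my own choices.

```shell
# Throwaway pool backed by plain files instead of disks (run as root).
# "sandbox" and the /tmp paths are arbitrary.
truncate -s 256M /tmp/vdev0.img /tmp/vdev1.img     # sparse backing files
if command -v zpool >/dev/null 2>&1; then
    zpool create sandbox mirror /tmp/vdev0.img /tmp/vdev1.img
    zpool status sandbox
    zpool destroy sandbox                          # tear down when done
fi
rm -f /tmp/vdev0.img /tmp/vdev1.img
```

Attach, detach, offline, scrub: everything behaves like a real pool, so it is a safe place to rehearse before touching hardware.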

I wish to thank all the people who gave me recommendations for this new adventure of mine. I just created my first #zpool and #zfs dataset, and I'm copying over all the data I have on my "classic" mdadm RAID. After the copy is done, I'll swap the relative places of the ZFS disks and RAID disks and make the ZFS pool the primary.

Pictured: Kubuntu shutting down gracefully, without forcing off the computer, following an insane zpool-scrub(8) command.

For the insanity:

https://github.com/openzfs/zfs/issues/17527

"Gracefully reject an attempt to scrub a read-only pool"

#Kubuntu #Ubuntu #Debian #ZFS #OpenZFS #zpool #scrub
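Judging from the issue title, the trigger is roughly the following; the pool name is a placeholder, and on a patched ZFS the scrub should be rejected with an error instead of wedging the pool.

```shell
# Sketch only -- do not try this on a pool you care about while running an
# affected zfs version. "tank" is a placeholder pool name.
zpool export tank
zpool import -o readonly=on tank
zpool scrub tank   # patched ZFS refuses this; affected versions could hang
```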

New ๐—™๐—ฎ๐—ถ๐—น๐—ฒ๐—ฑ ๐—•๐—ฎ๐—ฐ๐—ธ๐˜‚๐—ฝ ๐—ฆ๐—ฒ๐—ฟ๐˜ƒ๐—ฒ๐—ฟ ๐—•๐˜‚๐—ถ๐—น๐—ฑ [Failed Backup Server Build] article on my https://vermaden.wordpress.com/ blog.

https://vermaden.wordpress.com/2025/05/28/failed-backup-server-build/

#backup #data #freebsd #hardware #nvme #server #small #ssd #storage #tiny #unix #zfs #zpool

First time trying to expand a ZFS raidz2...

What am I missing? Forest + trees problem?

I thought it might be a requirement to escape the colons in the device name, but I've gone back and forth with escaping and quoting to no effect.

For giggles, I tried adding a 'single' vdev and then 'zpool attach', but it doesn't work.

I was initially following @vermaden's tutorial at https://freebsdfoundation.org/blog/openzfs-raid-z-expansion-a-new-era-in-storage-flexibility/ but working on a live system with physical devices and without a net. Ha.

"Linux" things?

Debian Bookworm, Linux 6.12.12+bpo-amd64, zfs 2.3.1. Very vanilla other than the backport kernel.

#Linux #ZFS #draid2 #raidz2 #zpool
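For reference, the shape of the command that eventually has to succeed: since OpenZFS 2.3, a RAID-Z vdev is grown with zpool attach aimed at the raidz group itself, not the pool. Pool and device names below are placeholders. As far as I know, the shell does not treat colons in a device path specially, so quoting alone should be enough there.

```shell
# "tank", "raidz2-0", and the by-id path are placeholders, not real names.
zpool status tank                      # find the vdev label, e.g. raidz2-0
zpool attach tank raidz2-0 /dev/disk/by-id/ata-NEWDISK
zpool status tank                      # expansion progress appears here
```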

Fun when your #zfs #zpool scrub comes back with an I/O read error on one disk, but you can't find anything wrong, and all SMART tests come back fine.

It was a one-off, so I guess there was a "Glitch In The Matrix"
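For a one-off like this, the usual routine is to look at the per-device counters, clear them, and re-scrub; the pool name is assumed.

```shell
# "tank" is a placeholder pool name.
zpool status -v tank   # per-device read/write/cksum counters, plus any
                       # files touched by permanent errors
zpool clear tank       # zero the counters
zpool scrub tank       # if the error comes back, suspect the disk or cabling
```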