Oh, this combination of parameters and features causes lvcreate to miscalculate the extents it needs, and the command fails. (I needed to calculate them manually to fully use the drives. Probably could have just yolo-ed it with 89%FREE or something.)

lvcreate --type raid6 --extents 100%FREE --config allocation/raid_stripe_all_devices=1 --name raid6Data --raidintegrity y --raidintegritymode journal --raidintegrityblocksize 512 --autobackup y lvm_vg_<<UUID>>
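A rough sketch of the manual workaround: query the VG's free extents yourself and hand lvcreate an absolute count with headroom for the integrity sub-LVs. The VG name and the 95% margin here are assumptions, not derived figures - the real integrity-metadata overhead depends on the LVM version and the `--raidintegrityblocksize`:

```shell
# Hypothetical VG name; substitute your own.
VG="lvm_vg_example"
# Count of free physical extents in the VG:
FREE_EXTENTS=$(vgs --noheadings -o vg_free_count "$VG" | tr -d ' ')
# Reserve a margin (assumed 5%) for the raid integrity metadata sub-LVs:
USABLE_EXTENTS=$(( FREE_EXTENTS * 95 / 100 ))
# Pass an absolute extent count instead of 100%FREE:
lvcreate --type raid6 --extents "$USABLE_EXTENTS" \
  --config allocation/raid_stripe_all_devices=1 \
  --raidintegrity y --raidintegritymode journal \
  --name raid6Data "$VG"
```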

#LVM #Linux #dmraid

@lispi314 good question.

  • Maybe because #dmraid as Software-#RAID was tasked with checking itself, and the underlying filesystems like #ext3 were tasked with checking integrity on their level as well?

After all, physical device <=> partition <=> physical volume <=> volume group (w/ RAID) <=> logical volume <=> filesystem were layered transparently to each other in classic #Linux-RAID.

But yeah, you are correct in that an integrity checking module was overdue...


@neil @ai6yr @restic I'd say an external #SATA-SSD in an enclosure with #btrfs or "journalless #ext4" works fine.

  • Maybe consider a cheap multi-port case with #HDDs and using #dmraid for #RAID-based failsafe?
@thomholwerda The easiest route I know is using #lvm-#dmraid for a #Linux-#RAID10 and access it via #SSH / #SFTP
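A minimal sketch of that route, assuming four blank disks (the device names, VG/LV names, and mount point are all hypothetical) and an OpenSSH server already providing the #SFTP access:

```shell
# Pool four disks into a volume group:
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate nas_vg /dev/sdb /dev/sdc /dev/sdd /dev/sde
# LVM RAID10: 2 stripes, each mirrored once, across the four PVs:
lvcreate --type raid10 --stripes 2 --mirrors 1 \
  --extents 100%FREE --name nas_lv nas_vg
# Put a filesystem on it and mount; then point SFTP users at /srv/nas:
mkfs.ext4 /dev/nas_vg/nas_lv
mount /dev/nas_vg/nas_lv /srv/nas
```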

@lispi314 @mos_8502 nods in agreement

Note that "Hardware-#RAID" is a hack for OSes without proper integrated RAID support (i.e. Windows), and #dmraid on #Linux requires the OS to be aware of the entire storage architecture and correct things accordingly.

  • #ZFS solves that since #VolumeManager and RAID are part of the #Filesystem and thus it's aware of the exact storage location and redundancy status of every single block of data, making both restore and growth operations dramatically faster to begin with...

@ascherbaum yeah, makes sense...

Usually the system should resync /boot shortly after mounting in the pre-#dmraid #RAID1 configuration...

@ascherbaum Reminds me of the non-#dmraid/preboot #RAID1 which I set up on multiple occasions...

@saiki it always depends.

I find #btrfs is a good compromise between "journalless" #ext4 and #ZFS...

Also, as is typical on #Linux, it's compatible with #dmraid & #LUKS / #dmcrypt (whereas ZFS handles that internally!)...

@foxysen personally I prefer #Linux #dmraid #RAID10 which is basically a #RAID1 with #striping and it's blazing fast - like a #RAID01 but better because it can dynamically grow...

For everyone else, there's #ZFS's #RAIDZ which is just #ChefsKiss...

@ernstdemoor @nixCraft that's because on basically all #Linux #Filesystems, #RAID and #Encryption are handled by dedicated subsystems like #dmraid and #dmcrypt / #LUKS respectively, thus not on the filesystem but the OS level...

This allows extra cursed shit like an encrypted RAID-5 running NTFS - though that won't be usable by anything but Linux, and I advise against it almost as hard as mixing hardware RAID controllers and/or dmraid with ZFS.
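For the curious, that cursed stack can be sketched like this - device and mapping names are assumptions, and it's shown with mdadm, the usual userspace tool for Linux software RAID:

```shell
# Each layer is its own subsystem, stacked below the filesystem:
# raw disks -> md RAID5 -> dm-crypt/LUKS -> NTFS
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
cryptsetup luksFormat /dev/md0          # encrypt the whole array
cryptsetup open /dev/md0 cursed_ntfs    # unlock as /dev/mapper/cursed_ntfs
mkfs.ntfs --fast /dev/mapper/cursed_ntfs  # the "cursed" part
```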

Remember: NEVER EVER LIE TO ZFS!!!