Rough draft NAS is complete!
Ultimately I would love to use ZFS but I read that it’s difficult to expand/upgrade. Not familiar with ZFS RAIDz1 though, I’ll look into it. Thanks!
I build robots for a living, the power is fine, at least for a rough draft. I’ll clean everything up once the enclosure is set up.
Z1 is just single parity.
Sweet build! I have all these parts laying around so this would be a fun project. Please share your enclosure design if you’d like!
Basically the equivalent of RAID 5 in terms of redundancy.
You don’t even need to do RAIDz expansion, although that feature could save some space. You can just add another redundant set of disks to the existing one. E.g. have a 5-disk RAIDz1 which gives you the space of 4 disks. Then maybe slap on a 2-disk mirror which gives you the space of 1 additional disk. Or another RAIDz1 with however many disks you like. Or a RAIDz2, etc. As long as the newly added space has adequate redundancy of its own, it can be seamlessly added to the existing one, “magically” increasing the available storage space. No fuss.
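A sketch of what that looks like on the command line, based on my understanding of `zpool add` (the pool name matches the one used later in the thread; device paths are placeholders):

```shell
# Existing pool "zfspool" is a 5-disk RAIDz1. Add a 2-disk mirror as a
# second vdev; ZFS stripes new writes across both vdevs automatically.
# Device paths below are placeholders: use your own /dev/disk/by-id/ names.
sudo zpool add zfspool mirror \
  /dev/disk/by-id/ata-DISK6 \
  /dev/disk/by-id/ata-DISK7

# Confirm the new layout and the increased capacity:
zpool status zfspool
zpool list zfspool
```

Note there's no rebalancing: existing data stays where it is, and only new writes spread across both vdevs.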
Awesome. It’s my understanding that ZFS can help prevent bit rot, so would ZFS RAIDz1 also do this?
I found this, it seems to show all the steps I would need to take to install RAIDz1: jeffgeerling.com/…/htgwa-create-zfs-raidz1-zpool-…
Yes, it detects bit rot via checksums and repairs it automatically when there’s redundancy (like RAIDz1). It’s why I switched to it from the standard mdraid/LVM/Ext4 setup I used before.
The instructions seem correct but there’s some room for improvement.
Instead of using logical device names like this:
sudo zpool create zfspool raidz1 sda sdb sdc sdd sde -f
You want to use hardware IDs like this:
sudo zpool create zfspool raidz1 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA2FERAP /dev/disk/by-id/wwn-0x5000cca27dc48885 ...
You can discover the mapping of your disks to their logical names like this:
ls -la /dev/disk/by-id/*
Then you also want to add these options to the command:
sudo zpool create -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa zfspool ...
These do useful things like setting the optimal block size, enabling compression (basically free performance), and applying a bunch of settings that make ZFS behave like a typical Linux filesystem (its defaults come from Solaris).
Your final create command should look like:
sudo zpool create -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa zfspool raidz1 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA2FERAP /dev/disk/by-id/wwn-0x5000cca27dc48885 ...
Feel free to experiment, since pool creation/destruction is nearly instant. Don’t hesitate to create and destroy the pool multiple times until you get it right.
RAIDz expansion is now better than ever before!
At the beginning of this year (with OpenZFS 2.3.0) they added zero-downtime expansion along with some other things like enhanced deduplication.
This might be of interest to you, he also has his files up on maker world
My understanding is that the only issues were the write hole on power loss for raid 5/6 and rebuild failures due to unseen damage to surviving drives.
Issues with single drive rebuild failures should be largely mitigated by regular drive surface checks and scrubbing if the filesystem supports it. This should ensure that any single drive errors that might have been masked by raid are removed and all drives contain the correct data.
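For ZFS specifically, that surface check is a scrub. A sketch assuming a pool named `zfspool` (many distros also ship a ready-made periodic scrub timer, so check for that first):

```shell
# Check whether the distro already schedules scrubs (systemd example):
systemctl list-timers | grep -i scrub

# If not, a monthly scrub from root's crontab (crontab -e) would look like:
#   0 3 1 * * /usr/sbin/zpool scrub zfspool

# Progress, and any repaired or unrecoverable errors, show up in:
zpool status zfspool
```

A scrub reads every block and verifies it against its checksum, so latent damage gets found and repaired from parity while all drives are still healthy, rather than during a rebuild.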
The write hole itself could be entirely mitigated since the OP is building their own system. What I mean by that is that they could include a "mini UPS" to keep 12v/5v up long enough to shut down gracefully in a power loss scenario (use a GPIO for "power good" signal). Now, back in the day we had raid controllers with battery backup to hold the cache memory contents and flush it to disk on regaining power. But, those became super rare quite some time ago now. Also, hardware raid was always a problem with getting a compatible replacement if the actual controller died.
Is there another issue with raid 5/6 that I'm not aware of?
they could include a “mini UPS” to keep 12v/5v up long enough to shut down gracefully in a power loss scenario
That’s a fuckin great idea.
I think so. I would consider allowing a short time without power before doing that, to ride out brief outages and brownouts.
So perhaps poll once per minute, if no power for more than 5 polls trigger a shutdown. Make sure you can provide power for at least twice as long as the grace period. You could be a bit more flash and measure the battery voltage and if it drops below a certain threshold send a more urgent shutdown on another gpio. But really if the batteries are good for 20mins+ then it should be quite safe to do it on a timer.
The logic could be a bit more nuanced, e.g. shortening the grace period when multiple short power cuts happen in succession (since the batteries could be somewhat drained). But this is all icing on the cake, I would say.
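The poll/grace-period idea above could be sketched like this. Everything here is an assumption on top of the thread: a GPIO "power good" pin wired to the mini UPS (the sysfs pin number and path are hypothetical and depend on your wiring/OS), a one-minute poll, and a 5-poll grace period:

```shell
#!/usr/bin/env bash
# Sketch: shut down gracefully after sustained power loss.

POLL_INTERVAL=60   # seconds between polls
GRACE_POLLS=5      # consecutive failed polls before shutting down

power_good() {
  # Hypothetical GPIO read; pin 17 and the sysfs path are placeholders.
  [ "$(cat /sys/class/gpio/gpio17/value 2>/dev/null)" = "1" ]
}

monitor_power() {
  local misses=0
  while true; do
    if power_good; then
      misses=0                          # mains is back: reset the counter
    else
      misses=$((misses + 1))
      if [ "$misses" -gt "$GRACE_POLLS" ]; then
        echo "No power for over $((GRACE_POLLS * POLL_INTERVAL))s, shutting down"
        sudo shutdown -h now
        return
      fi
    fi
    sleep "$POLL_INTERVAL"
  done
}
```

You'd run `monitor_power` from a systemd service at boot, and, as suggested above, size the batteries for at least twice the grace period.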
so I’m going to run a USB C cable to the Pi
Isn’t that already the case in the photo? It looks like the converter including all that cabling is only there to get 5v for the fan, but it’s difficult to see where the usb-c comes from
PLA warps over time even at low heat. That said, as long as you have good airflow it shouldn’t be a problem to use it for the housing, but anything directly contacting the drives might warp.
I thought about doing this myself and was leaning towards reusing drive sleds from existing hardware. It’ll save on design and printing time, as well as alleviate problems with heat and the printed parts.
The sleds are usually pretty cheap on eBay, and you can always buy replacements without much effort.
I would argue both RAID 5 and ZFS RAIDz1 are inherently unsafe, since recovery takes a lot of read/write operations, and you’d better pray every one of the 4 remaining drives holds up well even after one has clearly failed.
I’ve witnessed many people losing their data this way, even among prominent tech folks (looking at you, LTT).
RAID6/ZFS RAIDz2 is the way. Yes, you’re gonna lose quite a bit more space (leaving 24TB vs 32TB), but added reliability and peace of mind are priceless.
(And, in any case, make backups for anything critical! RAID is not a backup!)
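For the curious, the 24TB vs 32TB figures above work out from the five 8 TB drives mentioned earlier in the thread:

```shell
# Five 8 TB drives: parity costs one drive in RAIDz1, two in RAIDz2.
# (Raw capacity; actual usable space is a bit less after ZFS overhead.)
echo "RAIDz1: $(( (5 - 1) * 8 )) TB"   # 32 TB
echo "RAIDz2: $(( (5 - 2) * 8 )) TB"   # 24 TB
```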
RAID 5 is fine, as part of a storage and data management plan. I run it on an older NAS, though it can do RAID 6.
No RAID is reliable in the sense of “it’ll never fail” - fault tolerance has been added to it over the years, but it’s still a storage pool built from multiple drives.
ZFS adds to its fault resistance, but you’d still better have proper backups/redundancy.
Dust is going to be a problem after some months (well, maybe not that much electrically, but it makes everything a PITA to keep clean), especially for the Raspberry Pi.
Consider getting (or, even better, 3D printing) an enclosure at least for the Pi (the HDDs will probably be fine as they are, since the fan keeps the air moving and dust can’t really settle on them).