My experience with #OMV #NAS on a #RaspberryPi, using #SnapRAID + #MergerFS, is that while SnapRAID is really cool, it can't quite match real #RAID. My understanding is that RAID works at a low, block level, so it achieves redundancy 'in real time' no matter what files you have. SnapRAID, being file-based software, is not only more involved but also can't achieve full redundancy in practice: you'll likely need to exclude a lot of files (which, depending on your use case, could render your redundancy useless), and you might need to e.g. stop a bunch of #Docker containers before it does its thing.

I'm just not sure how to achieve 'real' RAID on an RPi 4 (of varying memory capacities). One benefit of SnapRAID (and MergerFS) is that you can easily implement it with USB-attached disks, which is the 'friendliest' choice for SBCs like the Pi that have no built-in SATA connectivity.
I don't normally think much about how parity works in #RAID, since RAID is kinda 'magic' lol (feels like it anyway) - but on my #OpenMediaVault #NAS, which uses #SnapRAID (i.e. not 'real' RAID), I'm wondering how it works.

In a SnapRAID setup, to set up a parity disk, you just designate a disk that is at least as large as the largest data disk in your pool. Mine's a 1TB SSD, and the other drives in the pool (i.e. the content drives) are the exact same model.
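
For reference, the layout boils down to a pretty small snapraid.conf. This is a sketch with hypothetical UUID paths (on OMV the plugin generates this file for you):

```conf
# Hypothetical snapraid.conf for 2 data disks + 1 parity disk
parity /srv/dev-disk-by-uuid-PARITY/snapraid.parity

# content files hold SnapRAID's file list + checksums;
# keep copies on more than one disk
content /srv/dev-disk-by-uuid-DATA1/snapraid.content
content /srv/dev-disk-by-uuid-DATA2/snapraid.content

# data disks, each with a name and a mount point
data data1 /srv/dev-disk-by-uuid-DATA1/
data data2 /srv/dev-disk-by-uuid-DATA2/
```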

Today, I got an alert saying that my parity disk has used up ~89% of its storage space - should I do something about it? What happens when the parity disk fills up completely - no parity anymore? If I have 2x 1TB content disks (i.e. 2TB worth of data), can I expect that 1TB parity drive to be sufficient, since that meets SnapRAID's requirement after all?
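
For what it's worth, my reading of SnapRAID's manual is that parity only has to cover the largest single data disk, not the sum of all of them (the parity file does carry some overhead, though, which is presumably why a same-size parity disk can sit near 90% full). A trivial sanity check, with the sizes as my assumptions:

```shell
# SnapRAID parity sizing sanity check (sizes in GB are assumptions)
largest_data=1000   # biggest single data disk, NOT the total of all data disks
parity=1000         # parity disk

if [ "$parity" -ge "$largest_data" ]; then
  echo "parity disk is large enough"   # prints this for 1000 >= 1000
else
  echo "parity disk is too small"
fi
```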

Any pointers from #homelab folks are much appreciated - I've been relying a lot on #OMV's wiki, which is pretty comprehensive, but I saw no mention of this. I'm also still scanning through SnapRAID's own documentation.

🔗 https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:snapraid

🔗 https://www.snapraid.it/manual

Because I had a conversation about #backup yesterday at the pool.

trapexit, the author of mergerfs, has a collection of (IMHO) good, new-user-friendly howtos over here.

https://github.com/trapexit/backup-and-recovery-howtos

#linux #btrfs #zfs #snapraid #mergerfs #backup #rsync

@diegolakatos this was great. I went with Open Media Vault (OMV). The installer was a little painful because it didn't support manual partitioning, but it loaded onto a TerraMaster F4-424 without any issues.

I added the OMV extras repository, installed mergerfs and SnapRAID, and configured everything through the UI.

It works! Simple and effective. I checked my files by mounting the drives independently and they all look good. Then I tested a recovery by formatting a drive and restoring it with SnapRAID - it worked beautifully.

Extremely happy! Thank you.

#selfhosting #selfhosted #openmediavault #mergerfs #snapraid #terramaster

Can anyone familiar with #SnapRAID/#OpenMediaVault tell me why the scheduled SnapRAID sync that runs after SnapRAID diff sometimes syncs and sometimes doesn't?

Like in this case, it doesn't sync and seems to scan only 1 of my 2 data drives (data2):

There are differences!
SnapRAID DIFF finished - Wed Jun 11 04:31:38 +08 2025
----------------------------------------
Changes detected [A-131,D-2,M-0,C-0,U-142] -> there are updated files (142) but update threshold (0) is disabled.
Changes detected [A-131,D-2,M-0,C-0,U-142] -> there are deleted files (2) but delete threshold (0) is disabled.
SnapRAID SYNC Job started - Wed Jun 11 04:31:38 +08 2025
----------------------------------------
Self test...
Loading state from /srv/dev-disk-by-uuid-<data1>/snapraid.content...
Scanning...
Scanned data2 in 50 seconds
SnapRAID SYNC Job finished - Wed Jun 11 04:33:10 +08 2025
But then the next day, it does sync and seems to scan both my data drives (data1 and data2):

There are differences!
SnapRAID DIFF finished - Thu Jun 12 04:31:45 +08 2025
----------------------------------------
Changes detected [A-189,D-3,M-0,C-0,U-143] -> there are updated files (143) but update threshold (0) is disabled.
Changes detected [A-189,D-3,M-0,C-0,U-143] -> there are deleted files (3) but delete threshold (0) is disabled.
SnapRAID SYNC Job started - Thu Jun 12 04:31:45 +08 2025
----------------------------------------
Self test...
Loading state from /srv/dev-disk-by-uuid-<data1>/snapraid.content...
Scanning...
Scanned data2 in 64 seconds
Scanned data1 in 90 seconds
Using 863 MiB of memory for the file-system.
Initializing...
Hashing...
# doing hashing stuffs
Everything OK
Resizing...
Saving state to /srv/dev-disk-by-uuid-<data1>/snapraid.content...
Saving state to /srv/dev-disk-by-uuid-<data2>/snapraid.content...
Verifying...
Verified /srv/dev-disk-by-uuid-<data2>/snapraid.content in 5 seconds
Verified /srv/dev-disk-by-uuid-<data1>/snapraid.content in 6 seconds
Using 48 MiB of memory for 64 cached blocks.
Selecting...
Syncing...
# doing syncing stuffs
data1 39% | ************************
data2 27% | ****************
parity 30% | ******************
raid 1% |
hash 1% |
sched 0% |
misc 0% |
|______________________________________________________________
                   wait time (total, less is better)
Everything OK
Saving state to /srv/dev-disk-by-uuid-<data1>/snapraid.content...
Saving state to /srv/dev-disk-by-uuid-<data2>/snapraid.content...
Verifying...
Verified /srv/dev-disk-by-uuid-<data1>/snapraid.content in 3 seconds
Verified /srv/dev-disk-by-uuid-<data2>/snapraid.content in 3 seconds
SnapRAID SYNC Job finished - Thu Jun 12 04:36:09 +08 2025
Every day, I'm not quite confident whether my SnapRAID array is ready for a failure, cos I assume that on days when it didn't sync, the parity drive has no idea about (or copy of) the latest set of files, no?
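
If it helps to rule out the scheduler, the same flow can be reproduced by hand. Per SnapRAID's manual, `snapraid diff` exits with code 2 when a sync is actually required, which is presumably what the scheduled job keys off (a sketch, not something to run blindly; it needs the snapraid CLI and root on the OMV host):

```shell
# Reproduce the scheduled diff-then-sync job manually
snapraid diff
if [ $? -eq 2 ]; then
  # differences found: parity is stale until this completes
  snapraid sync
fi
```

And yes - until a sync completes, any files added or changed since the last sync are not covered by parity.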

I think the one big 'flaw' or downside to #SnapRAID is that it's not really #RAID, is it, if you're sorta meant to exclude certain directories from the parity storage, like #Docker stuff, etc. (mostly 'moving parts').
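
For what it's worth, those exclusions are just plain `exclude` rules in snapraid.conf. A sketch of the kind of 'moving parts' that typically get left out (the directory names here are assumptions - adjust to your own layout):

```conf
# Hypothetical snapraid.conf exclusion rules for frequently-changing files
exclude *.tmp
exclude *.unrecoverable
exclude /lost+found/
exclude /appdata/     # e.g. Docker volumes / app state that changes constantly
```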

When it comes to 'actual' RAID, you can easily replace disk(s), no issue - the replacement disk ends up 1:1, exactly the way the old disk was. With SnapRAID, that might not be the case.

I think I might RMA that one SATA SSD - it's currently set up as 1 of 2 #SnapRAID data drives (there's another identical parity drive) that are part of a #MergerFS pool.

There are a ton of video guides on how to replace a disk that's part of a #RAID array for failure/upgrade reasons on #TrueNAS, but I haven't found one for this SnapRAID-MergerFS setup on #OpenMediaVault.

From what I can tell, there doesn't seem to be a graphical option in #OMV to do this easily/intuitively. I might need to look up some written guides/docs to do this safely, so I can send the SSD back and recover my data onto the replacement drive once it arrives.

---

Hopefully, this written guide should suffice:

🔗 https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:snapraid#recovery_operations

RE: https://sakurajima.social/notes/a8mrrt78bi
I've managed to get #OpenMediaVault working on my #RaspberryPi (running #Raspbian Lite), and the performance seems pretty impressive, despite relying on USB-attached SSDs for storage.

This is my first time running a #NAS on the Pi, on #OMV - not using #ZFS or #RAID, but rather an #Unraid-like (but #FOSS) solution called #SnapRAID, in combination with #mergerfs (the drives themselves are simply #EXT4).

So far, honestly, so good. I got 2x 1TB SSDs for data, and another 1TB SSD for parity. I don't have a backup of the data itself atm, but I do have a scheduled backup solution (#RaspiBackup) set up for the OS itself (SD card). It's also got #Timeshift for creating daily snapshots.

I'm not out of the woods yet though, cos after this comes the (somewhat) scary part - deploying #Immich on the Pi lol (using OMV's #Docker compose interface, perhaps). I could just deploy it in my #Proxmox #homelab and not have to worry about system resources or hardware transcoding, etc., but I really wanna experiment with this 'everything hosted/contained in 1 Pi' concept.

What's a good #SnapRAID exclusion rule list (for #OpenMediaVault/#OMV, if that matters)?
@krutonium ok. Sorry for the eagerness. I'm still so pleasantly surprised by #snapraid fix that I've recommended it at every opportunity :D