A small preview of the new server I'm setting up, which will run only Mastodon Uruguay, a twin of the current one. Everything points to a RAID-6 of 10x1TB SATA3 HDDs. #undernet #mejoras #mantenimiento #mastodon #servidor #autogestion #raid #raid6
@madeindex I did have a #fedora box, but over a year and a half ago I made a terrible mistake at stupid o'clock. One disk had a SMART error (a bad sector), so I replaced it... but then I thought to myself: why not make /boot #raid5 like the others (the server is #raid6 and #raid1)? I wasn't thinking... /boot got messed up and I couldn't salvage it. I haven't been bothered enough to repair it since. #anaconda is pathetic with md+lvm, and I have the laptop, so it's not worth it to me for now.

Learn how to recover lost data from a failed QNAP RAID-6 NAS using Stellar Toolkit for Data Recovery step-by-step.

Full guide here: https://ostechnix.com/qnap-raid6-nas-data-recovery-stellar-toolkit/

#Stellar #Stellartoolkit #Datarecovery #Qnap #Nas #Raid6 #Linux #Windows #Macos #Software #Storage

Ok: this is the foundation of my new #raspberrypi #arm64 based #kvm infrastructure. I'll do #RAID1 on the #nvme storage for the VM data and back it up via #iscsi to a #nas with #raid6.
Still working on the migration strategy for the x86 VMs...

#FreshRSS was down for 12 days. Now it's running again, thank goodness.

#Restore with #restic
✅ Repaired the crashed InnoDB and rescued the data
✅ Container landscape with #nginx Proxy Manager and a current #MariaDB, running locally via #Podman
✅ Deployment to a 24x7 server with #RAID6

Still missing is the upgrade from 1.19 to the current release. I let that slide for far too long, but with the current container setup it's also easier to keep everything clean.

Now I've got a #RaspberryPi 1 left over...

I'll give #RAID6 a try.
question for tech-y storage people: I just nabbed 4 extra (used) 6 TB disks from a place that re-sells electronics (oregonrecycles.com); I think they did some testing, but obviously I don't know to what extent, and if it was just limited to stuff like SMART data, well...

anyway, I wanna RAID them, which is fine, but I don't know how much lifetime they have left. For four disks with unknown usage, should I use RAID6 or RAID10? It's not job-critical data I'll be storing on these (mostly media and such). They'll eventually be migrated into a larger RAID array, but that won't happen until I'm stable and can afford to rebuild my server, so this is fine for now.

I wouldn't mind the better read/write performance that comes with RAID10, even though it has less parity. I suspect these disks were all used together, so they might have similar wear-and-tear patterns; in that case, I'm wondering if RAID6's double parity actually buys me any extra life. Given 4 disks with the same history and a roughly known failure rate, I'm not clear whether double parity makes much of a difference; if one goes down, the others probably aren't far behind.
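One concrete difference worth spelling out for this choice: with 4 disks, RAID6 survives any two simultaneous failures, while RAID10 survives two failures only if they don't land on the same mirror pair. A quick enumeration sketch (assuming the usual RAID10 layout of two mirrored pairs striped together; the pairing below is my assumption, not your actual array):

```python
from itertools import combinations

disks = range(4)
mirror_pairs = [{0, 1}, {2, 3}]  # assumed layout: two mirrored pairs, striped together

def raid10_survives(failed):
    # RAID10 loses data only when both disks of the same mirror pair fail
    return not any(pair <= failed for pair in mirror_pairs)

two_disk_failures = [set(c) for c in combinations(disks, 2)]
raid6_survivals = len(two_disk_failures)  # RAID6 tolerates ANY two failures
raid10_survivals = sum(raid10_survives(f) for f in two_disk_failures)

print(f"RAID6:  {raid6_survivals}/{len(two_disk_failures)} two-disk failures survived")
print(f"RAID10: {raid10_survivals}/{len(two_disk_failures)} two-disk failures survived")
```

So RAID6 covers 6 of 6 possible two-disk failures and RAID10 covers 4 of 6, which matters more if the disks share a wear history and correlated failures are likely.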

#techPosting #raid #raid6 #raid10 #storage #nas

He wrote a similar article in 2010 about how #RAID 6 would be dead by 2019.

https://www.zdnet.com/article/why-raid-6-stops-working-in-2019/

#storage #raid5 #raid6

Why RAID 6 stops working in 2019

Three years ago I warned that RAID 5 would stop working in 2009. Sure enough, no enterprise storage vendor now recommends RAID 5. Now it's RAID 6, which protects against 2 drive failures. But in 2019 even RAID 6 won't protect your data. Here's why.
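The "stops working" argument in that article rests on unrecoverable read error (URE) rates: a rebuild must read every remaining sector, and at a consumer-class URE spec of roughly 1 error per 10^14 bits, a multi-terabyte rebuild has a substantial chance of hitting at least one. A rough back-of-the-envelope calculation (my numbers and model, not the article's):

```python
import math

URE_RATE = 1e-14      # commonly quoted consumer-drive spec: 1 error per 1e14 bits read
REBUILD_BYTES = 6e12  # example: 6 TB of surviving data read during a rebuild
bits = REBUILD_BYTES * 8

# Probability of a URE-free rebuild, modelling each bit read
# as an independent trial (a simplification, but the standard one)
p_clean = math.exp(bits * math.log1p(-URE_RATE))
print(f"chance of a URE-free rebuild: {p_clean:.0%}")  # roughly 62%
```

As capacities grow, `bits` grows and that probability keeps falling, which is the core of the RAID 5 and, later, RAID 6 argument.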


I was really worried about why the RAID drives on our new #Linux #server were so noisy. A quick "write" noise every second, like a heartbeat.

Some investigation revealed that a "journal" service seemed to be writing ~512k of data every second, and only when I had that exact amount did my googling/ducking generate a useful result:

When I set up the #raid6 array, I formatted with ext4's default "lazy" initialization, so the kernel keeps initializing the inode tables in the background, an extremely slow and noisy process.

Non-lazy reformat, go!
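For anyone hitting the same heartbeat: those background writes come from ext4's lazy inode-table and journal initialization, which mkfs.ext4 enables by default. Reformatting with both disabled does all the work up front. A sketch, with the device name as a placeholder:

```shell
# Disable lazy init: mkfs takes longer, but no background
# initialization thread keeps writing to the array afterwards.
# /dev/md0 is a placeholder; this DESTROYS existing data on the device.
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0
```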

I am looking for a tutorial/advice/best practice on how to set up a #debian server with multiple disks on #ext4. For now the machine should just be used as a fileserver until we integrate the #infrastructure into #proxmox. Still, I am unsure how to format the disks. In previous setups we used #raid1 for the root partition and #raid6 for all additional disks. I am just not sure how or where #lvm should come into play...
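In case a sketch helps: one common pattern is mdadm for the RAID layer, LVM only on top of the RAID6 data array (so filesystems can be carved out and grown later), and plain ext4 in the LVs. All device names, counts, and sizes below are placeholders, not a recommendation for specific hardware:

```shell
# RAID1 for the OS (2 disks), RAID6 for data (6 disks) -- placeholders throughout
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[c-h]1

# LVM only on the data array: lets you resize or add filesystems later
pvcreate /dev/md1
vgcreate data /dev/md1
lvcreate -L 2T -n shares data

mkfs.ext4 /dev/md0          # root filesystem directly on the mirror
mkfs.ext4 /dev/data/shares  # data filesystem on the logical volume
```

Skipping LVM on the small RAID1 root keeps the boot path simple; putting it on the RAID6 array is where the flexibility pays off.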