SSD death ☠️💀

I am busy rebuilding the data from an SSD that died suddenly, from one moment to the next, without giving any warning whatsoever.

I always monitor the SMART output of my SSDs and mechanical spinners (HDDs).

I never saw any SMART output indicating imminent death.
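
For reference, the kind of routine check I mean (the device path is just an example):

    smartctl -H /dev/sda        # overall health self-assessment (PASSED/FAILED)
    smartctl -A /dev/sda        # vendor attributes: reallocated sectors, wear level, etc.
    smartctl -t short /dev/sda  # start a short self-test; read results later with -l selftest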

This drive acted like your girlfriend when she's just not in a good mood and, without explanation, says nothing to you all morning.

This hard crash means that the S.M.A.R.T. monitoring hardware no longer had proper communication with the integrated circuits on the SSD.

The drive was powered up many times a month and never left without power for more than a week or so, so power-off retention was not a contributing factor in this catastrophic SSD failure.

The drive itself is fairly small.

The data from this dead SSD is backed up on remote drives, connected as JBOD, to machines I run remotely.

No ZFS on my backup machines

  • I want to run ZFS natively
  • that means running a BSD OS on those machines
  • that also means backing up and restoring all the data on those drives (a rough sketch of that shuffle follows this list), meaning
  • I will need 200% of the used HDD/SSD space on those machines
  • I need patience for that backup
  • I need expensive extra HDDs for that project
  • I won't pay USD 300 for a USD 120 HDD
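
For illustration, per machine that shuffle would look something like this (the hostname and paths are hypothetical):

    # copy everything off before reformatting the drives with ZFS
    rsync -aHAX --progress /srv/data/ backuphost:/backup/srv-data/
    # ...reinstall with a BSD, create the zpool, then restore
    rsync -aHAX --progress backuphost:/backup/srv-data/ /srv/data/

Hence the 200% figure: all used capacity has to exist twice while the pool is being rebuilt.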

NO ZFS at these global SSD/HDD market prices.

Sources:

  • Moi
  • man ls(1)
  • man lsd(1)
  • man cp(1)
  • man smartctl(8)
  • man zfs(8)

#HDD #SSD #crash #no #warning #on #TV #filesystems #remote #backup #network #JBOD #SMART #programming #bacula

A year in production with Ceph: how we arrived at a new reference architecture

Hi, Habr! My name is Igor Shishkin; I head cloud platform development and am the SDS architect at Runity. I've previously written about how we chose an SDS (Software Defined Storage), why we settled on Ceph, and about our R&D processes. In this article I'll share what we ran into during a year in production, which cluster design decisions turned out to be mistakes, how that changed our reference architecture, and where we ended up.

https://habr.com/ru/companies/runity/articles/1021222/

#регоблако #ceph #s3 #hsdc #configuration #exhausted #jbod #hba #architecture #cluster

Anyone here with more troubleshooting steps or ideas?

I have a refurbished LSI 9361-8i card and it doesn't detect any drives at all.
I tried an SFF-8643 to SFF-8643 cable to a backplane: nothing.
I tried an SFF-8643 to 4x SFF-8482 cable attached directly to the drives, which spin up on startup but aren't recognized by the controller at all, regardless of whether they are 2013-era or 2020-era SATA drives.
Firmware is the latest, and neither the #RAID nor the #JBOD personality makes a difference.
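
In case it helps anyone suggest next steps, this is roughly what I can query with StorCLI, assuming the card enumerates as controller 0 (a sketch from memory, not verified output):

    storcli64 /c0 show              # controller summary; attached drives should be listed here
    storcli64 /c0 /eall /sall show  # every slot on every enclosure
    storcli64 /c0 show termlog      # controller log; may reveal link-level errors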

#HomeLab #NAS #Proxmox #Storage

All services are back online on the new JBOD, which uses LUKS with BTRFS RAID10 on top. It's purring! #btrfs #jbod #sysadmin #gnulinux #freesoftware #opensource #floss #debian
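
For anyone curious, a minimal sketch of that kind of stack, with made-up device names (four disks assumed):

    # LUKS on each member disk
    for d in sdb sdc sdd sde; do
        cryptsetup luksFormat /dev/$d
        cryptsetup open /dev/$d crypt_$d
    done
    # BTRFS RAID10 for both data and metadata across the opened mappings
    mkfs.btrfs -d raid10 -m raid10 /dev/mapper/crypt_sd{b,c,d,e}
    mount /dev/mapper/crypt_sdb /mnt/pool   # any member mounts the whole filesystem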

I bought a 16 TB #Seagate IronWolf Pro drive and just got done installing it into my Sabrent 5-bay #JBOD.

When I turn it on, it makes a strange sound, and neither Disk Management nor DiskPart at the command prompt can see the drive to initialize or format it.

I don't have to do anything special to a NAS drive running as a regular non-RAID HDD, do I?

The other four bays contain 8 TB Barracudas, and they have all worked perfectly fine.

Is this perhaps a bad drive?

#AskFedi #AskMastodon

Just a Bunch Of Disks? Really? (YouTube)

Soon with 3.26 petabytes: Western Digital packs two more 32 TB drives into its JBOD

For SC25, Western Digital is fitting the Ultrastar Data102 JBOD with 32 TB HDDs, housing a total of around 3.26 petabytes. (ComputerBase)

🌗 Self-hosting 10 TB of S3 storage on a Framework laptop and disks
➤ A second-hand laptop becomes a home cloud storage hub, reliably serving 10 TB of S3 data
https://jamesoclaire.com/2025/10/05/self-hosting-10tb-in-s3-on-a-framework-laptop-disks/
The author shares his experience of self-hosting 10 TB of S3 storage on a used Framework laptop paired with an external "Just a Bunch Of Disks" (JBOD) array. The setup was originally built to give the AppGoblin project a lot of storage at low cost. He converted his old Framework laptop into a home server running ZFS and the garage S3 software. Several months of stable operation, including a few major updates, have demonstrated the setup's reliability. To work around potential problems with ZFS under heavy read/write load on the USB-attached JBOD, he moved the SQLite metadata to the laptop's internal storage, which resolved the performance concerns.
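
To make that metadata split concrete, a hypothetical sketch (pool name, device names, and garage paths are assumptions, not from the article):

    # ZFS pool on the USB-attached JBOD (no redundancy shown; a real pool may use mirrors/raidz)
    zpool create jbodpool /dev/sda /dev/sdb /dev/sdc
    # garage config excerpt: SQLite metadata on the internal drive, object data on the pool
    cat >> /etc/garage.toml <<'EOF'
    metadata_dir = "/var/lib/garage/meta"
    data_dir = "/jbodpool/garage/data"
    EOF
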
#SelfHostedStorage #S3 #ZFS #FrameworkLaptop #JBOD

hey hey #Linux #FileSystem #ZFS #RAID #XFS entities! I'm looking for extremely opinionated discourses on alternatives to ZFS on Linux for slapping together a #JBOD ("Just a Bunch Of Disks", "Just a Buncha Old Disks", "Jesus! Buncha Old Disks!", etc) array.

I like ZFS, but the fact that it's not in-tree in the kernel is an issue for me. What I need most here is reliability and stability (specifically regarding parity); integrity is the need. Reads and writes don't have to be blazingly fast (not that I'd complain if they were).

I also have one #proxmox ZFS array where a raw disk image is stored for a #Qemu #VirtualMachine; in the VM it's formatted as XFS. That "seems" fine in limited testing so far (and seems fast, so the defaults apparently got the striping right), but I kind of hate having multiple levels of abstraction here.

I don't think there's been any change on the #BTRFS front re: RAID-like array stability (I like and use BTRFS for single-disk filesystems), although I would love for that to be different.

I'm open to #LVM, etc., or whatever might help me stay in-tree and up to date. Thank you! Boosts appreciated and welcome.
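
(For concreteness, the sort of in-tree stack I have in mind, with made-up device names; --raidintegrity is the newer LVM option that layers dm-integrity checksums under the RAID, which is as close to ZFS-style integrity as I know how to get in-tree:)

    pvcreate /dev/sd{b,c,d,e}
    vgcreate jbod /dev/sd{b,c,d,e}
    # raid5 across 4 PVs (3 data stripes + parity), with per-image integrity
    lvcreate --type raid5 -i 3 --raidintegrity y -L 4T -n media jbod
    mkfs.xfs /dev/jbod/media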

#techPosting

Well, added a storage tier this evening.

A couple of 4 TB drives I had lying about, as #JBOD (#MergerFS), for those media files that are "replaceable" and don't really need to be on the #ZFS mirrors (~70 & ~16 TB).

I now feed into #Plex from the different tiers using #OverlayFS.
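
Roughly, the plumbing (mount points here are examples, not my actual paths):

    # pool the two JBOD members into one tier
    mergerfs -o defaults,category.create=mfs /mnt/disk1:/mnt/disk2 /mnt/tier2
    # read-only merged view across tiers for Plex (no upperdir = read-only overlay)
    mount -t overlay overlay -o lowerdir=/mnt/zfs-media:/mnt/tier2 /mnt/plex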

Pretty neat. I can move things around without Plex noticing, select the tier in #Sonarr / #Radarr, and if one of those 4 TB drives goes down, it's NBD and I can see what's missing.

More IOPS don't hurt either.