Someone here with more troubleshooting steps/ideas?

I have a refurbished LSI 9361-8i card and it doesn't detect any drives at all.
I tried an SFF-8643 to SFF-8643 cable into a backplane - nothing.
I tried an SFF-8643 to 4× SFF-8482 cable attached directly to the drives - they spin up on startup, but the controller doesn't recognize them at all. It makes no difference whether they are 2013-era or 2020-era SATA drives.
Firmware is the latest, and neither the #RAID nor the #JBOD personality makes a difference.

#HomeLab #NAS #Proxmox #Storage
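For anyone comparing notes, this is roughly the kind of check I'd run with Broadcom's storcli tool (assuming the card enumerates as controller 0; the binary may be named storcli instead of storcli64 on some installs):

```shell
# Assumes the 9361-8i shows up as controller 0 (/c0) and storcli64 is in PATH.
storcli64 /c0 show            # controller summary: firmware, personality, drive/VD counts
storcli64 /c0/eall/sall show  # every enclosure/slot the controller can see
storcli64 /c0 show termlog    # controller terminal log; link-training errors land here

# On Linux, the kernel driver log is also worth a look:
dmesg | grep -i megaraid
```

If /c0/eall/sall comes back empty on both cable types, that points at the controller's PHYs rather than cabling or drives.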

All services are back online on the new JBOD, which runs LUKS with btrfs RAID10 on top. It's purring! #btrfs #jbod #sysadmin #gnulinux #freesoftware #opensource #floss #debian
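In case it's useful to anyone, the shape of that stack is roughly this (a sketch with hypothetical device names and a four-disk layout; adjust to your hardware):

```shell
# Hypothetical four-disk JBOD: one LUKS container per disk, btrfs RAID10 across them.
for d in b c d e; do
  cryptsetup luksFormat "/dev/sd$d"        # prompts for confirmation and a passphrase
  cryptsetup open "/dev/sd$d" "crypt-sd$d"
done

# RAID10 for both data and metadata needs at least four devices.
mkfs.btrfs -L jbod -m raid10 -d raid10 /dev/mapper/crypt-sd{b,c,d,e}

# Mounting any one member mounts the whole multi-device filesystem.
mount /dev/mapper/crypt-sdb /mnt/jbod
btrfs filesystem usage /mnt/jbod
```

Encrypting below btrfs keeps the checksumming and self-healing on the plaintext side, so scrubs still work normally.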

I bought a 16 TB #Seagate IronWolf Pro drive and just finished installing it in my Sabrent 5-bay #JBOD.

When I turn it on, it makes a strange sound, and neither Disk Management nor DiskPart at the command prompt can see the drive to initialize or format it.

I don't have to do anything special to run a NAS drive as a regular non-RAID HDD, do I?

My other four bays hold 8 TB Barracudas, and those have all worked perfectly fine.

Is this perhaps a bad drive?

#AskFedi #AskMastodon

Just a Bunch Of Disks? Really?

Soon with 3.26 petabytes: Western Digital packs two more 32 TB drives into its JBOD

For SC25, Western Digital is fitting its Ultrastar Data102 JBOD with 32 TB HDDs, bringing the total to around 3.26 petabytes.

ComputerBase
🌗 Self-hosting 10 TB of S3 storage on a Framework laptop and disks
➤ A second-hand laptop becomes a home cloud storage hub, reliably serving 10 TB of S3 data
https://jamesoclaire.com/2025/10/05/self-hosting-10tb-in-s3-on-a-framework-laptop-disks/
The author shares his experience self-hosting 10 TB of S3 storage on a second-hand Framework laptop paired with an external "Just a Bunch of Disks" (JBOD) enclosure. The setup was originally built to cheaply cover the AppGoblin project's need for a lot of storage. He converted his old Framework laptop into a home server running ZFS and the garage S3 software, and several months of stable operation, including a few major upgrades, have demonstrated the approach's reliability. He also notes that, to work around ZFS problems under heavy read/write load on the USB-attached JBOD, he moved the SQLite metadata to the laptop's internal storage, which resolved the performance concerns.
#SelfHostedStorage #S3 #ZFS #FrameworkLaptop #JBOD
Self hosting 10TB in S3 on a framework laptop + disks – James O'Claire

hey hey #Linux #FileSystem #ZFS #RAID #XFS entities! I'm looking for extremely opinionated discourses on alternatives to ZFS on Linux for slapping together a #JBOD ("Just a Bunch Of Disks", "Just a Buncha Old Disks", "Jesus! Buncha Old Disks!", etc) array.

I like ZFS, but the fact that it's not in-tree in the kernel is an issue for me. What I need most here is reliability and stability (specifically around parity); integrity is the core need. Reads and writes don't have to be blazingly fast (not that I'd complain if they were).

I also have one #proxmox ZFS array where a raw disk image is stored for a #Qemu #VirtualMachine; inside the VM, it's formatted as XFS. That "seems" fine in limited testing so far (and seems fast, so the defaults apparently got the striping right), but I kind of hate having multiple levels of abstraction here.

I don't think there's been any change on the #BTRFS front re: RAID-like array stability (I like and use BTRFS for single-disk filesystems), though I would love for that to be different.

I'm open to #LVM, etc., or whatever might help me stay in-tree and up to date. Thank you! Boosts appreciated and welcome.
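To make the question concrete: the fully in-tree stack I keep circling back to is md RAID with a plain filesystem on top. A minimal sketch with hypothetical device names (and a known caveat below):

```shell
# Hypothetical: four disks into an in-tree md RAID6, XFS on top.
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd{b,c,d,e}
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/array

# Persist the array config so it assembles on boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Kick off a parity scrub; distros usually schedule this via cron/systemd.
echo check > /sys/block/md0/md/sync_action
```

The caveat versus ZFS: md's parity scrub can detect mismatches but has no data checksums, so it can't tell which copy is the correct one.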

#techPosting
Shouting in the Datacenter (YouTube)

Well, added a storage tier this evening.

A couple of 4 TB drives I had lying around, as #JBOD (#MergerFS), for those media files which are "replaceable" and don't really need to be on the #ZFS mirrors (~70 & ~16 TB).

I now feed into #Plex from the different tiers using #OverlayFS.

Pretty neat. I can move things around without Plex noticing, select the tier in #Sonarr / #Radarr, and if one of those 4 TB drives goes down, NBD - I can see what's missing.

More IOPS don't hurt either.
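For the curious, the mount layering is roughly this (paths are hypothetical):

```shell
# Pool the two 4 TB disks into one JBOD tier with mergerfs.
# category.create=mfs sends new files to whichever branch has the most free space.
mergerfs -o cache.files=off,category.create=mfs \
    /mnt/disk1:/mnt/disk2 /srv/tier-jbod

# Union the tiers for Plex: an overlay with only lowerdirs (no upperdir)
# mounts read-only, which is exactly what a media library view needs.
mount -t overlay overlay \
    -o lowerdir=/srv/tier-jbod:/tank/media \
    /srv/plex-library
```

Plex only ever sees /srv/plex-library, so files can migrate between tiers underneath it without the library paths changing.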

One of my server nodes is a sort of "leftover consumer parts in a 4U chassis" deal. It's never been especially good, but it got 24 HDDs online in a pinch to make use of random drives lying around as a #JBOD. This week it started misbehaving more than usual thanks to a bad software bug that consumed its meager 16G of RAM like it was nothing, so it's been drained ever since.

I ordered a #BLIKVM to be able to rescue it while traveling, since it also likes to hard-lock. Today was the day to install it, but it turns out I'm out of PCI-E slots, now suddenly needing a fifth. Long story short, I lost my last shreds of give-a-duck about rescuing it, and now there's a bunch of eBay orders incoming: a Supermicro E-ATX mobo, a couple of Xeon Silvers, and 128G of RAM, plus the bits and bobs to hook it all up. (1/2)