I'm going to admit that I am doing something immature in my #homelab and I'm looking for opinions. I've got multiple #XCPng hosts, all using local storage. I have no NFS or iSCSI storage. That's kinda silly. Shared storage is super useful and I'm literally not using it.

Unless I go to some serious effort to build a high-performance SAN, I expect network storage to deliver so-so performance for VM disks, but maybe I'm too pessimistic. I currently only have copper gigabit in the rack: no fiber, no 2.5G copper, nothing like that. I'm not sure that's going to be viable for NFS or iSCSI.
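For what it's worth, the gigabit ceiling is easy to sketch. This is back-of-envelope arithmetic, not a benchmark, and the ~10% protocol overhead figure is my assumption:

```python
# Rough ceiling for VM disk traffic over a single 1 GbE link.
LINK_GBPS = 1.0          # copper gigabit
PROTO_OVERHEAD = 0.10    # assumed loss to TCP/IP + NFS/iSCSI framing

raw_mbytes = LINK_GBPS * 1000 / 8            # 125 MB/s theoretical line rate
usable_mbytes = raw_mbytes * (1 - PROTO_OVERHEAD)

print(f"theoretical: {raw_mbytes:.0f} MB/s")   # 125 MB/s
print(f"usable-ish:  {usable_mbytes:.1f} MB/s")  # ~112 MB/s, roughly one SATA spinning disk
```

So sequential throughput lands around what one spinning disk can do, shared across every VM on the link; random IOPS and latency are usually the bigger pain point.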

I could dedicate a host to running TrueNAS Core with a bunch of storage. But what has always bugged me about this is that my storage host becomes a single point of failure for all the compute nodes. #TrueNAS is super reliable, but everything has to reboot once in a while, and these stupid enterprise-grade servers take anywhere from 4 to 8 minutes to boot. If I had a single storage node and needed to reboot it for an OS upgrade, everything would hang for a while. That's no good. Skipping OS updates on the storage system is also no good.

So what am I supposed to be doing for shared storage on a #Xen cluster? How do I avoid a storage host becoming a single point of failure? How do you update and reboot a storage node, without disrupting everything that depends on it?

#selfhosting #san #storage

@paco
@kiraso

I want to see what smarter people are doing too. I was planning to set up iSCSI and NFS on my Synology this weekend, but just for data, not VM disks, so I'm less worried about speed.

In the end, it's just a #homelab and I'm sure it'll be performant enough to support two users.

@unixorn For me, same as for @paco, the single point of failure is a concern. And my #homelab is partially a #homeprod, hosting services and data that my family uses daily. Performance is less of a concern.