What are people using for storage on their resource constrained home #kubernetes clusters (lab)?
Currently using Longhorn for ease of use, but I'd like something lighter and even simpler. Need all nodes to access shared data. NFS could work, but I've heard bad things. Don't see a lot of good info on S3-compatible options. I imagine SMB would suck. I'm sure I'll stay where I am. But lazy google I guess. (Even though I have been) #k8s #storage #homelab
@bashfulrobot I actually use SMB. It does suck, but not quite as bad as you’d think.
@bashfulrobot I don’t have anything running that requires fast I/O though.
@Marcus @bashfulrobot SMB only sucks if you start to get fancy with authentication and encryption. The basics are easy.
@Marcus Curious. Why did you pick it?
@bashfulrobot I already had a NAS set up with a huge amount of storage. I didn’t want to have to get more storage, as my cluster is just made up of a bunch of old Mac Minis. The only thing I had to change was Minio. I moved that off my cluster to a dedicated VM with its own storage, as I was seeing too much of a bottleneck.
@Marcus Minio is a backup target I assume?
@bashfulrobot I use it as the long term storage for my Loki logs. That’s all.

@bashfulrobot Check into SSHFS to see if that’ll fit your use case.

You can mount it on your remote nodes as such:

sshfs -o Ciphers=aes128-ctr -o Compression=yes -o ServerAliveCountMax=2 -o ServerAliveInterval=15 remoteuser@server:/data/ /media/mountpoint

Just mess with the compression option and server alive numbers to fit your CPU usage and network.
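To make a mount like that survive reboots, it can also go in /etc/fstab with the `fuse.sshfs` type — a sketch using the same paths and options as the command above (the identity file path is a placeholder; `_netdev` delays the mount until the network is up, and `allow_other` lets non-root processes see it):

```
# Hypothetical /etc/fstab entry for a persistent SSHFS mount
remoteuser@server:/data/  /media/mountpoint  fuse.sshfs  _netdev,allow_other,IdentityFile=/root/.ssh/id_ed25519,Ciphers=aes128-ctr,Compression=yes,ServerAliveCountMax=2,ServerAliveInterval=15  0  0
```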

@vertana Huh. I never even considered that. Then I assume you just mount into your pods at the host level. Any corruption issues, etc? Gotchas?

@bashfulrobot Not that I am aware of. Only thing I can think of is your backing media and backups. Since it’s encrypted, I recommend block-by-block backups (like dd).
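A minimal sketch of the block-by-block idea with dd — the device path is a placeholder (check yours with `lsblk` first); the runnable part just demonstrates the same byte-for-byte copy on a regular file:

```shell
# Block-by-block image of a backing device (placeholder device name):
#   dd if=/dev/sdX of=/mnt/backup/disk.img bs=4M status=progress conv=fsync

# The same idea demonstrated on a plain file:
dd if=/dev/urandom of=/tmp/source.img bs=1M count=4 2>/dev/null
dd if=/tmp/source.img of=/tmp/backup.img bs=4M 2>/dev/null

# Verify the copy is byte-for-byte identical:
cmp /tmp/source.img /tmp/backup.img && echo "images identical"
```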

But I should note I’ve also never tried it for your specific purposes. I have used it for plenty of file sharing purposes though in professional settings including Active Directory setups with Linux clients. Never had any special issues with this one unlike SMB and NFS which required very specific configs.
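Consuming a host-level SSHFS mount from a pod could look something like this — a hypothetical sketch, assuming the mountpoint from the sshfs example above exists on every node the pod can be scheduled to (names and image are placeholders):

```yaml
# Hypothetical pod exposing the host's SSHFS mount via hostPath
apiVersion: v1
kind: Pod
metadata:
  name: sshfs-consumer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "ls /data && sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      hostPath:
        path: /media/mountpoint   # must be mounted on the node beforehand
        type: Directory
```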

@bashfulrobot I'm currently using longhorn, but I did consider using my old Synology NAS as an iSCSI target.
@bashfulrobot I haven't been using glusterfs for K8S, but I've been using it for my Proxmox cluster and it works pretty well. Like another respondent, I don't have high IOPS requirements though!

@bashfulrobot I've used NFS on a GKE cluster I was playing with. It's okay when it works, but when it doesn't, things go wrong very quickly.

Using s3fs would be my choice if I decided to revisit this. Plus most providers have their own S3-compatible services you can plug into this (S3 for AWS, GCS for GCP, etc.)
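A rough sketch of mounting a bucket with s3fs-fuse — bucket name, mountpoint, and endpoint are all placeholders, and only the credentials-file setup below is actually executed; the mounts are shown commented out since they need a reachable S3 service:

```shell
# s3fs reads credentials from a passwd file that must be mode 0600:
printf 'ACCESS_KEY_ID:SECRET_ACCESS_KEY\n' > /tmp/passwd-s3fs
chmod 600 /tmp/passwd-s3fs

# Mount a bucket (placeholders; requires s3fs-fuse installed):
#   s3fs my-bucket /mnt/s3 -o passwd_file=/tmp/passwd-s3fs
# Against a non-AWS S3-compatible endpoint (e.g. a local MinIO):
#   s3fs my-bucket /mnt/s3 -o passwd_file=/tmp/passwd-s3fs \
#       -o url=https://minio.example.local:9000 -o use_path_request_style
```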

@blenderfox I haven’t looked at FUSE lately. I wonder what the performance would be like. I came across an s3proxy (to the file system) that I could run on a VM to provide an S3 target. What would drive you this way?

@bashfulrobot performance is variable. If you're running s3fs from your homelab to AWS/GCP/Azure then everything gets transferred to and from the cloud, leading to slight lag (e.g. mounting then doing an ls will result in a noticeable delay before you get the listing back)

Also note that you may be charged by the provider for the transfer.

As for s3proxy, I assume it's more or less the same as s3fs but without mounting a folder? (I've not used that so I don't know too much about it)