I've been putting off some necessary maintenance and overall system streamlining on my home server for a while. Everything works, and services are secure and up to date, but I've got a bit of a messy setup that mixes #podman with #docker containers, #tailscale with #tsdproxy. I set this all up before I had my own domain, hence tsdproxy.

Now that I have my own domain, I want to refactor my server using #netbird with #caddy and #pocketid.

It's a little daunting, but I'm going to take the plunge.

@pfr may i ask what your reasons for your setup are?
@cyntheon I started my debian homelab with #jellyfin (running as a system service) combined with #tailscale. Then I added #immich and initially opted for #podman. Later I added #nextcloud, which necessitated HTTPS/TLS certs, hence #tsdproxy. I also had issues running nextcloud with podman, and that's when I decided to just go with docker. I'm also running #snikket in a docker container. A fairly modest home server, but it handles the essentials for me and my family.
@pfr ty!! i am just learning about homelabbing and i like hearing about what ppl r actually doing with it
@cyntheon the only advice I'd give someone starting out is to really consider the possibility of scaling up in the future. Even if you only want to start with one or two services, structure your stack in a way that lets you scale up easily. This will save you a lot of headaches down the road.
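For example, one common structure (a sketch only; the service names and paths here are just illustrative) is one self-contained directory per service, each with its own compose file and data directory, so any single service can later be moved to another host or pointed at new storage without touching its neighbours:

```shell
# Hypothetical layout: one directory per service.
# Moving or rebuilding a service only ever touches its own directory.
mkdir -p "$HOME/stack/jellyfin/data" "$HOME/stack/immich/data" "$HOME/stack/nextcloud/data"
touch "$HOME/stack/jellyfin/compose.yaml" \
      "$HOME/stack/immich/compose.yaml" \
      "$HOME/stack/nextcloud/compose.yaml"
ls "$HOME/stack"
```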

@pfr @cyntheon As someone who has torn out several homelabs over the years (aka advice granted through rack nut blood, 3am something not working sweat, and migration tears), I want to emphasize that the scaling is not just up, but out. Even if you run it on the same machine initially, separating your storage and compute will do wonders for your sanity later. Keep your compute as slim as you can to start, and only add more resources as necessary. If it needs storage, attach it over the network - even if that means it’s just using the loopback right now. When you need to add more storage, your compute is designed to be easily pointed at the new target. When you need to add more compute resources or tasks, your storage is designed to be unbothered and uninterrupted by the new demands.
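To sketch what I mean (Docker's local volume driver over NFS here; the export path and names are made up), the service only ever sees a network address for its storage, even when that address is the loopback:

```yaml
# Sketch only: a compose volume mounted over NFS.
# Today addr is the loopback on the same box; later it can point
# at a NAS without the service definition changing at all.
volumes:
  nextcloud-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=127.0.0.1,rw,nfsvers=4"
      device: ":/srv/tank/nextcloud"
```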

I still run compute and storage on the same machine, but I’ve gone:

* One independent system

* Two independent systems

* Three independent systems

* Two independent systems, plus a cluster of four servers

* Just the cluster of four servers (I finally moved all of my storage to the cluster so I could shut down the power hogs)

Migrating to the cluster took by far the longest, because I had to tear out and rebuild everything I had never planned to scale in any way. Nextcloud was running on a VM with a 2TB boot disk that took up too much space 24/7, yet would still have run out of room if I had ever actually filled it.

Tl;dr: If you think you will ever want to do more, plan on it from the start. That’s my advice.

@ClickyMcTicker @cyntheon I was hoping someone with more experience would chime in, thank you.

The tl;dr advice is essentially the same, though I appreciate hearing your journey and I can see myself wanting to upgrade in the near future.

I'm currently on one system: an OptiPlex 7040 with a bifurcated NVMe PCIe caddy holding two 8TB drives, plus a single 16TB 3.5" backup drive.

I know I should store additional backups elsewhere, but for now, that's more storage than I'll need for years to come.