What's your "base" stack of choice?

https://lemmy.blahaj.zone/post/169169

How do you set up a server? Do you do any automation or do you just open up an SSH session and YOLO? Any containers? Is docker-compose enough for you or are you one of those unicorns who had no issues whatsoever with rootless Podman? Do you use any premade scripts or do you hand craft it all? What distro are you building on top of? I’m currently in process of “building” my own server and I’m kinda wondering how “far” most people are going, where do y’all take any shortcuts, and what do you spend effort getting just right.

Proxmox, then create an LXC for everything (mostly Debian and a bit of Alpine), no automation, full YOLO; if it breaks I have backups (problems are for future me, eh)
I used to do the same, but nowadays I just run everything in Docker, within a single LXC container on Proxmox. Having to set up Mono or similar every time I wanted to set up a game server or even Jellyfin was annoying.

This.
Proxmox and then LXCs for anything I need.

and yes - I cheat a bit, I use the excellent Proxmox scripts - https://tteck.github.io/Proxmox/ because I'm lazy like that haha

Mostly the same. Proxmox with several LXCs, two of which are running Docker. One for my multimedia, the other for my game servers.
Ansible and docker compose.
i do this, mixed with a little docker run inside of Makefiles. i store all my ansible playbooks in a repo, along with other repos for different projects and purposes. i store all of those in git repos that i clone via ssh from a server that acts as a NAS backed by zfs.
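For anyone wanting a concrete picture of that kind of setup, here's a minimal sketch of such a playbook. The host group, package names, and repo URL are placeholders, not taken from the comment above:

```shell
# Hypothetical minimal Ansible playbook for a Docker host; "homelab" and the
# NAS repo path are made-up names. Normally applied with:
#   ansible-playbook -i inventory /tmp/site.yml
cat > /tmp/site.yml <<'EOF'
- hosts: homelab
  become: true
  tasks:
    - name: Install Docker and git
      apt:
        name: [docker.io, docker-compose, git]
        state: present
    - name: Clone the service repo from the NAS over SSH
      git:
        repo: ssh://nas/srv/git/services.git
        dest: /opt/services
EOF
echo "playbook written to /tmp/site.yml"
```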
Fedora Server with Podman and Quadlet on btrfs drives. Although I must admit I often use rootful mode in Podman as it works better with containers made for Docker. Ah, and you might want to turn off SELinux in the beginning, as it can get frustrating fast.
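For the curious: a Quadlet unit is just an INI file that Podman turns into a systemd service. A hypothetical example (image, port, and volume are made up; Jellyfin is borrowed as an example from elsewhere in the thread) — on a real host it would go in /etc/containers/systemd/ (rootful) or ~/.config/containers/systemd/ (rootless), followed by a daemon-reload:

```shell
# Write a hypothetical Quadlet .container unit to /tmp for illustration.
mkdir -p /tmp/quadlet-demo
cat > /tmp/quadlet-demo/jellyfin.container <<'EOF'
[Unit]
Description=Jellyfin media server

[Container]
Image=docker.io/jellyfin/jellyfin:latest
PublishPort=8096:8096
Volume=/srv/media:/media:Z

[Install]
WantedBy=default.target
EOF
echo "unit written"
```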
Debian and docker compose
I'm a lazy piece of shit and containers give me cancer, so I just keep iptables aggressive and spin up whatever on an Ubuntu box that gets upgrades when I feel like wasting a weekend in my underwear.
An honest soul

I get paid to do shit with rigor; I don't have the time, energy, or help to make something classy for funsies. I'm also kind of a grumpy old man, such that while I'll praise and embrace Python's addition of f-strings, which make life better in myriad ways, I eschew the worse laziness of the all-the-containers attitude that we see for deployment.

Maybe a day shall come when containers are truly less of a headache than just thinking shit through the first time, and I'll begrudgingly adapt and grow, but that day ain't today.

I use Debian VMs and create rootless Podman containers for everything. Here's my collection so far.

I'm currently in the process of learning how to combine this with ansible... that would save me some time when migrating servers/instances.

GitHub - eycer1995/containers: Container collection

Thanks for sharing. There’s some great stuff in the repo.
I have a base Debian template with a few tweaks I like for all my machines. Debating setting up something like Terraform, but I just don't spin up VMs frequently enough to want to do that. I do have a few Ansible playbooks I run on a fresh server to really get it to where I want, though.

Xen on Gentoo with Gentoo VMs. I've scripted the provisioning in bash; it's fairly straightforward: create the LVM volume, extract the latest root, tell Xen which kernel to boot.

Ideally would like to netboot a readonly root off nfs and apply config from some source. Probably bash :D
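The provisioning flow above can be sketched as a dry run — `run()` only records the commands instead of executing them, and the volume group, sizes, and paths are all hypothetical:

```shell
# Dry-run sketch of a Xen/LVM provisioning flow. run() logs instead of
# executing, so this is safe anywhere. All names and sizes are made up.
run() { echo "+ $*" >> /tmp/xen-provision.log; }
: > /tmp/xen-provision.log

VM=gentoo-vm1
run lvcreate -L 20G -n "$VM-root" vg0                  # create lvm volume
run tar xpf /srv/stage/latest-root.tar.xz -C "/mnt/$VM" # extract latest root
run xl create "/etc/xen/$VM.cfg"                        # the .cfg names the kernel to boot
cat /tmp/xen-provision.log
```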

Some things like opnsense are much more handcrafted because they're a kind of unicorn compared to the rest of the stuff.

That’s impressive effort for a home lab.

Hi jago,

Sorry for the delayed response. I do also see the inconsistency between looking at your post directly vs lemmy.ml. I have noticed, however, that every now and then lemmy.ml throws a bad gateway error, which would imply it's getting overloaded again. That might create situations where lemmy.ml has all comments marked as federated, while some of them were actually dropped mid-transit. The same applies to lemmy.one.

I don't know of any workarounds for that, unfortunately. Feels a lot like a bug.

In regards to subscriptions - you're right, the pending state does seem to actually impact federation. Some of my subscriptions to Beehaw have been pending since day one, but I can see the content just fine. I've written this off as another bug in the software.

I had a look at your profile - I can definitely see the posts you've created as well as the comments. I've noticed some UI bits fail to get refreshed - things like notification status, etc. I found forcing a page refresh helps with that.

Sorry I couldn't be of more help.

I run Unraid on my server box with a few 8TB HDDs and an NVMe for cache. From there it is really easy to spin up Docker containers or stacks using compose, as well as VMs using your ISO of choice.

For automation, I use Ansible for one-click setup of machines; it is great for any cloud-provider work too.

I use Unraid and their Docker and VM integration. Works great for me as a home user with mixed drives. Most of the containers I want already have Unraid templates, so they require less configuration. Does everything I want, and the mixed-drive support made it a bit easier for me.

I love unraid! Definitely wait between updates though to let them stabilize.
Probably the odd one out here with Arch Linux + docker compose, with still a lot of manual labor.
Updating it after at most 4 weeks is enough; containers more often.
Debian + docker-compose
I'm currently Ubuntu, but if I were to start from scratch this would be it, simple, basic, does everything I need.

I use the following procedure with ansible.

  • Set up the server with the things I need for k3s to run
  • Set up k3s
  • Bootstrap and create all my services on k3s via ArgoCD
People like to diss running Kubernetes on your personal servers, but once you have enough services running, managing them with docker compose no longer cuts it and Kubernetes is the next logical step. Tools such as k9s make navigating a Kubernetes cluster a breeze.
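The three steps above can be sketched as a dry run — `run()` only logs the commands, the bootstrap manifest name is hypothetical, and the install URLs are the upstream ones:

```shell
# Dry-run sketch of a k3s + ArgoCD bootstrap; nothing is actually installed.
run() { echo "+ $*" >> /tmp/k3s-bootstrap.log; }
: > /tmp/k3s-bootstrap.log

# 1. Install k3s on the prepared server
run sh -c "curl -sfL https://get.k3s.io | sh -"
# 2. Install ArgoCD into the cluster
run kubectl create namespace argocd
run kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# 3. Point ArgoCD at the Git repo holding the service manifests (hypothetical file)
run kubectl apply -f bootstrap-app.yaml
cat /tmp/k3s-bootstrap.log
```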
    I'm doing basically the same thing with microk8s and flux. I'd probably switch to argo if it wasn't already working.

    Proxmox and shell scripts. I have everything automated from base install to updates.

    All the VMs are Debian, which install with a custom seed file. Each VM has a config script that will completely set up all users, iptables, software, mounts, etc. SSL certs are updated on one machine with acme.sh and then pushed out as necessary.
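The cert-push step might look something like this dry-run sketch — hostnames and paths are hypothetical, and `run()` only logs the commands:

```shell
# Sketch of issuing a cert with acme.sh on one box and pushing it out.
run() { echo "+ $*" >> /tmp/certs.log; }
: > /tmp/certs.log

run acme.sh --issue -d example.home.arpa -w /var/www/html
for host in vm1 vm2 vm3; do
  run scp "$HOME/.acme.sh/example.home.arpa/fullchain.cer" "root@$host:/etc/ssl/"
  run ssh "root@$host" systemctl reload nginx
done
cat /tmp/certs.log
```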

    One of these days I’ll get into docker but half the fun is making it all work. I need some time to properly set it up and learn how to configure it securely.

    @ShittyKopper Arch Linux with sandboxed systemd units for most purposes. I should really set up podman to be rootless, but for now I can still enjoy running containers as systemd services, albeit unsandboxed on the systemd level
    systemd.exec(5)
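A hypothetical example of what those systemd-level sandboxing knobs look like — the service name and binary path are placeholders, and systemd.exec(5) documents the full set:

```shell
# Write an example sandboxed unit to /tmp for illustration; on a real host it
# would live in /etc/systemd/system/ followed by a daemon-reload.
mkdir -p /tmp/units
cat > /tmp/units/myapp.service <<'EOF'
[Unit]
Description=Sandboxed example service

[Service]
ExecStart=/usr/local/bin/myapp
DynamicUser=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=yes

[Install]
WantedBy=multi-user.target
EOF
echo "unit written"
```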

    After many years of tinkering, I finally gave in and converted my whole stack over to UnRAID a few years ago. You know what? It's awesome, and I wish I had done it sooner. It automates so many of the more tedious aspects of home server management. I work in IT, so for me it's less about scratching the itch and more about having competent hosting of services I consider mission-critical. UnRAID lets me do that easily and effectively.

    Most of my fun stuff is controlled through Docker and VMs via UnRAID, and I have a secondary external Linux server which handles some tasks I don't want to saddle UnRAID with (pfSense, ad-blocking, etc.). The UnRAID server itself has 128GB RAM and dual Xeon CPUs, so plenty of go for my home projects. I'm at 12TB right now but I was just on Amazon eyeing some 8TB drives...

    I've set up some godforsaken combination of docker, podman, nerdctl and bare metal at work for stuff I needed since they hired me. Every day I'm in constant dread something I made will go down, because I don't have enough time to figure out how I was supposed to do it right T.T
    A bunch of old laptops running Ubuntu Server and docker-compose. Laptops are great: built-in screen, keyboard, and UPS (battery), and more than capable of handling the kind of light workloads I run.
    I set up my bare-metal boxes and VMs with Ansible. Then I use Ansible to provision Docker containers on those.
    Synology with docker-compose stack
    I have a single desktop running Proxmox with a TrueNAS VM for handling my data and a Debian VM for my Docker containers which accesses the NAS data through NFS.
    I just have a pi 4 running OpenMediaVault with docker and portainer. 😅
    Right now, I just flash Ubuntu Server to whatever computer it is, SSH in and YOLO lmao. No containers, no managers, just me, my servers, and a VPN, raw dogging the internet lmao. The box is running a NAS, Jellyfin, Lemmy, and a print server; the laptop a Minecraft server; and the Pi is running a Pi-hole and a website that controls GPIO that controls the lights. In the pictured setup I don't have access to the apartment complex's router, so I VPN through an OpenVPN server I set up on a DigitalOcean server.
    I use SSH to manage docker compose. I'm just using a raspberry pi right now so I don't have room for much more than Syncthing and Dokuwiki.
    Don't underestimate a pi! If you have a 3 or up, it can easily handle a few more things.
    I forgot to mention I also have a samba share running on it and it's sooooooo sloooooow. I might need to reflash the thing just to cover my bases but it's unusable for large or many files.
    SQLite where possible, nginx, Linux, no containers. I hate containers.

    I’m somewhere in between. I hated containers for a long time but now work a lot with Kubernetes for work.

    For my personal projects I've always hated containers a lot. Once I started learning how to build them, and build them well, however, I really started enjoying it.

    Using others’ containers is always hit or miss because a lot of them are WAY bloated. I especially hate all the docker-compose files that come with some database included as if I’m dying to run a ton of containerized database servers. Usually the underlying software supports the Postgres I run on the host itself.

    I usually set up SSH keys and disable password login.

    Then I git-pull my base docker-compose stack that sets up:

    • Nginx proxy manager
    • Portainer
    • Frontend and backend networks

    I have a handful of other docker-compose files that hook into that setup to make it easy to quickly deploy various services wherever in a modular way.
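A sketch of those first steps — the sshd_config edits here operate on a temp copy so the snippet is safe to run anywhere, and the repo URL in the dry-run part is a placeholder:

```shell
# Harden SSH (on a demo copy of sshd_config), then dry-run pulling the stack.
mkdir -p /tmp/demo
printf '%s\n' 'PasswordAuthentication yes' 'PermitRootLogin yes' > /tmp/demo/sshd_config

# On a real host this would edit /etc/ssh/sshd_config and restart sshd
sed -i 's/^PasswordAuthentication yes/PasswordAuthentication no/' /tmp/demo/sshd_config
sed -i 's/^PermitRootLogin yes/PermitRootLogin prohibit-password/' /tmp/demo/sshd_config

# Then (dry run): clone the base compose stack and bring it up
echo "+ git clone ssh://git.example/base-stack.git /opt/base-stack"
echo "+ docker compose -f /opt/base-stack/docker-compose.yml up -d"
```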

    @ShittyKopper I've been using and contributing to #sandstorm for several years: It doesn't just containerize apps, it containerizes individual documents, and only runs them on demand. Installing apps is a single click and they cost *no* resources when I'm not actively using them.

    About two years ago my setup had gotten out of control, as it will. A closet full of crap, all running VMs, all poorly managed by Chef. Different Linux flavors everywhere.

    Now it's one big physical Ubuntu box. Everything gets its own Ubuntu VM. These days if I can't do it in shell scripts and XML I'm annoyed. Anything fancier than that, I'd better be getting paid. I document in Markdown as I go and rsync the important stuff from each VM to an external drive every night. If something goes wrong I just burn the VM, copy-paste it back together in a new one from the mkdocs site, then get on with my day.

    For personal Linux servers, I tend to run Debian or Ubuntu, with a pretty simple "base" setup that I just run through manually in my head.

    • Set up my personal account.
    • Upload my SSH keys.
    • Configure the hostname (usually named after something in Star Trek 🖖).
    • Configure the /etc/hosts file.
    • Make sure it is fully patched.
    • Set up ZeroTier.
    • Set up Telegraf to ship some metrics.
    • Reboot.

    I don't automate any of this because I don't see a whole lot of point in doing it.
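For anyone who did want to script it, the checklist above might look like this dry run — `run()` only logs the commands, and the hostname and ZeroTier network ID are made up:

```shell
# Dry-run sketch of the base-setup checklist; nothing is actually executed.
run() { echo "+ $*" >> /tmp/base-setup.log; }
: > /tmp/base-setup.log

run hostnamectl set-hostname voyager
run sh -c "echo '127.0.1.1 voyager' >> /etc/hosts"
run apt update
run apt -y full-upgrade
run zerotier-cli join 0123456789abcdef
run systemctl enable --now telegraf
run reboot
cat /tmp/base-setup.log
```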

    Super interesting to me that you swap between Debian and Ubuntu. Is there any rhyme or reason to why you use one over the other?

    I tend to prefer installing Debian on a server, but recently I did install Ubuntu's recent LTS on a box because I was running into an issue with the latest version of Debian. I didn't want to revert to an earlier version of Debian or spend a bunch of time figuring out the problem I was having with Python, so I opted to use Ubuntu, which worked.

    Ubuntu is based on Debian, so it's like using the same operating system, as far as I'm concerned.

    I'd like to use rootless podman, but since I include zerotier in my containers, they need access to the tunnel device and net_admin, so rootless isn't an option right now.

    Podman-compose works for me. I'd like to learn how to use Ansible and Kubernetes, but right now, it's just my Lemmy VPS and my Raspberry Pi 4, so I don't have much need for automation at the moment. Maybe some day.

    You can add net_admin to the user running podman. I have added it to the ambient capability mask before, which acts like an inherited override for everything the user runs.
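If memory serves, that's done through libcap's pam_cap module; a sketch (the username is a placeholder, and the files are written to /tmp rather than /etc):

```shell
# Sketch of granting net_admin to a user via pam_cap. In newer libcap a '^'
# prefix in capability.conf raises the capability into the ambient set.
mkdir -p /tmp/pam-demo

# As it might appear in /etc/security/capability.conf
echo '^cap_net_admin alice' > /tmp/pam-demo/capability.conf

# And the matching pam_cap line, e.g. in /etc/pam.d/login or common-auth
echo 'auth optional pam_cap.so' > /tmp/pam-demo/pam-snippet
echo "demo files written"
```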

    Cloud VPS with Debian. Then fix/update whatever weird or outdated image my VPS provider gave me (over SSH). Then set up SSH certs instead of passwords. I use tmux a lot. Sometimes I have local scripts with scp to move some files around.

    Usually I'm just hosting Mosquitto, maybe an Apache web server and WordPress or Flask. The latter two are only for development and get moved to other servers when done.

    I don't usually use containers.

    I'm better at hardware development than all this newfangled web stuff, so mostly just give me a command line without abstractions and I'm happy.

    I use Proxmox, then stare at the dashboard realizing I have no practical use for a home lab
    So I'm not alone. I am trying to better myself.
    Usually Debian as a base, then Ansible to set up OpenSSH for access. For the longest time I just ran docker-compose straight on bare metal; these days, though, I prefer k3s.

    I use NixOS on almost all my servers, with declarative configuration. I can also install my config in one command with NixOS-Anywhere

    It allows me to improve my setup bit by bit without having to keep track of what I did on specific machines
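The one-command install mentioned above might look like this dry run — `run()` only logs the command, and the flake attribute and target IP are hypothetical:

```shell
# Dry-run sketch of installing a NixOS config on a remote machine over SSH
# with nixos-anywhere; nothing is actually executed.
run() { echo "+ $*" >> /tmp/nixos.log; }
: > /tmp/nixos.log

run nix run github:numtide/nixos-anywhere -- --flake .#myserver root@192.0.2.10
cat /tmp/nixos.log
```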

    GitHub - numtide/nixos-anywhere: install nixos everywhere via ssh

    Up until now I've been using Docker and mostly manually configuring by dumping docker-compose files in /opt/whatever and calling it a day. Portainer is running, but I mainly use it for monitoring and occasional admin tasks. Yesterday, though, I spun up machine number 3 and I'm strongly considering setting up something better for provisioning/config. After it's all set up right it's never been a big problem, but there are a couple of bits of initial setup that are a bit of a pain (mostly hooking up WireGuard, which I use as a tunnel for remote admin and off-site reverse proxying).

    Salt is probably the strongest contender for me, though that's just because I've got a bit of experience with it.