Those who are hosting on bare metal: What is stopping you from using Containers or VM's? What are you self hosting?

https://lemmy.world/post/36414259


Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

Are you concerned about your self-hosted bare metal machine being a single point of failure? Or, are you concerned it will be difficult to reproduce?
Considering I have a full backup, all services are Arch packages and all important data is on its own drive, I’m not concerned about anything
I use a Raspberry Pi 4 with a 16GB SD card. I simply don’t have enough memory and CPU power for 15 separate database containers for every service which I want to use.
So, are you running 15 services on the Pi 4 without containers?

The list of what I run on my RPi:

Some of them run in containers, some run on bare metal.

GitHub - pucherot/Pi.Alert: WIFI / LAN intruder detector. Check the devices connected and alert you with unknown devices. It also warns of the disconnection of "always connected" devices

I see. Are you the only user?

Databases on SD cards are a nightmare for card lifetime. I would really recommend getting at least a USB SSD instead if you want to keep it compact.

Your SD card will die suddenly someday in the near future otherwise.

Thank you for your advice. I do use an external hard drive for my data.

my two bare metal servers are the file server and music server. I have other services in a pi cluster.

file server because I can’t think of why I would need to use a container.

the music software is proprietary and requires additional complications to get it to work properly…or at all, in a container. it also does not like sharing resources and is CPU heavy when playing to multiple sources.

if either of these machines dies, a temporary replacement can be sourced very easily (e.g. the back of my server closet) and recreated from backups while I purchase new or fix/rebuild the broken one.

IMO the only reliable method for containers is a cluster, because if you’re running several containers on a device and it fails, you’ve lost several services.

Cool, care to share more specifics on your Pi cluster?

I followed one of the many guides for installing Proxmox on RPis. 3-node cluster of 4GB RPi 4s.

I use the cluster for lighter services like Trilium, FreshRss, secondary DNS, a jumpbox… and something else I forget. I’m going to try immich and see how it performs.

my recent go-to for cheap ($200-300) servers is Debian + old Intel MacBook Pros. I have two Minecraft Bedrock servers on MBPs… one an i5, the other an i7.

I also use a Lenovo laptop to host some industrial control software for work.

TrueNAS is on bare metal, as I have a dedicated NAS machine that’s not doing anything else, and it’s also not recommended to virtualize. Not sure if that counts.

Same for the firewall (OPNsense) since it is its own machine.

Have you tried running containers on Truenas?
No because I run my containers elsewhere, not on the NAS

For me it’s lack of understanding, usually. I haven’t sat down and really learned what Docker is/does. And when I tried to use it once, I ended up with errors (thankfully they all seemed contained by the Docker container), but I just haven’t gotten around to looking more into it than seeing suggestions to install, say, Pi-hole in it. Pretty sure I installed Pi-hole outside of one. Jellyfin outside, copyparty outside, and something else I’m forgetting at the moment.

I was thinking of installing a chat app in one, but I put off that project because I got busy at work and it’s not something I normally use.

I guess I just haven’t been forced to see the upsides yet. But am always wanting to learn

containerisation is to applications as virtual machines are to hardware.

VMs share the same CPU, memory, and storage on the same host.
Containers share the same binaries in an OS.

When you say binaries, do you mean locally stored directories, kind of like what Lutris or Steam would do for a Windows game (create a false C:\)?

Not so much a fake one; it overlays the actual directory with the specific files needed for that container.

Take the Linux lib directory. It exists on the host and has Python 3.12 installed. Your Docker container may need Python 3.14, so an overlay directory is created that redirects calls to /lib/python to /lib/python3.14 instead of the regular symlinked /lib/python3.12.
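The lookup rule can be mimicked in plain shell. (The real mechanism is the kernel's overlayfs, mounted with lowerdir/upperdir/workdir options; the file names below are just illustrative.)

```shell
# Simulate overlay resolution: the container's writable "upper" layer
# shadows the image's read-only "lower" layer; anything not overridden
# falls through to the lower layer.
mkdir -p lower upper merged
echo "python3.12" > lower/python     # what the base image provides
echo "python3.14" > upper/python     # what the container layer provides
echo "libc-2.36"  > lower/libc       # present only in the base layer

for f in python libc; do
    if [ -e "upper/$f" ]; then
        cp "upper/$f" "merged/$f"    # upper layer wins
    else
        cp "lower/$f" "merged/$f"    # fall through to lower layer
    fi
done

cat merged/python    # python3.14 - the container's version
cat merged/libc      # libc-2.36  - inherited from the base
```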

So let’s say I theoretically wanted to move a Docker container to another device, or maybe I were re-installing an OS or moving to another distro. Could I, in theory, drag my local Docker container to an external drive, throw my device in a lake, and pull that container onto the new device? If so, what then? Do I link the startups, or is there a “docker config” where they can all be linked, and I can tell it which ones to launch on OS launch, user launch, with a delay, or whatnot?

For ease of moving containers between hosts, I would use a docker-compose.yaml to set how you want storage shared, what ports to present to the host, and what environment variables your application wants. Using WordPress as an example, this would be your starting point:
github.com/docker/awesome-compose/…/compose.yaml

all the settings for the database are listed under the db heading. You would have your actual database files stored in /home/user/Wordpress/db_data, and you would link /home/user/Wordpress/db_data to /var/lib/mysql inside the container with the line

volumes:
  - ./db_data:/var/lib/mysql

As the compose file will also be in /home/user/Wordpress/, you can drop the common path; the ./ prefix tells Docker this is a bind mount relative to the compose file rather than a named volume.

That way, if you wanted to change hosts, just copy the /home/user/Wordpress folder to the new server and run docker compose up -d and boom, your server is up. No need to faff about.

Containers by design are supposed to be temporary, and the runtime data is recreated each time the container is launched. The persistent data is all you should care about.
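Putting that together, a minimal portable layout might look like this (image tags and passwords are illustrative placeholders, loosely modeled on the awesome-compose WordPress sample):

```yaml
# /home/user/Wordpress/docker-compose.yaml
services:
  db:
    image: mariadb:10.6
    volumes:
      - ./db_data:/var/lib/mysql     # persistent data, relative to this file
    environment:
      - MYSQL_ROOT_PASSWORD=changeme
      - MYSQL_DATABASE=wordpress
  wordpress:
    image: wordpress:latest
    ports:
      - "80:80"
    depends_on:
      - db
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_PASSWORD=changeme
```

Copy the whole /home/user/Wordpress directory (compose file plus ./db_data) to a new host, run docker compose up -d, and the stack comes back with its data intact.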

“Containers by design are supposed to be temporary, and the runtime data is recreated each time the container is launched. The persistent data is all you should care about.”

So that’s really why they should be good for Jellyfin/file servers, as the data doesn’t need to be stored in the container, just the run files. I suppose the config files as well.

When I reverse proxy into my network using WireGuard (set up on the Jellyfin server; I also think I have a RustDesk server on there), on the other hand, is it worth using a container, or is that just the same either way?

I have shoved way too many things onto an old laptop, but I never really have to touch it, and the latest update Mint put out actually cured any issues I had. I used to have to reboot once a week or so to get everything back online when it came to my Pihole and shit. Since the latest update I ran on September 4th, I haven’t touched it for anything. The screen just stays closed in a corner of my desk with other shit stacked on top.

I’m doing this on a couple of machines. Only running NFS, Plex (looking at a Jellyfin migration soon), Home Assistant, LibreNMS and some really small other stuff. Not using VMs or LXC due to low-end hardware (Pi and older tiny PC). Not using containers due to lack of experience with them and a little discomfort with the central daemon model of Docker, running containers built by people I don’t know.

The migration path I’m working on for myself is changing to Podman quadlets for rootless, more isolation between containers, and the benefits of management and updates via Systemd. So far my testing for that migration has been slow due to other projects. I’ll probably get it rolling on Debian 13 soon.
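As a sketch of what that Podman quadlet migration can look like, here is a unit file (the image, port, and paths are illustrative; for rootless use the file goes under ~/.config/containers/systemd/):

```ini
# freshrss.container - systemd generates a freshrss.service from this
[Unit]
Description=FreshRSS (rootless Podman quadlet)

[Container]
Image=docker.io/freshrss/freshrss:latest
PublishPort=8081:80
Volume=%h/freshrss/data:/var/www/FreshRSS/data:Z
AutoUpdate=registry

[Install]
WantedBy=default.target
```

After systemctl --user daemon-reload, starting, stopping, logging, and automatic updates all go through the usual systemd tooling.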

I started hosting stuff before containers were common, so I got used to doing it the old fashioned way and making sure everything played nice with each other.

Beyond that, it’s mostly that I’m not very used to containers.

That I’ve yet to see a containerization engine that actually makes things easier, especially once a service fails or needs any amount of customization. I have two main services in Docker, Piped and WebODM, both because I don’t have the time (read: am too lazy) to write a PKGBUILD. Yet Docker steals more time than maintaining a PKGBUILD: random crashes (undebuggable, as the docker command just hangs when I try to start one specific container), containers that don’t start properly after being updated/restarted by Watchtower, and debugging any problem with Piped is a chore, as logging in Docker is the most random thing imaginable.

With systemd, it’s in journalctl, or in /var/log if explicitly specified or obviously useful (e.g. in multi-host nginx setups). With Docker, it could be a logfile on the host, on the guest, or stdout. Or nothing, because why log after all, when everything “just works”? (Yes, that’s a problem created by container maintainers, but one you can’t escape using Docker. Or rather, in the time you have, you could more easily properly(!) install it bare metal.)

Also, if you want to use unix sockets to more closely manage permissions and stop roleplaying as a DHCP and DNS server for ports (by remembering which ports are used by which of the 25 or so services), you’ll either need to customize the container, or just use/write a PKGBUILD or similar for bare metal stuff.

Also, I need to host a Python 2.7 / Django 2.x webapp (yes, I’m rewriting it), which I do in a Debian 13 VM with Debian 9 and Debian 9 LTS repos, as that most closely resembles the original environment; it’s also the largest security risk in my setup while being a public website. So into QEMU it goes.

And, as I mentioned, either stuff is officially packaged by Arch, is in the AUR, or I put it into the AUR.

Do you host on more than one machine? Containerization / virtualization begins to shine most brightly when you need to scale / migrate across multiple servers. If you’re only running one server, I definitely see how bare metal is more straight-forward.
This is a big part of why I don’t use VMs or containers at home. All of those abstractions only start showing their worth once you scale them out.

Hm, I don’t know about that either. While scale is their primary purpose, another core tenet of containerization is reproducibility. For example:

  • If you are developing any sort of software, containers are a great way to ensure that the environment of your builds remains consistent.
  • If you are frequently rebuilding a server/application for any reason, containers provide a good way to ensure everything is configured exactly as it was before, and when used with Git, changes are easy to track. There are also other tools that excel at this (like Ansible).
    That to me still feels like a variety of “scale”. All of these tools (Ansible is a great example) are of dubious benefit when your scale of systems is small. If you only have a single dev machine or server, having an infrastructure-as-code system or containerized abstraction layer just feels to me like unnecessary added mental overhead. If this post had been in a community about FOSS development or general programming, I’d feel differently, as all of these things can be of great use there. Maybe my idea of selfhosting just isn’t as grandiose as some of the people in here. If you have a room full of server racks in your house, that’s a whole other ballgame.

    Personally I have seen the opposite for many services. Take Jitsi Meet, for example. Without containers, it’s like 4 different services, with logs and configurations all over the system. It’s a pain to get running, as none of the services work without everything else being up. In containers, Jitsi Meet is managed in one place, and one place only. When using docker compose, all logs are available with docker compose logs, and all config is contained in one directory.

    It is more a case-by-case thing whether an application is easier to set up and maintain with or without docker.

    For logs, Dozzle is also fantastic, and you can run “agents” if you have multiple Docker nodes and connect them together.
    You can customize and debug pretty easily, I’ve found. You can create your own Dockerfile based on one you’re using and add customizations there, and exec will get you into the container.

    especially once a service does fail or needs any amount of customization.

    A failed service gets killed and restarted. It should then work correctly.
    If it fails to recover after being killed, then it’s not a service that’s fully ready for containerisation.
    So, either build your recovery process to account for this… or fix it so it can recover.
    It’s often why databases are run separately from the service. Databases can recover from this, and the services are stateless - doesn’t matter how many you run or restart.

    As for customisation, if it isn’t exposed via env vars then it can’t be altered.
    If you need something beyond the env vars, then you use that container as a starting point and make your customisation a part of your container build processes via a dockerfile (or equivalent)
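    For example, a small derived image (the base image, file names, and extra package here are purely illustrative):

```dockerfile
# Start from the container you were already using and bake the change in
FROM nginx:alpine

# Ship a config file the upstream image does not expose via env vars
COPY custom-nginx.conf /etc/nginx/conf.d/default.conf

# Extra debugging tool baked into the image
RUN apk add --no-cache curl
```

    Build it with docker build -t my-nginx . and run the derived image wherever the upstream one was used.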

    It’s a bit like saying “chisels are great. But as soon as you need to cut a fillet steak, you need to sharpen a side of the chisel instead of the tip of the chisel”.
    It’s using a chisel incorrectly.

    Exactly. Therefore, docker is not useful for those purposes to me, as using arch packages (or similar) is easier to fulfill my needs.
    I’m running a TrueNAS server on bare metal with a handful of hard drives. I have virtualized it in the past, but meh; I’m also using TrueNAS’s internal features to host a Jellyfin server and a couple of other easy-to-deploy containers.
    So Truenas itself is running your containers?

    Yeah, the more recent versions basically have a form of Docker as part of its setup.

    I believe it’s now running on Debian instead of FreeBSD, which probably simplified the container setup.

    Depends on the application for me. For Mastodon, I want to allow 12K-character posts, more than 4 poll question choices, and custom themes. Can’t do it with Docker containers. For PeerTube and Mobilizon, I use Docker containers.
    Why could you not have that Mastodon setup in containers? Sounds normal afaik

    I’ll chime in: simplicity. It’s much easier to keep a few patches that apply to local OS builds: I use Nix, so my Mastodon microVM config just has an extra patch line. If there’s a new Mastodon update, the patch most probably will work for it too.

    Yes, I could build my own Docker container, but you can’t easily build it with a patch (for Mastodon specifically, you need to patch js pre-minification). It’s doable, but it’s quite annoying. And then you need to keep track of upstream and update your Dockerfile with new versions.
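    In Nix terms, the “extra patch line” looks roughly like this (the option path and patch file name are illustrative):

```nix
# Carry a local patch across upstream Mastodon updates
services.mastodon.package = pkgs.mastodon.overrideAttrs (old: {
  patches = (old.patches or [ ]) ++ [ ./mastodon-12k-chars.patch ];
});
```

    When upstream bumps the version, the same patch is reapplied automatically, and the build fails loudly if it no longer applies.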

    I’ve always done things bare metal since starting the selfhosting stuff before containers were common. I’ve recently switched to NixOS on my server, which also solves the dependency hell issue that containers are supposed to solve.

    I thought about running something like proxmox, but everything is too pooled, too specialized, or proxmox doesn’t provide the packages I want to use.

    Just went with Arch as the host OS, and firejail or LXC for any processes I want contained.

    I’ve never installed a package on proxmox.
    I’ve BARELY interacted with CLI on proxmox (I have a script that creates a nice Debian VM template, and occasionally having to really kill a VM).

    What would you install on proxmox?!

    Firmware update utilities, host OS file system encryption packages, HBA management tools, temperature monitoring, and then a lot of the packages had bugs that were resolved with newer versions, but proxmox only provided old versions.

    My NAS will stay on bare metal forever. Complications there are something I really don’t want. Passthrough of drives/PCIe devices works fine for most things, but I won’t use it for ZFS.

    As for services, I really hate using Docker images with a burning passion. I’m not trusting anyone else to make sure the container images are secure; I want the security updates directly from my distribution’s repositories, I want them fully automated, and I want that inside any containers too. Having NixOS build and launch containers with systemd-nspawn solves some of it. The actual Docker daemon isn’t getting anywhere near my systems, but I do have one or two OCI images running. I will probably migrate to small VMs per service once I get new hardware up and running.

    Additionally, I never found a source of container images I feel like I can trust long term. When I grab a package from Debian or RHEL, I know that package will keep working without any major changes to functionality or config until I upgrade to the next major. A container? How long will it get updates? How frequently? Will the config format or environment variables or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)

    So, what keeps me on bare metal? Keeping my ZFS pools safe. And then just keeping away from the OCI ecosystem in general, the grass is far greener inside the normal package repositories.

    A NAS as bare metal makes sense.
    It can then correctly interact with the raw disks.

    You could pass an entire HBA card through to a VM, but I feel like it should be horses for courses.
    Let a storage device be a storage device, and let a hypervisor be a hypervisor.

    I feel like this too. I do not feel comfortable using docker containers that I didn’t make myself. And for many people, that defeats the purpose.

    Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list, FFS. They’re just running in different cgroups that limit access to resources.

    Yes, I’ll die on this hill.
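    Easy to check on any Linux box: every process, containerized or not, already lives in a cgroup and a set of namespaces (the container name below is illustrative):

```shell
# The current shell's cgroup membership and namespaces
cat /proc/self/cgroup   # e.g. 0::/user.slice/user-1000.slice/...
ls /proc/self/ns        # cgroup ipc mnt net pid time user uts

# A running container's processes appear in the host's process list too:
#   docker top demo
```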

    Move over, bud. That’s my hill to die on, too.
    Learning this fact is what got me to finally dockerize my setup

    But, but, docker, kubernetes, hyper-scale convergence and other buzzwords from the 2010s! These fancy words can’t just mean resource and namespace isolation!

    In all seriousness, the isolation provided by containers is significant enough that administration of containers is different from running everything in the same OS. That’s different in a good way though, I don’t miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.

    kubernetes

    Kubernetes isn’t just resource isolation, it encourages splitting services across hardware in a cluster. So you’ll get more latency than VMs, but you get to scale the hardware much more easily.

    Those terms do mean something, but they’re a lot simpler than execs claim they are.

    I love using it at work. It’s a great tool to get everything up and running, kind of like Ansible. Paired with containerization it can make applications more “standard” and easy to spin back up.

    That being said, for a home server it feels like overkill. I don’t need my resources spread out so far. I don’t want to keep updating my Kubernetes and container setup with each new iteration. It’s just not fun.

    Oh for sure - containers are fantastic. Even if you’re just using them as glorified chroot jails they provide a ton of benefit.

    …oh shit, the RAM is on fire.

    We don’t need no water let the mothefuxker burn.

    Burn mothercucker, burn.

    (Thanks phone for the spelling mistakes that I’m leaving).

    Speak English, doctor! But really, is this a fancy way of saying it’s OK to docker all the things?

    Depends on the application. My NAS is bare metal. That box does exactly one thing and one thing only, and it’s something that is trivial to setup and maintain.

    Nextcloud is running in docker (AIO image) on bare metal (Proxmox OS) to balance performance with ease of maintenance. Backups go to the NAS.

    Everything else is running in a VM, which makes backups and restores simpler for me.

    After many failures, I eventually landed on OMV + Docker. It has a plugin that puts the Docker management into a web UI and for the few simple services I need, it’s very straightforward to maintain. I don’t cloud host because I want complete control of my data and I keep an automatic incremental backup alongside a physically disconnected one that I manually update.