Proxmox vs Host+Docker

https://lemmy.world/post/150921


Hello, I figured this would have been asked a lot on reddit, but given the sub went private I figured it would be good to ask the question here for people like me in this situation. I've recently managed to upgrade my server from an old 4GB Celeron laptop to an Optiplex 3070 (i5-8500, 16GB) and see it as an opportunity to improve my system. Currently on the laptop I have Ubuntu Server with Docker. The containers I have running are:

- pihole
- rtsp-server
- nextcloud
- ntfy
- wireguard
- nginx
- zoneminder (was previously shinobi) cctv
- php7.4
- portainer
- mosquitto
- homeassistant
- phpmyadmin
- certbot
- mariadb
- openproject (currently unable to run this alongside zoneminder due to lack of RAM)

It's been running well for a few years. I'm surprised given the specs of the laptop, but there we go. I don't know if in future I'd want to spin up other services like the *arr's and Jellyfin, and I don't really have much media or the know-how to get them; I'm new to indexers and whatnot. I'm sure another post asking what services people run will come up or already has. Anyway, the question is, should I:

1) Install Proxmox and a Debian/Ubuntu Server VM and move my docker containers there?
2) Try and transfer to LXC alternatives?
3) Stick with docker on the host?

I feel like I may have previously (years ago) tried installing Proxmox on the laptop, but it didn't work well (either lack of hypervisor support or low specs). If it's the latter, then I know host+docker can run on a low-spec system and therefore uses fewer resources. Does the question essentially come down to whether or not I need a VM? Or is the overhead of Proxmox so low that it doesn't matter? What would you all recommend, and why? TIA, and it's nice to meet you all on lemmy!

Hello, I use Proxmox on a weaker MiniPC and have no problems with it at all. You can install Docker in LXC containers via container nesting and then have even less overhead than running Docker in a VM in Proxmox. Personally, I find the possibility of setting up VMs in parallel and having disposable systems with just a few clicks very appealing. Proxmox also offers some support for backups. But I haven't tried that myself yet. Maybe that's something you would also be interested in for your Docker containers.
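For reference, the container nesting mentioned here is a per-container flag in Proxmox; assuming a placeholder container ID of 100, the relevant line in the container config is just:

```
# /etc/pve/lxc/100.conf — 100 is a placeholder container ID
features: nesting=1,keyctl=1
```

(keyctl is also needed for Docker inside unprivileged LXCs.) The same can be set from the host shell with `pct set 100 --features nesting=1,keyctl=1`.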

Second the proxmox suggestion, but I’m still on the fence myself between VM+docker versus individual LXCs.

Nothing more convenient than a docker-compose pull && docker-compose up -d when everything is defined in a compose file.
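For anyone new to that workflow, a minimal compose file might look like the sketch below (the services and images chosen are just examples):

```yaml
# docker-compose.yml — illustrative only
services:
  pihole:
    image: pihole/pihole:latest
    restart: unless-stopped
  ntfy:
    image: binwiederhier/ntfy:latest
    command: serve
    restart: unless-stopped
```

With everything declared like this, `docker-compose pull && docker-compose up -d` fetches new images and recreates only the containers whose definitions or images changed.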

The simplicity and ubiquity of docker compose is exactly why I haven't decided to use LXCs, but instead have Proxmox and one VM that handles the majority of my docker containers, which I view/manage with Portainer (though not fully utilizing stacks). Proxmox is great for VM management, resource allocation, and backups; Docker is great for keeping overhead low and quickly recreating services, and is more commonplace these days than something akin to Vagrant.
I do really like this pro too, and at this point I'm very used to it and it works for me. I'll see what the overhead is with Proxmox and go from there. At the end of the day, hopefully I don't need to squeeze every last drop of RAM out of my system anymore, so if it works and I can run more services then it's a win.
Fedora Server with Podman and Quadlet systemd units is the best option for small-scale self-hosters IMHO. I would not go down the VM route; it's a waste of resources and overly complicated.
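For anyone who hasn't seen Quadlet: a container is declared in a small systemd-style unit file dropped into ~/.config/containers/systemd/ — the service name, image, and port below are just examples:

```ini
# ~/.config/containers/systemd/ntfy.container — illustrative sketch
[Unit]
Description=ntfy push server

[Container]
Image=docker.io/binwiederhier/ntfy:latest
Exec=serve
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, Podman generates a regular service you manage with `systemctl --user start ntfy` — so the containers start on boot like any other unit.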
That's basically what I am running inside a VM on Proxmox. Sure, it has some slight overhead, but that is normally not an issue, and it makes some things like snapshots a little bit easier and allows a few more options for firewalls.
I just run a btrfs filesystem so that I have much more control over the snapshots than in a VM. But yes more firewall options are nice.
I hadn't heard of that before now! Very interesting. From very briefly looking into Podman, it's basically a Docker alternative — so if I rephrased my question to "host+container vs proxmox", your answer is the former. How is the backup solution? A few people have mentioned that as a pro of Proxmox.
I run everything on Btrfs (but ZFS is also good), which allows filesystem-level snapshotting. Together with a tool like btrbk (packaged in most distros), that makes for a very nice automated backup system. ZFS has btrbk's functionality built in directly.
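A btrbk setup is mostly one config file describing what to snapshot, how long to keep things, and where to send backups. A sketch, with all paths and the backup host as placeholders:

```
# /etc/btrbk/btrbk.conf — illustrative sketch
timestamp_format        long
snapshot_preserve_min   2d
snapshot_preserve       14d
target_preserve         20d 10w

volume /mnt/data
  snapshot_dir  snapshots
  subvolume     docker-volumes
    target send-receive ssh://backup-host/mnt/backup
```

A cron job or systemd timer running `btrbk run` then takes the local snapshots and sends incremental backups to the remote host.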
GitHub - digint/btrbk: Tool for creating snapshots and remote backups of btrfs subvolumes
I run openSUSE Tumbleweed as my daily driver with btrfs, and the snapshots have saved me from breaking updates. I think I'll need to experiment at this stage... (but I guess that's exactly what a homelab is for!)
Just saw a post about Podman here you may find interesting or want to weigh in on:
Podman is awesome—and totally frustrating - Lemmy.world

So Podman [https://podman.io/] is an open source container engine like Docker—with "full"1 Docker compatibility. IMO Podman’s main benefit over Docker is security. But how is it more secure? Keep reading… Docker traditionally runs a daemon as the root user, and you need to mount that daemon’s socket into various containers for them to work as intended (See: Traefik, Portainer, etc.) But if someone compromises such a container and therefore gains access to the Docker socket, it’s game over for your host. That Docker socket is the keys to the root kingdom, so to speak. Podman doesn’t have a daemon by default, although you can run a very minimal one for Docker compatibility. And perhaps more importantly, Podman can run entirely as a non-root user.2 Non-root means if someone compromises a container and somehow manages to break out of it, they don’t get the keys to the kingdom. They only get access to your non-privileged Unix user. So like the keys to a little room that only contains the thing they already compromised.2.5 Pretty neat. Okay, now for the annoying parts of Podman. In order to achieve this rootless, daemonless nirvana, you have to give up the convenience of Unix users in your containers being the same as the users on the host. (Or at least the same UIDs.) That’s because Podman typically3 runs as a non-root user, and most containers expect to either run as root or some other specific user. The "solution"4 is user re-mapping. Meaning that you can configure your non-root user that Podman is running as to map into the container as the root user! Or as UID 1234. Or really any mapping you can imagine. If that makes your head spin, wait until you actually try to configure it. It’s actually not so bad on containers that expect to run as root. You just map your non-root user to the container UID 0 (root)… and Bob’s your uncle. 
But it can get more complicated and annoying when you have to do more involved UID and GID mappings—and then play the resultant permissions whack-a-mole on the host because your volumes are no longer accessed from a container running as host-root… Still, it’s a pretty cool feeling the first time you run a “root” container in your completely unprivileged Unix user and everything just works. (After spending hours of swearing and Duck-Ducking to get it to that point.) At least, it was pretty cool for me. If it’s not when you do it, then Podman may not be for you. The other big annoying thing about Podman is that because there’s no Big Bad Daemon managing everything, there are certain things you give up. Like containers actually starting on boot. You’d think that’d be a fundamental feature of a container engine in 2023, but you’d be wrong. Podman doesn’t do that. Podman adheres to the “Unix philosophy.” Meaning, briefly, if Podman doesn’t feel like doing something, then it doesn’t. And therefore expects you to use systemd for starting your containers on boot. Which is all good and well in theory, until you realize that means Podman wants you to manage your containers entirely with systemd. So… running each container with a systemd service, using those services to stop/start/manage your containers, etc. Which, if you ask me, is totally bananasland. I don’t know about you, but I don’t want to individually manage my containers with systemd. I want to use my good old trusty Docker Compose. The good news is you can use good old trusty Docker Compose with Podman! Just run a compatibility daemon (tiny and minimal and rootless… don’t you worry) to present a Docker-like socket to Compose and boom everything works. Except your containers still don’t actually start on boot. You still need systemd for that. But if you make systemd run Docker Compose, problem solved! This isn’t the “Podman Way” though, and any real Podman user will be happy to tell you that. 
The Podman Way is either the aforementioned systemd-running-the-show approach or something called Quadlet or even a Kubernetes compatibility feature. Briefly, about those: Quadlet is “just” a tighter integration between systemd and Podman so that you can declaratively define Podman containers and volumes directly in a sort of systemd service file. (Well, multiple.) It’s like Podman and Docker Compose and systemd and Windows 3.1 INI files all had a bastard love child—and it’s about as pretty as it sounds. IMO, you’d do well to stick with Docker Compose. The Kubernetes compatibility feature lets you write Kubernetes-style configuration files and run them with Podman to start/manage your containers. It doesn’t actually use a Kubernetes cluster; it lets you pretend you’re running a big boy cluster because your command has the word “kube” in it, but in actuality you’re just running your lowly Podman containers instead. It also has the feel of being a dev toy intended for local development rather than actual production use.5 For instance, there’s no way to apply a change in-place without totally stopping and starting a container with two separate commands. What is this, 2003? Lastly, there’s Podman Compose. It’s a third-party project (not produced by the Podman devs) that’s intended to support Docker Compose configuration files while working more “natively” with Podman. My brief experience using it (with all due respect to the devs) is that it’s total amateur hour and/or just not ready for prime time. Again, stick with Docker Compose, which works great with Podman. Anyway, that’s all I’ve got! Use Podman if you want. Don’t use it if you don’t want. I’m not the boss of you. But you said you wanted content on Lemmy, and now you’ve got content on Lemmy. This is all your fault! 1 Where “full” is defined as: Not actually full. 2 Newer versions of Docker also have some rootless capabilities [https://docs.docker.com/engine/security/rootless/]. 
But they’ve still got that stinky ol’ daemon. 2.5 It’s maybe not quite this simple in practice, because you’ll probably want to run multiple containers under the same Unix account unless you’re really OCD about security and/or have a hatred of the convenience of container networking. 3 You can run Podman as root and have many of the same properties as root Docker, but then what’s the point? One less daemon, I guess? 4 Where “solution” is defined as: Something that solves the problem while creating five new ones. 5 Spoiler: Red Hat’s whole positioning with Podman is like they see it as a way for buttoned-up corporate devs to run containers locally for development while their “production” is running K8s or whatever. Personally, I don’t care how they position it as long as Podman works well to run my self-hosting shit…
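The "make systemd run Docker Compose" workaround described in that post is a short unit file; the stack path and unit name below are placeholders, and for rootless Podman the equivalent would live under ~/.config/systemd/user/ with lingering enabled via `loginctl enable-linger`:

```ini
# /etc/systemd/system/compose-stack.service — illustrative sketch
[Unit]
Description=Bring up the compose stack at boot
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/stack
ExecStart=/usr/bin/docker-compose up -d
ExecStop=/usr/bin/docker-compose down

[Install]
WantedBy=multi-user.target
```

`Type=oneshot` with `RemainAfterExit=yes` fits Compose's fire-and-forget `up -d`: systemd considers the stack "active" after the command returns, and `ExecStop` tears it down on shutdown.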

I don't think I will be able to convince them that the Systemd integration is actually the best part of Podman ;)
It depends on what you need. If you plan to run everything via Docker, install docker+portainer on the host and manage your Docker instances via Portainer. Proxmox is great, with a lot of functionality, but if you don't plan to use it as a hypervisor, avoid the overhead and just go with Portainer and Docker on bare metal.
Thanks. I guess that is ultimately the question I need to ask myself: will I need a VM in future? Looking at the other comments, it seems the overhead isn't that high (though no one gave me a number). In the event I did need a VM, it's ready to go rather than me being stuck with a messy solution to fire one up.

I use option 1 and would do it again. Can recommend. Proxmox as the base OS is just so flexible. Then a VM with Debian which runs docker and most of my docker containers.

Some selected docker containers run in their own VM for segregation. For example, troubleshooting zigbee2mqtt required rebooting the VM, so it got its own VM. Wireguard runs in its own VM because (at the time) a newer kernel was needed. Nginx runs as an LXC because I wanted some integration that wasn't straightforward with Docker.

For the most part Docker is probably easier than LXC, but for Jellyfin I'll go with LXC for GPU accelerated transcoding. (Which can be a pain in VMs)
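For the Jellyfin-in-LXC route, Intel GPU passthrough is typically just a couple of lines in the container config — the container ID below is a placeholder, and device paths can differ per system:

```
# /etc/pve/lxc/101.conf — illustrative; 101 is a placeholder ID
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```

Major number 226 is the DRM subsystem, so this allows the container to use the host's /dev/dri render nodes directly — no VFIO passthrough or vGPU setup like a VM would need.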

Proxmox doesn't have much overhead compared to bare metal (unless you run into a bug or issue, or need GPU acceleration), so for me the flexibility of Proxmox is 100% worth it.

Also check out Proxmox Backup Server, which makes VM and LXC backups stupid easy

With Proxmox you can also start out with a VM containing all your current docker containers as a drop-in replacement, then gradually move them to LXC at your own pace. (Though I don't see any reason to do so except as a learning exercise.)

If you do go with Proxmox, the recommendation is to install Docker in a VM; do not install Docker on the Proxmox host directly.

I am currently struggling through changing over from an SBC running Ubuntu with everything in Docker to a mini PC running Proxmox and utilizing Docker. I think it will depend on your pain tolerance and existing linux-fu.

Where I am totally comfortable doing as another poster said and spinning up Fedora Server and plugging along, I wanted to give Proxmox a go in order to have access to all that extra potential with VMs (Home Assistant, pfSense/OPNsense, etc)... But multiple times I have been very close to saying fuck it and binning the whole Proxmox setup.

I am not a power user, so everything is a steep learning curve, but the complexities of unprivileged containers running docker and permissions on the host nearly broke me.

But the nice thing with Proxmox is being able to fuck around and try different things, and when it doesn't work, just burn the VM or container and not have to reinstall. I ended up setting up Docker in a privileged container (which the internet would make sound like committing a deadly sin) just so I could get my services up and running through Docker without issue, and then I can dick around with "properly" setting up an unprivileged container that can accomplish the same.

But the simplicity of going back to Fedora + Docker + BTRFS snapshots will always tempt me, I think.

I have been running Debian+LXC and it's pretty pain-free. Each container acts like a VM or bare-metal machine, so installing new services is trivial. With a reverse proxy on the host and certbot, HTTPS is covered. It's not as "easy" as deploying with Docker, but you aren't reliant on other people to package up and release updates or customizations.
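The reverse-proxy-on-the-host setup is a short server block per service; the hostname, certificate paths, and backend IP below are placeholders:

```nginx
# /etc/nginx/sites-available/nextcloud — illustrative sketch
server {
    listen 443 ssl;
    server_name cloud.example.com;

    # certbot issues and renews these
    ssl_certificate     /etc/letsencrypt/live/cloud.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cloud.example.com/privkey.pem;

    location / {
        proxy_pass http://10.0.0.10:80;  # placeholder LXC IP
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Each LXC just serves plain HTTP internally; the host proxy terminates TLS for all of them, so certbot only has to run in one place.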
Option 1 is what I've gone with. I have my firewall VM and Nomad servers and agents also running in VMs. Storage is CephFS and RBD across three nodes. I've still got some LXC services that I need to migrate.