Should I move to Docker?

https://lemmy.ca/post/11247660

I’m a retired Unix admin. It was my job from the early '90s until the mid '10s. I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home even though I have a decent understanding of how it works - I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of “interesting” reading and training. It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install. I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?

dude, im kinda you. i just jumped into docker over the summer... feel stupid not doing it sooner. there is just so much pre-created content, tutorials, you name it. its very mature.

i spent a weekend containering all my home services.. totally worth it and easy as pi[hole] in a container!

As a guy who’s you before summer.

Can you explain why you think it is better now after you have ‘contained’ all your services? What advantages are there, that I can’t seem to figure out?

Please teach me Mr. OriginalLucifer from the land of MoistCatSweat.Com

No more dependency hell from one package needing libsomething.so 5.3.1 and another service absolutely can only run with libsomething.so 4.2.0

That and knowing that when I remove a container, it's not leaving a bunch of cruft behind

You can also back up your compose file and data directories, pull the backup from another computer, and as long as the architecture is compatible you can just restore it with no problem. So basically, your services are a whole lot more portable. I recently did this when dedipath went under. Pulled my latest backup to a new server at virmach, and I was up and running as soon as the DNS propagated.
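As a rough sketch of what that move looks like in practice (paths and hostnames here are made up; this assumes the service keeps all its state in folders next to its docker-compose.yml):

```bash
# on the old server: archive the compose file plus the data directories next to it
tar czf myservice.tar.gz -C /srv myservice

# copy it to the new server (any transfer method works)
scp myservice.tar.gz newserver:/srv/

# on the new server: unpack and start it back up
tar xzf /srv/myservice.tar.gz -C /srv
cd /srv/myservice && docker compose up -d
```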

Modularity, compartmentalization, reliability, predictability.

One program needs MySQL 5, another needs MariaDB 7. A third service needs PHP 7 while the distro-supported version is 8. A fourth service uses CUDA 11.7 - not 11.8, which is what everything in your package manager uses. A fifth service’s install was only tested on the latest Ubuntu, and now you need to figure out which rpm gives the exact library it expects. A sixth service expects ODBC to be set up in a very specific way, but handwaves it in the installation docs. A seventh program expects a symlink at a specific place that is on the desktop version of the distro, but not the server version.

And so on and so forth… with docker, not only is all this specified in excruciating detail, it’s also the exact same setup on every install.

You don’t have it not working on arch because the maintainer of a library there decided to inline a patch that supposedly doesn’t change anything, but somehow causes the program to segfault.

I can develop a service on windows, test it, deploy it to my Kubernetes cluster, and I don’t even have to worry about which machine to deploy it on, it just runs it on a machine. Probably an Ubuntu machine, but maybe on that Gentoo node instead. And if my osx friend wants to try it out, then no problem. I can just give him a command, and it’s running on his laptop.

If you’re an old Linux admin… This is what utopia looks like.

It sounds very nice and clean to work with!

If I’m lucky enough to get the Raspberry 5 at Christmas, I will try to set it up with docker for all my services!

Thanks for the explanation.

Just remember that the Raspberry Pi is an ARM CPU, which is a different architecture. Docker can cross-compile to it and make multiple images automatically. It takes more time and space though, as it runs an ARM emulator to make them.
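For reference, a multi-arch build with buildx looks roughly like this (the image name is a placeholder, and cross-building for ARM may additionally need QEMU/binfmt support installed on the build host):

```bash
# one-time: create and select a builder that can target multiple platforms
docker buildx create --use

# build for x86-64 and ARM64 in one go and push both variants under one tag
docker buildx build --platform linux/amd64,linux/arm64 -t youruser/yourapp:latest --push .
```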

docker.com/…/faster-multi-platform-builds-dockerf… has some info about it.

Faster Multi-Platform Builds: Dockerfile Cross-Compilation Guide | Docker


Well, that wasn’t a huge investment :-) I’m in…

I understand I’ve got LOTS to learn. I think I’ll start by installing something new that I’m looking at with docker and get comfortable with something my users (family…) are not yet relying on.

Forget docker run; docker compose up -d is the command you need on a server. Get familiar with a UI, it makes your life much easier at the beginning: portainer or yacht in the browser, lazydocker in the terminal.

docker compose up -d

no configuration file provided: not found

like just docker run by itself, it’s not the full command, you need a compose file: docs.docker.com/engine/reference/…/compose/

Basically it’s the same as docker run, but all the configuration is read from a file instead of the command line, so it’s more easily reproducible - you just have to store those files. The important part is that compose commands are what matter for selfhosting, where your containers are expected to run all the time.
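To make that concrete, these two are roughly equivalent (nginx is just a stand-in example here):

```bash
docker run -d --name web \
  -p 8080:80 \
  -v "$PWD/html":/usr/share/nginx/html:ro \
  --restart unless-stopped \
  nginx:alpine
```

versus putting the same thing in a docker-compose.yml and running docker compose up -d:

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro   # relative paths are fine in compose files
    restart: unless-stopped
```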

RTFM: docs.docker.com/compose/

"docker compose"

""

Docker Documentation
Yeah, I get it now. It’s just that the way I read it the first time, it sounded like you were saying that was a complete command and it was going to do something “magic” for me :-)

You need to create a docker-compose.yml file. I tend to put everything in one dir per container so I just have to move the dir around somewhere else if I want to move that container to a different machine. Here’s an example I use for picard with examples of nfs mounts and local bind mounts with relative paths to the directory the docker-compose.yml is in. You basically just put this in a directory, create the local bind mount dirs in that same directory and adjust YOURPASS and the mounts/nfs shares, and it will keep working everywhere you move the directory as long as the system has docker and an image available for its architecture.

```yaml
version: '3'
services:
  picard:
    image: mikenye/picard:latest
    container_name: picard
    environment:
      KEEP_APP_RUNNING: 1
      VNC_PASSWORD: YOURPASS
      GROUP_ID: 100
      USER_ID: 1000
      TZ: "UTC"
    ports:
      - "5810:5800"
    volumes:
      - ./picard:/config:rw
      - dlbooks:/downloads:rw
      - cleanedaudiobooks:/cleaned:rw
    restart: always

volumes:
  dlbooks:
    driver_opts:
      type: "nfs"
      o: "addr=NFSSERVERIP,nolock,soft"
      device: ":NFSPATH"
  cleanedaudiobooks:
    driver_opts:
      type: "nfs"
      o: "addr=NFSSERVERIP,nolock,soft"
      device: ":OTHER NFSPATH"
```

If you are interested in a web interface for management check out portainer.

This makes me happy. As a fellow CLI nerd, welcome to the party.

Learning docker is always a big plus. It’s not hard. If you are comfortable with cli commands, then it should be a breeze. Even if you are not comfortable, you should get used to it very fast.

Definitely not a fad. It’s used all over the industry. It gives you a lot more control over the environment where your hosted apps run. There may be some overhead, but it’s worth it.

It just makes things easier and cleaner. When you remove a container, you know there is no leftover except mounted volumes. I like it.

It’s also way easier if you need to migrate to another machine for any reason.

I use LXC for all the reasons most people use Docker, it’s easy to spin up a new service, there are no leftovers when I remove a service, and everything stays separate. What I really like about LXC though is that you can treat containers like VMs, you start it up, attach and install all your software as if it were a real machine. No extra tech to learn.

It’s completely true, you probably have to prune some images or volumes.

As someone who is not a former sysadmin and only vaguely familiar with *nix, I’ve been able to turn my home NAS (bought strictly to hold photos and videos backed up from our phones) into a home media server by installing Docker and learning how the yml files work, and it’s been awesome.

Why not jump directly to Podman if you want a more resilient system from the beginning? Just my opinion.
Why not? Because I’ve never heard of it until this thread - lots of people mentioning it so obviously I’ll look into it.

IMO, yes. Docker (or at least OCI containers) aren’t going anywhere. Though one big warning to start with: as a sysadmin, you’re going to be absolutely aghast at the security practices that most docker tutorials suggest. Just know that it’s really not that hard to do things right (for the most part[0]).

I personally suggest using rootless podman with docker-compose via the podman-system-service.

Podman re-implements the docker cli using the system namespacing (etc.) features directly instead of through a daemon that runs as root. (You can run the docker daemon rootless, but it clearly wasn’t designed for it and it just creates way more headaches.) The Podman System Service re-implements the docker daemon’s UDS API which allows real Docker Compose to run without the docker-daemon.
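On a systemd-based distro, that setup is roughly this (a sketch; exact package names and socket paths can vary):

```bash
# run podman's Docker-compatible API socket as your own user - no root daemon
systemctl --user enable --now podman.socket

# point docker-compose (and other Docker API clients) at that socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# from here, compose works as usual
docker-compose up -d
```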

[0] If anyone can tell me how to set SELinux labels such that both a container and a samba server can have access, I could fix my last remaining major headache.

Docker is amazing, you are late to the party :)

It’s not a fad, it’s old tech now.

Yes. Let me give you an example of why it is very nice: I migrated one of my machines at home from an old x86-64 laptop to an arm64 odroid this week. I had a couple of applications running, 8 or 9 of them, all organized in a docker compose file with all persistent storage volumes mapped to plain folders in a directory. All I had to do was stop the compose setup, copy the folder structure, install docker on the new machine and start the compose setup. There was one minor hiccup since I forgot that one of the containers was built locally, but since all the other software has arm64 images available under the same name, it just worked. Changed the host IP and done.

One of the very nice things is the portability of containers, as well as the reproducibility (within limits) of the applications, since you divide them into stateless parts (the container) and stateful parts (the volumes). Definitely give it a go!

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

DNS: Domain Name Service/System
IP: Internet Protocol
NAS: Network-Attached Storage


As a casual self-hoster for twenty years, I ran into a consistent pattern: I would install things to try them out and they’d work great at first; but after installing/uninstalling other services, updating libraries, etc, the conflicts would accumulate until I’d eventually give up and re-install the whole system from scratch. And by then I’d have lost track of how I installed things the first time, and have to reconfigure everything by trial and error.

Docker has eliminated that cycle—and once you learn the basics of Docker, most software is easier to install as a container than it is on a bare system. And Docker makes it more consistent to keep track of which ports, local directories, and other local resources each service is using, and of what steps are needed to install or reinstall.

It's very, very useful.

For one thing, it's a ridiculously easy way to get cross-distro support working for whatever it is you're doing, no matter the distro-specific dependency hell you have to crawl through in order to get it set up.

For another, rather related reason, it's an easy way to build for specific distros and distro versions, especially in an automated fashion. Don't have to fuck around with dual booting or VMs, just use a Docker command to fire up the needed image and do what you gotta do.

A couple of security rules you should bear in mind (there's a rough compose sketch after the list):

  • expose only what you need to. If what you're doing doesn't need a network port, don't provide one. The same is true for files on your host OS, RAM, CPU allocation, etc.
  • never use privileged mode. Ever. If you need privileged mode, you are doing something wrong.
  • consider podman over docker. The former does not run as root.
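To make those rules concrete, a compose service following them might look roughly like this (the service and image names are made up):

```yaml
services:
  app:
    image: example/app:1.2.3          # hypothetical image, pinned rather than :latest
    read_only: true                   # container filesystem stays immutable
    cap_drop:
      - ALL                           # drop every capability the app doesn't need
    security_opt:
      - no-new-privileges:true
    ports:
      - "127.0.0.1:8080:8080"         # expose only what you need, only where you need it
    volumes:
      - ./data:/data                  # only the one host path the app actually uses
    mem_limit: 512m                   # cap RAM
    cpus: 0.5                         # cap CPU
    # and, per the rule above: no "privileged: true" anywhere
```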

    I’m gonna play devil’s advocate here.

    You should play around with it. But I’ve been a Linux server admin for a long time and — this might be unpopular — I think Docker is unimportant for your situation. I use Docker daily at work and I love it. But I didn’t bother with it for my home server. I’ll never need to scale it or deploy anything repeatedly or where I need 100% uptime.

    At home, I tend to try out new things and my old docker-compose files are just not that valuable. Docker is amazing at work where I have different use cases but it mostly just adds needless complexity on a home server.

    That’s exactly how I feel about it. Except (as noted in my post…) the software availability issue. More and more stuff I want is “docker first” and I really have to go out of my way to install and maintain non docker versions.
    The advantage of docker, as I see it for home labs, is keeping things tidy, ensuring compatibility, and easy to manage/backup setup configs, app configs, and app data. It is all very predictable and manageable. I can move my docker compose and data from one host to another in literal seconds. I can, likewise, spin up and down test environments in seconds too. Obviously the whole scaling thing that people love containers for is pointless in a homelab, but many of the things that make it scalable also make it easy to manage.

    I'm probably the opposite of you! Started using docker at home after messing up my raspberry pi a few too many times trying stuff out, and not really knowing what the hell I was doing. Since moved to a proper NAS, with (for me, at least) plenty of RAM.

    Love the ability to try out a new service, which is kind of self-documenting (especially if I write comments in the docker-compose file). And just get rid of it without leaving any trace if it’s not for me.

    Added portainer to be able to check on things from my phone browser, grafana for some pretty metrics and graphs, etc etc etc.

    And now at work, it’s becoming really, really useful, and I’m the only person in my (small, scientific research) team who uses containers regularly. While others are struggling to keep their fragile python environments working, I can try out new libraries, take my env to the on-prem HPC or the external cloud, and I don’t lose any time at all. Even “deployed” some little utility scripts for folks who don’t realise that they’re actually pulling my image from the internal registry when they run it. A much, much easier way of getting a little time-saving script into the hands of people who are forced to use Linux but don’t have a clue how to use it.

    This is kinda where I’m at as well. I have always run my home services each in their own VM. There’s no fuss to set up a new one, if I want to move it to a different server I just copy the *.img file over and launch it. Sure I run a lot of internet services across my various machines but it all just works so I don’t understand what purpose there would be to converting all the custom configurations over to docker. It might make sense if I was trying to run all my services directly on the bare metal, but who does that?

    VM’s have much bigger overhead, for one. And VM’s are less reproducible too. If you had to set up a VM again, do you have all the steps written down? Every single step? Including that small “oh right” thing you always forget? A Dockerfile is basically just a list of those steps, written in a way a computer can follow. And every time you build an image in docker, it just plays that list and gives you the resulting file system ready to run.
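    A trivial (entirely made-up) Dockerfile, just to show that it really is that list of steps:

    ```dockerfile
    FROM debian:12                                       # start from a known base
    RUN apt-get update && apt-get install -y python3     # the dependencies you'd otherwise have to remember
    COPY app.py /opt/app/app.py                          # your service (hypothetical file)
    CMD ["python3", "/opt/app/app.py"]                   # how to start it
    ```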

    It’s incredibly practical in some cases, let’s say you want to try a different library or upgrade a component to a newer version. With VM’s you could do it live, but you risk not being able to go back. You could make a copy or make a checkpoint, but that’s rather resource intensive. With docker you just change the Dockerfile slightly and build a new image.

    The resulting image is also immutable, which means that if you recreate the docker container, it’s like reverting to the first VM checkpoint after a finished install, throwing out any cruft that has gathered. You can exempt specific files and folders from this, if needed. So any cruft and changes that have accumulated get thrown out, except the data folder(s) for the program.
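    For example (postgres is just an arbitrary example here), only the named volume survives the container being thrown away and recreated:

    ```bash
    docker run -d --name db -v pgdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=secret postgres:16
    # ...time passes, cruft accumulates inside the container...
    docker rm -f db        # throw the container, and its cruft, away
    docker run -d --name db -v pgdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=secret postgres:16
    # fresh container, but everything in the "pgdata" volume is still there
    ```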

    I’m not sure I understand this idea that VMs have a high overhead. I just checked one of my servers, there are nine VMs running everything from chat channels to email to web servers, and the server is 99.1% idle. And this is on a poweredge R620 with low-power CPUs, it’s not like I’m running something crazy-fast or even all that new. Hell until the beginning of this year I was running all this stuff on poweredge 860’s which are nearly 20 years old now.

    If I needed to set up the VM again, well I would just copy the backup as a starting point, or copy one of the mirror servers. Copying a VM doesn’t take much, I mean even my bigger storage systems only use an 8GB image. That takes, what, 30 seconds? And for building a new service image, I have a nearly stock install which has the basics like LDAP accounts and network shares set up. Otherwise once I get a service configured I just let Debian manage the security updates and do a full upgrade as needed. I’ve never had a reason to try replacing an individual library for anything, and each of my VMs run a single service (http, smtp, dns, etc) so even if I did try that there wouldn’t be any chance of it interfering with anything else.

    Honestly from what you’re saying here, it just sounds like docker is made for people who previously ran everything directly under the main server installation and frequently had upgrades of one service breaking another service. I suppose docker works for those people, but the problems you are saying it solves are problems I have never run in to over the last two decades.

    Nine. How much ram do they use? How much disk space? Try running 90, or 900. Currently, on my personal hobby kubernetes cluster, there’s 83 different instances running. Because of the low overhead, I can run even small tools in their own container, completely separate from the rest. If I run say… a postgresql server… spinning one up takes 90mb disk space for the image, and about 15 mb ram.

    I worked at a company that did - among other things - hosting, and was using VMs for easier management and separation between customers. I wasn’t directly involved in that part day to day, but was friends with the main guy there. It was tough to manage. He was experimenting with automatically creating and setting up new VMs, stripping them of unused services and files, and having different sub-scripts for different services. This was way before docker, but already then admins were looking in that direction.

    So aschually, docker is kinda made for people who run things in VMs, because that is exactly what they were looking for and duct taping things together for before docker came along.

    Yeah I can see the advantage if you’re running a huge number of instances. In my case it’s all pretty small scale. At work we only have a single server that runs a web site and database so my home setup puts that to shame, and even so I have a limited number of services I’m working with.

    Yeah, it also has the effect that when starting up, say, a new postgres or web server takes one simple command, a few seconds, and a few MB of disk and RAM, you do it more for smaller stuff.

    Instead of setting up one nginx for multiple sites you run one nginx per site and have the settings for that as part of the site repository. Or when a service needs a DB, just start a new one just for that. And if that file analyzer ran in its own image instead of being part of the web service, you could scale that separately… oh, and it needs a redis instance and a rabbitmq server, that’s two more containers that serve just that web service. And so on…

    Things that were a huge hassle before, like separate mini VMs for each sub-service, and unique sub-services for each service, don’t just become practical but easy. You can define all the services and their relations in one file and docker will recreate the whole stack with all services with one command.

    And then it also gets super easy to start more than one of them, for example for testing or if you have a different client. … which is how you easily reach a hundred instances running.

    So instead of a service you have a service blueprint, which can be used in service stack blueprints, which allows you to set up complex systems relatively easily. With a granularity that would traditionally be insanity for anything other than huge, serious big-company deployments.

    Well congrats, you are the first person who has finally convinced me that it might actually be worth looking at even for my small setup. Nobody else has been able to even provide a convincing argument that docker might improve on my VM setup, and I’ve been asking about it for a few years now.

    It’s a great tool to have in the toolbox. Might take some time to wrap your head around, but coming from vm’s you already have most of the base understanding.

    From a VM user’s perspective, some translations:

    • Dockerfile = script to set up a VM from a base distro, and create a checkpoint that is used as a base image for starting up vm’s
    • A container is roughly similar to a running VM. It runs inside the host OS, jailed, which accounts for its low overhead.
    • When a container is killed, every file system change gets thrown out. Certain paths and files can be mapped to host folders / storage to keep data between restarts.
    • Containers run on their own internal network. You can specify ports to NAT in from the host interface to containers.
    • Most service setup is done by specifying environment variables for the container, or mapping in a config file or folder.
    • Since the base image is static, and config is per container, one image can be used to run multiple containers. So if you have a postgres image, you can run many containers on that image. And specify different config for each instance.
    • Docker compose is used for multiple containers, and their relationship. For example a web service with a DB, static file server, and redis cache. Docker compose also handles things like setting up a unique network for the containers, storage volumes, logs, internal name resolution, unique names for the containers and so on.

    A small tip: you can “exec” into a running container, which will run a command inside that container. Combined with interactive (-i) and terminal (-t) flags, it’s a good way to get a shell into a running container and have a look around or poke things. Sort of like getting a shell on a VM.
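    A rough sketch of both of those points (container names and ports are arbitrary):

    ```bash
    # two containers from the same postgres image, each with its own config
    docker run -d --name db-app1 -e POSTGRES_PASSWORD=one -p 5433:5432 postgres:16
    docker run -d --name db-app2 -e POSTGRES_PASSWORD=two -p 5434:5432 postgres:16

    # "exec" a shell inside one of them to poke around, a bit like getting a shell on a VM
    docker exec -it db-app1 bash
    ```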

    One thing that’s often confusing for new people is image tags. Partially because it can mean two things. For example “postgres” is a tag. That is attached to an image. The actual “name” of an image is its SHA sum. An image can have multiple tags attached. So far so good, right?

    Now, let’s get complicated. The actual tag, the full tag for “postgres”, is actually “docker.io/library/postgres:latest”. You see, every tag is a URL, and if it doesn’t have a domain name, docker uses its own. And then we get to the “:latest” part. Which is called a tag. Yup. All tags have a tag. If one isn’t given, it’s automatically set to “latest”. This is used for versioning and different builds.

    For example postgres has tags like “16.1”, which points to the latest 16.1.x version image, built on the postgres maintainers’ preferred distro. “16.1-alpine” points to the latest Alpine-based 16.1.x version, “16” to the latest 16.x.x version, “alpine” to the latest alpine-based version, be it 16 or 17 or 18… and so on. You can find more details here.
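    So, for example, these all pull the exact same image, just spelled out to different degrees, while the last one picks a more specific build instead:

    ```bash
    docker pull postgres
    docker pull postgres:latest
    docker pull docker.io/library/postgres:latest   # fully spelled-out form ("library" is the namespace for official images)
    docker pull postgres:16.1-alpine                # a specific version on a specific base
    ```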

    The images on docker hub are made by … well, other people. Often the developers of that software themselves, sometimes by docker, sometimes by random people. You can make your own account there, it’s free. If you do, make an image and push it, it will be available as shdwdrgn/name - if it doesn’t have a user component it’s maintained / sanctioned by docker.

    You can also run your own image repository service, as long as it has https with valid cert. Then it will be yourdomain.tld/something

    So that was a brief introduction to the strange world of docker. Docker is a for-profit company, btw. But the image format is standardized, and there are fully open-source ways to make and run images too. Off the top of my head, podman and Kubernetes.

    postgres - Official Image | Docker Hub


    One thing I’m not following in all the discussions about how self-contained docker is… nearly all of my images make use of NFS shares and common databases. For example, I have three separate smtp servers which need to put incoming emails into the proper home folders, but also database connections to track detected spam and other things. So how would all these processes talk to each other if they’re all locked within their container?

    The other thing I keep coming back to, again using my smtp servers as an example… It is highly unlikely that anyone else has exactly the same setup that I do, let alone that they’ve taken the time to build a docker image for it. So would I essentially have to rebuild the entire system from scratch, then learn how to create a docker script to launch it, just to get the service back online again?

    For the nfs shares, there’s generally two approaches to that. First is to mount it on host OS, then map it in to the container. Let’s say the host has the nfs share at /nfs, and the folders you need are at /nfs/homes. You could do “docker run -v /nfs/homes:/homes smtpserverimage” and then those would be available from /homes inside the image.

    The second approach is to set up NFS inside the image, and have that connect directly to the nfs server. This is generally seen as a bad idea since it complicates the image and tightly couples the image to a specific configuration. But there are of course exceptions to each rule, so it’s good to keep in mind.

    With database servers, you’d have that set up for accepting network connections, and then just give the address and login details in config.

    And having a special setup… How special are we talking? If it’s configuration, then that’s handled by env vars and mapping in config files. If it’s specific plugins or compile options… Most built images tend to cast a wide net, and usually have a very “everything included” approach, and instructions / mechanics for adding plugins to the image.

    If you can’t find what you’re looking for, you can build your own image. Generally that’s done by basing your Dockerfile on an official image for that software, then doing your changes. We can again take the “postgres” image, since that’s a fairly well-made one that has exactly the easy mechanism for this that we’re looking for.

    If you would like to do additional initialization in an image derived from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under /docker-entrypoint-initdb.d (creating the directory if necessary). After the entrypoint calls initdb to create the default postgres user and database, it will run any *.sql files, run any executable *.sh scripts, and source any non-executable *.sh scripts found in that directory to do further initialization before starting the service.

    So if you have a .sh script that does some extra stuff before the DB starts up, let’s say “mymagicpostgresthings.sh” and you want an image that includes that, based on Postgresql 16, you could make this Dockerfile in the same folder as that file:

    ```dockerfile
    FROM postgres:16
    RUN mkdir -p /docker-entrypoint-initdb.d
    COPY mymagicpostgresthings.sh /docker-entrypoint-initdb.d/mymagicpostgresthings.sh
    RUN chmod a+x /docker-entrypoint-initdb.d/mymagicpostgresthings.sh
    ```

    and when you run “docker build . -t mymagicpostgres” in that folder, it will build that image with your file included, and call it “mymagicpostgres” - which you can run by doing “docker run -e POSTGRES_PASSWORD=mysecretpassword -p 5432:5432 mymagicpostgres”

    In some cases you need a more complex approach. For example I have an nginx streaming server - which needs extra patches. I found this repository for just that, and if you look at its Dockerfile you can see each step it’s doing. I needed a few modifications to that, so I have my own copy with a different nginx.conf, an extra patch it downloads and applies to the src code, and a startup script that changes some settings from env vars, but that had 90% of the work done.

    So depending on how big changes you need, you might have to recreate from scratch or you can piggyback on what’s already made. And for “docker script to launch it” that’s usually a docker-compose.yml file. Here’s a postgres example:

    ```yaml
    version: '3.1'

    services:

      db:
        image: postgres
        restart: always
        environment:
          POSTGRES_PASSWORD: example

      adminer:
        image: adminer
        restart: always
        ports:
          - 8080:8080
    ```

    If you run “docker compose up -d” in that file’s folder it will cause docker to download and start up the images for postgres and adminer, and forward port 8080 in to adminer. From adminer’s point of view, the postgres server is available as “db”. And since both have “restart: always”, if one of them crashes or the machine reboots, docker will start them up again. So that will continue running until you run “docker compose down” or something catastrophic happens.

    GitHub - tiangolo/nginx-rtmp-docker: Docker image with Nginx using the nginx-rtmp-module module for live multimedia (video) streaming.

    Hey I wanted to say thanks for all the info and I’ve saved this aside. Had something come up that is requiring all my attention so I just got around to reading your message but it looks like my foray into docker will have to wait a bit longer.

    Instead of setting up one nginx for multiple sites you run one nginx per site and have the settings for that as part of the site repository.

    Doesn’t that require a lot of resources since you’re running (mysql, nginx, etc.) numerous times (once for each container), instead of once globally?

    Or, per your comment below:

    Since the base image is static, and config is per container, one image can be used to run multiple containers. So if you have a postgres image, you can run many containers on that image. And specify different config for each instance.

    You’d only have two instances of postgres, for example, one for all docker containers and one global/server-wide? Still, that doubles the resources used no?

    Why would you try avoiding it if you understand how it works? It has so many upsides and so few downsides. About the only practical one is using more disk space. It was groundbreaking technology in 2013. Today it’s an old and essential tool.

    Because it seems overkill for a home server. Up until recently all I ran was Samba and a torrent daemon. Why would I install another layer of overhead to manage two applications on one server?

    Because the overhead is practically none, barring the extra disk space. Maybe it’s not worth using it for Samba and Transmission. But involve OpenVPN for Transmission in the mix and things get a lot more complicated if Samba has to keep serving LAN and Transmission has to stop whenever OpenVPN stops. If instead you grab this, the problem is solved by writing one 20-line docker-compose.yml and doing docker-compose up -d:

    ```yaml
    version: '3.3'
    services:
      transmission-openvpn:
        cap_add:
          - NET_ADMIN
        volumes:
          - '/your/storage/path/:/data'
          - '/your/config/path/:/config'
        environment:
          - OPENVPN_PROVIDER=PIA
          - OPENVPN_CONFIG=france
          - OPENVPN_USERNAME=user
          - OPENVPN_PASSWORD=pass
          - LOCAL_NETWORK=192.168.0.0/16
        logging:
          driver: json-file
          options:
            max-size: 10m
        ports:
          - '9091:9091'
        restart: on-failure
        image: haugene/transmission-openvpn
    ```

    A benefit of Docker’s that helps even with a single-service deployment is the packaging side. It allows for running near-arbitrary service versions on top of your host OS, stale, stable, bleeding edge or anything in-between.

    GitHub - haugene/docker-transmission-openvpn: Docker container running Transmission torrent client with WebUI over an OpenVPN tunnel


    Similar story to yours. I was an HP-UX and BSD admin; at some point in the '00s, I stopped self-hosting. Felt too much like the work I was paid to do in the office.

    But then I decided to give it a go in the mid-10s, mainly because I was uneasy about my dependence on cloud services.

    The biggest advantage of Docker for me is the easy spin-up/tear-down capability. I can rapidly prototype new services without worrying about all the cruft left behind by badly written software packages on the host machine.

    The main downside of docker images is that app developers don’t tend to pay a lot of attention to the images they produce beyond shipping their app. While software installed via your distribution benefits from meticulous scrutiny by security teams making sure security issues are fixed in a timely fashion, those fixes rarely trickle down the chain of images that your container ultimately depends on. While your distribution's package manager sets up a cron job to install fixes from the security channel automatically, with Docker you are back to keeping track of this by yourself, hoping that the app developer takes this seriously enough to supply new images in a timely fashion. This multiplies by the number of images, so you are always only as secure as the least well maintained image.

    Most images, including latest, are piss-poor quality from a security standpoint. Because of that, professionals do not tend to grab “off the shelf” images from random sources on the internet. If they do, they pay extra attention to ensure that these containers run in a sufficiently isolated environment.

    Self hosting communities do not often pay attention to this. You’ll have to decide for yourself how relevant this is for you.

    For sure! Most seem to get about the level of review of a random git repo instead of being seriously tested and hardened. I really wish we had more of a source for reliable audits of containers, and flatpaks. Just someone trusted, or a collective, running trivy, clair, sonarqube, etc, posting the results publicly, and having tools like podman/K3s/etc ship sane defaults for checking it against containers on pull.

    Are you familiar with lxc or chroots or bsd jails by any chance? If you are, you probably won’t find docker that much different to use other than a bigger selection of premade images.

    It is kind of sad that some projects are trending towards docker first, but I think learning how to make packages for package managers is also becoming less popular :(

    I think learning how to make packages for package managers is also becoming less popular :(

    Even learning how to do the simplest thing possible that is easy to package by anybody - something like a tarball or zip - is becoming less popular :(

    I learned “creating a zip” the hard way when I submitted an exam but forgot the -r on creation, meaning all the to-review code was gone.

    If you decide to use docker-compose.yml files, which I do recommend, then I’d also highly recommend this script for updating the docker containers.

    It checks each container for updates and then lets you select the containers you would like to update. I just keep it in the main directory with all the other docker container directories.

    github.com/mag37/dockcheck/blob/…/dockcheck.sh

    dockcheck/dockcheck.sh at main · mag37/dockcheck

    The preferred filename is now compose.yaml, see docs.docker.com/compose/…/03-compose-file/
    "Compose file"

    "Understand the compose file."

    Docker Documentation

    Compose also supports docker-compose.yaml and docker-compose.yml for backwards compatibility of earlier versions.

    I doubt they’re going to remove support for the previous filename anytime soon. It would break way too many things.

    Why not just run a watchtower container? Combined with a diun one to send gotify messages to my phone if you’re into that. (I am!)

    Sometimes automated updates are not desirable. I also prefer the simplicity of a bash script over a full container.