GitOps for docker compose stacks

https://piefed.social/post/1283831

Komodo works really well for this and is fairly easy to set up.
That’s what I’m currently implementing. Here’s a cool guide: nickcunningh.am/…/how-to-automate-version-updates…
How To: Automate version updates for your self-hosted Docker containers with Gitea, Renovate, and Komodo

In this guide I will go over how to automatically search for and be notified of updates for container images every night using Renovate, apply those updates by merging pull requests for them in Gitea, and automatically redeploy the updated containers using Komodo.

Nick Cunningham
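The Renovate half of that guide boils down to a repo config along these lines — a minimal sketch, assuming your compose files live in the repo Renovate watches; the nightly schedule and automerge rule are illustrative choices, not taken from the guide:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "enabledManagers": ["docker-compose"],
  "schedule": ["before 6am"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    }
  ]
}
```

With automerge on for minor/patch bumps, Renovate opens and merges the PRs in Gitea on its own, and Komodo picks up the new commit and redeploys; major bumps stay as open PRs for you to review.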

At one of my clients I use GitLab CI with Ansible. It took 3 days to set up and requires tinkering. But all in all, I like the versatility, consistency, and transparency of this approach.

If I were starting over, I’d use pyinfra instead of Ansible, but that’s a minor difference.
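The GitLab-CI-driving-Ansible setup described above could look roughly like this — a hypothetical `.gitlab-ci.yml` sketch, where the image tag, inventory path, and playbook name are all placeholders:

```yaml
# Hypothetical pipeline: run the deploy playbook on every push to main
stages:
  - deploy

deploy_stacks:
  stage: deploy
  image: alpine:3.20
  before_script:
    - apk add --no-cache ansible openssh-client
  script:
    - ansible-playbook -i inventory/hosts.ini deploy.yml
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```

The transparency mentioned above comes from exactly this: the whole deploy path is one file in the repo, and every run is logged in the CI history.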

I’m wondering: after how many containers does GitOps make sense? I have a dozen containers, and I check for updates once a month manually. I update the compose/Docker files manually and up my containers. In stages, because my git server and my container registry are also containers. Also, my dev is my prod env.
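For reference, the manual monthly pass described above is roughly this per stack — the directory path is just an example:

```shell
# Manual update pass for one compose stack
cd /opt/stacks/myapp        # hypothetical stack directory
docker compose pull         # fetch newer images for the tags in the compose file
docker compose up -d        # recreate only containers whose image changed
docker image prune -f       # optionally reclaim space from superseded images
```

With a dozen stacks this stays tolerable; the automation above mostly replaces the “remember to do this monthly” part.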

I think it depends on the rate of change rather than the number of containers.

At home I do things manually, as things change maybe 3 or 4 times a year.

Professionally, I usually do set up automated DevOps, because updates and deployments happen almost daily.

I feel like, for me at least, GitOps for containers is peace of mind. I run a small Kubernetes cluster as my home lab, and all the configs are in git. If need be, I know (because I tested it) that if something happens to the cluster and I lose it all, I can spin up a new cluster, apply the configs from git, and be back up and running. Because I do deployments directly from git, I know that everything in git is up to date and versioned, so I can roll back.
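The rebuild-from-git test described above amounts to something like this — repo URL, paths, and kustomize layout are all hypothetical:

```shell
# Disaster-recovery drill: fresh cluster, re-apply everything from git
git clone https://git.example.com/homelab/cluster-config.git
cd cluster-config
kubectl apply -k clusters/home/   # assumes manifests organized with kustomize

# Rolling back a bad change is the same move at an older commit
git checkout <good-commit>
kubectl apply -k clusters/home/
```

Actually running the drill, rather than assuming it works, is what turns “it’s all in git” into real peace of mind.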

I previously ran a set of Docker containers with Compose and then Swarm, and I always worried something wouldn’t be recoverable. Adding GitOps here reduced my “what if?” quotient tremendously.

How many hosts do you manage? What k8s tools do you use? I have just one host. I use bind mounts for container-generated config/data/cache in Docker Compose, for which I don’t have a backup; if it’s gone, I have to start from scratch. But I try to keep most config in git.

Currently, I have a 3-node Proxmox cluster with 6 kube nodes on it (3 masters, 3 workers). This lets me do things like migrate services off a host so I can take it out, do upgrades/maintenance, and put it back without hearing about downtime from the family/friends.
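On Kubernetes, the “migrate services off, maintain, put it back” dance from above is just drain and uncordon — node name here is an example:

```shell
# Evacuate a worker before host maintenance
kubectl drain worker-2 --ignore-daemonsets --delete-emptydir-data
# ...do the Proxmox/host upgrade, reboot, etc...
kubectl uncordon worker-2   # let pods schedule back onto it
```

With 3 workers and replicated services, the family never notices the gap.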

For storage, I’ve got a Synology NAS with NFS set up, and the pods are configured to use that for their storage if they need it (so, Jellyfin, Immich, etc.). I do regular backups of the NAS with rsync. So if that goes down, I can restore or stand up a new NAS with NFS and it’ll be back to normal.
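Pointing pods at NAS storage like that is typically a static NFS PersistentVolume plus a claim — a sketch where the server IP, export path, namespace, and sizes are all made up:

```yaml
# Hypothetical static NFS volume for a media app
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-media
spec:
  capacity:
    storage: 500Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    server: 192.168.1.10      # NAS address (example)
    path: /volume1/media      # NFS export (example)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-media
  namespace: media
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ""        # empty class so it binds to the static PV above
  resources:
    requests:
      storage: 500Gi
```

Because the data lives on the NAS rather than the nodes, replacing the NAS (restored from rsync backups) brings the pods’ storage back without touching the cluster.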

K3s with Flux inside. There’s a fun video on YouTube from the GOTO conference, from a guy with a nice, easy repo. Might be a bit much, but… not sure of anything comparable for Compose and GitOps.
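Getting Flux watching a repo is a single bootstrap command — owner, repo name, and path here are placeholders, and this assumes GitHub (Flux has equivalent subcommands for other forges):

```shell
# Install Flux into the K3s cluster and point it at a config repo
flux bootstrap github \
  --owner=my-user \
  --repository=homelab \
  --path=clusters/k3s \
  --personal
```

From then on, anything merged under that path in the repo gets reconciled into the cluster automatically.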