I have a #devops (or what we used to call #sysadmin) question...

I like Docker Swarm for its simplicity and apparent "lightweight" nature. From a user standpoint, you simply define a set of services, and it's not much of a leap to go from a docker-compose file to a full-blown distributed system for a small number of nodes.
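To illustrate how small that leap is (service names and image are hypothetical), the same compose file can carry a Swarm-only `deploy:` section that plain docker-compose ignores:

```yaml
# docker-compose.yml that doubles as a Swarm stack file.
# The `deploy:` block is ignored by plain docker-compose but
# drives replication and rolling updates once on a swarm.
version: "3.8"
services:
  web:
    image: nginx:1.25        # illustrative image
    ports:
      - "80:80"
    deploy:
      replicas: 3            # spread across the swarm's nodes
      update_config:
        parallelism: 1       # roll one task at a time
        delay: 10s
```

Then it's just `docker swarm init` on one node, `docker swarm join ...` on the others, and `docker stack deploy -c docker-compose.yml myapp`.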

The problem is that Docker Swarm appears to be offered only by Docker (tm), and it requires the real Docker (tm) stack, as opposed to what most distros ship today: podman as a Docker replacement (for many good reasons).

And the fact that Docker is owned by Mirantis, whose future seems uncertain, is good reason not to stay.

Is anyone still using Docker Swarm? If not, do you have a lightweight alternative (not Kubernetes)? I've heard not-great things about Nomad.

I feel like this is a huge missing area in the orchestration landscape.

#docker #orchestration #dockerswarm #kubernetes #nomad

@serge A few years back I would have said Rancher v1 (Cattle) was a good, simple alternative, but they've since gone all-in on K8S...

Would Hashicorp Nomad perhaps scratch the itch for you?

@lukewhiting

I've heard not-great things about Nomad, like it gets into weird edge conditions and then just stops working. Have you used it yourself?

@serge I haven't, no... I drank the K8S Kool-Aid after Rancher switched, so I haven't spent much time looking at alternatives.

@lukewhiting

Kubernetes is a lot of complexity for my use case of 3-5 nodes.

@serge 100% agree. My rule for K8S is that if you don't have a 4+ person team to look after it 24/7 as their sole job, then you aren't big enough to need K8S 😅

For a case that small, where you perhaps don't need things like reusable ingress or overlay networking, could you get away with Ansible or Terraform controlling podman directly instead? Deal with the containers more like how we used to treat bare metal / VMs?
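A minimal sketch of that approach, assuming the `containers.podman` collection is installed (the image, port, and group names are hypothetical):

```yaml
# Treat a container like an old bare-metal service: one Ansible task,
# pinned to specific hosts via the inventory, no orchestrator involved.
- hosts: app_nodes
  tasks:
    - name: Run the app under podman like a plain system service
      containers.podman.podman_container:
        name: app
        image: registry.example.com/app:1.4.2   # hypothetical image
        state: started
        restart_policy: always
        ports:
          - "8080:8080"
```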

@lukewhiting

It's funny; that's exactly where I'm leaning: essentially manual orchestration. And I know this well, because 20 years ago my team was managing ~1000 bare-metal hosts (with 62 distinct configurations) this way, including real-time services, developer services, and others.

The host allocation part is easy. What gives me more pause are the benefits orchestration provides in the reconciliation phase, e.g. if you change a container image version in a pod definition in k8s, it will first pull the new container, run it, and add it to the ingress, and only then shut down the old one, repeating that across the nodes.
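That behaviour is what a k8s Deployment's rolling-update strategy spells out declaratively (names and image here are hypothetical): surge a new pod up before tearing an old one down.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app                # hypothetical name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # start one new pod first...
      maxUnavailable: 0    # ...never drop below the desired count
  selector:
    matchLabels: { app: app }
  template:
    metadata:
      labels: { app: app }
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.4.3  # bump this to roll
```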

Doing that dance by hand is the part that was always very unfun. It seems challenging in Ansible: you'd have to write a set of pre and post tasks to pull each node out of the load balancer, etc., and it's not possible *AFAIK!* in Ansible (without something like Ansible Tower) to do a rolling deployment.
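That said, core Ansible (no Tower needed) does ship a `serial` keyword that batches a play across hosts, which gets close to the pre/post pattern above. A rough sketch, assuming HAProxy in front and podman underneath (group, backend, and image names are all hypothetical):

```yaml
# Rolling update, one host at a time, with core Ansible only.
- hosts: app_nodes
  serial: 1                # take one node out of rotation at a time
  pre_tasks:
    - name: Drain this node from the load balancer
      community.general.haproxy:
        state: disabled
        host: "{{ inventory_hostname }}"
        backend: app
      delegate_to: "{{ item }}"
      loop: "{{ groups['lb_nodes'] }}"
  tasks:
    - name: Pull and restart the app container under podman
      containers.podman.podman_container:
        name: app
        image: "registry.example.com/app:{{ app_version }}"
        state: started
        recreate: true     # force replacement with the new image
  post_tasks:
    - name: Re-enable this node in the load balancer
      community.general.haproxy:
        state: enabled
        host: "{{ inventory_hostname }}"
        backend: app
      delegate_to: "{{ item }}"
      loop: "{{ groups['lb_nodes'] }}"
```

If a task fails on the node being rolled, the play stops there, so a bad image only ever takes out one node.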