What reason would there be to separate etcd out of the Kubernetes manifests on the control plane nodes, but keep it as a native service installed on the same machines that run the control plane?

There's nothing to gain in terms of high availability there.

You still have the same X control plane nodes, each of which also runs an etcd cluster member.

I'm trying to figure out what my predecessor thought while building this Kubernetes environment.

The two etcd topologies mentioned in official K8S docs are:

- integrated etcd ("stacked" in the docs): etcd runs from a Kubernetes manifest, started as containers together with coredns, kube-apiserver and so on

- separated etcd nodes ("external" in the docs): X machines that host etcd as a native service on the OS, with the control plane configured to use them
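For reference, the external-etcd case is what a kubeadm `ClusterConfiguration` expresses via `etcd.external`. A minimal sketch, with placeholder endpoints and the standard certificate paths (your predecessor's actual endpoints and cert locations will differ):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    # Placeholder addresses - in the setup described above these would
    # point at the control plane nodes themselves, since etcd runs there
    # as a native service rather than on dedicated machines.
    endpoints:
      - https://10.0.0.10:2379
      - https://10.0.0.11:2379
      - https://10.0.0.12:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```

With stacked etcd this whole `external` section is absent and kubeadm generates a static pod manifest for etcd instead.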

#kubernetes #devop #k8s #etcd

@patnat It used to be a fairly standard pattern back in Kubernetes version... oh, 1.12 or so. At that time, integrated etcd did not yet exist, and it was not uncommon to host etcd on the control plane nodes to prevent VM sprawl, since it was presumed that they had pretty much the same redundancy requirements.