I don't know who needs to hear this but... Kubernetes isn't really solving your organization's tech problems. Y'all just got FOMO. Let's argue (civilly, of course) 😀
#homelab #kubernetes #k8 #selfhosted #selfhosting #containers #howifeelfriday
@train No, I don't - that was my entire point, starting my post with "From a customer PoV". 😉
Ours is not an "Oh, I need to run an app, excuse me while I set up a Kubernetes cluster". Ours is more the story of: we need to provide compute to 5k engineers with 100k or so cores. How do we provide that to them so it's simple for them, so they can get on with their actual work?
That's what this was all about. Going back from DevOps to (more or less) Dev/Ops again, just a bit more convenient.
@train One more: It's the standardization that counts here, from both directions: Ops and Dev.
Really doesn't matter what the standard ends up being - just that there is one.
And let's be honest: When you provide "Computers as a service" to a large group of engineers, you could do it yourself, of course - make your own standard.
But it would be crappier than Kubernetes. And with way less tooling around it. And with exactly the same complexity, given that you need the same features.
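To make the standardization point concrete: the value is that every team describes its workload through the same small set of fields, whatever the app actually does. Here's a minimal sketch in Python that builds Kubernetes `apps/v1` Deployment manifests as plain dicts — the app names, images, and resource numbers are hypothetical placeholders, not anything from the thread:

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 2, cpu: str = "500m") -> dict:
    """Build a minimal Kubernetes Deployment manifest as a plain dict.

    The point of the sketch: every team expresses its workload through the
    same handful of fields, regardless of what the app actually does.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "resources": {"requests": {"cpu": cpu}},
                    }],
                },
            },
        },
    }

# Two very different workloads, one identical shape for Ops to operate on.
# (Names and images are made up for illustration.)
web = deployment_manifest("web-frontend", "registry.example.com/web:1.4")
batch = deployment_manifest("report-runner", "registry.example.com/reports:0.9",
                            replicas=1, cpu="2")
print(json.dumps(web, indent=2))
```

A home-grown platform would need an equivalent schema anyway — it just wouldn't come with kubectl, Helm, and a hiring market attached.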
@mmeier Standards are an easy one to hide behind when deploying Kubernetes. Standards are a process, not a technology. Yeah, you're talking YAML, but so is everything else nowadays. K8s will get old just like Mesos did, and OpenStack before that, and VMware before that. Then the standards change whenever the new uni-kernel-VM-orchestrator-thingy Google builds comes out.
I understand your argument! I really do! It just feels like we in tech always want to use what's new!
@train There's also still a lot of OpenStack running in those datacenters. And of course it will go away, like Mesos and the others - but that's true for pretty much all tech. It's a pretty well-supported standard now, and not only from the tech side, but also from the "can I hire people for it" side.
Sure, I'm pretty certain there's a lot of bespoke stuff going on in that cluster, but way less than what would be going on for a similar home-grown thing.
@train Oh, I'm absolutely with you there! Only make it as complex as it needs to be.
It's just that in our case, the requirements are already complex, so that train has left the station. 🙄
@train Yes and no, there's some autoscaling with switch-offs for energy savings (we are based in Europe), but the HW itself doesn't vanish.
Because it's cheaper to do it ourselves. They tried. Turns out "the cloud" is not actually cheaper than doing it yourself. I believe the project was with GCP back then. Our average utilization is just too frigging high.
It's all completely on-prem, "private cloud".
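The "utilization is too high for the cloud to win" argument can be sketched as back-of-the-envelope arithmetic. In a simplified model, cloud bills per core-hour actually used, while on-prem hardware costs roughly the same whether busy or idle - so there's a break-even utilization above which owning wins. All rates below are hypothetical placeholders, not real GCP or hardware prices:

```python
# Back-of-the-envelope sketch of "high utilization favors on-prem".
# ALL rates are hypothetical placeholders, not real cloud or hardware prices.

CLOUD_PRICE_PER_CORE_HOUR = 0.04   # hypothetical on-demand rate
ONPREM_COST_PER_CORE_HOUR = 0.015  # hypothetical amortized HW + power + staff
HOURS_PER_MONTH = 730

def monthly_cost(cores: int, utilization: float) -> tuple[float, float]:
    """Return (cloud, onprem) monthly cost for a fleet at a given utilization.

    Simplification: cloud charges only for core-hours actually consumed;
    on-prem charges for the whole fleet around the clock, busy or not.
    """
    cloud = cores * utilization * HOURS_PER_MONTH * CLOUD_PRICE_PER_CORE_HOUR
    onprem = cores * HOURS_PER_MONTH * ONPREM_COST_PER_CORE_HOUR
    return cloud, onprem

# Above this utilization, paying per used core-hour exceeds owning the fleet.
breakeven = ONPREM_COST_PER_CORE_HOUR / CLOUD_PRICE_PER_CORE_HOUR  # 0.375

cloud_hi, onprem = monthly_cost(100_000, 0.80)  # heavily used fleet
cloud_lo, _ = monthly_cost(100_000, 0.20)       # mostly idle fleet
print(f"break-even utilization: {breakeven:.0%}")
print(f"at 80% utilization: cloud ${cloud_hi:,.0f}/mo vs on-prem ${onprem:,.0f}/mo")
print(f"at 20% utilization: cloud ${cloud_lo:,.0f}/mo vs on-prem ${onprem:,.0f}/mo")
```

With these made-up rates the crossover sits at 37.5% utilization - below it the cloud's pay-per-use wins, above it the flat on-prem cost wins, which is the shape of the trade-off the thread describes.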
@train Yepp, that's what I meant to convey: when you've got a large pool of resources to make available to a large number of engineers for diverse purposes, Kubernetes is not the worst tech to reach for.
Mind you, it could still be incredibly bad - I never much looked behind the curtain. It's very possible that our Ops team is having sleepless nights because they received a mail from me saying that I will shortly receive another 2k cores, and to please prepare themselves...😅