I don't know who needs to hear this, but... Kubernetes isn't really solving your organization's tech problems. Y'all just got FOMO. Let's argue (civilly, of course) 😀

#homelab #kubernetes #k8 #selfhosted #selfhosting #containers #howifeelfriday

@train I do like it from a customer PoV (at work). I just don't care about proxies, DNS, resource management, Linux distributions, routing, process isolation, hell, not even what architecture it's running on. I don't have to know any of that. I just need to throw a couple hundred lines of YAML over the wall and poof, 10k more cores in my compile cluster. Could I do that with just VMs and Ansible? Sure. It would be an order of magnitude uglier, though.
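
For illustration, here's a minimal sketch of what "throwing YAML over the wall" can look like: a Kubernetes Job that just asks for cores, without caring which machines provide them. The name, image, and sizes below are made up for the example, not our actual manifests.

apiVersion: batch/v1
kind: Job
metadata:
  name: compile-shard                  # hypothetical name
spec:
  parallelism: 1250                    # 1250 pods x 8 CPUs = the 10k cores
  completions: 1250
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: builder
        image: registry.example.com/build-toolchain:latest   # placeholder image
        command: ["make", "-j8"]
        resources:
          requests:
            cpu: "8"                   # the scheduler finds the cores; no nodes, DNS, or proxies in sight
            memory: 16Gi
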
@mmeier So you are simplifying your deployment just to make your operations more complex? I mean!!!! Sure, but you solved one problem just to introduce dozens more. You don't get to remove DNS, proxies, Linux distros, and all the other stuff from the mix. They are all still there (even if you don't see them), and when systems break you go looking through all of them to find the needle in the stack of needles.

@train No, I don't - that was my entire point, starting my post with "From a customer PoV". 😉

Ours is not an "Oh, I need to run an app, excuse me while I set up a Kubernetes cluster" story. Ours is more: we need to provide compute, 100k or so cores, to 5k engineers. How do we provide that so it's simple for them and they can get on with their actual work?

That's what this was all about. Going back from DevOps to (more or less) Dev/Ops again, just a bit more convenient.

@train One more: It's the standardization that counts here, from both directions: Ops and Dev.

Really doesn't matter what the standard ends up being - just that there is one.

And let's be honest: When you provide "Computers as a service" to a large group of engineers, you could do it yourself, of course - make your own standard.

But it would be crappier than Kubernetes. And with way less tooling around it. And with exactly the same complexity, given that you need the same features.

@mmeier Standards are an easy thing to hide behind when deploying Kubernetes. A standard is a process, not a technology. Yeah, you are talking YAML, but so is everything else nowadays. K8s will get old just like Mesos did, and OpenStack before that, and VMware before that. Then the standards change whenever the new unikernel-VM-orchestrator-thingy Google builds comes out.

I understand your argument! I really do! It just feels like we in tech just want to use what's new!

@train There's also still a lot of OpenStack running in those datacenters. And of course Kubernetes will go away one day, like Mesos and the others, but that's true for pretty much all tech. Right now it's a pretty well-supported standard, and not only from the tech side, but also from the "can I hire people for it" side.

Sure, I'm pretty certain there's a lot of bespoke stuff going on in that cluster, but way less than there would be in a similar home-grown thing.

@mmeier Yeah, I don't really subscribe to the "always build it" mentality either. I subscribe more to "don't complicate tech just because Hacker News says it's the next big thing!" That's what it feels like the tech industry has been pushing!

@train Oh, I'm absolutely with you there! Only make it as complex as it needs to be.

It's just that in our case the requirements are already complex, so that train has left the station. 🙄

@mmeier Do those 100k cores go away after the engineers are done with them? If that's the case, then you, sir, are doing tech right! Why not use a cloud provider, like ECS or whatever Google/Azure has, to have them just deploy containers? I'm assuming with 5k engineers you are not using on-prem stuff?

@train Yes and no: there's some autoscaling with switch-offs for energy savings (we are based in Europe), but the HW itself doesn't vanish.
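
A rough sketch of the kind of scale-down knobs meant here, assuming the stock Kubernetes cluster-autoscaler (our actual setup may well be bespoke, and actually powering off on-prem nodes takes extra machinery beyond the autoscaler itself):

# excerpt from a hypothetical cluster-autoscaler Deployment
containers:
- name: cluster-autoscaler
  image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.30.0   # version just for illustration
  command:
  - ./cluster-autoscaler
  - --scale-down-enabled=true               # allow draining idle nodes
  - --scale-down-utilization-threshold=0.5  # nodes below 50% usage become candidates
  - --scale-down-unneeded-time=30m          # ...after being idle for this long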

Because it's cheaper to do it ourselves. They tried the cloud; I believe the project was with GCP back then. Turns out it's not actually cheaper than doing it yourself when your average utilization is as frigging high as ours.

It's all completely on-prem, "private cloud".

@mmeier In that case you need Kubernetes! My arguments do not pertain to you! Hahaha 😂... I mean, you are a perfect example of the argument I'm trying to make. You have scale problems!!! Those problems require scale solutions. Most K8s deployments probably don't see anything like what you need. I mean, we are deploying this thing everywhere! Then we get mad when it breaks and we can't fix it.

@train Yep, that's what I meant to convey: when you've got a large pool of resources to make available to a large number of engineers for diverse purposes, Kubernetes is not the worst tech to reach for.

Mind you, it could still be incredibly bad; I've never looked much behind the curtain. It's very possible that our Ops team is having sleepless nights because they received a mail from me saying I'll shortly be getting another 2k cores, and to please prepare themselves... 😅