I don't know who needs to hear this but.... Kubernetes isn't really solving your organization's tech problems. Y'all just got FOMO. Let's argue (civilly, of course)!
#homelab #kubernetes #k8 #selfhosted #selfhosting #containers #howifeelfriday
@train As a noob I really benefit from the infrastructure created by #truecharts on #truenas . Once you set it up, any app you deploy has a reverse proxy in front of it, SSL certs, split DNS for local and public access, and more I haven't played with.
I certainly wouldn't know how to achieve this from scratch. Heck, I wouldn't even have known about these best practices.
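(To make that concrete, here's a minimal sketch of the kind of per-app plumbing those charts set up for you, assuming a typical Ingress + cert-manager setup. Every name and hostname below is a placeholder, not actual TrueCharts output.)

```yaml
# Hypothetical sketch: the reverse proxy (Ingress) and TLS cert wiring
# that gets generated per app. "myapp" and "myapp.example.com" are
# placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    # cert-manager (a common choice for this job) issues the SSL cert
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts: [myapp.example.com]
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```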
@train That's like saying software locks you into using computers... that's not at all what lock-in means.
K8s operators let you spin up an HA DB cluster in a couple of hours; I see no alternatives to this that aren't proprietary cloud offerings. How can you say it doesn't solve problems when it so clearly does?
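(A minimal sketch of what that looks like, assuming the CloudNativePG operator is installed; other Postgres operators like Zalando's or Crunchy's look similar. Names and sizes are placeholders.)

```yaml
# Hypothetical example: ask the operator for an HA Postgres cluster.
# The operator handles the replication and failover plumbing behind
# this one resource.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-ha
spec:
  instances: 3   # one primary, two streaming replicas
  storage:
    size: 10Gi
```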
@jgillich Running databases in containers is a wild concept to me! Call me old fashioned. You're running a stateful workload on a system whose whole purpose (at least it started out that way) was to be stateless. I'm sure the technology has advanced and works perfectly now. The point I'm trying to make is that because it feels cool, we want to do it too!
Fair enough, I guess I didn't understand your definition of lock-in. If we want to split hairs, then we are all locked in.
@train K8s itself is a stateful application; I don't agree that it was designed to be stateless. It took many years to develop solutions like StatefulSets, CSI, and the upcoming COSI. K8s is by no means perfect nor feature-complete; file systems take 10+ years to get good too.
We needed a programmable and extendable operating system, and that's what K8s is. Scripting SSH commands with Ansible is a poor way to do infrastructure in my opinion.
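(To illustrate the StatefulSet primitive mentioned above, a minimal sketch: each replica gets a stable identity (db-0, db-1, ...) and its own persistent volume, provisioned through a CSI driver. All names, images, and sizes here are placeholders.)

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db   # headless service that gives pods stable DNS names
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data   # each pod gets its own PVC: data-db-0, data-db-1, ...
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```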
@train No, I don't - that was my entire point, starting my post with "From a customer PoV".
Ours is not an "Oh, I need to run an app, excuse me while I set up a Kubernetes cluster" story. Ours is more the story of: we need to provide compute to 5k engineers with 100k or so cores. How do we provide that to them so it's simple for them, so they can get ahead with their actual work?
That's what this was all about. Going back from DevOps to (more or less) Dev/Ops again, just a bit more convenient.
@train One more: It's the standardization that counts here, from both directions: Ops and Dev.
Really doesn't matter what the standard ends up being - just that there is one.
And let's be honest: When you provide "Computers as a service" to a large group of engineers, you could do it yourself, of course - make your own standard.
But it would be crappier than Kubernetes. And with way less tooling around it. And with exactly the same complexity, given that you need the same features.
@mmeier Standards are an easy thing to hide behind when deploying Kubernetes. A standard is a process, not a technology. Yeah, you are talking YAML, but so is everything else nowadays. K8s will get old just like Mesos did, and OpenStack before that, and VMware before that. Then the standards change whenever the new uni-kernel-VM-orchestrator-thingy Google builds comes out.
I understand your argument! I really do! It just feels like we in tech want to use what's new!
@train There's also still a lot of OpenStack running in those datacenters. And of course it will go away like Mesos and the others did. But that's true for pretty much all tech. It's a pretty well supported standard now. And not only from the tech side, but also from the "can I hire people for it" side.
Sure, I'm pretty certain there's a lot of bespoke stuff going on in that cluster, but way less than what would be going on for a similar home-grown thing.
@train Oh, I'm absolutely with you there! Only make it as complex as it needs to be.
It's just that in our case, the requirements are already complex, so that train has left the station already.
@train Yes and no, there's some autoscaling with switch-offs for energy savings (we are based in Europe), but the HW itself doesn't vanish.
Because it's cheaper to do it ourselves. They tried. Turns out "the cloud" is not actually cheaper than doing it yourself. I believe the project was with GCP back then. Our average utilization is just too frigging high.
It's all completely on-prem, "private cloud".
@train Yepp, that's what I meant to convey: when you've got a large amount of resources to be made available to a large number of engineers for diverse purposes, Kubernetes is not the worst tech to reach for.
Mind you, it could still be incredibly bad - I never much looked behind the curtain. It's very possible that our Ops team is having sleepless nights because they received a mail from me saying that I will shortly receive another 2k cores, and to please prepare themselves...