I don't know who needs to hear this but.... Kubernetes isn't really solving your organization's tech problems.. Y'all just got FOMO. Let's argue (civilly, of course) 😀

#homelab #kubernetes #k8 #selfhosted #selfhosting #containers #howifeelfriday

@train what would you recommend for #selfhosting instead of #k8s?
@hmiron For selfhosting... in a homelab... docker-compose, Portainer, Proxmox, LXD/LXC, almost anything before K8s. K8s comes with a lot of benefits, I can recognize that.. Many of us are not using it because it solves a particular tech problem (it actually causes a bunch of others). We are using it because it's popular and everyone else is using it. I would only recommend K8s in the homelab for learning the tooling and the concepts.
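For comparison, this is roughly the scale of a docker-compose setup for a single homelab app. It's a made-up sketch; the service name, image, ports, and paths are all illustrative stand-ins:

```yaml
# Hypothetical docker-compose.yml for one self-hosted app.
# The image and paths are placeholders, not a real recommendation.
services:
  wiki:
    image: nginx:alpine              # stand-in for whatever app you run
    ports:
      - "8080:80"                    # host port 8080 -> container port 80
    volumes:
      - ./site:/usr/share/nginx/html:ro
    restart: unless-stopped          # survive reboots without an orchestrator
```

One file, `docker compose up -d`, done. That's the complexity gap being argued about here.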

@train As a noob I really benefit from the infrastructure created by #truecharts on #truenas. Once you set it up, any app you deploy has a reverse proxy before it, SSL certs, split DNS for local and public access, and more I haven't played with.

I certainly wouldn't know how to achieve this from scratch. Heck, I wouldn't even have known about these best practices.

@hmiron @train Yes, this is all nice and shiny. But when #TrueCharts issues a breaking change, again, as they did twice this year, you find out pretty quickly that it sucks to not know about that stuff to begin with, as there is only very little documentation and the support channel is a Discord server. (Both are red flags for something you have to rely on.)
Don't get me wrong: if you want to learn this stuff, go for TrueCharts 👍. If you (just) need apps right now, don't 🤷‍♂️.
@train Ok but what if it is? What alternatives do you suggest that don't come with heavy provider lock-in?
@jgillich The lock-in doesn't come with the platform. You are pretty locked in if you use K8s! You end up developing tooling, integrations, processes, and pipelines all around kubectl. You're locked in anyway!

@train That's like saying software locks you into using computers.. not at all what lock-in means.

K8s operators allow you to spin up an HA DB cluster in a couple of hours; I see no alternative to this that isn't a proprietary cloud provider. How can you say it doesn't solve problems when it so clearly does?
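To make the claim concrete: with a Postgres operator such as CloudNativePG, the user-facing part of an HA cluster is a manifest roughly like this. A sketch only; it assumes the operator is already installed, and the name and storage size are made up:

```yaml
# Hypothetical CloudNativePG manifest: the operator reconciles this
# into a 3-instance HA Postgres cluster with streaming replication.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-db        # illustrative name
spec:
  instances: 3            # one primary, two replicas; failover is automatic
  storage:
    size: 10Gi            # illustrative volume size
```

The point of the argument: everything else (replication, failover, backups wiring) is the operator's job, not yours.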

@jgillich Running databases in containers is a wild concept to me! Call me old-fashioned. You are running a stateful workload on a system whose whole purpose (at least it started out that way) was to be stateless. I'm sure the technology has advanced and works perfectly now. The point I'm trying to make is that because it feels cool.. we want to be cool too!

Fair enough, I guess I didn't understand your definition of lock-in. If we want to split hairs, then we are all locked in.

@train K8s itself is a stateful application; I don't agree that it was designed to be stateless. It took many years to develop solutions like StatefulSets, CSI and the upcoming COSI. K8s is by no means perfect nor feature-complete; file systems took 10+ years to get good too.

We needed a programmable and extendable operating system, and that's what K8s is. Scripting SSH commands with Ansible is a poor way to do infrastructure in my opinion.

@jgillich I can agree with you there that it's a stateful system as it is. The point I guess I really want to drive home is that we as technologists need to start looking at problems and finding solutions that make sense for us. We tend to look at problems and just throw solutions at them that make sense to others. K8s isn't for everyone, and the complexity that stateful applications bring is daunting even to people who sell it for a living. SSH with Ansible is all people need sometimes. It's OK!
@jgillich As far as alternatives go: if the tooling is right for you, then hey, I'm just spewing hot air and you did the right thing. There are many of us out there whose day-to-day task isn't making Kubernetes (the platform) as robust as possible. We have other shit to do. So let's not complicate the shit that runs the thing that makes you money!
@train I do like it from a customer PoV (at work). I just don't care about proxies, DNS, resource management, Linux distributions, routing, process isolation, hell, not even what architecture it's running on. I don't have to know any of that. I just need to throw a couple hundred lines of YAML over the wall and poof, 10k more cores in my compile cluster. Could I do that with just VMs and Ansible? Sure. It would be an order of magnitude uglier, though.
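The "YAML over the wall" in question looks roughly like this. A made-up sketch of a plain Deployment; the image, replica count, and CPU numbers are illustrative, not the poster's actual config:

```yaml
# Hypothetical compute request: ask for replicas and cores, let the
# platform decide which machines, distros, and networks are involved.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: compile-worker
spec:
  replicas: 100                    # illustrative scale
  selector:
    matchLabels:
      app: compile-worker
  template:
    metadata:
      labels:
        app: compile-worker
    spec:
      containers:
        - name: worker
          image: registry.example.com/compile-worker:latest  # placeholder
          resources:
            requests:
              cpu: "16"            # 100 replicas x 16 cores = 1.6k cores
```

The customer writes this; the Ops team owns everything underneath it. That division is what the next few posts argue about.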
@mmeier So you are simplifying your deployment just to make your operations more complex? I mean!!!! Sure, but you solved one problem just to introduce dozens more. You don't get to remove DNS, proxies, Linux distros and all the other stuff from the mix. They are all still there (even if you don't see them), and when systems break you go looking at all those systems to try to find the needle in the stack of needles.

@train No, I don't - that was my entire point, starting my post with "from a customer PoV". 😉

Ours is not an "Oh, I need to run an app, excuse me while I set up a Kubernetes cluster" story. Ours is more the story of: we need to provide compute to 5k engineers with 100k or so cores. How do we provide that to them so it's simple for them, so they can get ahead with their actual work?

That's what this was all about. Going back from DevOps to (more or less) Dev/Ops again, just a bit more convenient.

@train One more: It's the standardization that counts here, from both directions: Ops and Dev.

Really doesn't matter what the standard ends up being - just that there is one.

And let's be honest: When you provide "Computers as a service" to a large group of engineers, you could do it yourself, of course - make your own standard.

But it would be crappier than Kubernetes. And with way less tooling around it. And with exactly the same complexity, given that you need the same features.

@mmeier Standards are an easy thing to hide behind when deploying Kubernetes. A standard is a process, not a technology. Yeah, you are talking YAML, but so is everything else nowadays. K8s will get old just like Mesos did, and OpenStack before that, and VMware before that.. Then the standards change whenever the new uni-kernel-VM-orchestrator-thingy Google builds comes out.

I understand your argument! I really do! It just feels like we in tech always want to use what's new!

@train There's also still a lot of OpenStack running in those datacenters. And of course it will go away like Mesos and others. But that's true for pretty much all tech. It's a pretty well-supported standard now. And not only from the tech side, but also from the "can I hire people for it" side.

Sure, I'm pretty certain there's a lot of bespoke stuff going on in that cluster, but way less than what would be going on for a similar home-grown thing.

@mmeier Yeah, I don't really subscribe to the always-build-it mentality either. I subscribe more to: don't complicate tech just because Hacker News is saying it's the next big thing! That's what it feels like the tech industry has been pushing!

@train Oh, I'm absolutely with you there! Only make it as complex as it needs to be.

It's just that in our case, the requirements are already complex, so that train has left the station already. 🙄

@mmeier Do those 100k cores go away after the engineers are done with them? If that's the case, then you, sir, are doing tech right! Why not use a cloud provider like ECS or whatever Google/Azure has, and have them just deploy containers? I'm assuming at 5k engineers you are not using on-prem stuff?

@train Yes and no, there's some autoscaling with switch-offs for energy savings (we are based in Europe), but the HW itself doesn't vanish.

Because it's cheaper to do it ourselves. They tried. Turns out "the cloud" is not actually cheaper than doing it yourself. I believe the project was with GCP back then. Our average utilization is just too frigging high.

It's all completely on-prem, "private cloud".

@mmeier In that case you need Kubernetes! My arguments do not pertain to you! Hahaha 😂.. I mean, you are a perfect example of the argument I'm trying to make.. You have scale problems!!! Those problems require scale solutions.. Most K8s deployments probably don't see anything like what you need. I mean, we are deploying this thing everywhere! Then we get mad when it breaks and we can't fix it.

@train Yepp, that's what I meant to convey: When you've got a large amount of resources to be made available to a large amount of engineers for diverse purposes, Kubernetes is not the worst tech to reach for.

Mind you, it could still be incredibly bad - I never much looked behind the curtain. It's very possible that our Ops team is having sleepless nights because they received a mail from me saying that I will shortly receive another 2k cores, and to please prepare themselves... 😁