@arichtman @vwbusguy @mttaggart exposed api endpoints, super secret secrets hanging out in env vars, rbac not configured or not present, public api access, shared usernames, images that are 2-5 years old with trivial kernel privesc bugs, containers built by people who dont security and spread far and wide. its just a risk matryoshka doll full of exploitable surfaces and configs, and all the corners and edges full of "industry best practices", written by non-security people
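
A concrete sketch of the env-var point above: anything placed in a process's environment is inherited by every child process by default, and on Linux it is also readable from `/proc/<pid>/environ` by any process running as the same user. The `DB_PASSWORD` name and value here are made up for illustration.

```python
import os
import subprocess
import sys

# Hypothetical secret dropped into the environment, 12-factor style.
os.environ["DB_PASSWORD"] = "hunter2"

# Any child process - a shelled-out diagnostics script, a crash reporter,
# a plugin - inherits the entire environment unless it is explicitly scrubbed.
leaked = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DB_PASSWORD'])"],
    capture_output=True,
    text=True,
).stdout.strip()

print(leaked)  # -> hunter2
```

Scrubbing here would mean passing an explicit `env=` dict to `subprocess.run` instead of accepting the default inheritance.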

@Viss @arichtman @mttaggart I'm more bothered by the fact that k8s secrets objects aren't actually encrypted (they're just base64 encoded) than scoped injection by env.

https://12factor.net/config
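
A minimal illustration of the base64 point above: this sketches what a Secret object's `data` field actually contains (the `db-creds`/`password` names and value are made up). There is no key and no ciphertext, just an encoding anyone with read access can reverse.

```python
import base64

# Roughly what `kubectl create secret generic db-creds \
#   --from-literal=password=hunter2` stores in the Secret's `data` field:
password = "hunter2"
secret_manifest = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "db-creds"},
    "data": {"password": base64.b64encode(password.encode()).decode()},
}

print(secret_manifest["data"]["password"])  # -> aHVudGVyMg== (encoded, not encrypted)

# Anyone who can read the object - or etcd, absent encryption at rest -
# recovers the plaintext in one call:
recovered = base64.b64decode(secret_manifest["data"]["password"]).decode()
print(recovered)  # -> hunter2
```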

@vwbusguy @arichtman @mttaggart one time i made a very attractive lady literally snotlaugh by saying "kubernetes appears to have been invented to solve a litany of problems that nobody actually appears to have"
@Viss @arichtman @mttaggart This just tells me you didn't have the wonderful joy of trying to run Docker Swarm in production in its early days and I'm happy for you in that regard. Sweet glory did Kubernetes solve a lot of problems compared to that.
@vwbusguy @arichtman @mttaggart this feels like one of those sorta 'if you go back further in time, you see that docker actually introduced a lot of problems, which were then fixed by k8s' scenario, so if your context window begins at docker, then yeah its a 'measurable improvement', but if it begins 'before you installed docker', then you're still at a net negative
@Viss @arichtman @mttaggart To be fair, you're not wrong for a whole lot of use cases. If you built your empire on a LAMP stack, that doesn't translate well in a scalable way in a Kubernetes world because it was stateful and built for vertical scaling. Forcing that into Kubernetes means retooling some core architectural things for the stack for an outcome that might not be demonstrably better.
@vwbusguy @arichtman @mttaggart unless youre dealing with like, dozens or hundreds of containers that are geographically distributed, i get the impression kubernetes is just massive overhead and lots of extra attack surface. I can see how in narrow circumstances it can be useful, but so far literally every single k8s deployment ive seen is "way more overhead and complexity and attack surface, for not enough benefit"

@Viss @arichtman @mttaggart Again, I agree with you that this is true for a lot of use cases and shops. That said, you can't pretend that things were gloriously secure en masse in the older days of LAMP, Tomcat, and ASPX. Moving to Kubernetes in some cases allowed for better hygiene in general around secrets, hardening, and idempotency. For stuff like multi-tenant JupyterHub, Kubernetes is highly practical. For serving your company's blog - maybe not.

https://jupyter.org/hub

@vwbusguy @arichtman @mttaggart thats the tug though. everyones hamfisting it in everywhere, using it for their core business infra or making it part of ci/cd pipelines. nobody is using it 'the right way'

@Viss @arichtman @mttaggart CI/CD pipelines makes sense - not having designated hardware sit idle when workers aren't running, the worker agents can go away when the job is done leaving only intended artifacts meaning less attack vector for workers, idempotency, etc.

Of course you don't *have* to do it this way, but there's a clear case to be made.

@vwbusguy @arichtman @mttaggart that description is not how i have seen it deployed, though
@Viss @arichtman @mttaggart That's how I have it deployed 😀 . All on prem with Jenkins and Rancher RKE2 k8s backends.
@vwbusguy @Viss @arichtman This conversation is quite the piece of evidence that you are the exception to the rule. Your knowledge is impressive, and rare - certainly more so than well-run orchestrated container deployments. Y'all are both right.

@mttaggart @vwbusguy @arichtman this is just the 2024 version of

- there is a 'way to do it right'
- most people do not do it that way
- the thing is almost certainly being used when it doesnt need to be
- the folks deploying the thing in most cases are not familiar enough with it, or architecture in general, to adequately harden it
-- or they just dont care to, usually because compliance

it used to be lamp, now its containers

@mttaggart @vwbusguy @arichtman i guess the tl;dr for me is:

"if you give people a giant red george jetson button that does a thing, then people will just instinctively mash that button without ever considering the consequences. and you end up with a bunch of output that the button masher wasnt expecting and doesnt know what to do with, which often times ends up as someone elses problem, who wont be happy with this arrangement"

@Viss @mttaggart @arichtman I think it often happens more like this:
@vwbusguy @mttaggart @arichtman nailed it. but now with k8s you can cloud scale that debt at warp factor 9 :D
@vwbusguy @mttaggart @arichtman just like java was 'write once, exploit everywhere', now you can take "architectural and technical misconfigurations and lack of hardening and cloud scale it" :D

@Viss @vwbusguy @arichtman I really do think a giant piece of it—especially in the tech industry/startup space itself—is a decision-making process that assumes:

  • Old == bad
  • We will be the next 1M user unicorn and should build for that today.
    @mttaggart @vwbusguy @arichtman 100% of the folks who follow that logic pathway are vc types or money types or upper-level-execs, who are solving for "return on their personal cash money investment" and not "to build some shit that actually works, or has longevity, or to solve some problem"

    @Viss @vwbusguy @arichtman I'm not so sure about that. Having lived in the dev space for long enough, the dev/founder folks do this as well.

    @mttaggart @vwbusguy @arichtman a lot of founders, especially founders who set out to score vc money tend to think the same way as the vc.

    ive done a loooooooot of M&A assessment work, and some of the environments ive seen smack of those scenes in home alone where its all cardboard cutouts, strings and shadowpuppets to give the illusion that some shit exists there

    @mttaggart @Viss @arichtman

    1. Containers are old. They're basically jails, and FreeBSD jails and Solaris Zones predate Docker by roughly a decade.
    2. Getting this right is a tricky problem. Arguably one viable reason *to* use public cloud is that you don't expect to scale big soon, so the cost to do so could be relatively low in OpEx dollars.

    @mttaggart @Viss @arichtman The "magic" about containers in either direction tends to go away once you realize that containers are just Linux processes. That's all they are - wrapped in cgroups and namespaces, with filesystem isolation like a jail. That's why when you run `ps` on the host you see the actual container processes and not a hypervisor, etc. Requests and limits? That's CFS.
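
The "requests and limits? That's CFS" point can be made concrete: a container CPU limit is enforced by writing a quota against the CFS bandwidth period (100ms by default), not by any hypervisor. The conversion helper below is a toy sketch of that arithmetic, not kubelet code.

```python
# Kernel default CFS bandwidth period: 100ms.
DEFAULT_CFS_PERIOD_US = 100_000

def cpu_limit_to_cfs_quota(limit: str, period_us: int = DEFAULT_CFS_PERIOD_US) -> int:
    """Map a Kubernetes CPU quantity ('500m' or '2') to a cfs_quota_us value."""
    if limit.endswith("m"):
        millicores = int(limit[:-1])
    else:
        millicores = int(float(limit) * 1000)
    # quota_us = cores * period_us: how many microseconds of CPU time the
    # group may consume per period before the scheduler throttles it.
    return millicores * period_us // 1000

print(cpu_limit_to_cfs_quota("500m"))  # -> 50000: half a core per 100ms period
print(cpu_limit_to_cfs_quota("2"))     # -> 200000: two full cores
```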

    @vwbusguy @Viss @arichtman While the concept of containers is old, I think we can both agree that the "productization" of them is less so.

    And as far as scale, I'm referring specifically to choosing a container orchestrator as the deployment target from day one.

    @vwbusguy @Viss @arichtman Fair enough. To what do you attribute the rise of Docker?
    @mttaggart @vwbusguy @arichtman im gonna vote 'entirely 100% hype'. because thats what i saw in the infosec space. lots of people with little to no technical acumen suddenly going 500% in on docker and labeling themselves experts in it, while at the same time having little to no actual experience at the linux command line
    @Viss @mttaggart @arichtman That's also true. Much in the same way all the junior devs are putting AI on their resume today when their core experience is sticking an OpenAI token into some code they copy and pasted off the internet to make a chat bot.

    @vwbusguy @mttaggart @arichtman ive been telling folks lately that if they know even a modest amount of what we all would consider 'intro-level linux command line stuff, with maybe some bash or fun vi tricks sprinkled in' they are effectively wizards by comparison to their peers.

    its absolutely bananas how much outright fraud there is when looking at resumes or skillsets.

    i feel like every technical interview should involve a hands-on-keyboard 'we take your phone away' component to smoketest

    @mttaggart @Viss @arichtman Ripe timing: the advent of nodejs made stateless applications more mainstream, plus Docker's complete lack of a coherent business model meant others managed to productize Docker before Docker itself could figure out how to.

    @vwbusguy @Viss @arichtman Node resonates because that is a lot of how I got started using it. But it wasn't just hype. There were real problems of deployability and reproducibility that it solved for Linux admins and developers targeting Linux servers.

    I'll cop to missing Solaris on account of being still in school and not being a BSD expert, but when I was running school IT systems, Docker arrived and immediately solved longstanding complications.

    @vwbusguy @Viss @arichtman And I wasn't alone. I distinctly remember the conversation amongst a lot of working Linux folks at the time being one of excitement and optimism.

    @mttaggart @Viss @arichtman Indeed. In context, Red Hat had bought Qumranet and was competing with Xen, VMWare, and VirtualBox and saying things like you could run 5 VMs on Red Hat for the cost of 3 on VMWare, etc. Hypervisors were a huge deal. OpenStack vs Eucalyptus was the big hype.

    On top of that, proprietary PaaS like Heroku was huge.

    Docker came along, in the midst of all of that discussion, as a way to run VM-like workloads with the low overhead of a PaaS.

    @mttaggart @Viss @arichtman Docker was way less complicated to deploy than something like Eucalyptus or OpenStack and you could run it on your existing Linux servers instead of a proprietary PaaS or something awkward like Red Hat OpenShift 2 was.

    Now you also had a way for a developer to actually ship what "works on my laptop" to the server with more assurances than before.