@Viss @arichtman @mttaggart I'm more bothered by the fact that k8s Secret objects aren't actually encrypted by default (they're just base64 encoded) than by scoped injection via env vars.
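To make that concrete: without encryption-at-rest configured, anyone who can read the Secret object can recover the plaintext, because base64 is an encoding, not encryption. A minimal sketch (the Secret name and value here are made up for illustration):

```shell
# With cluster access, reading a Secret's "encrypted" value is just:
#   kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 -d
# ("db-creds" is a hypothetical Secret name.)
# The stored data reverses with a single decode step:
echo 'aHVudGVyMg==' | base64 -d   # prints: hunter2
```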
@Viss @arichtman @mttaggart Again, I agree with you that this is true for a lot of use cases and shops. That said, you can't pretend that things were gloriously secure en masse in the older days of LAMP, Tomcat, and ASPX. Moving to Kubernetes in some cases allowed for better hygiene in general around secrets, hardening, and idempotency. For stuff like multi-tenant JupyterHub, Kubernetes is highly practical. For serving your company's blog - maybe not.
@Viss @arichtman @mttaggart CI/CD pipelines make sense - designated hardware doesn't sit idle when no jobs are running, the worker agents can go away when the job is done, leaving only the intended artifacts and a smaller attack surface for workers, idempotency, etc.
Of course you don't *have* to do it this way, but there's a clear case to be made.
@mttaggart @vwbusguy @arichtman this is just the 2024 version of
- there is a 'way to do it right'
- most people do not do it that way
- the thing is almost certainly being used when it doesn't need to be
- the folks deploying the thing in most cases aren't familiar enough with it, or with architecture in general, to adequately harden it
-- or they just don't care to, usually because of compliance
it used to be LAMP, now it's containers
@mttaggart @vwbusguy @arichtman I guess the tl;dr for me is:
"if you give people a giant red George Jetson button that does a thing, then people will just instinctively mash that button without ever considering the consequences. and you end up with a bunch of output that the button-masher wasn't expecting and doesn't know what to do with, which oftentimes ends up as someone else's problem, who won't be happy with this arrangement"
@Viss @vwbusguy @arichtman I really do think a giant piece of it—especially in the tech industry/startup space itself—is a decision-making process that assumes:
@mttaggart @vwbusguy @arichtman a lot of founders, especially founders who set out to score VC money, tend to think the same way as the VCs.
I've done a loooooooot of M&A assessment work, and some of the environments I've seen smack of those scenes in Home Alone where it's all cardboard cutouts, strings, and shadow puppets to give the illusion that some shit exists there
1. Containers are old. They're basically jails, and Solaris shipped containers commercially in the early 2000s.
2. Getting this right is a tricky problem. Arguably one viable reason *to* use public cloud is that if you don't expect to scale big soon, the cost to do so could be relatively low in OpEx dollars.
@vwbusguy @Viss @arichtman While the concept of containers is old, I think we can both agree that the "productization" of them is less so.
And as far as scale, I'm referring specifically to choosing a container orchestrator as the deployment target from day one.
@mttaggart @Viss @arichtman Nope - Solaris did it first and *very* commercially.
https://www.oracle.com/solaris/technologies/solaris-containers.html
@vwbusguy @mttaggart @arichtman I've been telling folks lately that if they know even a modest amount of what we'd all consider 'intro-level Linux command-line stuff, with maybe some bash or fun vi tricks sprinkled in', they are effectively wizards by comparison to their peers.
It's absolutely bananas how much outright fraud there is when looking at resumes or skill sets.
I feel like every technical interview should involve a hands-on-keyboard, 'we take your phone away' component to smoke-test actual ability
@vwbusguy @Viss @arichtman Node resonates because that is a lot of how I got started using it. But it wasn't just hype. There were real problems of deployability and reproducibility that it solved for Linux admins and developers targeting Linux servers.
I'll cop to missing Solaris on account of still being in school and not being a BSD expert, but when I was running school IT systems, Docker arrived and immediately solved longstanding complications.
@mttaggart @Viss @arichtman Indeed. In context, Red Hat had bought Qumranet and was competing with Xen, VMware, and VirtualBox, saying things like you could run 5 VMs on Red Hat for the cost of 3 on VMware, etc. Hypervisors were a huge deal. OpenStack vs Eucalyptus was the big hype.
On top of that, proprietary PaaSes like Heroku were huge.
Docker came along, in the midst of all of that discussion, as a way to run VM-like workloads with something closer to the low overhead of a PaaS.
@mttaggart @Viss @arichtman Docker was way less complicated to deploy than something like Eucalyptus or OpenStack, and you could run it on your existing Linux servers instead of a proprietary PaaS or something awkward like Red Hat OpenShift 2 was at the time.
Now you also had a way for a developer to actually ship what "works on my laptop" to the server with more assurances than before.