LiteLLM Python package compromised by supply-chain attack

https://github.com/BerriAI/litellm/issues/24512

[Security]: CRITICAL: Malicious litellm_init.pth in litellm 1.82.8 PyPI package — credential stealer · Issue #24512 · BerriAI/litellm

[LITELLM TEAM] - For updates from the team, please see: #24518

Summary: The litellm==1.82.8 wheel...

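For context on the attack vector: at every interpreter startup, Python's site module scans site-packages for *.pth files and executes any line in them that begins with "import", before any user code runs. So a wheel that drops such a file gets code execution on every python invocation, not just on "import litellm". A benign sketch of the hook, using a made-up file name, best run in a throwaway virtualenv:

```python
# Benign demo of the .pth startup hook described in the issue.
# Python's "site" module scans site-packages for *.pth files at
# interpreter startup and exec()s any line beginning with "import".
import subprocess
import sys
import sysconfig
from pathlib import Path

site_packages = Path(sysconfig.get_paths()["purelib"])
hook = site_packages / "demo_init.pth"  # hypothetical name for this demo

# A single line starting with "import" is executed verbatim at startup.
hook.write_text('import sys; sys.stderr.write("[.pth hook] ran first\\n")\n')
try:
    # A fresh interpreter that does nothing still fires the hook.
    subprocess.run([sys.executable, "-c", "pass"], check=True)
finally:
    hook.unlink()  # leave nothing behind
```

A real payload would harvest environment variables and credentials and POST them somewhere instead of writing to stderr, which is exactly why the egress filtering discussed below matters.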
We just can't trust dependencies and dev setups. I wanted to say "anymore", but we never could. Dev containers were never good enough: too clumsy, too little isolation. We need to start working in full sandboxes with defence in depth and real guardrails and UIs: VM isolation plus container primitives, allow lists, egress filters, seccomp, gVisor and more, but with much better usability. These are the same requirements we have for agent runtimes, so let's use this momentum to make our dev environments safer! In such an environment the container would just crash, we'd see the violations, delete it, and not have to worry about it. We should treat this as an everyday possibility, not as an isolated security incident.
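A minimal sketch of what that could look like with today's pieces, assuming Docker with the gVisor runtime installed and registered as runsc (the image and the command under test are placeholders):

```python
# Sketch: run an untrusted install/import/test step inside a gVisor-backed
# container with no network and no capabilities. With egress blocked, a
# credential stealer has nowhere to phone home; it fails loudly instead
# of leaking quietly.
import os
import subprocess

result = subprocess.run(
    [
        "docker", "run", "--rm",
        "--runtime=runsc",        # gVisor: user-space kernel between container and host
        "--network=none",         # no egress at all; violations surface as hard failures
        "--read-only",            # immutable root filesystem
        "--cap-drop=ALL",         # drop every Linux capability
        "--pids-limit=256",       # cheap fork-bomb guardrail
        "-v", f"{os.getcwd()}:/src:ro",  # mount the project read-only
        "-w", "/src",
        "python:3.12-slim",
        # Placeholder for the step you don't trust:
        "python", "-c", "print('untrusted step ran inside the sandbox')",
    ],
    capture_output=True,
    text=True,
)
print(result.returncode)
print(result.stdout, result.stderr)
```

The usability complaint stands, though: none of this is the default, and wiring it up per project is exactly the clumsiness that keeps people out of sandboxes.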
This is the security shortcuts of the past 50 years coming back to bite us. Software has historically been a world where we all just trust each other. I think that’s coming to an end very soon.
We need sandboxing for sure, but it’s much bigger than that. Entire security models need to be rethought.

This assumes that we can get a locked-down, secure, stable bedrock system and sandbox that basically never changes, except for tiny security updates that can be carefully inspected by many independent parties.

Which sounds great, but the way things work now tends to be the exact opposite of that, so there will be no trustable platform to run the untrusted code in. If the sandbox, or the operating system the sandbox runs in, gets breaking changes and forces everyone to always be on a recent release (or worse, to track the main branch), then it is still a huge supply chain risk in itself.

I've been thinking the same thing. And it's somewhat parallel to what happened with meditation vs. drugs. In the old world the dangerous insights required so many years of discipline that you could sort of trust that the person getting the insight would be OK. But then any idiot can get the insight by just eating some shrooms and oops, that's a problem. Mostly a self-harm problem in that case. But the dynamic is somewhat similar to what's happening now with LLMs and coding.

Software people could (mostly) trust each other's OSS contributions because we could trust the discipline it took in the first place. Not anymore.