RE: https://mstdn.social/@hkrn/116284264915152671
lol oh my god i feel **so fucking smug** right now, it's incredible. my whole body is tingling.
i was using this package in one of my projects. i found it had a bug, and when i went to maybe try to make a contribution to the open source repository, i found it to be a huge shitpile of vibe-coded mess. methods that were thousands of lines long with **hundreds** of arguments. it was impossible to work with, and **very** alarming. it was clear to me that no one was watching the shop, so i immediately set about removing it from my project. and now, this. 🤗
there are **tons** of AI-related projects that use LiteLLM. it is a key part of the basic infrastructure of LLM-based development. if you use an LLM-based project, there is a good chance it uses LiteLLM.
(if you're curious, it does this very useful thing of standardizing LLM APIs into a single format. makes it easy for your app to switch between Anthropic, OpenAI, Google, z.ai, etc.)
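to give a feel for what "standardizing" means here (this is a toy sketch, not LiteLLM's actual code), the core trick is just flattening each provider's response shape into one:

```python
# toy sketch of the idea (NOT LiteLLM's real code): each provider returns
# differently-shaped JSON, and the adapter flattens them into one format.
def extract_text(provider: str, raw: dict) -> str:
    """Pull the reply text out of a provider-specific response shape."""
    if provider == "openai":
        # OpenAI-style shape: {"choices": [{"message": {"content": ...}}]}
        return raw["choices"][0]["message"]["content"]
    if provider == "anthropic":
        # Anthropic-style shape: {"content": [{"type": "text", "text": ...}]}
        return raw["content"][0]["text"]
    raise ValueError(f"unknown provider: {provider}")

openai_resp = {"choices": [{"message": {"content": "hello"}}]}
anthropic_resp = {"content": [{"type": "text", "text": "hello"}]}

# same app code works regardless of which provider answered
assert extract_text("openai", openai_resp) == "hello"
assert extract_text("anthropic", anthropic_resp) == "hello"
```

the real library does way more (streaming, retries, auth, etc.), but that shape-flattening is the reason swapping providers becomes a one-line change.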
this is actually a huge reason i have decided not to jump into LLM and AI agent-related development. the ecosystem is (as you would expect) run and maintained by people who are all-in on vibe coding, so a package you might like and include in your project could easily become a dangerous, unmaintainable mess within months. i don't know if people understand how brittle the whole thing is. everything is constantly, **constantly** changing.
like, it's moving **way** too fast for anyone to be able to tell if things are going to break or get injected with some malware. the whole thing is a house of cards built on top of a bomb.
let's see, who can i tag about this...
@davidgerard will definitely want to know.
@tante maybe. idk, tag your favorite cyber-security person. this might be the mother of all LLM supply chain attacks lol.
@briankrebs

plenty of good chatter on Hacker News about it. https://news.ycombinator.com/item?id=47501729
looks grim!!
LiteLLM Python package compromised by supply-chain attack | Hacker News

Self-propagating malware poisons open source software and wipes Iran-based machines
Development houses: It's time to check your networks for infections.
Ars Technica

picking through the various bits and pieces of this story, i kind of think what really happened is the dev accounts got pwned, and then the attackers were able to push a bad version to PyPI and people pip installed it from there. so as far as a "supply chain" attack goes, LiteLLM is the part of the supply chain that got attacked; it's not like they accidentally vibe-coded something malicious into their project.
but this still goes back to what i was saying: this AI ecosystem is developing **way** too fast and without the kind of maturity that's naturally required when you have lots of people working on a thing. so with berri.ai, you had ~2 guys in their 20s building this thing at breakneck speed, and it became the linchpin of waaaaay too much of the "AI" ecosystem, and now look what's happened.
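for what it's worth, one defense against a poisoned release like this is pinning packages by hash: pip has a hash-checking mode (`--require-hashes`, with `--hash=sha256:...` entries in requirements.txt) that refuses to install anything whose digest doesn't match what you pinned. under the hood it's just a digest comparison, roughly this sketch:

```python
import hashlib

def artifact_ok(data: bytes, pinned_sha256: str) -> bool:
    """Return True only if the artifact's digest matches the pinned hash."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

good = b"legit package bytes"
pin = hashlib.sha256(good).hexdigest()  # the hash you'd record in requirements.txt

assert artifact_ok(good, pin)            # untampered artifact passes
assert not artifact_ok(b"evil", pin)     # a swapped-in malicious build fails
```

doesn't save you if you pin the bad version in the first place, but it does stop a silently-replaced release from sliding into your builds.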
uh-ohhhhhh!!! basically everyone that uses "agentic" workflows uses these libraries. they are a sprawling, impossible mess.
https://thehackernews.com/2026/03/langchain-langgraph-flaws-expose-files.html?m=1
LangChain, LangGraph Flaws Expose Files, Secrets, Databases in Widely Used AI Frameworks
Three LangChain flaws enable data theft across LLM apps, affecting millions of deployments, exposing secrets and files.
The Hacker News