RE: https://mstdn.social/@hkrn/116284264915152671

lol oh my god i feel **so fucking smug** right now, it's incredible. my whole body is tingling.

i was using this package in one of my projects. i found it had a bug, and when i went to maybe try to make a contribution to the open source repository, i found it to be a huge shitpile of vibe-coded mess. methods that were thousands of lines long with **hundreds** of arguments, it was impossible, and **very** alarming. it was clear to me that no one was watching the shop, so i immediately set about removing it from my project. and now, this. 🤗
there are **tons** of AI-related projects that use LiteLLM. it is a key part of the basic infrastructure of LLM-based development. if you use an LLM-based project, there is a good chance it uses LiteLLM.
(if you're curious, it does this very useful thing of standardizing LLM APIs into a single format. makes it easy for your app to switch between Anthropic, OpenAI, Google, z.ai, etc.)
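to make the "single format" point concrete, here's a toy sketch of the idea (this is **not** litellm's actual code, just an illustration of what an adapter layer does): each provider returns a differently shaped JSON response, and a thin normalizer maps them all onto one shape so your app code never has to change when you swap providers.

```python
# Toy illustration of API standardization (NOT litellm's real internals).
# OpenAI puts the reply at choices[0].message.content; Anthropic's
# Messages API puts it at content[0].text. An adapter hides that.

def normalize(provider: str, raw: dict) -> dict:
    """Map a provider-specific chat response onto one common shape."""
    if provider == "openai":
        text = raw["choices"][0]["message"]["content"]
    elif provider == "anthropic":
        text = raw["content"][0]["text"]
    else:
        raise ValueError(f"unknown provider: {provider}")
    return {"provider": provider, "text": text}

# Two differently shaped responses, one output format:
openai_raw = {"choices": [{"message": {"content": "hi"}}]}
anthropic_raw = {"content": [{"text": "hi"}]}

print(normalize("openai", openai_raw)["text"])      # "hi"
print(normalize("anthropic", anthropic_raw)["text"])  # "hi"
```

litellm's real entry point is (roughly) a single `completion(model=..., messages=[...])` call that does this kind of translation for dozens of providers, which is exactly why so many projects depend on it.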
this is actually a huge reason i have decided not to jump into LLM and AI agent-related development. the ecosystem is (as you would expect) run and maintained by people who are all-in on vibe coding, so a package you might like and include in your project could easily become a dangerous, unmaintainable mess within months. i don't know if people understand how brittle the whole thing is. everything is constantly, **constantly** changing.
like, it's moving **way** too fast for anyone to be able to tell if things are going to break or get injected with some malware. the whole thing is a house of cards built on top of a bomb.
oh my fucking god.
let's see, who can i tag about this... @davidgerard will definitely want to know. @tante maybe. idk, tag your favorite cyber-security person. this might be the mother of all LLM supply chain attacks lol. @briankrebs

plenty of good chatter on Hacker News about it. https://news.ycombinator.com/item?id=47501729

looks grim!!

LiteLLM Python package compromised by supply-chain attack | Hacker News

me right now
Self-propagating malware poisons open source software and wipes Iran-based machines

Development houses: It's time to check your networks for infections.

Ars Technica
picking through the various bits and pieces of this story, i kind of think what really happened is the dev accounts got pwned, and then the attackers were able to push a bad version to PyPI and people pip installed it from there. so as far as the "supply chain" attack goes, LiteLLM is the part of the supply chain that got attacked; it's not like they accidentally vibe-coded something malicious into their project.
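for what it's worth, the standard mitigation against exactly this scenario (a trojaned artifact pushed to PyPI from a pwned account) is pip's hash-checking mode: pin the exact version **and** the sha256 of the artifact you vetted, so a swapped-out upload fails to install. a minimal sketch (the version number and hash below are placeholders, not the real litellm release):

```shell
# requirements.txt pins both version and artifact hash, e.g.:
#   litellm==1.40.0 --hash=sha256:<hash-of-the-wheel-you-vetted>

# compute the hash of a wheel you've downloaded and reviewed:
pip hash litellm-1.40.0-py3-none-any.whl

# install in hash-checking mode; any mismatch aborts the install,
# so a replaced PyPI artifact gets refused instead of executed:
pip install --require-hashes -r requirements.txt
```

it doesn't help if you vetted the malicious version in the first place, but it does stop the "setups that automatically pull the latest off the net every time they start" failure mode that comes up later in this thread.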
but this still goes back to what i was saying: this AI ecosystem is developing **way** too fast and without the kind of maturity that is naturally required when you have lots of people working on a thing. so with berri.ai, you had ~2 guys in their 20s building this thing at breakneck speed, and it became the linchpin of waaaaay too much of the "AI" ecosystem, and now look what's happened.
uh-ohhhhhh!!! basically everyone that uses "agentic" workflows uses these libraries. they are a sprawling, impossible mess. https://thehackernews.com/2026/03/langchain-langgraph-flaws-expose-files.html?m=1
LangChain, LangGraph Flaws Expose Files, Secrets, Databases in Widely Used AI Frameworks

Three LangChain flaws enable data theft across LLM apps, affecting millions of deployments, exposing secrets and files.

The Hacker News
i personally kind of think Python and open source are cooked until they figure out a way to fence themselves off from the goobers throwing AI slop into every repository. Anthropic can't collapse fast enough imo.
and computer science/data science as fields of academic research are especially cooked as far as whether you can trust new packages/libraries. i'm sure the science this person is referring to has to do with machine learning/data science. and this community is down-voting him for being cautious and professional.

i also appreciate how he low-key defines all software outside his area of expertise as "trivial" 😂

it's fine if **your** software gets larded up with unreliable slop, but **my** work is Too Important.

@peter personally I think Python was cooked when it became the weapon of choice of the whole “Learn to Code” movement. All AI slop changed was removing the limit of how many Hello World apps someone can create in their lifetime

@flyingsaceur as a person who just "learn to code"d with Python, how dare you!!

seriously tho, without AI, the non-dedicated people would have just stopped after "Hello World", and you can immediately tell what it is. now, they don't really like coding, but they're still all building MCP integrations and agent harnesses and tools for turning emails into podcasts or whatever.

@peter I too am the monkey puppet with the sidelong glance here

But even before AI the non-serious people stopped at Hello, World; at least before LLMs they were limited in the damage they could do by the hours in a day. Also they belittled the craft: "I don't care about classes or tests, I'm doing Real Work"

@peter gee it's almost like this is all a terrible idea
@peter like how OpenAI just hired the guy who "made" OpenClaw. but it's not clear to me how much of that he truly designed and wrote himself (i.e. like a real programmer or software engineer) vs. how much was the result of him prompting an LLM to spit it out. He appeared to have tons of repos and was more of a self-promoting YouTube influencer type than a real programmer.
@peter and @dangoodin sometimes hangs out here
@gfitzp oh yeah, it's those guys!
@gfitzp oh nooooo
@peter Yup, I was like "didn't I just read about these guys like an hour ago??"
@gfitzp @peter lots of the crypto/blockchain bros jumped ship for AI/LLMs a few years ago, after the Bitcoin price collapsed and tons of their mining hardware risked becoming worthless. but lots of that hardware could be repurposed from mining blocks to doing training/inference. not a perfect fit, but better than nothing
@peter I am, for one rare moment, actually glad to read the HN comments. The one from the dude complaining that blocking all downloads of the compromised package breaks all his setups because they're written to automatically pull a bunch of packages off the net every time they start was... :chefskiss:
@[email protected] lmao oh my god that one is amazing 😂
@wordshaper @peter my technical literacy is at the level where 90% of the discussion reads like "why would anybody be fnorbing the blatimatronic quindlewurble instead of pretarnishing the distro with spleem 2.037?" and yet the stupidity of that comment still shone through to me like the Beacon of Gondor.

@wordshaper @peter <whisper>people do that?</whisper>

(Who am I kidding? Of course people do that.)

@peter Love everyone reinventing security from first principles, although "maybe don't use the fucking slop extruder" is apparently not an option. I mean, the second top comment begins: "We just can't trust dependencies and dev setups."

You absolutely can trust dependencies, you just have to use ones that were not written by fucking amateur grifters!