The "Vibe Coding" Wall of Shame
Thought experiment here: what about the bugs that humans have written? (I'm not excusing AI coding or saying it's better.) At one point we shamed companies for sloppy engineering practices. Then, all of a sudden, over the last 10 years we started accepting companies' excuse of "oh well, we don't care and we're garbage." (A lot of Amazon's tone-deaf documentation/surprise bugs, Google's head-scratching disconnect from its users, etc.)
But I think this is a great way to show that they're pushing to outsource coding to a bot, and to shame them because their plan isn't working out as well as they're trying to force people to believe.
I think it may help if we start personalizing these trends with the people who are amplifying them, e.g. Jassyslop, Siemiatbot (the Klarna CEO was bold enough to brag that he replaced 80% of a role with AI), etc.
> AI tools are ubiquitous.
Only among people who don't value the quality of their output. There are, fortunately, many who do value quality and are not using AI tools until they get to the point where they can usefully contribute.
> Only among people who don't value the quality of their output.
I value the quality of my output and I make extensive use of AI tools.
That's why the original definition of "vibe coding" is useful: creating code with AI tools without reviewing or caring about the quality of that code.
It's also possible to use AI tools as part of a responsible engineering process that is intended to produce production quality software.
Have you used a state-of-the-art tool (e.g. Claude Code) in the past 6 months? If you've only tried free tools, or last tried a year ago, you really need to check again.
AI tools can absolutely contribute usefully. I've lost count of the times an AI pointed out an edge case I hadn't thought about, then helped me write the fix and the test for the issue.
I'm not vibe coding, since I'm reviewing the code, but saying these tools can't be useful means you haven't taken the time to look at the state of them recently.
Isn't it odd that you wrote your comment with AI then!?
Ha, gotcha, AI slop poster!
I know you didn't, but this is where we'll end up if people write off everything as 'bad because AI' instead of critically assessing the quality of something on its own merits, rather than on the (very ironic) 'vibe' that it was generated rather than written.
For CVE-2026-0755, that's a vulnerability in gemini-mcp-tool. gemini-mcp-tool's GitHub repo says "This is an unofficial, third-party tool and is not affiliated with, endorsed, or sponsored by Google," but this list shows the Google logo next to the vulnerability.
Also, it's not entirely obvious to me that the vulnerability was introduced by vibe coding.
https://github.com/jamubc/gemini-mcp-tool
Disclosure: I work at Google, but not on anything related to this.
>Also, it's not entirely obvious to me that the vulnerability was introduced by vibe coding.
IDK why people act as if vibe coding invented the software bugs that lead to vulnerabilities, as if those weren't already a thing with human programmers.
You got that exactly the wrong way round.
Here's one set of numbers from the CATO institute: https://www.cato.org/policy-analysis/illegal-immigrant-murde...
The only way your statement holds up is if you treat the act of existing while undocumented as a crime for this comparison, in which case sure - it's a tautology.
The first link claims the 6-hour outage wiped out 99% of order volume. I went to the "source" and found an (AI-generated?) ad by a company that wants to sell a product, and I cannot find the 99% number in it.
This whole website and everything around it are almost ironic.
Why is the LiteLLM incident on there? The linked article for that one is a 404.
I didn't read any credible arguments suggesting that was caused by vibe coding. They had their PyPI publishing credentials stolen thanks to an attack against a CI tool they were using.
Plus the linked article for the Amazon outage is https://d3security.com/blog/amazon-lost-6-million-orders-vib... which appears to be some other vendor promoting their product without providing any details on what happened at Amazon.
My impression is that the first item on the website should be the site itself.
Barely anything on the site makes sense if you look at it closely.
We call that "slop", the last time I checked.
> Why is the LiteLLM incident on there? The linked article for that one is a 404.
-> [Endor Labs] https://www.endorlabs.com/learn/teampcp-isnt-done
-> On March 24, 2026, Endor Labs identified that litellm versions 1.82.7 and 1.82.8 on PyPI contain malicious code not present in the upstream GitHub repository. litellm is a widely used open source library with over 95 million monthly downloads. It lets developers route requests across LLM providers through a single API.

Two backdoored versions of litellm (1.82.7 and 1.82.8) shipped with a full credential harvester, Kubernetes lateral movement toolkit, and persistent backdoor.
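For anyone who wants to check an environment against that report, a minimal sketch: the version numbers come from the Endor Labs write-up above, everything else (function name, structure) is illustrative.

```python
from importlib import metadata

# Versions reported as backdoored on PyPI (per the Endor Labs report above)
BAD_VERSIONS = {"1.82.7", "1.82.8"}

def is_affected(version: str) -> bool:
    """Return True if the given litellm version matches a known-bad release."""
    return version in BAD_VERSIONS

def check_installed() -> bool:
    """Check the locally installed litellm, if present."""
    try:
        return is_affected(metadata.version("litellm"))
    except metadata.PackageNotFoundError:
        return False  # litellm is not installed in this environment

print(is_affected("1.82.7"))  # True: one of the backdoored releases
print(is_affected("1.83.0"))  # False: not in the reported set
```

Pinning the bad releases away in a pip constraints file (`litellm!=1.82.7,!=1.82.8`) would accomplish the same thing at install time.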
Coding with AI is kind of like obesity in modernity: having tons of resources is the goal, but once you get there, you end up in a system you're not really adapted to.
Personally, I don't care that much about org incentives (even though they obviously matter for what OP posted) but more about what it does to my thinking. For me, actually writing code is what slows my brain down, helps me understand the problem, and helps me generate new ideas. As soon as I hand off implementation to an LLM (even if I first write a spec or model it in TLA+) my understanding drops off pretty quickly.