The "Vibe Coding" Wall of Shame

https://crackr.dev/vibe-coding-failures

Vibe Coding Failures: Documented AI Code Incidents

A curated directory of real-world incidents where AI-generated code failed in production. With authoritative citations.

Thought experiment here: what about the bugs that humans have written? (I'm not excusing AI coding or arguing that it's better.) At one point we shamed companies for being sloppy with their engineering practices. All of a sudden, in the last 10 years, we accepted companies' excuses of "oh well, we don't care and we're garbage" (a lot of Amazon's tone-deaf documentation and surprise bugs, Google's head-scratching disconnect from its users, etc.).

But I think this is a great way to show that they're pushing to outsource coding to a bot, and to shame them: their plan isn't working out as well as they're trying to force people to believe.

I think it may help if we start personalizing these trends with the people who are amplifying them, e.g. Jassyslop, Siemiatbot (the Klarna CEO was bold enough to brag that he dropped 80% of a role for AI), etc.

Honestly, we should shame companies for poor engineering whether humans are directly doing the work or handing it off to an LLM.

I agree with you. However, businesspeople have decided that they're "a better judge" of our practices, and they've used financial and legal coercion to get their way.

Everything is blameless; you can't do that to humans lol
“Vibe coded”? I doubt there is documentary evidence that the code in these systems was never touched by a human. At best this is a list of code where AI tools were used in development. To be honest, if you just created a list of all outages across all companies and systems, you'd probably have a better list, since AI tools are ubiquitous.

> AI tools are ubiquitous.

Only among people who don't value the quality of their output. There are, fortunately, many who do value quality and are not using AI tools until they get to the point where they can usefully contribute.

> Only among people who don't value the quality of their output.

I value the quality of my output and I make extensive use of AI tools.

That's why the original definition of "vibe coding" is useful: creating code with AI tools without reviewing or caring about the quality of that code.

It's also possible to use AI tools as part of a responsible engineering process that is intended to produce production quality software.

Have you used a state-of-the-art tool (e.g. Claude Code) in the past 6 months? If you've only tried free tools, or last tried a year ago, you really need to check again.

AI tools can absolutely contribute usefully. I can't count the times an AI has pointed out an edge case I hadn't thought about, then helped me write the fix and the test for the issue.

I'm not vibe coding, since I'm reviewing the code, but saying these tools can't be useful means you haven't taken the time to look at the state of them recently.

Isn't it odd that you wrote your comment with AI then!?

Ha, gotcha, AI slop poster!

I know you didn't, but this is where we'll end up if people write off everything as 'bad because AI' instead of critically assessing the quality of something on its own merits, rather than on the (very ironic) 'vibe' that it was generated rather than written.

For CVE-2026-0755, that's a vulnerability in gemini-mcp-tool. gemini-mcp-tool's GitHub repo says "This is an unofficial, third-party tool and is not affiliated with, endorsed, or sponsored by Google." but this list shows the Google logo next to the vulnerability.

Also, it's not entirely obvious to me that the vulnerability was introduced by vibe coding.

https://github.com/jamubc/gemini-mcp-tool

Disclosure: I work at Google, but not on anything related to this.

>Also, it's not entirely obvious to me that the vulnerability was introduced by vibe coding.

IDK why people act as if vibe coding invented the software bugs that lead to vulnerabilities, as if human programmers weren't already producing those.

The same reason some use crimes committed by illegal immigrants to push for action, while ignoring the fact that citizens are more likely, percentage-wise, to commit those same crimes. It's confirmation bias at best and intellectual dishonesty at worst, but either way, they want their worldview to be validated.

I know this is extremely off topic, but illegal immigrants are far more likely to commit crimes than citizens, not that this has anything to do with software bugs...

You got that exactly the wrong way round.

Here's one set of numbers from the Cato Institute: https://www.cato.org/policy-analysis/illegal-immigrant-murde...

The only way your statement holds up is if you treat the act of existing while undocumented as a crime for this comparison, in which case sure - it's a tautology.

I probably won't comment further, since as you said this is very off-topic (I only meant to draw out an analogy as to why discussions about AI tend to be ideologically skewed), but every statistic I've seen shows far lower crime rates among illegal immigrants versus citizens (aside from the statutory crime of being in the country illegally).

The first link claims the 6-hour outage wiped out 99% of order volume. I went to the "source" and found an (AI-generated?) ad by a company that wants to sell a product, in which I cannot find the 99% number.

This whole website and everything around it are almost ironic.

Yeah, I was about to comment the same thing. I've noticed a lot of people weaponizing others' hatred of AI/slop and using rage bait to drive views. No doubt someone looked at that "Amazon lost 6M orders due to slop!" entry, took it at face value, and came away thinking it was true.

Why is the LiteLLM incident on there? The linked article for that one is a 404.

I didn't read any credible arguments suggesting that was caused by vibe coding. They had their PyPI publishing credentials stolen thanks to an attack against a CI tool they were using.

Plus the linked article for the Amazon outage is https://d3security.com/blog/amazon-lost-6-million-orders-vib... which appears to be some other vendor promoting their product without providing any details on what happened at Amazon.

Amazon Lost 6.3 Million Orders to Vibe Coding. Your SOC Is Next. | D3 Security

Amazon mandated AI coding tools and suffered a 6-hour outage costing 6.3 million orders. The same AI quality crisis now emerging in SOC operations.


My impression is that the first item on the website should be the site itself.

Barely anything on the site makes sense if you look at it closely.

We call that "slop", the last time I checked.

Indeed. The joke is that the website itself is vibe coded.

> Why is the LiteLLM incident on there? The linked article for that one is a 404.

-> [Endor Labs] https://www.endorlabs.com/learn/teampcp-isnt-done

-> On March 24, 2026, Endor Labs identified that litellm versions 1.82.7 and 1.82.8 on PyPI contain malicious code not present in the upstream GitHub repository. litellm is a widely used open source library with over 95 million monthly downloads. It lets developers route requests across LLM providers through a single API.

TeamPCP Isn't Done: Threat Actor Behind Trivy and KICS Compromises Now Hits LiteLLM's 95 Million Monthly Downloads on PyPI | Blog | Endor Labs

Two backdoored versions of litellm (1.82.7 and 1.82.8) shipped with a full credential harvester, Kubernetes lateral movement toolkit, and persistent backdoor.

That doesn't answer how stolen credentials are related to AI-assisted coding.

It seems like blogspam. It's curated, according to a comment by the author, but it treats incidents verified by a security organization, like Vite's, the same as ones like the blog post about Claude calling a Terraform command. And this is on a site that appears to sell other AI-generated content for a subscription.

Coding with AI is kind of like obesity in modernity: having tons of resources is the goal, but once you get there, you end up in a system you're not really adapted to.

Personally, I don't care that much about org incentives (even though they obviously matter for what OP posted); I care more about what it does to my thinking. For me, actually writing code is what slows my brain down, helps me understand the problem, and helps me generate new ideas. As soon as I hand implementation off to an LLM (even if I first write a spec or model it in TLA+), my understanding drops off pretty quickly.