The hidden beauty of vibe coding

"It passed all the unit tests, the shape of the code looks right," he said. "It's 3.7x more lines of code that performs 2,000 times worse than the actual SQLite. Two thousand times worse for a database is a non-viable product. It's a dumpster fire. Throw it away. All that money you spent on it is worthless."

https://www.theregister.com/2026/03/17/ai_businesses_faking_it_reckoning_coming_codestrap/

AI still doesn't work very well, businesses are faking it, and a reckoning is coming

interview: Codestrap founders say we need to dial down the hype and sort through the mess

The Register

@gerrymcgovern This is an unusual article. It mixes truth and misconceptions in awkward ways.

For example:

Smiley pointed to a recent attempt to rewrite SQLite in Rust using AI

This isn't what happened. It was a C compiler that was rewritten in Rust. A different tester then built SQLite with both the AI-generated compiler and the official one. The AI-built version did worse.

But it did worse for very specific reasons. The AI version was only tested for correctness. It was only given unit tests as the criterion for success. It failed on real-world performance tests because it was never actually given performance as a requirement.
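To make the distinction concrete: a build can pass every unit test and still be non-viable if the acceptance gate never checks speed. A minimal sketch of the difference, with entirely invented names and a toy "build" (nothing here is from the actual compiler experiment):

```python
import time

def run_unit_tests(build):
    # Stand-in for a correctness-only test suite: a slow build
    # can pass this just as easily as a fast one.
    return all(build["query"](q) == expected
               for q, expected in [(1, 1), (2, 4), (3, 9)])

def meets_performance_budget(build, baseline_seconds, max_slowdown=2.0):
    # Time a fixed workload and compare it against a baseline budget.
    start = time.perf_counter()
    for q in range(10_000):
        build["query"](q)
    elapsed = time.perf_counter() - start
    return elapsed <= baseline_seconds * max_slowdown

def accept(build, baseline_seconds):
    # A build is only viable if it is correct AND within budget --
    # gating on the first check alone is how you get a "passes all
    # tests, 2,000x slower" result.
    return run_unit_tests(build) and meets_performance_budget(build, baseline_seconds)

# Toy build whose "query" just squares its input:
fast_build = {"query": lambda q: q * q}
```

The point isn't the harness itself; it's that whichever check is left out of `accept` is a requirement the generator will never optimize for.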

"Lines of code, number of [pull requests], these are liabilities. These are not measures of engineering excellence." ... Measures of engineering excellence, said Smiley, include metrics like deployment frequency, lead time to production, change failure rate, mean time to restore, and incident severity.

So these are famously known as the DORA metrics. And they don't measure engineering excellence, ... /1

@gerrymcgovern ... they measure the capabilities of the engineering platform along with the expertise of the people using that platform.

There are lots of companies with excellent engineers and crummy DORA scores because they don't have the institutional support to improve those metrics. Nor does the score mean the business is successful. You can have great DORA metrics and still lack for paying customers.
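For what it's worth, the DORA metrics themselves are just simple ratios and averages over a deployment log; the argument above is about what those numbers do and don't tell you. A minimal sketch of two of them, with an invented log format (field names are assumptions for illustration):

```python
from datetime import timedelta

# Hypothetical deployment log: one record per production deployment.
deployments = [
    {"lead_time": timedelta(hours=4),  "failed": False},
    {"lead_time": timedelta(days=2),   "failed": True},
    {"lead_time": timedelta(hours=12), "failed": False},
    {"lead_time": timedelta(hours=6),  "failed": False},
]

def change_failure_rate(log):
    # Fraction of deployments that caused a failure in production.
    return sum(d["failed"] for d in log) / len(log)

def mean_lead_time(log):
    # Average time from commit to running in production.
    return sum((d["lead_time"] for d in log), timedelta()) / len(log)
```

Nothing in these two functions knows whether the product has paying customers or whether the platform makes good scores achievable, which is exactly the gap being pointed out.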

"The other challenge here is that the incentives are misaligned,"

But then he proceeds to list a bunch of examples of competing incentives. His examples of "misaligned" are really examples of "I would like to deliver less and get paid more"... /2

@gerrymcgovern ...

If there's an incentives problem here, it's that companies have been paying for a lot of BS rituals and they're discovering that the BS-generating machine is undermining part of the ritual. Companies have also been getting away with under-specifying success criteria in order to pad results as "good". But gen AIs will "fill in" the under-specification with made-up data. Or they will fail to deliver anything into the gap that some human was hoping would be filled.

But none of this is "misaligned". It's intentional ambiguity designed to protect business units. The AI is just exposing the BS for what it is.

OP is kind of talking about that BS problem. But he's taking weird micro angles to view subsets of the problem without calling out the greater problem. He's not wrong, but he's also not really right either. 🤷🏻‍♂️ //