AI is making us write more code. That's the problem.

I analyzed research papers on AI-generated code quality. The findings:

→ 1.7x more issues than human-written code
→ 30-41% increase in technical debt
→ 39% increase in cognitive complexity
→ Initial speed gains disappear within a few months

We're building the wrong thing faster and calling it productivity.

@mlevison I use LLMs to help me with basic code-writing tasks, generating the structural frameworks and saving me a lot of typing time. However, I never rely on that code out of the box; I always review it thoroughly and often just snip and prune. I would never attempt to give an LLM a complicated set of instructions — it's going to fail every time.
@mlevison IntelliSense, Prettier, etc. are all just tools for a smart developer.
@crackhappy @mlevison JetBrains' vanilla IntelliSense was pretty good even before the latest epidemic of AI psychosis.
@thirstybear @mlevison I refuse to call what we currently have "Artificial Intelligence," because it is not. It's a fundamentally clever implementation of Markov chains with way, way too much power applied.
@crackhappy Most of the time I call it GenAI. LLMs would be a better choice, but I need to use the language of the audience. If I say LLM, then I have to explain it, @thirstybear
@mlevison @thirstybear That's entirely valid, but thank you for putting the GenAI on the front. That makes it palatable for those Not In The Know.