People keep assuring me that LLMs writing code is a revolution, that as long as we maintain sound engineering practices and tight code review they're actually extruding code fit for purpose in a fraction of the time it would take a human.

And every damned time, every damned time any of that code surfaces, like Anthropic's flagship offering just did, somehow it's exactly the pile of steaming technical debt and fifteen-year-old Stack Overflow snippets we were assured your careful oversight would prevent.

Can someone please explain this to me? Is everyone but you simply prompting it wrong?

It's a good thing programmers aren't susceptible to hubris in any way, or this would have been so much worse.

You know, it isn't even that tools like this are useless. There are absolutely things they could be good at. I've personally seen Claude efficiently find stupid little bugs you'd otherwise spend an hour figuring out and hating yourself for afterwards. I tried the first iteration of Copilot, back when it was just an aggressive autocomplete, and while I had to stop using it because it was overconfidently trying to finish my programs for me without being asked, it was great for filling in boilerplate and maybe even a couple of lines of real code for the basic stuff. We have models nowadays that are actually trained to find bugs and security issues in code, rather than having the entire internets thrown at them to produce something Altman & Amodei can sell to the gullible as AGI.

But there's the problem. The technology has been around for a while; we have a good idea of what it's good for and, more importantly, what it's not. "Our revolutionary expert system for finding bugs in your code" isn't nearly as marketable to the general public, and the CEO class especially, as "our revolutionary PhD level sentient AI that will solve all the world's problems if you only give us another couple trillion dollars, and also wants to be your girlfriend." And so we get Claude and ChatGPT and RAM shortages and AI psychosis and accelerated climate change instead of smaller, focused models that are actually good at their specialist subjects. Because those don't produce as much shareholder value.

@bodil “it was great for filling in boilerplate”

There’s your problem right there. Computer science should work towards getting rid of the need for boilerplate, not invent ways to write more of it.

Every piece of boilerplate is a failing of the language or library that you’re using, and is technical debt. Editing generated code doubly so.

@[email protected]

Boilerplate is a side effect of excessive abstraction.
Now think about it for a second. 😉

(btw, did you consider an April fool for Anthropic's leak? It would be great PR... after.)

@[email protected] @[email protected]
@giacomo You would have to explain what you mean there, because it makes absolutely no sense. Boilerplate is used instead of abstraction.
@[email protected]

If you don't abstract, your code only needs to solve a pretty specific problem.

If you abstract, your code can handle a variety of tasks, but you need new code to connect your generalized code with the actual problem to solve.

The enormous amount of boilerplate required by "modern" frameworks just makes the tradeoff evident. Unfortunately, marketing and hype hide this obvious fact from most developers.
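A toy sketch of the tradeoff in Python, with made-up names (this isn't anyone's real framework, just an illustration):

```python
# Concrete: solves exactly one problem, needs no glue code at all.
def total_order_price(prices):
    return sum(prices)

# Generic: a tiny "framework" that can aggregate anything...
def aggregate(items, extract, combine, initial):
    acc = initial
    for item in items:
        acc = combine(acc, extract(item))
    return acc

# ...but every concrete use now needs boilerplate to wire it up.
orders = [{"price": 10, "qty": 2}, {"price": 5, "qty": 1}]
total = aggregate(
    orders,
    extract=lambda o: o["price"] * o["qty"],  # glue: order -> number
    combine=lambda a, b: a + b,               # glue: how to merge values
    initial=0,                                # glue: starting value
)
print(total)  # 25
```

The generalized version handles any container of anything, but the three keyword arguments are exactly the connective tissue the post is talking about.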

@giacomo @ahltorp

it's because modern frameworks use bad abstractions like "component" or "model" or "capacitator enabler" that generalize a very narrow subset of the problem domain, rather than good old reasonable abstractions like a functor or a monad transformer
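To make the contrast concrete, here's a loose Python impression of the functor idea (fmap is a hypothetical stand-in for the Haskell function, not any real library API): one small, uniform interface that works across shapes, instead of an abstraction tied to one framework's notion of "component".

```python
# Functor-style mapping: apply f inside a structure, preserving its shape.
def fmap(f, container):
    """Hypothetical sketch of Haskell's fmap for a few Python shapes."""
    if isinstance(container, list):
        return [f(x) for x in container]   # list functor
    if container is None:
        return None                        # None as an empty Maybe
    return f(container)                    # a bare value as a full Maybe

print(fmap(lambda x: x + 1, [1, 2, 3]))  # [2, 3, 4]
print(fmap(lambda x: x + 1, None))       # None
print(fmap(lambda x: x + 1, 41))         # 42
```

The same one-line interface covers every instance; a framework "component" abstraction, by contrast, only generalizes over things that already look like that framework's components.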