People keep assuring me that LLMs writing code is a revolution, that as long as we maintain sound engineering practices and tight code review they're actually extruding code fit for purpose in a fraction of the time it would take a human.

And every damned time, every damned time any of that code surfaces, like Anthropic's flagship offering just did, somehow it's exactly the steaming pile of technical debt and fifteen-year-old Stack Overflow snippets we were assured your careful oversight had made sure it wasn't.

Can someone please explain this to me? Is everyone but you simply prompting it wrong?

It's a good thing programmers aren't susceptible to hubris in any way, or this would have been so much worse.

@bodil I think this all fits quite neatly within the framework which says that AI is primarily a political project aiming to discipline labour and undermine the idea that expertise is valuable, and that the tech itself is kind of secondary. The “sound engineering practices” assurances are either disingenuous or they’re from people who have been fooled

@hdgarrood It's just that it's also coming from a lot of people I thought wouldn't be fooled, and it's making me very sad.

@bodil @hdgarrood This is because it's a cognitohazard and a lot of programmers' hubris leads them to believe they'll be immune to it when they try playing with the shiny thing.

The name Palantir really should have been saved for an AI company.

@dalias @bodil @hdgarrood Every day I am further convinced that I have underestimated the degree to which it is a cognitohazard. "AI-induced psychosis" is just the most visible outcome.

@tedmielczarek @bodil @hdgarrood If you understand the degree to which the claims they're making are impossible, then the sudden stanning for "AI" alone is a huge red flag symptom indicating they've been messed up by the cognitohazard.