People keep assuring me that LLMs writing code is a revolution, that as long as we maintain sound engineering practices and tight code review, they're actually extruding code fit for purpose in a fraction of the time it would take a human.

And every damned time, every damned time any of that code surfaces, as Anthropic's flagship offering just did, somehow it's exactly the steaming pile of technical debt and fifteen-year-old Stack Overflow snippets we were assured your careful oversight had made sure it wasn't.

Can someone please explain this to me? Is everyone but you simply prompting it wrong?

It's a good thing programmers aren't susceptible to hubris in any way, or this would have been so much worse.

@bodil i think this all seems to fit quite neatly within the framework which says that AI is primarily a political project aiming to discipline labour and undermine the idea that expertise is valuable, and the tech itself is kind of secondary. The “sound engineering practices” assurances are either disingenuous or they’re from people who have been fooled
@hdgarrood It's just that it's also coming from a lot of people I thought wouldn't be fooled, and it's making me very sad.

@bodil @hdgarrood This is because it's a cognitohazard and a lot of programmers' hubris leads them to believe they'll be immune to it when they try playing with the shiny thing.

The name Palantir really should have been saved for an AI company.

@dalias @bodil @hdgarrood every day I am further convinced that I have underestimated the degree to which it is a cognitohazard. "AI-induced psychosis" is just the most visible outcome.
@tedmielczarek @bodil @hdgarrood If you understand the degree to which the claims they're making are impossible, just the sudden stanning for "AI" is a huge red flag symptom indicating they've been messed up by the cognitohazard.

@dalias @bodil @hdgarrood

This is actually one of the reasons that I personally avoid "agentic" and conversational LLM systems like the plague.

I have OCD. I spent the 2010s learning exactly how an ML-based recommender system absolutely couldn't tell the difference between a fun new interest and a severe compulsive episode. I strictly avoid *all* gambling-type activities because I know I have the kind of brain that makes me highly vulnerable to becoming a zombie chained to a slot machine, and I'm almost entirely certain I'm just as vulnerable to becoming a zombie chained to a slop machine. I absolutely dread the day my employer might decide to impose an AI mandate, or the prospect of the entire profession becoming as dependent on LLMs as we currently seem to be rushing headlong towards.

@datarama @bodil @hdgarrood The bubble has already started to pop. A lot of people outside the right circles to see this just don't know it yet.

If you're genuinely concerned about that, collect as much of the literature as you can and be prepared to take it to HR and/or a good lawyer. Nobody should be forced to expose themselves to cognitohazards or addiction threats in the workplace. An employer wouldn't get away with requiring employees to use ketamine to "enhance performance", and they shouldn't get away with requiring you to use "AI".

@dalias @bodil @hdgarrood I'm a union member; should I end up needing legal counsel I'm quite well-covered.
@datarama @dalias @bodil @hdgarrood one of the many benefits of union membership

@hdgarrood @bodil

... if you're serious and reasonably respectful, I'm your huckleberry.