People keep assuring me that LLMs writing code is a revolution, that as long as we maintain sound engineering practices and tight code review they're actually extruding code fit for purpose in a fraction of the time it would take a human.

And every damned time, every damned time any of that code surfaces, like Anthropic's flagship offering just did, somehow it's exactly the pile of steaming technical debt and fifteen-year-old Stack Overflow snippets we were assured that careful oversight had made sure it wasn't.

Can someone please explain this to me? Is everyone but you simply prompting it wrong?

It's a good thing programmers aren't susceptible to hubris in any way, or this would have been so much worse.

@bodil I don't get it either. It completely baffles me that anyone can look at the generated output and think "this is how it should be", or look at the Anthropic leak and say "this is great engineering".

And once someone has emotionally invested in LLMs being the future of their career it is really hard to get an honest conversation going.

And when I test it and it doesn't deliver, it always seems to boil down to: you are doing it wrong ... you are stuck in your old ways ... pre-AI thinking ...

@themipper @bodil this!

“once someone has emotionally invested in LLMs being the future of their career it is really hard to get an honest conversation going” 🎯

@irenerd @themipper @bodil Because it’s a cult. You can’t convince me otherwise. And the next hyped tech thing, once the AI bubble pops, will also become a cult, and anyone not bought in 110% will be told that they need to get on board or get left behind.