The full source code of Anthropic's Claude Code has leaked. Claude is seen by many as the best coding LLM on the market, with Anthropic proudly stating that Claude Code itself is mostly written by the LLM.

Now this sounds good as long as nobody can see the code, which is quite the trash fire. Detecting "code sentiment" via regular expressions, variable and function names containing prompt fragments trying to influence the bot, a completely opaque mess of a control flow that makes actual maintenance and debugging functionally impossible, and the prompts ... oh, the prompts. All the begging and pleading with the chatbot not to do this, not to do that, or please to do this.
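To make the regex point concrete: here is a hypothetical sketch of what keyword-based "code sentiment" detection looks like. This is not the leaked code; every pattern and name below is invented for illustration. It shows why the approach is brittle: it just pattern-matches words, with no understanding of what the code does.

```python
import re

# Invented patterns for illustration only, not taken from any real codebase.
NEGATIVE_PATTERNS = [
    re.compile(r"\b(hack|kludge|broken|awful)\b", re.IGNORECASE),
    re.compile(r"\bTODO\b|\bFIXME\b"),
]

def code_sentiment(source: str) -> str:
    """Label a snippet 'negative' if any keyword pattern matches, else 'neutral'."""
    for pattern in NEGATIVE_PATTERNS:
        if pattern.search(source):
            return "negative"
    return "neutral"

print(code_sentiment("# FIXME: this is a horrible hack"))  # negative
print(code_sentiment("def add(a, b): return a + b"))       # neutral
```

Anything like this will flag a comment that merely mentions the word "hack" while happily passing genuinely dangerous code that avoids the magic words.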

It is fascinating, but it is as far from actual engineering as drunkenly pissing your name in the snow. Dunno what you call the people prompting software at Anthropic, but "engineer" is not it.

Now it is fun to see the currently hyped product stripped bare, showing its pathetic quality, but that is the future of software if we let these companies continue to undermine every good practice software engineering has tried to establish.

The software we have to use will be bad, insecure, unmaintainable, and expensive, with nobody having the skills or resources to build something better. As I wrote a few months ago: LLM-based software production is equivalent to saying that fast fashion should be the only way to produce clothing. A tragic degeneration of the quality of the artefacts we rely on, built for maximum profit on the backs of people in countries of the global majority.

@tante I often wonder how much of (especially proprietary) software already suffered from these issues before the whole LLM situation came about.
@Namnatulco @tante I maintain a codebase that I started 8 years ago. It works, but there are definite "here be dragons" bits that I dread ever having to change.
The difference being that I know to fear that, whereas an LLM will confidently produce a "fix" that will introduce subtle data corruption in certain cases.
@rupert @tante I was thinking of a different perspective - a lack of appreciation for quality in IT overall.
Obviously there are many people out there who maintain their projects with care, even in commercial settings. But thinking back to when I was a teenager, the "have you tried turning it off and on again" approach was already the norm. Society has already accepted that IT is a mess, and this makes it hard to see the harm AI does to quality, even as it leads to more systemic problems.
@rupert @tante there's an obvious connection to a critique of capitalism here, but I don't think that's the whole story, since we're seeing lots of open source projects both using and doing LLM stuff. It's not really a clean split between open source and closed source anymore (it probably never was, though...)

@Namnatulco @rupert @tante

Taking Microsoft as an example (they are definitely not the only ones):

Most of the time, when something goes wrong, the user will be confronted with an impersonal message, often accompanied by a long error code or hex value.

So instead of a "Sorry you likely lost work", you get a passive-aggressive "Have fun googling this code. Or just give up."

Half the non-technical users I know instantly blame themselves. The other half shrug or vent their frustration.

So yeah, software has trained people pretty well to just expect low quality.