People keep assuring me that LLMs writing code is a revolution, that as long as we maintain sound engineering practices and tight code review they're actually extruding code fit for purpose in a fraction of the time it would take a human.

And every damned time, every damned time any of that code surfaces, like Anthropic's flagship offering just did, somehow it's exactly the pile of steaming technical debt and fifteen-year-old Stack Overflow snippets we were assured your careful oversight had made sure it wasn't.

Can someone please explain this to me? Is everyone but you simply prompting it wrong?

It's a good thing programmers aren't susceptible to hubris in any way, or this would have been so much worse.

@bodil

> Can someone please explain this to me?

Sure: code whose job is managing a natural-language LLM isn't going to look like the procedural code you're used to.

If you have doubts whether coding assistants like https://antigravity.google are any use, download it, try it on your own code with your own choice of tasks and find out.

You can throw the changes away if you are worried about getting contaminated.

You can write about your experiment here. And then you will actually know.


@hopeless
Your explanation just restates the observation; it gives no reason why the code is supposed to look different.

@bodil

@Landa @bodil

> Your explanation just restates the observation

OP has a point and a question. The point is that Anthropic's leaked code doesn't look like what they expected; that's because its job isn't the kind of code they're used to.

The question is "are LLMs useful for writing code". To which I encourage them to stop being passive-aggressive about it and actually find out, and write about it, like a human with agency.

Your response is "just" denial. Please let us know your experience with antigravity...