“The LLM generated what was described, not what was needed.”

https://blog.katanaquant.com/p/your-llm-doesnt-write-correct-code

Your LLM Doesn't Write Correct Code. It Writes Plausible Code.

Vagabond Research (@jack)

There's one line there that would be valuable in a broader discussion (beyond coding):
"This is not a syntax error. It is a semantic bug:"

Given that LLMs don't even have a semantic component, this seems foreseeable. But outside the specific context of coding, I see people struggling to formulate the point cogently.

The same point would be relevant to contexts like technical writing, but I don't think the vocabulary exists for expressing it neatly in those cases.
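
In the coding case, at least, the distinction is easy to demonstrate. Here's a minimal sketch (my own illustration, not from the linked article; the function names and numbers are invented): both versions parse and run without complaint, but only one does what was asked.

```python
# Illustrative only: a "semantic bug" in the article's sense.
# Both functions are syntactically valid Python; neither raises an error.
# Only one matches the intent "price after a 10% discount".

def discounted_price_plausible(price: float, discount_pct: float) -> float:
    # Reads plausibly, but subtracts percentage *points*, not an amount.
    return price - discount_pct

def discounted_price_correct(price: float, discount_pct: float) -> float:
    # Subtracts discount_pct percent *of* the price.
    return price * (1 - discount_pct / 100)

print(discounted_price_plausible(200.0, 10.0))  # 190.0 -- wrong, and silently so
print(discounted_price_correct(200.0, 10.0))    # 180.0 -- what was needed
```

No tool in the toolchain ever flags the first version. It is plausible code, not correct code.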

@glc @jack

When people start out with LLMs, they ask for code.

But after a while, they ask for a spec.
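
One rough sketch of what that shift can look like in practice (the function name and rules here are mine, purely illustrative): the intent is pinned down first as executable checks, and the generated code is acceptable exactly when the checks pass.

```python
# Illustrative only: "asking for spec" expressed as executable checks.
# normalize_email and its rules are made-up examples, not from the thread.

def normalize_email(raw: str) -> str:
    # The code an LLM would be asked to produce *against the spec below*.
    return raw.strip().lower()

# The spec: trim whitespace, fold case, change nothing else.
assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
assert normalize_email("bob@example.com") == "bob@example.com"
# Idempotence: normalizing twice changes nothing further.
assert normalize_email(normalize_email(" X@Y.Z ")) == "x@y.z"

print("spec satisfied")
```

The spec, not the generated code, becomes the artifact you iterate on.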