"AI can make mistakes, always check the results"

I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.

You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".

What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".

Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720

> @jenniferplusplus i agree, but I also think that LLMs being unreliable is part of the business model, if it gave acceptable answers first time you'd only ask one question, if it messes up slightly you type more stuff, you rephrase the prompt or rewrite the spec, all of which are more tokens that your org will actually pay for. its like builtin #enshittification from the start.