"AI can make mistakes, always check the results"

I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.

You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".

What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".

Except "you" is generally not the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720

@jenniferplusplus:

> LLMs do not make mistakes on their own, you make mistakes using them
>
> > "AI can make mistakes, always check the results"
> >
> > I fucking loathe this phrase and everything that goes into it.
>
> Why? It is good advice and important when using LLMs.
>
> I use LLMs every day in my coding practice, and they do make errors (thank you compiler)
>
> LLMs are a tool, and must be wielded. When you use them you are responsible for the results