"AI can make mistakes, always check the results"

I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.

You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".

What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".

Except "you" is generally not the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720

@jenniferplusplus this. The fact that we allowed companies to get away with "computer says no" for so long led to this point. If we'd beaten them around the head a decade or two back with "and who owns the computer?! Who programmed it?! A human is responsible for this somewhere", then this technology would not have taken off anywhere close to as well.

Can you imagine the liability insurance OpenAI would have to buy if you could sue them for incorrect results?

@emily_s @jenniferplusplus
As a computer programmer, yes. There is no such thing as a computer error. It is one or more of:
* programmer error
* documentation error
* user error (with a side-order of either documentation error or "user didn't bother to read the documentation")

@kerravonsen @emily_s @jenniferplusplus While Intel were clearly at fault, I think people on the receiving end of the Pentium FDIV bug could reasonably describe that as a computer error.

(there are certainly hardware failures of a pernicious nature)

@flippac @emily_s @jenniferplusplus Fiiiiine, there are also hardware errors; but doesn't that again come back to the human who designed the hardware?
@flippac @emily_s @jenniferplusplus
See also the Year 2038 problem. https://en.wikipedia.org/wiki/Year_2038_problem -- is that a computer error or a programmer error?

@kerravonsen @emily_s @jenniferplusplus BCD existed: if I'm old enough to talk about FDIV, I certainly remember the long buildup to Y2K (including everyone running into it while computing about the future).

@kerravonsen @emily_s @jenniferplusplus The Epochalypse specifically is worse, mind: it's an entirely reasonable (initially implicit-spec) "holy shit we did not build this to work for that long and you did it anyway" problem that originated when the relevant software wasn't a piece of critical infrastructure.

For banks and the like, Y2K was expected long-term maintenance.

The Epochalypse is, realistically, user error.