"AI can make mistakes, always check the results"

I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.

You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".

What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".

Except "you" is generally not the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720

@jenniferplusplus this. The fact that we allowed companies to get away with "computer says no" for so long led to this point. If we'd beaten them around the head a decade or two back with "and who owns the computer?! Who programmed it?! A human is responsible for this somewhere", then this technology would not have taken off anywhere near as well.

Can you imagine the liability insurance OpenAI would have to buy if you could sue them for incorrect results?

@emily_s @jenniferplusplus We totally memory-holed all that stuff about machine learning algorithms (really the same thing as AI, but the branding was different back then) and all the hype about how they’d make unbiased decisions. How did that turn out?

Oh yeah. Garbage in, garbage out.

@MisuseCase @jenniferplusplus this isn't even that. This was companies setting up their systems so that when the computer says no, that's it. They claim they can't do anything about it. Somehow they got people to forget that someone programmed that computer to do that. It's not inevitable, it's not carved into the fabric of the universe, it's a few magnetic fields on a disk of rust that a human made and encoded. It can be changed. They just didn't want to, and got away with it.

@emily_s @MisuseCase @jenniferplusplus

I wouldn't actually blame computers for that; it's just one more iteration of the bureaucratic mindset: The Rules say so, and The Rules can't be changed.