"AI can make mistakes, always check the results"

I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.

You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".

What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".

Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720

@jenniferplusplus I believe it does not even make mistakes in the conventional sense, as mistakes require an ability to pursue truth.
@ozzelot @jenniferplusplus it's all "hallucination", sometimes it's incidentally correct

@pikesley @ozzelot @jenniferplusplus

and also they're not people so they don't hallucinate either. chatbots produce noise and the vc firms want that to be our fault.

@jenniferplusplus this. The fact that we allowed companies to get away with "computer says no" for so long led to this point. If we'd beaten them around the head a decade or two back with "and who owns the computer?! Who programmed it?! A human is responsible for this somewhere", then this technology would not have taken off anywhere close to as well.

Can you imagine the liability insurance OpenAI would have to buy if you could sue them for incorrect results?

@emily_s @jenniferplusplus We totally memory-holed all that stuff about machine learning algorithms (really the same thing as AI, but the branding was different back then) and all the hype about how they’d make unbiased decisions. How did that turn out?

Oh yeah. Garbage in, garbage out.

@MisuseCase @jenniferplusplus this isn't even that. This was companies setting up their systems so that when the computer says no, that's it. They claim they can't do anything about it. Somehow they got people to forget that someone programmed that computer to do that. It's not inevitable, it's not carved into the fabric of the universe, it's a few magnetic fields on a disk of rust that a human made and encoded. It can be changed. They just didn't want to, and got away with it.

@emily_s @MisuseCase @jenniferplusplus

I wouldn't actually blame computers for that; it's just one more iteration of the bureaucratic mindset: The Rules say so, and The Rules can't be changed.

@emily_s @jenniferplusplus
As a computer programmer, yes. There is no such thing as a computer error. It is one or more of:
* programmer error
* documentation error
* user error (with a side-order of either documentation error or "user didn't bother to read the documentation")

@kerravonsen @emily_s @jenniferplusplus While Intel were clearly at fault, I think people on the receiving end of the Pentium FDIV bug could reasonably describe that as a computer error

(there are certainly hardware failures of a pernicious nature)
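For the curious: the FDIV flaw was narrow enough to probe from a few lines of C. A minimal sketch using the test pair that circulated widely at the time; treat the exact constants as the folklore test case, not anything specific to this thread:

```c
#include <stdio.h>

/* The classic Pentium FDIV probe. On a correct FPU this particular
 * pair of constants divides and multiplies back to exactly x, so the
 * residue is 0.0; on a flawed Pentium the division is wrong in the
 * low bits and the residue comes out around 256. */
int main(void) {
    double x = 4195835.0;
    double y = 3145727.0;
    double residue = x - (x / y) * y;
    printf("residue = %g -> %s\n", residue,
           residue == 0.0 ? "division looks correct" : "FDIV-style error");
    return 0;
}
```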

@flippac @emily_s @jenniferplusplus Fiiiiine, there are also hardware errors; but doesn't that again come back to the human who designed the hardware?

@flippac @emily_s @jenniferplusplus
See also the Year 2038 problem. https://en.wikipedia.org/wiki/Year_2038_problem -- is that a computer error or a programmer error?

@kerravonsen @emily_s @jenniferplusplus BCD existed: if I'm old enough to talk about FDIV I certainly remember the long buildup to Y2K (including everyone running into it while computing about the future)

@kerravonsen @emily_s @jenniferplusplus The Epochalypse specifically is worse, mind: it's an entirely reasonable (initially implicit-spec) "holy shit we did not build this to work for that long and you did it anyway" problem that originated when the relevant software wasn't a piece of critical infrastructure.

For banks and the like, Y2K was expected long-term maintenance.

The epochalypse is, realistically, user error.
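For anyone who hasn't watched it happen, a minimal sketch in C of the rollover itself, assuming a signed 32-bit seconds-since-1970 counter (classic time_t). The wrap on the narrowing cast is implementation-defined, and printing the pre-1970 date needs a gmtime that accepts negative times, e.g. glibc:

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    char buf[32];

    /* The last second a signed 32-bit counter can represent. */
    time_t last = (time_t)INT32_MAX;
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&last));
    printf("last representable second: %s UTC\n", buf);

    /* One tick later the 32-bit value wraps to INT32_MIN, which a
     * Unix clock reads as a date in December 1901. */
    time_t wrapped = (time_t)(int32_t)((int64_t)INT32_MAX + 1);
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&wrapped));
    printf("one second after the wrap: %s UTC\n", buf);
    return 0;
}
```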

@kerravonsen @emily_s @jenniferplusplus Not always: sometimes it's being used outside the design spec, sometimes that's because the design spec wasn't communicated clearly but not always, etc etc.

"When someone says 'computer error' rather than something more specific they're probably full of it" I'm fine with, but one of the realities of computing machines as opposed to the mathematical abstraction of computing is that like all machines they have a non-zero failure rate - even if it's pretty damn tiny.

Now, the amount of shite practice out there re error tolerance/resilience? Sure, we can talk about that (or skip it, because neither of us are newbies here). But bitflips absolutely happen in the wild, especially if someone didn't realise what it really took to keep their machine cool enough.
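To make the bitflip point concrete, a minimal sketch of the crudest resilience mechanism, a single parity bit: it detects any one flipped bit in a word, though it can't locate it, and two flips cancel out. Real hardware uses ECC, which can also correct:

```c
#include <stdio.h>
#include <stdint.h>

/* XOR of all bits in the word: 0 or 1. */
static int parity(uint32_t w) {
    int p = 0;
    while (w) { p ^= (int)(w & 1u); w >>= 1; }
    return p;
}

int main(void) {
    uint32_t word = 0xDEADBEEFu;
    int stored = parity(word);   /* remember parity at write time   */
    word ^= (1u << 13);          /* simulate a cosmic-ray bitflip   */
    printf("bitflip %s\n",
           parity(word) == stored ? "went undetected" : "detected");
    return 0;
}
```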

@kerravonsen hey just to be clear, you're doing it right now. You're saying the computer is permitted to be wrong. The consequences will land on whoever was least able to avoid them, and they will deserve it for not getting out of the way

@jenniferplusplus I am quite confused as to how you concluded that I said that, when I've been pointing out that it is human error

@jenniferplusplus The computer is wrongly permitted to be wrong. I thought I was agreeing with you.

@kerravonsen @emily_s @jenniferplusplus or a gamma ray and a bit flip. But that should probably be caught.

@jenniferplusplus right?! What else would you buy if right on the label it said "this may not be what we say it is"??

So it may not be correct information, and you don't know which part. You are using it to not have to do the legwork yourself. Do you
Take what it gave you, fingers crossed the wrong bits are not too bad
Or
Do legwork to figure out what is wrong, defeating the purpose?
AND how do you know your source is correct?

#AI continuing to learn will keep reintroducing bogusness exponentially!?

@Crystal_Fish_Caves what would I buy? Very little.

But, a lot more people than we like to think are gambling addicts. This hits the same psychological exploit as trading card packs, blind boxes, and loot crates. And a lot of the people who are the most vigorous proponents are effectively playing with someone else's money

@Crystal_Fish_Caves @jenniferplusplus

This does remind me of this fucking weirdness when buying a house:

https://en.wikipedia.org/wiki/Title_insurance

A lot of the US does not have the government keep track of who owns what land so when you buy a place, you need to also buy insurance that says that you are actually buying it from someone able to sell it.

As far as I can tell every other country just has a department that you can ask "hey is this the owner" and trust the answer.


@gbargoud @Crystal_Fish_Caves @jenniferplusplus if the American insurance industry can find a way to require insurance for something, they will
@jenniferplusplus Saying “AI can make mistakes” is exactly like saying “An adjustable rate mortgage can increase the interest rate at any time.” It’s not a question of “if”, but of “how soon”.

@mighty_orbot @jenniferplusplus

I would really love to live in your world.

Humans around me fuck up all the time.
Most of the time they won't even apologise when they are sprung on their "hallucination"

And they don't come with a warning sticker

@jenniferplusplus yeah it's a weak-ass "CYA" for the AI vendors
When Using AI Leads to “Brain Fry”

As firms increasingly incentivize employees to build and oversee complex teams of agents—for example, by measuring and rewarding token consumption as a proxy for performance—people are finding themselves pushed to their cognitive limits. Participants in a recent study described a “buzzing” feeling or a mental fog with difficulty focusing, slower decision-making, and headaches. The authors call this phenomenon “AI brain fry,” defined as mental fatigue from excessive use or oversight of AI tools beyond one’s cognitive capacity. This AI-associated mental strain carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit. The findings also show how AI-driven workflows can be designed to diminish burnout and point toward specific manager, team, and organizational practices to avoid mental fatigue even as AI work intensifies.

Harvard Business Review

@jenniferplusplus

It's probably safer and easier to just do the job yourself...

@jenniferplusplus They sure came up with an ingenious solution to the trolley problem tho- hide the switch thrower behind a wall and blame the victims for being on the wrong tracks
@jenniferplusplus it's the all care, no responsibility clauses of software licences on speed.
Peak billionaire-hoarder techbro, really, not new, just distilled stench.
@jenniferplusplus Yes! Thanks for articulating this, I couldn't put my finger on what annoyed me about it.

@jenniferplusplus

You stated: <<What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not". Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on.>>

Way back in the early 2000s, there was a system called "Dragon Dictate". The goal was to eliminate #human #transcriptionists with automated speech-to-text (sound familiar?). The system had to be trained on your voice and vocabulary. Once properly trained it could do a pretty good job, I'll guess 95-98%. It was better suited to output that was stereotyped (mostly the same) and structured (such as radiology reports and operative notes).
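A back-of-the-envelope sketch of why "95-98%" still means real proofreading work. The accuracy figure and report length below are illustrative, and treating per-word errors as independent is a simplification:

```c
#include <stdio.h>

int main(void) {
    double accuracy = 0.97;  /* illustrative, mid-range of "95-98%"   */
    int words = 400;         /* assumed length of a typical report    */

    /* With independent per-word errors, the expected error count
     * grows linearly with length: roughly a dozen per 400 words. */
    printf("expected errors: %.1f\n", words * (1.0 - accuracy));
    return 0;
}
```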

Regardless of how the note/report was generated, the professional who spoke the words had an obligation to at least scan the output and sign it (yes, with an ink pen!). Once signed it became part of the "legal medical record", open to misinterpretation, copying, lawsuits, etc. etc.

Once Dragon Dictate became routine (and they fired all the transcriptionists) I started to notice this little #disclaimer at the bottom:

"If portions of this note are confusing or indecipherable please feel free to call me with questions or concerns." Sounds a lot like #AI to me! I polite way to summarize this is:

👉 They were trying to force me to be their copy-editor. 👈

It cast the entire content in doubt.

Consider for a moment the difference between saying "The scan does not show cancer." and "The scan does show cancer." That "not" is doing a lot of work, and is very easy to miss when you're talking fast and never intend to read your own note ever again.

More subtle is the grammatical error in the disclaimer's first sentence: "This note was #dictated using Dragon text to speech recognition software." Either they changed their product name to "Dragon Text", in which case the capitalization is off, or they transposed words and it should read "speech to text" or "speech recognition" with no "text".

👉 In other words, they didn't even proof-read their own disclaimer! 😱

#MedicalRecords #Medicine #SpeechToText #Liability #Risk #SignalToNoise

@jenniferplusplus

And if the LLM is so wrong (and I agree they are wrong a lot: also annoyingly right, then suddenly massively wrong), what does this say about the datasets they are trained on and the training methodology used to build the model?

@jenniferplusplus Also, I feel it just undermines LLMs being actually useful if I have to manually search things up to verify them.

@matty @jenniferplusplus Yup! And I'm pretty sure human brains aren't actually built in ways that make it possible to process and pick out the errors in tonnes of nearly-right-but-not-quite information. Especially when slop is designed to look correct to humans.

The idea that you can solve LLM problems by checking the output enough is pure liability-washing

@jenniferplusplus

I think that being liable for the mistakes of an AI that you use is only fair... They who live by the sword, etc.

@Daniel_Blake @jenniferplusplus The problems start if you aren't using the AI because you want to, but because you got ordered to use it.

Cory Doctorow has written a lot about what he calls Reverse Centaurs - persons having to work for a machine instead of persons using a machine. For instance:
https://pluralistic.net/2025/12/05/pop-that-bubble/#u-washington


@soulsource @jenniferplusplus

Yes, of course: I was only thinking of voluntary use.

@jenniferplusplus I agree, but I also think that LLMs being unreliable is part of the business model: if it gave acceptable answers first time you'd only ask one question; if it messes up slightly you type more stuff, you rephrase the prompt or rewrite the spec, all of which are more tokens that your org will actually pay for. It's like built-in #enshittification from the start.
@jenniferplusplus AI appears to "learn from its mistakes" and amplify them...
@jenniferplusplus
AI is a nifty tool, but blindly trusting its output is foolish. AI should not be treated as an unquestionable authority, which I've personally seen happen in the workplace. The novelty of AI makes it enjoyable for now, yet companies rushing to replace human experience and expertise with AI will soon see quality erode and trust vanish altogether. When that happens these companies will learn that once quality and trust are lost, winning them back is far harder than maintaining them.
@jenniferplusplus ai is the intern on seven tabs of acid. He can no longer tell the difference between truth and fiction, and this will lead to lots of mistakes, most of which will lead to you staring at your monitor in confusion.
You must at a minimum verify the work, make sure it corresponds to reality, and get ready to wtf.
@jenniferplusplus Note well: whether “you” are actually liable for the errors made in the output produced by AI in response to your prompting depends entirely on whether “you” are someone privileged with impunity for your own errors in judgment, or instead someone accountable for forced errors outside your own control.

@jenniferplusplus

LLMs do not make mistakes on their own; you make mistakes using them

> "AI can make mistakes, always check the results"

> I fucking loathe this phrase and everything that goes into it.

Why? It is good advice and important when using LLMs.

I use LLMs every day in my coding practice, and they do make errors (thank you compiler)

LLMs are a tool, and must be wielded. When you use them you are responsible for the results

@jenniferplusplus

AI *WILL* make mistakes. Do not use.

@jenniferplusplus There's a misunderstanding: an "AI can" is like a "worms can"; that's the subject. Now it all makes sense.
@jenniferplusplus
They want us to pay for a service they won't stand behind. That should tell you everything you need to know.

@jenniferplusplus

"What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".

Agreed... but also.. this is by design

"They" are intentionally designing a system that will (both intentionally & negligently) be used to inflict harms.. while also removing any "accountability" for the harms they inflict

A normal reasonable person sees that old slide deck from IBM about how:
"computers cannot be trusted to make decisions because computers can never be held accountable" as a dystopian warning

Tech Bros see it as:
"an opportunity to profit from 'Creating the Torment Nexus' while insulating themselves from any consequences for their own actions"

@jenniferplusplus

Or it means: People can make mistakes, always check the results.

After all, Buddha said that you should check everything before you accept it as your own.

@jenniferplusplus
If I need to check those results, I could have found them in the first place, without using A fucking I.