The use of “hallucinate” is a stroke of true evil genius in the AI world.

In ANY other context we’d just call them errors & the fail rate would be crystal clear.

Instead, “hallucinate” implies genuine sentience & the *absence* of real error.

Aw, this software isn’t shit! Boo’s just dreaming!

@Catvalente "Hallucinate" might not be the best choice of words, but LLMs don't error out like traditional programs and apps. LLMs make mistakes: they misremember, they confabulate, they bullshit. That's something we're NOT used to seeing from computers; it may not (necessarily) imply consciousness, but it's something uncomfortably ... familiar.
Computer programs don't make mistakes; at worst they encounter scenarios not accounted for by their programming. Assuming there's no hardware issue or data corruption, a calculator app will NEVER make an arithmetic error. You can run the same calculation a million times, and it will always give you the same answer.
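A minimal sketch of that determinism claim (in Python, purely illustrative): the same input through the same deterministic code path yields the same output, no matter how many times you run it.

```python
# A deterministic program: identical input always produces identical output.
def calculate(a, b):
    return a + b

# Run the same calculation a million times and collect every distinct answer.
results = {calculate(2, 2) for _ in range(1_000_000)}

# Only one answer ever appears; a "wrong" result would mean a bug in the
# code or broken hardware, not the program "misremembering" arithmetic.
assert results == {4}
```

That guarantee is exactly what the post is pointing at: classical software fails loudly or not at all, rather than confidently drifting.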

@DanDan420 @Catvalente In other words, AI hallucinations are not errors; that's the system working as expected.

The error is between the chair and the keyboard.

@slotos @Catvalente AI doesn't put out errors the way we're used to computers doing, like a 404; instead it makes mistakes.
For AI to work the way it does (to read between the lines, and to learn organically from a dataset), its design requires a departure from the strict, deterministic rules of traditional computing; but that also invites ambiguity.
The user error comes in if you take an LLM at its word when you need factual accuracy.
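To make the contrast with the calculator concrete, here is a toy sketch (in Python; the word list and scores are hypothetical, and real LLMs sample over learned vocabularies of tens of thousands of tokens) of temperature-based sampling, the mechanism by which the same prompt can yield different, sometimes wrong, continuations by design:

```python
import math
import random

# Hypothetical next-token scores for the prompt "The capital of France is".
logits = {"Paris": 5.0, "Lyon": 2.0, "Berlin": 1.5}

def sample_next_token(logits, temperature=1.0, rng=random):
    # Softmax with temperature: higher temperature flattens the
    # distribution, making less likely tokens more probable.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point rounding

random.seed(0)
samples = [sample_next_token(logits, temperature=1.5) for _ in range(20)]
# Occasionally drawing "Lyon" or "Berlin" here is not a malfunction:
# the sampler is doing exactly what it was built to do.
```

A low-probability token landing in the output is the statistical machinery working as specified, which is why "error" in the traditional crash-or-wrong-opcode sense doesn't quite fit.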