The use of “hallucinate” is a stroke of true evil genius in the AI world.

In ANY other context we’d just call them errors & the fail rate would be crystal clear.

Instead, “hallucinate” implies genuine sentience & the *absence* of real error.

Aw, this software isn’t shit! Boo’s just dreaming!

@Catvalente
Alternate interpretation:
LLMs do not actually "make mistakes". Every time, they do exactly what they are supposed to do, without error.
Problem is, what they are supposed to do is "hallucinating". Every single LLM output is a "hallucination". Sometimes, by random chance, the "hallucination" given matches objective reality, but that is not at all relevant to the LLM's function.

@painting_squirrel @Catvalente yes. this.

But worse: I wouldn't say "sometimes by random chance." The results _frequently_ match reality, because the statistics driving the output generation were derived from text that matched reality. But when they don't match reality, the output still sounds indistinguishable from correct output unless you check it against objective reality.

So it is a lull-you-and-kill-you game.
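The point above can be sketched with a toy example (this is a hypothetical illustration, not how any real LLM is built): a bigram model trained only on true sentences can still emit a fluent sentence that happens to be false, because it only models word statistics, not facts.

```python
import random
from collections import defaultdict

# Toy "training data": every sentence here is true.
corpus = [
    "paris is in france",
    "berlin is in germany",
]

# Collect bigram statistics: which word follows which.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start, length=4):
    """Sample a fluent-sounding word sequence from the bigram statistics."""
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

# May emit "paris is in france" (true) or "paris is in germany" (false):
# both are equally fluent to the model, which is exactly the problem.
print(generate("paris"))
```

Both outputs are statistically well-formed; only a check against reality separates them.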

@poleguy @Catvalente
I'll concede your point, but only because "sometimes" and "random chance" are badly defined terms that are not intuitively understood by humans (myself included).

To wit: just because something is statistically well tuned to give good results most of the time does not mean the results are not inherently random. "random" != "everything has the same probability"
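That last distinction can be shown in a few lines (a minimal sketch, with made-up token names and weights): a heavily biased draw is still a random draw, even though the outcomes are nowhere near equally likely.

```python
import random
from collections import Counter

# Hypothetical "well-tuned" sampler: "correct" is strongly favored,
# but "wrong" always remains possible -- biased, yet still random.
tokens = ["correct", "wrong"]
weights = [0.95, 0.05]  # tuned probabilities, deliberately NOT uniform

random.seed(0)  # fixed seed so this sketch is reproducible
draws = random.choices(tokens, weights=weights, k=10_000)
counts = Counter(draws)

# Roughly 95% of draws come out "correct", but "wrong" still shows up:
# good results most of the time does not mean deterministic results.
print(counts["correct"], counts["wrong"])
```

Uniform probability is just one special case of randomness; a tuned, lopsided distribution is another.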