The use of “hallucinate” is a stroke of true evil genius in the AI world.

In ANY other context we’d just call them errors & the fail rate would be crystal clear.

Instead, “hallucinate” implies genuine sentience & the *absence* of real error.

Aw, this software isn’t shit! Boo’s just dreaming!

@Catvalente Is it an actual error, though? The programming says to give a result that looks like other results, one that looks genuine, not to give a truthful, factual result based on analysis. An error would be a failure to give a result as instructed, and that's not what the program is doing when hallucinations occur.
@Mimesatwork @Catvalente I'd agree with this. I mean, I certainly agree "hallucinate" is also the wrong word, but "error" to me implies something more fixable and programmatic, and less inherent and random, than what's actually going on.
@JubalBarca @Mimesatwork @Catvalente I tend to start with slot machine analogies myself, but there's also a more technically accurate variant of an old saying: infinite monkeys at typewriters, with a key for each word Shakespeare ever wrote, weighted by how often he used it, might sometimes get his plays right. How lucky! Hallucination is the ordinary state and what these systems are built to do; anything resembling reality is often luck and happenstance.
@Mimesatwork @Catvalente This is actually a really good point. The purpose of LLMs is not, and never has been, to give out real information; it has always been to approximate human language by way of statistical modeling. Even calling factual inaccuracies "failures" cedes ground to the people peddling these things, however unintentionally.
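The "keys weighted by how often he used them" analogy from earlier in the thread can be sketched as a toy bigram sampler. This is a deliberately minimal illustration, not how a real LLM works (real models use neural networks over tokens, not word-pair counts); the corpus, function names, and seed here are all invented for the example. The point it demonstrates is the thread's: the program's only job is to emit statistically plausible continuations, and nothing in it checks truth.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def sample_next(counts, word, rng):
    """Pick a successor in proportion to observed frequency --
    the frequency-weighted typewriter keys from the analogy."""
    successors = counts[word]
    choices = list(successors)
    weights = [successors[w] for w in choices]
    return rng.choices(choices, weights=weights, k=1)[0]

def generate(counts, start, n_words, rng):
    """Chain samples together. Output always *looks* like the
    training text; whether it is true never enters the process."""
    out = [start]
    for _ in range(n_words):
        if not counts[out[-1]]:  # dead end: no observed successor
            break
        out.append(sample_next(counts, out[-1], rng))
    return " ".join(out)
```

Every word the sampler emits is "correct" by its own objective (it followed the previous word somewhere in training data), which is why calling an implausible-but-fluent output an "error" or a "hallucination" both mislabel what the machine was asked to do.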