The use of “hallucinate” is a stroke of true evil genius in the AI world.

In ANY other context we’d just call them errors & the fail rate would be crystal clear.

Instead, “hallucinate” implies genuine sentience & the *absence* of real error.

Aw, this software isn’t shit! Boo’s just dreaming!

@Catvalente is it an actual error, though? The program is instructed to produce a result that looks like other results, that looks genuine, not to produce a truthful, factual result based on analysis. An error would be a failure to give a result as instructed, and that's not what the program is doing when hallucinations occur.
@Mimesatwork @Catvalente This is actually a really good point. The purpose of LLMs is not, and never has been, to give out real information; it has always been to approximate human language by way of statistical modeling. Even calling factual inaccuracies "failures" cedes ground to the people peddling these things, however unintentionally.
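To make the "statistical modeling" point concrete, here is a minimal toy sketch of next-token sampling. It is not any real model's code; the vocabulary and probabilities are invented for illustration. The thing to notice is what's missing: there is no fact lookup and no notion of "correct" anywhere in the loop, so emitting a plausible-but-false continuation is the mechanism working exactly as specified.

```python
import random

# Invented toy distribution over possible next tokens for the prompt
# "The capital of France is". A language model scores continuations
# by statistical plausibility, not by truth.
next_token_probs = {
    "Paris": 0.62,   # statistically common continuation
    "Lyon": 0.21,    # plausible-looking but wrong
    "in": 0.15,      # grammatical, noncommittal
    "Narnia": 0.02,  # rare, but still assigned some probability
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample a token in proportion to its modeled probability.

    Note what is absent: no check against a knowledge base, no error
    branch for false statements. Producing "Lyon" here is not a
    failure to follow instructions; it is the instructions.
    """
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The capital of France is"
print(prompt, sample_next_token(next_token_probs))
```

Run it a few times and it will occasionally print "Lyon" or even "Narnia"; nothing in the program treats those runs differently from the "Paris" runs.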