The use of “hallucinate” is a stroke of true evil genius in the AI world.

In ANY other context we’d just call them errors & the fail rate would be crystal clear.

Instead, “hallucinate” implies genuine sentience & the *absence* of real error.

Aw, this software isn’t shit! Boo’s just dreaming!

@Catvalente "Hallucinate" might not be the best choice of words, but LLMs don't error out like traditional programs and apps. LLMs make mistakes, they misremember, they confabulate, they bullshit, which is something we're NOT used to seeing from computers; it may not (necessarily) imply consciousness, but it's something uncomfortably ... familiar.
Computer programs don't make mistakes; they may encounter scenarios not accounted for by their programming. Assuming there's no hardware issue or data corruption, a calculator app will NEVER make an arithmetic error. You can put in a calculation a million times, and it will always give you the same answer.
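That determinism is easy to demonstrate. Here's a minimal Python sketch (the `calculate` helper is just a stand-in for a calculator's parser, not any real app's code): run the same input thousands of times and you only ever see one answer.

```python
# A deterministic program: identical input always yields identical output.
def calculate(expr: str) -> float:
    # eval() stands in for a calculator's expression parser here;
    # builtins are stripped so only plain arithmetic is evaluated.
    return eval(expr, {"__builtins__": {}})

# Collect the results of 10,000 runs into a set; duplicates collapse.
results = {calculate("355 / 113") for _ in range(10_000)}
assert len(results) == 1  # ten thousand runs, exactly one distinct answer
```

Barring the hardware faults mentioned above, the set can never contain a second value.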

@DanDan420 @Catvalente You're just describing the difference between a deterministic computer program and one that uses simulated randomness (which is what "models" often are).

LLMs still run in the same way that other programs run on a computer...deterministically. However, the simulated randomness gives folks the impression that it is somehow different. It isn't different. If you play just about any computer game you'll have encountered what's going on here conceptually.
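The point above can be shown in a few lines. This is a toy sketch, not an actual LLM: `sample_tokens` and its five-word vocabulary are made up for illustration, but the mechanism (a seeded pseudorandom number generator driving "random" choices) is the same one behind game loot rolls and LLM token sampling.

```python
import random

def sample_tokens(seed: int, n: int = 5) -> list[str]:
    """Pick n 'random' tokens from a toy vocabulary using a seeded PRNG."""
    vocab = ["the", "cat", "sat", "on", "mat"]  # hypothetical vocabulary
    rng = random.Random(seed)  # simulated randomness: fully determined by the seed
    return [rng.choice(vocab) for _ in range(n)]

run_a = sample_tokens(seed=42)
run_b = sample_tokens(seed=42)
assert run_a == run_b  # same seed, same "random" output, every time
```

Fix the seed and the "random" choices repeat exactly; the randomness is simulated by a deterministic algorithm, which is why the program is still running deterministically underneath.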

@avocado_toast

"However, the simulated randomness gives folks the impression that it is somehow different."

That is a really good point. I need to incorporate this when I talk with my students about LLMs.