The use of “hallucinate” is a stroke of true evil genius in the AI world.

In ANY other context we’d just call them errors & the fail rate would be crystal clear.

Instead, “hallucinate” implies genuine sentience & the *absence* of real error.

Aw, this software isn’t shit! Boo’s just dreaming!

@Catvalente "Hallucinate" might not be the best choice of words, but LLMs don't error out like traditional programs and apps. LLMs make mistakes, they misremember, they confabulate, they bullshit, which is something we're NOT used to seeing from computers; it may not (necessarily) imply consciousness, but it's something uncomfortably ... familiar.
Computer programs don't make mistakes; they may encounter scenarios not accounted for by their programming. Assuming there's no hardware issue or data corruption, a calculator app will NEVER make an arithmetic error. You can put the same calculation in a million times, and it will always give you the same answer.
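That calculator point is just determinism: a pure function with no hidden state and no randomness returns the same result for the same inputs, every single time. A trivial sketch (the function name and numbers are made up for illustration):

```python
def calc(a, b):
    # Pure arithmetic: no hidden state, no randomness, no "judgment".
    return a * b + a

# Run the same calculation a million times; collect every distinct answer.
results = {calc(7, 6) for _ in range(1_000_000)}
print(results)  # {49}: a million runs, exactly one answer
```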

@DanDan420 @Catvalente You're just describing the difference between a deterministic computer program and one with simulated probability (which are sometimes called "models").

LLMs still run the same way other programs run on a computer: deterministically. However, the simulated randomness gives folks the impression that it's somehow different. It isn't. If you've played just about any computer game, you'll have encountered what's going on here conceptually.
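To make the "simulated randomness is still deterministic" point concrete: seed a pseudo-random generator and you get the exact same "random" sequence every run. A minimal sketch using Python's standard `random` module:

```python
import random

# Two generators seeded identically produce identical "random" sequences:
# the randomness is simulated; the program underneath is deterministic.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(1, 100) for _ in range(5)]
seq_b = [b.randint(1, 100) for _ in range(5)]

print(seq_a == seq_b)  # True: same seed, same outputs, every time
```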

@avocado_toast @Catvalente Yes, procedurally generated worlds and levels in video games have been around for a very long time, but they involve a top-down approach: a number of rules are explicitly hard-coded and combined with random number generators to produce variety.
AI, however, uses a bottom-up approach, where the "rules" are organically inferred from the dataset it's trained on, while the dataset itself is not stored within the model in any way that can be directly retrieved.
When an LLM "hallucinates," it has a sort of intuitive understanding of what the answer should look like, and fills in the blanks based on the data it was trained on.
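The top-down approach described above can be sketched in a few lines: explicit hand-written rules plus a seeded RNG yield a unique-but-reproducible level. This is a toy illustration, not any particular game's algorithm:

```python
import random

def generate_row(seed, width=20):
    """Toy top-down procedural generation: hard-coded rules
    ('mostly floor, some walls, rare treasure') driven by a seeded RNG."""
    rng = random.Random(seed)
    tiles = []
    for _ in range(width):
        roll = rng.random()
        if roll < 0.80:
            tiles.append(".")   # rule: 80% floor
        elif roll < 0.95:
            tiles.append("#")   # rule: 15% wall
        else:
            tiles.append("$")   # rule: 5% treasure
    return "".join(tiles)

print(generate_row(7))                      # a "unique" row of terrain
print(generate_row(7) == generate_row(7))   # True: same seed, same world
```

The rules live entirely in the code; the RNG only picks among them. In a model, by contrast, there is no list of if/elif rules you could point to.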

@DanDan420 @Catvalente Forget generated worlds, and think of the fundamentals: a pseudo random dice roll in an RPG. It's still deterministic ultimately (the randomness isn't really random), but it's close enough to real that it might as well be.

This randomness makes things wavy when you interact with the models because it's incorporated throughout. That waviness (controlled by a knob some APIs call "temperature") is what gives you unpredictable results (hallucinations) instead of consistent, repeatable errors.
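The temperature knob is easy to show directly: dividing a model's scores (logits) by a temperature before the softmax sharpens or flattens the resulting probability distribution. A hypothetical three-token example, not any specific API:

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Scale logits by temperature, then softmax into probabilities.
    Low temperature sharpens the distribution (near-greedy picks);
    high temperature flattens it (wavier, more varied picks)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # made-up scores for three candidate tokens

cold = softmax_with_temperature(logits, 0.1)  # top token dominates
hot = softmax_with_temperature(logits, 2.0)   # probabilities even out

print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])

# Sampling from the flatter distribution varies from run to run:
rng = random.Random()
pick = rng.choices(range(3), weights=hot)[0]
```

At low temperature you get nearly the same answer every time; at high temperature the lower-scored tokens get picked often enough that outputs drift, which is exactly the waviness described above.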