"hallucination" is another misleading term that anthropomorphizes large language models and adds to the hype. just say "produces false/incorrect output"
@abebab 💣
@KimCrayton1 @abebab yes. These are still deterministic machines transforming input; the stochastic sampling only obscures the relation between input and output.
@abebab Hmm. I usually use this term (not for LLMs, but in a computer vision context) for cases where the task is ill-posed, so the model tries its best, but there is no way in principle to guarantee a correct answer.
Examples: monocular depth estimation (no way to recover the absolute scale from a single photo), image super-resolution, etc.
@abebab That is different from the case where I would say "produces incorrect output", e.g. a mistake in image classification, where the correct output is possible in principle (and not just by sheer luck).
@abebab This is good, well said, thanks.
@abebab It's funny, I actually came to use the term for the correct stuff it generates, as a reminder to myself not to take it too seriously.
Anthropomorphizing it is still not a good idea, but it is not a crime to use those words deliberately, as long as we say, and know, that it does not have those capabilities.
GPT is an automated bullshitter.
It's fun to call its "reasoning successes" hallucinations, though.