No one should say that a chatbot "hallucinates". Chatbots do not have minds; they manipulate text. Hallucination requires not only consciousness but a physical brain that falsely perceives a sensation as real. Machine learning models have neither consciousness nor physical form, and they never will.

#AIHype #mathymath

@annedrewhu It’s definitely an odd choice of language that implies some kind of victim status for the model, and it’s considerably less clear than saying “outputs false information”

@louiseadennis @annedrewhu I really think we should use Frankfurt's terminology: Bullshit

https://en.m.wikipedia.org/wiki/On_Bullshit

@rrb @annedrewhu I'd say it depends upon context. Use of "bullshit" in some contexts will encourage people to take you less seriously than the person calling it a hallucination. One of the clever things about using "hallucinate" for this phenomenon is that it sounds both technical and mysterious, and it encourages the listener to view the speaker as cleverer/more knowledgeable than them.
@annedrewhu @louiseadennis @rrb I hadn’t heard the idea that an LLM is like a machine hallucinating before, and I quite like it. (This is of course the problem with trying to suppress an idea by discussing it.)
Why I like it: it makes clear that the machine isn’t lying, it simply has no idea. It’s constructing a description of reality out of scraps of description, and each part fits into the next, but there’s no consistency or holistic logic.