No one should say that a chatbot "hallucinates". Chatbots do not have minds; they manipulate text. Hallucination requires not only consciousness but a physical brain that falsely perceives a sensation as real. Machine learning models have neither consciousness nor physical form, and they never will.

#AIHype #mathymath

@annedrewhu It’s definitely an odd choice of language that implies some kind of victim status for the model, and it is considerably less clear than saying “outputs false information”.

@louiseadennis @annedrewhu I really think we should use Frankfurt's terminology: Bullshit

https://en.m.wikipedia.org/wiki/On_Bullshit

@rrb @annedrewhu In other contexts too, "bullshit" is clearly a good term for the phenomenon, since it succinctly conveys something about the process by which the output is generated, beyond simply noting that the output is incorrect.

@louiseadennis @annedrewhu I like the term for two reasons: (1) it is accurate by Frankfurt's definition, and (2) it gives the tech the gravitas it deserves.

A tool that spouts any old nonsense just because it sounds credible is really of minimal utility. A bullshit generator.