No one should say that a chatbot "hallucinates". Chatbots do not have minds; they manipulate text. Hallucination requires not only consciousness but also a physical brain that falsely perceives a sensation as real. Machine learning models have neither consciousness nor physical form, and they never will.

#AIHype #mathymath

@annedrewhu Agreed, the "hallucination" term is just another way to make the chatbot seem more "alive" than it actually is. It's yet another way to hype it up, even when it's broken.

Myths, misconceptions & inaccuracies render AI systems opaque. Check out the resources we provide to tackle 8 of the most common myths about ‘artificial intelligence.’

@annedrewhu what term would you prefer? The other word that comes to mind is bullshitting but I suppose you'd suggest that also requires consciousness?

@annedrewhu Indeed. I found Ethan Mollick's article (below) useful and interesting, but took exception to his use of "lie" and "hallucinate." Anthropomorphism isn't helpful here.

(I won't say "never" to machine consciousness, but I'm certain we're a long, long, long way away from it and suspect the underlying tech will be fundamentally different.)

https://oneusefulthing.substack.com/p/how-to-get-an-ai-to-lie-to-you-in

How to Get an AI to Lie to You in Three Simple Steps

I keep getting fooled by AI, and it seems like others are, too.

One Useful Thing
@annedrewhu It’s definitely an odd choice of language that implies some kind of victim status for the model, and is considerably less clear than saying “outputs false information.”

@louiseadennis @annedrewhu I really think we should use Frankfurt's terminology: Bullshit

https://en.m.wikipedia.org/wiki/On_Bullshit

On Bullshit - Wikipedia

@rrb @annedrewhu I'd say it depends upon context. Use of "bullshit" in some contexts will encourage people to take you less seriously than the person calling it a hallucination. One of the clever things about using "hallucinate" for this phenomenon is that it sounds both technical and mysterious, and encourages the listener to view the speaker as cleverer/more knowledgeable than them.
@annedrewhu @louiseadennis @rrb I hadn’t heard the concept that LLM AI is like a machine hallucinating before, and I quite like it. (This is of course the problem with trying to suppress an idea by discussing it)
Why I like it: it makes clear that the machine isn’t lying, it just has no idea. It’s constructing a description of reality out of scraps of description, and each part fits into the next but there’s no consistency or holistic logic.
@rrb @annedrewhu In other contexts bullshit is clearly a good term for communicating the phenomenon since it succinctly communicates something about the process by which the output is generated beyond simply noting that the output is incorrect.

@louiseadennis @annedrewhu I like the term for two reasons: 1) it is accurate using Frankfurt's definition, and 2) it gives the tech the gravitas it deserves.

A tool that spouts nonsense just because it sounds credible is of minimal utility, really. A bullshit generator.

@annedrewhu should the term confabulation be used instead? That describes the process better at least.

@annedrewhu Yes! It seems to me that everyone is submitting to using the term even if they are not convinced!

https://dair-community.social/@OmaymaS/110017912767830605

Omayma (@[email protected])

I have a feeling that the term "Hallucination" of Large Language Models stuck and many people will regret it for years to come like the term AI.

Distributed AI Research Community

@annedrewhu agreed with this 100%.

Saying the model "hallucinated" assumes that it has some normal state that isn't hallucination... which is of course wrong.

Using this term is basically us lying to ourselves.

@annedrewhu the number of arguments on the bird app about whether AI (I really dislike this broad term) is sentient. There are arguments for, arguments against, and even loud voices making the philosophical point that we can’t prove consciousness.

Do humans have minds?

Do humans manipulate text?

@annedrewhu

@annedrewhu Although, like “hallucinating,” the word implies cognition that isn’t actually occurring, I kinda think we should call it “lying.”
@annedrewhu Somebody finally agrees with you (and me): "Granting a chatbot the ability to hallucinate — even if it’s just in our own minds — is problematic. It’s nonsense. People hallucinate. Maybe some animals do. Computers do not. They use math to make things up. [...] the term “hallucinate” obscures what’s really going on. It also serves to absolve the systems’ creators from taking responsibility for their products."
https://www.bloomberg.com/news/newsletters/2023-04-03/chatgpt-bing-and-bard-don-t-hallucinate-they-fabricate
AI Doesn’t Hallucinate. It Makes Things Up

There’s been so much talk about AI hallucinating that it’s making me feel like I’m hallucinating. But first…

Bloomberg