The use of “hallucinate” is a stroke of true evil genius in the AI world.

In ANY other context we’d just call them errors & the failure rate would be crystal clear.

Instead, “hallucinate” implies genuine sentience & the *absence* of real error.

Aw, this software isn’t shit! Boo’s just dreaming!

@Catvalente @jwz see also.

TL;DR: given the way all outputs are generated, if one is a hallucination, they all are.
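To make that concrete, here is a toy sketch of next-token sampling. The prompt, vocabulary, and probabilities are invented for illustration; real models sample from a learned distribution over tens of thousands of tokens, but the point is the same: the identical code path produces both the “correct” and the “hallucinated” completion, and nothing in the mechanism distinguishes them.

```python
import random

# Toy next-token distribution (values invented for illustration).
# The model only ever sees probabilities; there is no flag anywhere
# marking one continuation as "true" and another as "hallucinated".
NEXT_TOKEN_PROBS = {
    "The capital of Australia is": {
        "Canberra": 0.6,  # factually correct continuation
        "Sydney": 0.4,    # factually wrong ("hallucinated") continuation
    },
}

def sample_next_token(prompt: str) -> str:
    """One step of autoregressive decoding: sample from the model's
    next-token distribution. The same sampling step runs whether the
    token it picks happens to be true or false."""
    dist = NEXT_TOKEN_PROBS[prompt]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "The capital of Australia is"
    for _ in range(5):
        print(prompt, sample_next_token(prompt))
```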

https://social.europlus.zone/@europlus/116191458412032034

europlus (@[email protected])

I’m sure many others have made this observation, but even just reading this post without reading the linked article made me realise (or remember) that... *All* LLM output is, in fact, a hallucination. Because the way it formulates a “hallucination” *is exactly the same* as how it formulates a response *we don’t consider* a hallucination. Same with “good” vs “bad” summaries (whatever the relative occurrence of each may be). #NoAI #HumanMade