AI machines aren’t ‘hallucinating’. But their makers are | Naomi Klein

“These models are enclosure and appropriation machines, devouring and privatizing our individual lives as well as our collective intellectual and artistic inheritances. And their goal … was always to profit off mass immiseration, which, under capitalism, is the glaring and logical consequence of replacing human functions with bots.”

https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein


Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

@catherinecronin I refuse to use any verbs for AI beyond “generating”. It does not think, hallucinate, lie, imagine, create, write, tease, sing, feel, dance, joke, smile, empathize, ache, wonder. It only generates regurgitated bits that imitate.
@cogdog @catherinecronin But it's accurate to say that it "makes up stuff" that doesn't exist, including completely fake citations. That's not just a regurgitation: something new has come about (for very wrong reasons).

@vahidm @catherinecronin Perhaps just semantic quibbling, but when I make up stuff it’s for a purpose (sarcasm? storytelling?); there is meaning and intent.

Just spawning new text is what random generators do through an algorithm. That’s the beauty of the “stochastic” labeling by @emilymbender: there is a difference between that and mere randomness. https://en.wikipedia.org/wiki/Stochastic

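To make that stochastic-versus-random distinction concrete, here is a minimal Python sketch. It is my own toy illustration, not anything from the thread or from Bender et al.; the context, candidate words, and probabilities are all invented:

```python
import random

# Toy next-word distribution conditioned on the context "the cat".
# (Invented numbers; a real language model derives these from
# statistics over its training corpus, at vastly larger scale.)
next_word_probs = {"sat": 0.55, "slept": 0.30, "purred": 0.10, "quantum": 0.05}

words = list(next_word_probs)

# Mere randomness: every continuation is equally likely.
uniform_pick = random.choice(words)

# Stochastic generation: sampling weighted by learned frequencies,
# so fluent-looking text emerges without any understanding or intent.
stochastic_pick = random.choices(words, weights=next_word_probs.values(), k=1)[0]

print("uniform:   ", uniform_pick)
print("stochastic:", stochastic_pick)
```

Uniform choice treats “quantum” and “sat” as equally likely after “the cat”; weighted sampling makes the plausible continuation far more probable, which is exactly why the output can read as meaningful even though no meaning was intended.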

@cogdog @vahidm @emilymbender agree, Alan! conceptual models ascribing sentience to LLMs are like weeds, sprouting everywhere – and require challenging. like you, I am grateful for the foundational, ongoing, critical work by Emily Bender, Timnit Gebru et al.
@cogdog @catherinecronin @emilymbender Considering the AI doesn't understand what it spews out, I guess it's the (human) readers that make sense of the output.
We (the humans) assign different values, i.e., between a useful/accurate summary of existing texts and complete fabrications. In other words, it’s in the eye of the beholder?
Maybe it’s a generational failure of current AI that it can’t identify its own hallucinations, and future generations will self-correct before generating the output?