AI machines aren’t ‘hallucinating’. Their makers are | Naomi Klein

“These models are enclosure and appropriation machines, devouring and privatizing our individual lives as well as our collective intellectual and artistic inheritances. And their goal … was always to profit off mass immiseration, which, under capitalism, is the glaring and logical consequence of replacing human functions with bots.”

https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein

@catherinecronin
'mass immiseration' - what a descriptor of grimdark times :(
@catherinecronin I refuse to use any verbs for AI beyond “generating”. It does not think, hallucinate, lie, imagine, create, write, tease, sing, feel, dance, joke, smile, empathize, ache, wonder. It only generates regurgitated bits that imitate.
@cogdog @catherinecronin
i think even 'generating' is a step too far, suggesting some sort of, well, genesis. Locating, retrieving, gathering would be enough as descriptor for me for now
@magsamond @catherinecronin I might stick with “regurgitating” !
@cogdog @catherinecronin Generate is the word i've settled on too.
@davecormier @cogdog @catherinecronin I get the rhetorical move made in that article, but I believe “hallucinate” is jargon in the AI field, which is why I’m fine to use it. https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

@derekbruff @davecormier @cogdog yes, Derek - and it is precisely this usage that many of us are challenging. see the 'Opposition to terminology' in that Wikipedia article, including this from @emilymbender, with which I agree: "'Hallucinate' is a terrible word choice here, suggesting as it does that the language model has *experiences* and *perceives things*. (And on top of that, it's making light of a symptom of serious mental illness.)"

@catherinecronin @derekbruff @davecormier @cogdog @emilymbender

I understand the reluctance to use verbs that anthropomorphize AI, and I don't disagree with it.

That said, it is not clear that the AI is doing anything fundamentally *different* from a human - it's just doing it much more poorly, with a limited and basic set of neurons, and with severely limited and uni-dimensional data input.

.../2

@catherinecronin @derekbruff @davecormier @cogdog @emilymbender

2

But a lot of people are saying the AI isn't 'intelligent' because it's *not* doing all the things our folk psychology says humans do - think, imagine, believe, conceptualize, etc. - and, moreover, that in principle it can't.

There I don't agree. I don't think there are special, say, 'creating' or 'empathizing' skills that only a human could have. Certainly, we have no idea how such skills would work, except as a neural net.

@catherinecronin @davecormier @cogdog @emilymbender As someone who works with academics of all disciplines, my go-to is to respect the jargon that is used in those disciplines. I didn’t realize “hallucinate” was jargon until recently, which changed my understanding of the term. And it’s helpful to know there is debate within AI about the term.

@cogdog @catherinecronin

Yes, regurgitate is the verb I've settled on too. Well, that and "parrot" in the verb form as in "to repeat what was said by others".

But I do like regurgitate. It has the advantage of accurately describing the action that's happening, while also providing a subtle clue as to the general quality and desirability of the result.

@cogdog @catherinecronin But it's accurate to say that it "makes up stuff" that doesn't exist, including completely fake citations. That's not just a regurgitation: something new has come about (for very wrong reasons).

@vahidm @catherinecronin Perhaps just semantic quibbling, but when I make up stuff it's for a purpose (sarcasm? storytelling?), there is meaning and intent.

Just spawning new text is what random generators do through an algorithm. That's the beauty of the "stochastic" labeling by @emilymbender: there is a difference between that and randomness https://en.wikipedia.org/wiki/Stochastic
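That stochastic-versus-random distinction can be sketched in a few lines: a stochastic process draws from a learned probability distribution, so its output mirrors the frequencies of its training data rather than uniform noise. (The word weights below are invented purely for illustration; a real language model learns millions of such distributions.)

```python
import random

# Hypothetical next-word distribution, as if learned from a corpus:
# continuations of "the cat sat on the ___" and their probabilities.
next_word_weights = {
    "mat": 0.6,
    "sofa": 0.3,
    "moon": 0.1,
}

words = list(next_word_weights)
weights = list(next_word_weights.values())

random.seed(0)  # fixed seed so the sketch is reproducible

# Stochastic sampling: draws are weighted by the learned distribution,
# so frequent continuations dominate the output.
stochastic = [random.choices(words, weights=weights, k=1)[0]
              for _ in range(1000)]

# Pure randomness: every word is equally likely, regardless of the corpus.
uniform = [random.choice(words) for _ in range(1000)]

print("stochastic 'mat' rate:", stochastic.count("mat") / 1000)  # near 0.6
print("uniform 'mat' rate:", uniform.count("mat") / 1000)        # near 1/3
```

The point of the "stochastic parrot" label is visible in the first rate: the sampler parrots the statistics of what it has seen, which is neither meaningful intent nor mere coin-flipping.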


@cogdog @vahidm @emilymbender agree, Alan! conceptual models ascribing sentience to LLM are like weeds, sprouting everywhere – and require challenging. like you, I am grateful for the foundational, ongoing, critical work by Emily Bender, Timnit Gebru et al.
@cogdog @catherinecronin @emilymbender Considering the AI doesn't understand what it spews out, I guess it's the (human) readers that make sense of the output.
We (the humans) assign different values, i.e. between a useful/accurate summary of existing texts and complete fabrications. In other words, it's in the eye of the beholder?
Maybe it's a generational failure of the current AI that they can't identify the hallucinations, and future generations will self-correct before generating the output?