@maxleibman @VE3RWJ - The error, as you point out, is in anthropomorphizing AI.
However, if one insists on doing that, the best analogous human behavior is "Bullshitting".
Confidently giving an answer, without regard to correctness, by regurgitating stuff you've heard. [edit to add] Which is, of course, what it's doing all the time; it's just that this time it happens to be factually incorrect.
So my best so far is "incorrect bullshitting."
@jmax @maxleibman @VE3RWJ This tech (as has happened many times before) is teaching us about the way our brains work.
Even at our most methodical, there’s a level of “bullshitting” we have to engage in when we’re performing a professional task. Eventually, fundamentally, we have to trust our senses and trust our memories. If we can replicate results — well, good: that sounds like a scientific method. It’s up to us to design procedures and protocols around our actions to prevent mistakes.
To err is human. And LLM’an.
@maxleibman @VE3RWJ Yes, it’s a (deliberately) difficult position!
I think part of the trickiness here is that the “hallucinations” aren’t materially different from what they do the rest of the time. It’s just that this response is so obviously wrong that we classify it as an error. But it’s not like something broke _that one time_. All responses are “hallucinations.” They vary by proximity to accuracy. The term is pure marketing.
@corners_plotted @maxleibman @VE3RWJ
It has some relationship to reality: it’s like a model that outputs false positives even when the ground truth contradicts them, a bias toward seeing patterns that don’t exist.
But I agree with your assessment that it’s not really something different from all the other output. It’s just wrong. The AI makes EVERYTHING up; it’s just that often the result turns out to be similar to reality.
@maxleibman @corners_plotted @VE3RWJ
I had someone try to convince me in another thread that LLMs didn't work word by word, but composed answers hierarchically, in paragraphs or whatever. My understanding is that that's wrong, and that they generate only the next word (token) at each step, but maybe my understanding is a year or two out of date?
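For what it's worth, the standard picture is autoregressive next-token sampling: the model repeatedly picks one next token conditioned on everything generated so far, with no higher-level paragraph plan in the mechanism itself. A toy sketch of that loop shape (the bigram table here is a made-up stand-in for a real model's learned distribution, not anything an actual LLM contains):

```python
import random

# Hypothetical bigram table standing in for a real LLM's next-token
# distribution. The point is the loop shape: each step conditions on the
# output so far and emits exactly one next token.
BIGRAMS = {
    "the": [("cat", 0.6), ("dog", 0.4)],
    "cat": [("sat", 1.0)],
    "dog": [("ran", 1.0)],
    "sat": [("<eos>", 1.0)],
    "ran": [("<eos>", 1.0)],
}

def sample_next(prev, rng):
    """Pick one next token given the previous one (a toy stand-in for
    conditioning on the full context window)."""
    tokens, weights = zip(*BIGRAMS[prev])
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(start, rng, max_len=10):
    """Autoregressive loop: append one sampled token at a time until
    an end-of-sequence token or the length cap."""
    out = [start]
    while len(out) < max_len:
        nxt = sample_next(out[-1], rng)
        if nxt == "<eos>":
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the", random.Random(0)))
```

Note there's no step where the loop decides on a paragraph structure up front; any apparent structure emerges from the token-by-token choices, which is consistent with the "it makes everything up, one token at a time" framing above.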