@maxleibman @VE3RWJ Yes, it’s a (deliberately) difficult position!
I think part of the trickiness here is that the “hallucinations” aren’t materially different from what they do the rest of the time. It’s just that this response is so obviously wrong that we classify it as an error. But it’s not like something broke _that one time_. All responses are “hallucinations.” They vary by proximity to accuracy. The term is pure marketing.
@corners_plotted @maxleibman @VE3RWJ
The term has some relationship to reality: a model that outputs false positives even when the ground truth contradicts them, a bias toward seeing patterns that aren't there.
But I agree with your assessment that it's not really anything different from all the other output. It's just wrong. The AI makes EVERYTHING up; it's just that the output often turns out to be close to reality.
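A quick sketch of what I mean (assuming the Hugging Face transformers library and the small gpt2 checkpoint, purely as an illustration): the right answer and the wrong ones all come out of the same next-token distribution, just with different probabilities.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Probability distribution over the next token, given the prompt.
ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    probs = model(ids).logits[0, -1].softmax(dim=-1)

# Correct and incorrect continuations are scored by the same mechanism;
# being "right" is just a matter of where the probability mass landed.
for word in [" Paris", " London", " purple"]:
    token_id = tokenizer.encode(word)[0]
    print(f"{word!r}: {probs[token_id].item():.4f}")
```

There's no separate "hallucination mode" in there to switch off; it's one scoring process whether the continuation happens to be true or not.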
@maxleibman @corners_plotted @VE3RWJ
I had someone try to convince me in another thread that LLMs didn't work word by word, but composed answers hierarchically, in paragraphs or whatever. My understanding is that that's wrong, and that they only ever predict the next word (token, really), but maybe my understanding is a year or two out of date?
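As far as I can tell that's still how it works: mechanically, generation is a loop that scores every candidate next token, picks one, appends it, and repeats. A minimal sketch, assuming the Hugging Face transformers library and the gpt2 checkpoint (real decoders add sampling, KV caching, batching, etc., but the loop is the whole idea):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):                    # one token per pass, nothing higher-level
        logits = model(ids).logits         # scores for every candidate next token
        next_id = logits[0, -1].argmax()   # greedy pick of the single most likely token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Any sentence- or paragraph-level coherence falls out of that loop (plus attention over the growing context), not from the model composing a paragraph as a unit first.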