I don't like the term "hallucinations" when we talk about AI. Sure, LLMs can get things wrong, but a hallucination is an error in perception, and you can't have an error in perception when there's no one there to perceive. The only hallucinations that are happening are on your side of the keyboard.
@maxleibman That's a great point. What do we call them, then? Just "errors"?
@VE3RWJ That I don’t have a good answer to.

@maxleibman @VE3RWJ - The error, as you point out, is in anthropomorphizing AI.

However, if one insists on doing that, the best analogous human behavior is "bullshitting".

Confidently giving an answer, without regard to correctness, by regurgitating stuff you've heard. [edit to add] Which is, of course, what it's doing all the time; it's just that this time it happens to be factually incorrect.

So my best term so far is "incorrect bullshitting."

@jmax @maxleibman @VE3RWJ This tech (as has happened many times before) is teaching us about how our brains work.

Even at our most methodical, there’s a level of “bullshitting” we have to engage in when performing a professional task. Eventually, fundamentally, we have to trust our senses and trust our memories. If we can replicate results — well, good: that sounds like the scientific method. It’s up to us to design procedures and protocols around our actions to prevent mistakes.

To err is human. And LLM’an.

@whophd @maxleibman @VE3RWJ Stop shilling for con artists.