I don't like the term "hallucinations" when we talk about AI. Sure, LLMs can get things wrong, but a hallucination is an error in perception, and you can't have an error in perception when there's no one there to perceive. The only hallucinations that are happening are on your side of the keyboard.
@maxleibman That's a great point. What do we call them, then? Just "errors"?

@VE3RWJ @maxleibman Wellllll here’s where I generally have to remind people that LLMs aren’t like the computers or calculators we’ve personally interacted with for 50 years. They’re not sticklers for syntax or numeric accuracy.

In fact, they’re built on errors: large piles of measured human divergence. It’s errors all the way down.

Not a spreadsheet.