I don't like the term "hallucinations" when we talk about AI. Sure, LLMs can get things wrong, but a hallucination is an error in perception, and you can't have an error in perception when there's no one there to perceive. The only hallucinations that are happening are on your side of the keyboard.
@maxleibman I hate how people decided to use humanizing language to discuss LLMs.
@aleen @maxleibman it’s … well … the analogies hold up, though