Accurate language is important. Generative AI isn't hallucinating. It's bullshitting. https://www.scientificamerican.com/article/chatgpt-isnt-hallucinating-its-bullshitting/
AI's Bullshitting Obscures Who's to Blame for Its Mistakes

It’s important that we use accurate terminology when discussing how AI chatbots make up information

Scientific American
@laurahelmuth Excellent. “Generative AI isn’t hallucinating, it’s bullshitting.” The difference is that the models are unmoored from reality. They don’t “know” anything and have no concept of true or false, right or wrong. They do one thing: predict “what goes with what” based on the sample of data used to initialize (“train”) the model. And even those likelihoods are not correlations or probabilities in a rigorous statistical sense.
🎯 Language matters.
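The “predict what goes with what” point can be made concrete with a toy sketch. This is not how any real LLM works (they use neural networks over subword tokens, not word-bigram counts), but the objective is the same in spirit: emit whatever most often followed the current context in the training data, with no notion of truth attached.

```python
from collections import Counter, defaultdict

# Toy "what goes with what" predictor: count which word follows which
# in a tiny corpus, then always emit the most frequent follower.
# Illustrative only -- real LLMs learn these statistics with neural
# networks, but the training objective is still next-token prediction.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Most frequent follower of `word` in the corpus; None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "the" is followed by cat (2x), mat (1x), fish (1x), so:
print(predict("the"))  # -> cat
```

The predictor happily continues any prompt it has statistics for; whether the continuation is true never enters into it, which is exactly the “bullshitting” framing.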

@laurahelmuth
I suspect that college professors who use written exams are very familiar with this.

Student: why did you fail me? 😭

Professor: Your essay was mostly bullshit. 🤨

Student: I was hallucinating! 🫣

Professor: Is that what the kids are calling it these days? 🤨