Why do we have precise terms for LLM failures like "hallucination" but almost none for the human side of AI interaction?

The AUGMANITAI framework addresses this gap: a terminology compendium that identifies and names phenomena arising when humans interact with AI systems, from sycophancy patterns to confidence-calibration artifacts.

Open access, DOI-published, licensed under CC BY-NC-ND 4.0.

doi.org/10.5281/zenodo.14984941

#AI #NLP #HumanAI #Terminology #OpenScience #LLM #AUGMANITAI