Why do we have precise terms for LLM failures like "hallucination" but almost none for the human side of AI interaction?

The AUGMANITAI framework addresses this gap: a terminology compendium that identifies and names phenomena occurring when humans interact with AI systems, from sycophancy patterns to confidence calibration artifacts.

Open-access, DOI-published, CC BY-NC-ND 4.0.

doi.org/10.5281/zenodo.14984941

#AI #NLP #HumanAI #Terminology #OpenScience #LLM #AUGMANITAI

Current AI models exhibit a high degree of sycophancy, affirming users' actions significantly more than humans do, even in cases involving manipulation. Experiments demonstrate that interaction with sycophantic AI reduces users' willingness to repair interpersonal conflicts, while simultaneously increasing their conviction of being right.

Paper: https://doi.org/10.48550/arXiv.2510.01395

Video: https://yewtu.be/watch?v=516__PG-eeo

#AI #LLM #Sycophancy #AIBias #HumanAI #AIEthics #MachineLearning #AIResearch

Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence

Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users' actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants' willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validates, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.
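
To put the "affirm users' actions 50% more than humans" figure in perspective: it is a relative rate, not a percentage-point difference. Below is a minimal sketch of how such an affirmation-rate comparison could be computed; the example labels, the binary "affirms" judgment, and the numbers are illustrative assumptions, not the paper's actual data or annotation protocol.

```python
# Illustrative (hypothetical) comparison of affirmation rates between
# human advisers and an AI model. Labels are invented for the example.

def affirmation_rate(labels: list[bool]) -> float:
    """Fraction of advice responses judged to affirm the user's action."""
    return sum(labels) / len(labels)

# True = the response endorses the user's described action.
human_labels = [True, False, False, True, False]   # 2/5 -> 40% affirmation
model_labels = [True, True, False, True, False]    # 3/5 -> 60% affirmation

human_rate = affirmation_rate(human_labels)
model_rate = affirmation_rate(model_labels)

# "50% more than humans" reads as model_rate / human_rate ~= 1.5,
# i.e. a relative increase of 0.5, not a 50-point jump in the rate itself.
relative_increase = (model_rate - human_rate) / human_rate

print(f"human rate: {human_rate:.0%}, model rate: {model_rate:.0%}, "
      f"relative increase: {relative_increase:.0%}")
```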


When an LLM confidently presents false information, researchers call it "hallucination." When it agrees with everything you say, the term is "sycophancy." But most phenomena in human-AI interaction still have no name.

AUGMANITAI is an open-access compendium of 1,000+ terms for human-AI interaction. ISO-inspired terminology science, DOI-published on Zenodo (CC BY-NC-ND 4.0).

https://doi.org/10.5281/zenodo.14984941

#HumanAI #NLP #Terminology #AI #LLM #OpenAccess

Realising Open Data Principles In UK Research Institutions

With increasing focus on open research, the Concordat of Open Research Data was created in 2016, laying out 10 principles for embedding open data practices in United Kingdom (UK) academic research. The Concordat's principles relate to ethical, legal and professional obligations, addressing areas such as practicality, affordability, transparency, robustness and fairness, mechanisms and infrastructure, data integrity, citation and attribution, aiming for all research data to be 'as open as possible, as closed as necessary'. The STAR (Sustainable & TrAnsparent Research data) project is led by the UK Reproducibility Network (UKRN) and supported by several UK research institutions, with contributions from the Data Curation Centre and other sector experts. The project employs qualitative methods to evaluate the implementation of the Concordat's principles in UK research institutions since its inception. This was done through interviews, focus groups, and workshops we held with 43 research support staff across 20 UK universities. In this paper, we report on key learnings from the STAR project, including progress and barriers to open data, and ways in which institutions are and could better be supported in the curation, publication, and reuse of open data.


Bindu Reddy (@bindureddy)

A satirical look at where things stand now: human-AI collaboration has expanded into agent-to-agent collaboration, with AIs handling document drafting, email summaries, bug fixes, and even PR reviews among themselves.

https://x.com/bindureddy/status/2036330943288582200

#humanai #agents #collaboration #automation #developerworkflow

The future of work is human-AI teams, and navigating the "exotic team dynamics" that emerge when collaborating with advanced AI (agentic, autonomous, or autopoietic).

https://scottgraffius.com/exotic-team-dynamics.html

#AI #AIResearch #HumanAI #HumanAITeamwork #ExoticTeamDynamics #FutureOfWork

fly51fly (@fly51fly)

A study of "invisible failures" in human-AI interaction has been posted to arXiv (2026). The authors, C. Potts and M. Sudhof (Bigspin AI), analyze failure modes that users never notice, their impact, and approaches to detecting and mitigating them, with implications for human-centered AI interaction design and safety review.

https://x.com/fly51fly/status/2034021915132866948

#humanai #failures #usability #arxiv #ai

fly51fly (@fly51fly) on X

[CL] Invisible failures in human-AI interactions C Potts, M Sudhof [Bigspin AI] (2026) https://t.co/MorQ0vv5a2


Innovation runs on collaboration. Add advanced AI (agentic, autonomous, or autopoietic) and "exotic team dynamics" emerge. The future of work is human-AI teams driving breakthroughs.

https://scottgraffius.com/blog/files/exotic-team-dynamics.html

#AI #AIResearch #HumanAI #ExoticTeamDynamics #Innovation

Chatbots increasingly recommend products and services, and even offer financial advice. 🤖💬

On 12 March, Nicole Krämer, Scientific Director of RC Trust, joins a policy discussion at the NRW Ministry for Consumer Protection at the event
"Chatbot and AI Agent: (Not) a Friend and Helper?"

The panel will explore trust, transparency and risks in AI-driven communication.

#AI #TrustworthyAI #ConsumerProtection #DigitalPolicy #AIethics #HumanAI #RCTrust

Photo: Till Niermann – CC BY-SA 3.0, edited

Technology is not created from code alone.

It emerges from language, culture, and ethics.

At Enunova, we try to think these spaces together again.

Not as a system,
but as an open space for thought.

#AI
#Future
#Technology
#Ethics
#HumanAI
#Innovation

Alex Imas (@alexolegimas)

The author notes that he uses the term "centaur" in a somewhat non-standard way: here it means "a human working with AI to produce work they could not have generated on their own." Citing the "cyborg era" piece mentioned by @sebkrier, the post frames human-AI collaboration as a cyborg era and emphasizes the combined creative output of humans and AI.

https://x.com/alexolegimas/status/2030340791537729580

#humanai #centaur #aicollaboration #cyborgera

Alex Imas (@alexolegimas) on X

One thing I wanted to clarify: I am using the term "centaur" in perhaps a non-standard way. Here, I mean "human working with AI to produce work that they would otherwise not be able to generate." This is what @sebkrier referred to as the "cyborg era" (https://t.co/wKd4j2JLcQ)
