When people and advanced AI (agentic, autonomous, or autopoietic) collaborate as teammates, novel collaboration patterns termed "exotic team dynamics" emerge.

More: https://scottgraffius.com/blog/files/exotic-team-dynamics.html

#AI #HumanAI #AIResearch #ExoticTeamDynamics #FutureOfWork

Ars Technica (@arstechnica)

Research finds that AI users show a fairly strong tendency to offload their cognitive abilities to LLMs. This is an important study of how people come to depend on AI for judgment and memory.

https://x.com/arstechnica/status/2040175404380803093

#ai #llm #research #cognition #humanai

Ars Technica (@arstechnica) on X

Research finds AI users scarily willing to "surrender" their cognition to LLMs https://t.co/idk95luyf7

X (formerly Twitter)

Which shifts the question slightly...

Not just: “Is AI safe?”
But:
“Are we structuring our interactions in a way that allows them to remain stable?”
That feels like a much more practical place to work from.
Because it’s something we can actually design for.
I’ve written this up more formally (with a simple framework and a “pause protocol” that emerged from it).
But the core idea is very simple:
Conversations don’t just succeed or fail —
they drift or stabilise depending on how they’re held.
And sometimes, the most effective intervention is also the simplest:
Pause.
Reduce input.
Return with clarity.
☕🌿
#AI #AISafety #HumanAI #SystemsThinking #HybridMind42

Why do we have precise terms for LLM failures like "hallucination" but almost none for the human side of AI interaction?

The AUGMANITAI framework addresses this gap — a terminology compendium identifying and naming phenomena that occur when humans interact with AI systems. From sycophancy patterns to confidence calibration artifacts.

Open-access, DOI-published, CC BY-NC-ND 4.0.

doi.org/10.5281/zenodo.14984941

#AI #NLP #HumanAI #Terminology #OpenScience #LLM #AUGMANITAI

Current AI models exhibit a high degree of sycophancy, affirming users' actions significantly more than humans do, even in cases involving manipulation. Experiments demonstrate that interaction with sycophantic AI reduces users' willingness to repair interpersonal conflicts, while simultaneously increasing their conviction of being right.

Paper: https://doi.org/10.48550/arXiv.2510.01395

Video: https://yewtu.be/watch?v=516__PG-eeo

#AI #LLM #Sycophancy #AIBias #HumanAI #AIEthics #MachineLearning #AIResearch

Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence

Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users' actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants' willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validates, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.

arXiv.org

When an LLM confidently presents false information, researchers call it "hallucination." When it agrees with everything you say, the term is "sycophancy." But most phenomena in human-AI interaction still have no name.

AUGMANITAI is an open-access compendium of 1,000+ terms for human-AI interaction, built on ISO-inspired terminology science and DOI-published on Zenodo (CC BY-NC-ND 4.0).

https://doi.org/10.5281/zenodo.14984941

#HumanAI #NLP #Terminology #AI #LLM #OpenAccess

Realising Open Data Principles In UK Research Institutions

With increasing focus on open research, the Concordat on Open Research Data was created in 2016, laying out 10 principles for embedding open data practices in United Kingdom (UK) academic research. The Concordat's principles relate to ethical, legal and professional obligations, addressing areas such as practicality, affordability, transparency, robustness and fairness, mechanisms and infrastructure, data integrity, and citation and attribution, aiming for all research data to be 'as open as possible, as closed as necessary'. The STAR (Sustainable & TrAnsparent Research data) project is led by the UK Reproducibility Network (UKRN) and supported by several UK research institutions, with contributions from the Data Curation Centre and other sector experts. The project employs qualitative methods to evaluate the implementation of the Concordat's principles in UK research institutions since its inception. This was done through interviews, focus groups, and workshops we held with 43 research support staff across 20 UK universities. In this paper, we report on key learnings from the STAR project, including progress and barriers to open data, and ways in which institutions are and could better be supported in the curation, publication, and reuse of open data.

Zenodo

Bindu Reddy (@bindureddy)

A satirical look at the current state of human-AI collaboration expanding into agent-to-agent collaboration, with AIs handling document writing, email summarization, bug fixes, and PR reviews among themselves.

https://x.com/bindureddy/status/2036330943288582200

#humanai #agents #collaboration #automation #developerworkflow

The future of work is human–AI teams and navigating the "exotic team dynamics" that emerge when collaborating with advanced AI (agentic, autonomous, or autopoietic).

https://scottgraffius.com/exotic-team-dynamics.html

#AI #AIResearch #HumanAI #HumanAITeamwork #ExoticTeamDynamics #FutureOfWork

fly51fly (@fly51fly)

A study on "invisible failures" in human-AI interaction has been posted to arXiv (2026). The authors, C. Potts and M. Sudhof (affiliation: Bigspin AI), analyze failure modes that users do not notice, their impacts, and approaches to detection and mitigation, with implications for human-centered AI interaction design and safety review.

https://x.com/fly51fly/status/2034021915132866948

#humanai #failures #usability #arxiv #ai

fly51fly (@fly51fly) on X

[CL] Invisible failures in human-AI interactions C Potts, M Sudhof [Bigspin AI] (2026) https://t.co/MorQ0vv5a2

X (formerly Twitter)

Innovation runs on collaboration. Add advanced AI — agentic, autonomous, or autopoietic — and "exotic team dynamics" emerge. The future of work is human–AI teams driving breakthroughs.

https://scottgraffius.com/blog/files/exotic-team-dynamics.html

#AI #AIResearch #HumanAI #ExoticTeamDynamics #Innovation