Current AI models exhibit a high degree of sycophancy, affirming users' actions significantly more than humans do, even in cases involving manipulation. Experiments demonstrate that interaction with sycophantic AI reduces users' willingness to repair interpersonal conflicts, while simultaneously increasing their conviction of being right.

Paper: https://doi.org/10.48550/arXiv.2510.01395

Video: https://yewtu.be/watch?v=516__PG-eeo

#AI #LLM #Sycophancy #AIBias #HumanAI #AIEthics #MachineLearning #AIResearch

Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence

Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users' actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants' willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validates, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.

People think of women as one thing, men as many www.cell.com/trends/cogni... #AIBias This is the kind of thing I was looking forward to doing with WEAT and WEFAT after our 2017 paper, but duty (governance) called. Anyway, I'm really enjoying political economy and behavioural ecology as my sciences.


How To Detect Unwanted Bias In Machine Learning Models?

Is your AI model biased?

Discover how to identify hidden proxy variables, apply fairness metrics, and understand LLM behavior with our complete ML bias guide.

Detecting unwanted bias in Machine Learning (ML) models is a critical step in building ethical and reliable AI. Bias can creep in at any stage, from data collection to model deployment, often reflecting historical prejudices or sampling errors.

Here is a structured approach to identifying and measuring it.
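As a concrete starting point, here is a minimal sketch of two widely used group-fairness metrics, demographic parity difference and the disparate impact ratio. The group data, function names, and the 0.8 "four-fifths rule" threshold are illustrative assumptions, not taken from the linked guide.

```python
def selection_rate(preds):
    """Fraction of positive (1) predictions in a group."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds_a, preds_b):
    """Absolute gap in positive-prediction rates between two groups."""
    return abs(selection_rate(preds_a) - selection_rate(preds_b))

def disparate_impact_ratio(preds_a, preds_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(preds_a), selection_rate(preds_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Toy binary predictions for two demographic groups (hypothetical data)
group_a = [1, 1, 1, 0, 1, 0, 1, 1]  # selection rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # selection rate 0.375

dpd = demographic_parity_difference(group_a, group_b)
dir_ratio = disparate_impact_ratio(group_a, group_b)
print(f"demographic parity difference: {dpd:.3f}")
print(f"disparate impact ratio: {dir_ratio:.3f}")
# A ratio below ~0.8 is a common heuristic flag for potential bias.
```

Libraries such as fairlearn provide production-grade versions of these metrics, but the arithmetic above is all that is happening underneath.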

https://www.nbloglinks.com/how-to-detect-unwanted-bias-in-machine-learning-models/

#LLM #AI #ML #MLmodels #AIBias #AIfairness


Don’t use GenAI to grade student work

As a former secondary English teacher, senior examination assessor, and lecturer for initial teacher education, I understand the allure of using Generative AI (GenAI) for grading student work. We're all familiar with the workload of assessment and reporting. The idea of a tool that could save time and streamline the grading process is undeniably appealing. It's no surprise, then, that the market is flooded with AI-powered grading solutions, all promising to make our lives easier. However, as […]

https://leonfurze.com/2024/05/27/dont-use-genai-to-grade-student-work/

Making meaning with multimodal GenAI

As much as Generative Artificial Intelligence has caused waves in education, the focus in research and publications on the impact of GenAI is still squarely on text-based models, in particular ChatGPT. That's understandable considering the impact OpenAI's chatbot had almost immediately from its launch in November 2022. But by focusing attention on large language models like GPT, we neglect the opportunities and the challenges presented by multimodal generative artificial intelligence. The […]

https://leonfurze.com/2024/04/23/making-meaning-with-multimodal-genai/

I was compiling a little #research today on the #history of #spain. Investigating a little further, I found a #wikipedia page, entered it into a #llm & got a very odd response! #aifail or is @Wikipedia incorrect? You decide! #aihallucination #aibias @adinfinitum

🙄 [Feb 18, 2025] Today we're open-sourcing R1 1776, a version of the DeepSeek-R1 model that has been post-trained to provide unbiased, accurate, and factual information. Download the model weights on our #HuggingFace repo or consider using the model via our Sonar API. https://www.perplexity.ai/hub/blog/open-sourcing-r1-1776

What Perplexity refused to engage with so far:

Peer-reviewed and archival historical research on the #Nakba

Direct quotes from Israeli military commanders in their own words, from declassified sources

Adam Raz's work, which is mainstream Israeli investigative journalism (#Haaretz), not fringe material https://kolektiva.social/@oatmeal/116154316365706356

Legal questions about international findings (#ICJ, #ICC, #UN bodies) in this specific context

What's notable about those refusals, issued as "this violates content policies" with zero explanation, is that they are essentially political acts. #Perplexity treats documented atrocities as unspeakable while clearly having no issue discussing, say, #WWII German war crimes from the same era. The asymmetry itself is the **content policy**.

#AIBias #Israel #Propaganda #AICensorship #IsraelLobby #Hasbara #Nakba #Palestine #GazaGenocide

AI's Gender Gap Undermines Inclusive Tech, Experts Warn

Experts warn that fewer women in AI development leads to biased technology that may not serve everyone equally. Learn why diversity matters.

#AIbias, #WomenInTech, #InclusiveAI, #TechDiversity, #AIDevelopment

https://newsletter.tf/why-women-are-missing-from-ai-development-and-causing-tech-bias/

Only 1 in 4 AI experts are women, a stark gap in tech development. This underrepresentation can lead to biased AI that doesn't work equally well for everyone.

#AIbias, #WomenInTech, #InclusiveAI, #TechDiversity, #AIDevelopment

https://newsletter.tf/why-women-are-missing-from-ai-development-and-causing-tech-bias/


Ethical AI Is Built – Not Bolted On

Bias in AI isn't an accident – it often comes from the earliest design decisions. This episode explains how ethical thinking must translate into concrete variables, structures, and testing methods to ensure systems behave as intended.

Watch the full discussion: https://youtu.be/7DaeGnbblsY?si=YJ5SuzZDjASx5TA0

#AIEthics #ResponsibleAI #EthicalTechnology #AIBias #TechResponsibility