Yale News: AI’s hidden bias: Chatbots can influence opinions without trying. “Prior research has shown that content generated by artificial intelligence (AI) that has been prompted to be persuasive can indeed shift people’s opinions. But this study provides evidence that the same is also true of content that is not intended to change minds, such as the summaries that popular chatbots […]

https://rbfirehose.com/2026/03/09/ais-hidden-bias-chatbots-can-influence-opinions-without-trying-yale-news/

Northeastern News: New research decodes hidden bias in health care LLMs. “Large language models contain racial biases that factor into their recommendations, even in clinical health care settings. Northeastern researchers found a way to reveal these racial associations in LLMs.”

https://rbfirehose.com/2026/01/22/northeastern-news-new-research-decodes-hidden-bias-in-health-care-llms/

University of Washington: People mirror AI systems’ hiring biases, study finds. “When picking candidates without AI or with neutral AI, participants picked white and non-white applicants at equal rates. But when they worked with a moderately biased AI, if the AI preferred non-white candidates, participants did too. If it preferred white candidates, participants did too. In cases of severe […]

https://rbfirehose.com/2025/11/12/university-of-washington-people-mirror-ai-systems-hiring-biases-study-finds/


PsyPost: Artificial intelligence exhibits human-like cognitive errors in medical reasoning. “A new study suggests that advanced artificial intelligence models, increasingly used in medicine, can exhibit human-like errors in reasoning when making clinical recommendations. The research found these AI models were susceptible to cognitive biases, and in many cases, the magnitude of these biases was […]

https://rbfirehose.com/2025/11/12/psypost-artificial-intelligence-exhibits-human-like-cognitive-errors-in-medical-reasoning/


The Conversation: Historical images made with AI recycle colonial stereotypes and bias – new research. “When prompted to visualise Aotearoa New Zealand’s past, Sora privileges the European settler viewpoint: pre-colonial landscapes are rendered as empty wilderness, Captain Cook appears as a calm civiliser, and Māori are cast as timeless, peripheral figures. As generative AI tools become […]

https://rbfirehose.com/2025/10/24/the-conversation-historical-images-made-with-ai-recycle-colonial-stereotypes-and-bias-new-research/


UC Berkeley: Women portrayed as younger than men online, and AI amplifies the bias. “U.S. Census data shows no systematic age differences between men and women in the workforce over the past decade. And women on average live longer than men. But that’s not what you’ll see if you search Google or YouTube or query an AI like ChatGPT.”

https://rbfirehose.com/2025/10/11/uc-berkeley-women-portrayed-as-younger-than-men-online-and-ai-amplifies-the-bias/


Carnegie Mellon University: SEI Tool Helps Federal Agencies Detect AI Bias and Build Trust. “Carnegie Mellon University’s Software Engineering Institute has developed the AI robustness (AIR) tool, a free, open-source platform that helps agencies uncover why an AI system may produce biased or unreliable results. Unlike conventional methods which spot surface-level patterns in data, AIR shows […]

https://rbfirehose.com/2025/09/20/carnegie-mellon-university-sei-tool-helps-federal-agencies-detect-ai-bias-and-build-trust/


Johns Hopkins University: Multilingual artificial intelligence often reinforces bias. “Johns Hopkins computer scientists have discovered that artificial intelligence tools like ChatGPT are creating a digital language divide, amplifying the dominance of English and other commonly spoken languages while sidelining minority languages.”

https://rbfirehose.com/2025/09/05/johns-hopkins-university-multilingual-artificial-intelligence-often-reinforces-bias/


The Register: Biased bots: AI hiring managers shortlist candidates with AI resumes. “Job seekers who use the same AI model to compose their resumes as the AI model used to evaluate their application are more likely to advance through the hiring process than those submitting human-written materials, according to researchers.”

https://rbfirehose.com/2025/09/05/biased-bots-ai-hiring-managers-shortlist-candidates-with-ai-resumes-the-register/


The Register: ChatGPT hates LA Chargers fans. “The reason, according to researchers affiliated with Harvard University, is that the model’s guardrails incorporate biases that shape its responses based on contextual information about the user.”

https://rbfirehose.com/2025/08/29/the-register-chatgpt-hates-la-chargers-fans/
