PsyPost: Disclosing autism to AI chatbots prompts overly cautious, stereotypical advice. “When autistic people ask artificial intelligence programs for life advice, mentioning their diagnosis prompts these systems to recommend highly conservative choices like skipping social events or avoiding romance. This shift in advice reveals a hidden tension where the technology relies heavily on […]

https://rbfirehose.com/2026/04/24/psypost-disclosing-autism-to-ai-chatbots-prompts-overly-cautious-stereotypical-advice/

ResearchBuzz: Firehose

University of Copenhagen: Researchers: Chatbots are biased and should not be used for political advice. “Popular chatbots such as ChatGPT and Gemini are not neutral and tend to favor certain political parties when asked who users should vote for. This makes them unsuitable for providing advice in connection with elections, according to researchers from the University of Copenhagen behind a […]

https://rbfirehose.com/2026/04/24/researchers-chatbots-are-biased-and-should-not-be-used-for-political-advice-university-of-copenhagen/

The Register: Bad teacher bots can leave hidden marks on model students. “New research warns about the dangers of teaching LLMs on the output of other models, showing that undesirable traits can be transmitted ‘subliminally’ from teacher to student, even when they are scrubbed from training data.”

https://rbfirehose.com/2026/04/20/the-register-bad-teacher-bots-can-leave-hidden-marks-on-model-students/

PsyPost: AI autocomplete suggestions covertly change how users think about important topics. “New research provides evidence that interacting with biased autocomplete suggestions can covertly shift a person’s underlying attitudes on important societal issues. The findings, published in the journal Science Advances, suggest that the subtle influence of these everyday programs often bypasses […]

https://rbfirehose.com/2026/04/06/psypost-ai-autocomplete-suggestions-covertly-change-how-users-think-about-important-topics/

PsyPost: Efforts to make AI inclusive accidentally create bizarre new gender biases, new research suggests. “New research published in Computers in Human Behavior Reports suggests that efforts to make artificial intelligence more inclusive can sometimes create unexpected new biases.”

https://rbfirehose.com/2026/03/27/psypost-efforts-to-make-ai-inclusive-accidentally-create-bizarre-new-gender-biases-new-research-suggests/

The Register: UK police force presses pause on live facial recognition after study finds racial bias. “A UK police force has suspended its deployment of live facial recognition (LFR) technology after a study revealed it was statistically more likely to identify Black people on a watchlist database.”

https://rbfirehose.com/2026/03/23/the-register-uk-police-force-presses-pause-on-live-facial-recognition-after-study-finds-racial-bias/

Yale News: AI’s hidden bias: Chatbots can influence opinions without trying. “Prior research has shown that content generated by artificial intelligence (AI) that has been prompted to be persuasive can indeed shift people’s opinions. But this study provides evidence that the same is also true of content that is not intended to change minds, such as the summaries that popular chatbots […]

https://rbfirehose.com/2026/03/09/ais-hidden-bias-chatbots-can-influence-opinions-without-trying-yale-news/

Northeastern News: New research decodes hidden bias in health care LLMs. “Large language models contain racial biases that factor into their recommendations, even in clinical health care settings. Northeastern researchers found a way to reveal these racial associations in LLMs.”

https://rbfirehose.com/2026/01/22/northeastern-news-new-research-decodes-hidden-bias-in-health-care-llms/

University of Washington: People mirror AI systems’ hiring biases, study finds. “When picking candidates without AI or with neutral AI, participants picked white and non-white applicants at equal rates. But when they worked with a moderately biased AI, if the AI preferred non-white candidates, participants did too. If it preferred white candidates, participants did too. In cases of severe […]

https://rbfirehose.com/2025/11/12/university-of-washington-people-mirror-ai-systems-hiring-biases-study-finds/

PsyPost: Artificial intelligence exhibits human-like cognitive errors in medical reasoning. “A new study suggests that advanced artificial intelligence models, increasingly used in medicine, can exhibit human-like errors in reasoning when making clinical recommendations. The research found these AI models were susceptible to cognitive biases, and in many cases, the magnitude of these biases was […]

https://rbfirehose.com/2025/11/12/psypost-artificial-intelligence-exhibits-human-like-cognitive-errors-in-medical-reasoning/
