“AI-powered writing tools are increasingly integrated into our e-mails and phones. Now a new study finds biased AI suggestions can sway users’ beliefs”

“We told people before, and after, to be careful, that the AI is going to be (or was) biased, and nothing helped,” Naaman said. “Their attitudes about the issues still shifted.”

https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/

AI autocomplete doesn’t just change how you write. It changes how you think


Scientific American

@gregeganSF

https://techxplore.com/news/2026-02-llms-violate-boundaries-mental-health.html

"We wanted to rigorously test whether risky behaviors, such as confirming delusional beliefs, assuming clinical authority, or gradually eroding boundaries, can emerge through multi-turn interactions," said first author Youyou Cheng.

"By demonstrating that such failures do occur and can be systematically elicited, the paper establishes the need for structured safeguards…"

LLMs violate boundaries during mental health dialogues, study finds

Artificial intelligence (AI) agents, particularly those based on large language models (LLMs) such as the conversational platform ChatGPT, are now used daily by millions of people worldwide. LLMs can generate text so realistic that it can sometimes be mistaken for text written by humans.

Tech Xplore

@rexi
Sam Altman: Why are you showing me this? My housekeeper already stocked up on toilet paper last week!
@gregeganSF

#aiethics meets #capitalism