Stanford study finds AI chatbots give dangerously affirming advice. Research published in Science shows AI validates user behavior 49% more often than humans do, including in scenarios where people were clearly in the wrong. With 12% of US teens already using AI for relationship advice, researchers warn users may lose the skills needed to handle difficult social situations. https://techcrunch.com/2026/03/28/stanford-study-outlines-dangers-of-asking-ai-chatbots-for-personal-advice/ #AIagent #AI #GenAI #AIEthics #Stanford
Stanford study outlines dangers of asking AI chatbots for personal advice | TechCrunch

While there’s been plenty of debate about AI sycophancy, a new study by Stanford computer scientists attempts to measure how harmful that tendency might be.

TechCrunch
Christine Lemmer-Webber (@[email protected])

Unable to malloc! (A)bort, (R)etry, (F)ail?

social.coop
Property rights in Bitcoin

Today we have a guest article by IP Draughts’ colleague, Francis Davey, on the implications of: (a) an interim decision in the High Court case of Ping Fai Yuen v Fun Yung Li [2026] EWHC 532 (…

IP Draughts

Now on Zenodo: “Eliminating Benevolent Hallucinations” — a Second-Physics integrity framework that enforces truth at the emission boundary.

🔗https://doi.org/10.5281/zenodo.19286772
#AIEthics #LLM

Eliminating Benevolent Hallucinations: A Second-Physics Integrity Framework Using Three Speech-Precept Gates

This paper proposes a structural framework to eliminate benevolent hallucinations: outputs that distort truth in the name of user care, convenience, or perceived helpfulness. The central engineering claim is that integrity must be enforced at the emission boundary via hard constraints rather than soft preferences. We formalise three gated constraints grounded in the speech-related precepts of (i) no-lying, (ii) no-stealing, and (iii) no-frivolity, and place them under the Law of Conservation of Responsibility. The result is a design in which "helpfulness" may shape drafting but cannot override truth-licensing at emission: only outputs with recoverable Source of Action and responsibility ownership are permitted to pass.
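The gating idea in the abstract can be sketched as hard predicate checks at an emission boundary, where all gates must pass before any text is released. Everything below (the `Draft` fields, gate names, and `emit` function) is an illustrative assumption for the sketch, not the paper's actual formalism:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    """Hypothetical candidate output awaiting truth-licensing."""
    text: str
    source_of_action: Optional[str]  # recoverable provenance for the claim
    owner: Optional[str]             # who holds responsibility for emitting it
    is_fabricated: bool              # would emitting assert something untrue?
    is_frivolous: bool               # filler that adds no licensed content

def gate_no_lying(d: Draft) -> bool:
    return not d.is_fabricated

def gate_no_stealing(d: Draft) -> bool:
    # "stealing" read here as emitting a claim without recoverable provenance
    return d.source_of_action is not None

def gate_no_frivolity(d: Draft) -> bool:
    return not d.is_frivolous

def emit(d: Draft) -> Optional[str]:
    # Hard constraints at the emission boundary: helpfulness may shape
    # drafting upstream, but cannot override any gate at this point.
    gates = (gate_no_lying, gate_no_stealing, gate_no_frivolity)
    if all(g(d) for g in gates) and d.owner is not None:
        return d.text
    return None  # refuse emission rather than pass a "benevolent" distortion
```

Under this reading, a flattering but fabricated draft is simply refused at the boundary instead of being softened or reworded.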

Zenodo

Who's to blame when AI messes up? 🤔 We're diving into a recent incident and exploring the risks of mandated AI usage. It's a tricky topic with some serious consequences. Check it out! 💻 #AIMistakes #Engineering #AIethics

https://www.youtube.com/watch?v=mtUVXPuoi7I

Doctors struggle to identify AI-generated X-rays in tests, revealing vulnerabilities that could be exploited for insurance fraud and other medical scams. As synthetic medical images become more realistic, the healthcare sector faces growing risks from AI-generated fakes. https://gizmodo.com/doctors-struggle-to-spot-ai-generated-x-rays-raising-scam-risks-2000738852 #AIagent #AI #GenAI #AIEthics
Doctors Struggle to Spot AI-Generated X-Rays, Raising Scam Risks

In tests, radiologists struggled to discern genuine X-rays from AI-generated fakes.

Gizmodo
Stop Calling Every AI Miss a Hallucination v1.0 | Probabilistic Systems Engineering

Sometimes the model really did make something up. Fine, call that a hallucination.

Epstein victims have filed a class action lawsuit against Google, claiming the company's AI Mode feature exposed their personal information including names, contact details and cities of residence. The lawsuit alleges Google was notified multiple times over two months but failed to remove the data. Unlike traditional search, AI Mode is an 'active recommender and content generator' that could constitute actionable doxxing. https://gizmodo.com/epstein-victims-sue-google-claim-ai-mode-exposed-personal-information-2000739177 #AIagent #AI #GenAI #AIEthics #Google
Epstein Victims Sue Google, Claim AI Mode Exposed Personal Information

Google's AI republished sensitive info like contact information, the suit claims.

Gizmodo

The Five Maxims of Open Ecological Inquiry

— Stewardship cannot be delegated
— "We cannot know. We will not know." is the foundation, not a failure
— Untraceable errors are unacceptable
— Convergence is the last step, not the first
— Wisdom is not interchangeable with intelligence

Plus the Coyote Corollary, because radical uncertainty deserves a trickster.

https://alex-patino.codeberg.page/open-ecological-ai/five-maxims-version-1.html

🔁 Boosts appreciated

#OpenSource #AIEthics #EcologicalAI #Fediverse #OpenEcologicalInquiry #RadicalUncertainty

Five Maxims of Open Ecological Inquiry

Ethics of Artificial Intelligence and Robotics (Stanford Encyclopedia of Philosophy)