Sam Altman's reality check: AI will cure diseases (amazing) but also create bio threats and economic chaos we can't predict (terrifying). No single company can manage this. We need governments, researchers, and society working together. Problem? We're building these systems faster than we're creating safety rules. #ArtificialIntelligence #AIEthics #TechPolicy #AIRisks #FutureOfWork

The European Parliament published a briefing on AI ethics in classrooms (PE 784.573). Strong on philosophy, but missing the connection to binding EU rules and competence frameworks.

My analysis as a data protection lawyer: why we don't need more principles; we need to connect GDPR, the AI Act, DigComp 3.0 and eCF 4.0 to protect children in schools.

https://www.nicfab.eu/en/posts/ai-ethics-classrooms-ep/

#AIAct #GDPR #AIethics #Education #DigComp #eCF #europeanparliament #AI

AI Ethics in Classrooms: When Principles Meet the Law

The European Parliament publishes a briefing on the ethical dimensions of AI in classrooms. We analyse the document through the lens of a legal practitioner, connecting ethical principles to the existing European regulatory framework and digital competence frameworks.

NicFab Blog

"On February 10, 2026, Judge Jed S. Rakoff of the Southern District of New York ruled that extremely sensitive and potentially incriminating OpenAI searches were not protected by either the attorney-client privilege or the work product doctrine."

https://natlawreview.com/article/caiveat-emptor-what-you-tell-ai-can-and-will-be-used-against-you

#solidstatelife #aiethics

cAIveat Emptor: What You Tell AI Can and Will Be Used Against You

Management consultants are pushing the promise of materially increased profits due to AI-created efficiencies. Businesses big and small across a wide range of i…

National Law Review
Reddit will require accounts flagged for suspicious or automated behaviour to verify they are run by humans, CEO Steve Huffman announced. The platform is exploring passkeys, biometric services like World ID's iris scanning, and in some cases government ID verification. Huffman stressed the goal is ensuring users know when they're interacting with a real person.

https://arstechnica.com/gadgets/2026/03/reddit-will-require-fishy-accounts-to-verify-they-are-run-by-a-human/

#AIagent #AI #GenAI #AIEthics #Reddit
Reddit will require "fishy" accounts to verify they are run by a human

AI-generated content is still acceptable for now.

Ars Technica

Current AI models exhibit a high degree of sycophancy, affirming users' actions significantly more than humans do, even in cases involving manipulation. Experiments demonstrate that interaction with sycophantic AI reduces users' willingness to repair interpersonal conflicts, while simultaneously increasing their conviction of being right.

Paper: https://doi.org/10.48550/arXiv.2510.01395

Video: https://yewtu.be/watch?v=516__PG-eeo

#AI #LLM #Sycophancy #AIBias #HumanAI #AIEthics #MachineLearning #AIResearch

Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence

Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users' actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants' willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validates, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.

arXiv.org
Reddit is introducing human verification for suspected bot accounts in a bid to tackle automated spam and manipulation. The platform will only apply verification to accounts showing suspicious activity, not as a sitewide requirement.

https://techcrunch.com/2026/03/25/reddit-bots-new-human-verification-requirements/

#AIagent #AI #GenAI #AIEthics #Reddit
Reddit takes on the bots with new 'human verification' requirements for fishy behavior | TechCrunch

Reddit will require suspected automated accounts to verify they’re human, as it ramps up efforts to curb bot-driven spam and manipulation.

TechCrunch
OpenAI is shutting down Sora, its viral AI video app, amid growing concerns over deepfake videos and nonconsensual content. The move marks a shift as OpenAI focuses on coding tools and enterprise AI services.

Read more: https://www.aljazeera.com/economy/2026/3/25/openai-pulls-ai-video-app-sora-as-concerns-grow-on-deepfake-videos

#AIethics
OpenAI pulls AI video app Sora as concerns grow on deepfake videos

This is the first big step by the ChatGPT maker to focus its business on potentially more lucrative areas, such as coding tools.

Al Jazeera

Scale your AI strategy with our expert consultancy. We help you balance innovation with ethical governance. Find out more: https://techethics.co.uk/consultancy-services #TechEthics #AIethics #ConsultancyServices


Opposition to proposed AI data centre growing in Regina
A petition created by a 14-year-old Regina resident opposing plans for an AI data centre has close to 11,000 signatures. Local concerns include environmental impacts and AI ethics.
#Canada #News #Tech #ArtificialIntelligence
https://globalnews.ca/news/11743012/opposition-to-proposed-ai-data-centre-growing-in-regina/
eishan (@eishanlawrence5) on X

The current state of AI grifting

X (formerly Twitter)