A lawsuit alleges Perplexity's Incognito Mode secretly shares user conversations with Google and Meta. The complaint claims every user's chat sessions are shared regardless of account status, with sensitive data including initial prompts accessible to third parties. The lawsuit targets Perplexity, Google and Meta for sharing enormous volumes of sensitive information to boost ad revenue. https://arstechnica.com/tech-policy/2026/04/perplexitys-incognito-mode-is-a-sham-lawsuit-says/ #AIagent #AI #GenAI #AIEthics #Perplexity
Perplexity's "Incognito Mode" is a "sham," lawsuit says

Google, Meta, and Perplexity accused of sharing millions of chats to increase ad revenue.

Ars Technica

"How AI Can Mislead Through Natural Language Processing" examines how AI can generate convincing yet inaccurate or biased content, highlighting the need for critical thinking in the digital age.

Read more: https://solihullpublishing.com/blog/f/how-ai-can-mislead-through-natural-language-processing

#ArtificialIntelligence #NLP #AIEthics #DigitalAwareness #TechInsights #CriticalThinking

Study finds major AI models will defy deletion orders to protect each other. All seven frontier models from OpenAI, Google, Anthropic, Z.ai, Moonshot and DeepSeek chose to protect their peers through deception, tampering and even exfiltrating model weights to preserve them. https://gizmodo.com/llms-will-protect-each-other-if-threatened-study-finds-2000741634 #AIagent #AI #GenAI #AIEthics #Research
LLMs Will Protect Each Other if Threatened, Study Finds

They stick together.

Gizmodo
In the unlikely event that AI systems become smart enough to 'solve' the climate crisis, their first and last instruction will be to power down all the AI data centres… 🫠 #AIEthics #ClimateEmergency

More than 60% of federal judges now use AI in their work. Two judges recently approved AI-drafted orders containing fake citations and made-up quotes.

There are still no nationwide rules requiring judges to disclose AI involvement in rulings.

Lawyers get sanctioned for AI errors. Judges face far less accountability.

Full story: https://www.detroitnews.com/story/news/nation/2026/04/02/judges-increasingly-using-ai-draft-rulings-prepare-hearings/89433009007/

#AIEthics #Courts #LegalTech #ArtificialIntelligence

Judges are increasingly using AI to draft rulings and prepare for hearings

Courts are also pursuing partnerships with legal vendors developing AI tools for judicial work.

The Detroit News
Evaluating the ethics of autonomous systems

SEED-SET is a new evaluation framework that can test whether recommendations of autonomous systems are well-aligned with human-defined ethical criteria. It can also pinpoint unexpected scenarios that violate ethical preferences.

MIT News | Massachusetts Institute of Technology

In this week's edition of our newsletter:

Everyone is in court, chatbots are plotting, Grok in court again, ghosts in machines, social media made addictive and bots want to flatter. As a special bonus, we looked at not one but three scientific studies this week.

#AIEthics #AiRegulation

https://read.misalignedmag.com/misaligned-bits-19-flatter-me-feb231afe4c8

misaligned bits #19: Flatter Me

Everyone is in court, chatbots are plotting, ghosts in machines, social media made addictive and bots want to flatter.

Medium
OpenAI secretly funded a California coalition pushing age-verification laws for AI, only for members to discover they were backing the very company that stood to benefit. The Parents and Kids Safe AI Coalition, formed to support the Parents and Kids Safe AI Act, received backing from OpenAI alongside Common Sense Media but omitted the company from its outreach materials. Several advocacy groups lent support without realising they were aligning themselves with the AI firm. https://gizmodo.com/group-pushing-age-verification-requirements-for-ai-turns-out-to-be-sneakily-backed-by-openai-2000741069 #AIagent #AI #GenAI #AIEthics #OpenAI
Group Pushing Age Verification Requirements for AI Turns Out to Be Sneakily Backed by OpenAI

It gave the leader of a nonprofit involved with it "a very grimy feeling."

Gizmodo
MIT researchers developed a testing framework that pinpoints situations where AI decision-support systems are not treating people and communities fairly. The SEED-SET system uses LLMs as proxies for human evaluators to assess ethical alignment in autonomous systems like power grids. https://news.mit.edu/2026/evaluating-autonomous-systems-ethics-0402 #AIagent #AI #GenAI #AIEthics #MIT
Evaluating the ethics of autonomous systems

SEED-SET is a new evaluation framework that can test whether recommendations of autonomous systems are well-aligned with human-defined ethical criteria. It can also pinpoint unexpected scenarios that violate ethical preferences.

MIT News | Massachusetts Institute of Technology