The Uncanny Valley and the Rising Power of Anti-AI Sentiment | LocalScribe Blog

Why public hostility toward AI can feel so visceral: perceptual mismatch, disgust, danger avoidance, mortality salience, and design consistency.

LocalScribe

We just had a chance to interview Carissa Véliz, author of Privacy is Power and associate professor at the Institute for Ethics in AI at the University of Oxford.

We talked about how predictive AI will make a 'meritocracy' impossible, how lifelike chatbots are designed to deceive you, and the importance of privacy in the digital age. Catch the episode on our YouTube channel or your favorite podcast app now!

Carissa Véliz is an associate professor at the Institute for Ethics in AI at the University of Oxford, a renowned author and speaker, a board member of the Proton Foundation, and a member of UNESCO's Women 4 Ethical AI.

Her new book Prophecy comes out April 21st and is now available for pre-order.

Prophecy is about how the extensive use of predictive analytics undermines our ability to defy the odds, makes systems unaccountable, and increases risk in business and society while creating a false sense of security.

https://www.privacyguides.org/videos/2026/04/19/interview-with-carissa-veliz-author-of-privacy-is-power-and-prophecy/

#AIEthics #Data #Interview #PredictiveAI #Prophecy #PrivacyIsPower #CarissaVeliz #Privacy #OxfordUniversity #PrivacyGuides

Interview with Carissa Véliz, Author of "Privacy is Power" and "Prophecy"


Privacy Guides

The Hidden Risk of AI in Robots

Dr. Alan Winfield raises concerns about embedding large language models into robots, where errors could have physical consequences.
The episode explores how robots learn like humans—and challenges what “learning” really means in machines.

🔗 Watch the full discussion: https://youtu.be/zmetn7sSMn4

#ArtificialIntelligence #Robotics #AIethics #MachineLearning #FutureOfTechnology #AI

New blog: Mechanistic Interpretability in AI — an accessible look at how researchers are dissecting neural networks to improve safety, transparency, and trust in AI systems. Read the full article: https://wix.to/TVs0BT5

#AI
#AIethics
#Research
#Interpretability
#MachineLearning

Mechanistic Interpretability in AI: Efforts to Open the "Black Box"

Explore how interpretability research opens the "black box" of neural networks, and how it enhances safety and trust in AI systems.

Oz

Evaluating the ethics of autonomous systems | MIT News | Massachusetts Institute of Technology
https://news.mit.edu/2026/evaluating-autonomous-systems-ethics-0402

#aialignment #aiethics

SEED-SET is a new evaluation framework that can test whether recommendations of autonomous systems are well-aligned with human-defined ethical criteria. It can also pinpoint unexpected scenarios that violate ethical preferences.

Happy to try out @Mastodon!

Starting with a bit of a personal view on how genAI may make our lives easier, but not necessarily better.

#AI #aiethics #aigovernance

https://substack.com/@petkogetov222202/p-193240510

Digital Technologies make our lives easier, but not better

As the quest for productivity grows, quality of life is sacrificed for comfort and reduced friction

I still haven't seen the story above, about major states replacing the US government on antitrust enforcement, anywhere other than Le Monde. #aiethics #dma #democracy #usa

For years, the AI safety debate has been dominated by extreme rhetoric. Now things are turning violent. The doomer discourse, with its warnings of existential risk from artificial intelligence, is escalating beyond heated arguments into actual threats and violence. https://gizmodo.com/the-ai-doomers-who-are-playing-with-fire-2000747606 #AIagent #AI #GenAI #AIEthics

The AI Doomers Who Are Playing With Fire

For years, the dangerous rhetoric has been out of control. And things are turning violent.

Gizmodo

There’s a difference between being known… and being reduced to what can be known.

Profiles don’t just describe us. They stabilise us. They turn movement into pattern, possibility into probability.

And once that version exists, systems begin to trust it more than the person.

It is consistent. Predictable. Actionable.

You are not.

https://associationredefine.substack.com/p/human-first-digital-world-article-8-eu-charter?r=6l8ed8

#DigitalRights #DataProtection #HumanDignity #DataEthics #TechAndSociety #AIethics #PrivacyMatters #SystemsThinking #Democracy

#AIEthics #AIGovernance #TechLawyer

"But understanding how these systems work is not just an engineering problem—it requires an interdisciplinary effort. We must build the tools to characterize, measure, and intervene in the intentions of AI agents before they act."

https://www.technologyreview.com/2026/04/16/1136029/humans-in-the-loop-ai-war-illusion/

Why having “humans in the loop” in an AI war is an illusion

We don't really understand AI's inner workings, so we're effectively flying blind.

MIT Technology Review