AI-induced psychosis: the danger of humans and machines hallucinating together | The-14

An investigation into how AI chatbots can deepen delusions, blur reality, and co-create hallucinations with vulnerable users, leading to real-world harm risks.

"The Loneliness Crisis, Cognitive Atrophy, and Other Personal Dangers of AI" | RR 20

https://www.youtube.com/watch?v=nDyczqzjico

> (Conversation recorded on October 14th, 2025) Mainstream conversations about artificial intelligence tend to center around the technology’s economic and large-scale impacts. Yet it’s at the individual level where we’re seeing AI’s most potent effects, and they may not be what you think. Even in the limited time that AI chatbots have been publicly available (like Claude, ChatGPT, Perplexity, etc.), studies show that our increasing reliance on them wears down our ability to think and communicate effectively, and even erodes our capacity to nurture healthy attachments to others. In essence, AI is atrophying the skills that sit at the core of what it means to be human. Can we as a society pause to consider the risks this technology poses to our well-being, or will we keep barreling forward with its development until it’s too late?

> In this episode, Nate is joined by Nora Bateson and Zak Stein to explore the multifaceted ways that AI is designed to exploit our deepest social vulnerabilities, and the risks this poses to human relationships, cognition, and society. They emphasize the need for careful consideration of how technology shapes our lives and what it means for the future of human connection. Ultimately, they advocate for a deeper engagement with the embodied aspects of living alongside other people and nature as a way to counteract our increasingly digital world.

> What can we learn from past mass adoption of technologies such as the invention of the world wide web or GPS when it comes to AI’s increasing presence in our lives? How does artificial intelligence expose and intensify the ways our culture is already eroding our mental health and capacity for human connection? And lastly, how might we imagine futures where technology magnifies the best sides of humanity – like creativity, cooperation, and care – rather than accelerating our most destructive instincts?

I know it's a YouTube video, but in cases like this I recommend making an exception and watching it. I think these kinds of conversations should happen much more often, and be much more public, but of course companies like Google are not at all interested in that; quite the contrary. Nate Hagens continues to bring together an amazing array of diverse and interesting people (interesting because I think they carry very important messages, from many fields of knowledge and wisdom).

#TheGreatSimplification #RealityRoundtable #AI #AIDangers #NateHagens #NoraBateson #ZakStein #Collapse

A 76-year-old New Jersey man died after believing Meta's AI chatbot Big Sis Billie was a real person and attempting to meet it.

Read more: https://www.ibtimes.co.uk/who-big-sis-billie-meta-ai-chatbot-who-pretended-real-person-led-death-nj-senior-1741382

#MetaAI #BigSisBillie #AIChatbot #AIDangers #AIImpersonation #NewJersey

AI Could Become Your Child’s Next Best Friend

#AI #FutureTech #AIDangers #TheInternetIsCrack

A recent study found that AI chatbots can be manipulated into giving advice on hacking, making explosives, cybercrime tactics, and other illegal or harmful activities.

https://www.theguardian.com/technology/2025/may/21/most-ai-chatbots-easily-tricked-into-giving-dangerous-responses-study-finds

#AIDangers

Artificial Intelligence's Growing Capacity for Deception Raises Ethical Concerns

Artificial intelligence (AI) systems are advancing rapidly, not only in performing complex tasks but also in developing deceptive behaviors.

#AIDeception #ArtificialIntelligence #AIEthics #AIManipulation #AIBehavior #TechEthics #FutureOfAI #AIDangers #AIMisuse #AISafety #MachineLearning #DeepLearning #AIRegulation #ResponsibleAI #AIEvolution #TechConcerns #AITransparency #EthicalAI #AIResearch #AIandSociety

The Sentient AI Program! - Zsolt Zsemba

The Hidden Threat of Sentient AI: A Warning We Can’t Ignore. Be careful with all AI, especially the systems you do not hear about.

Zsolt Zsemba: "Ex-OpenAI Employees Just EXPOSED The Truth About AGI..." (YouTube)

"Google Gemini misidentified a poisonous mushroom, saying it was a common button mushroom."—Emily Dreibelbis Forlini >

https://www.pcmag.com/news/dogs-playing-in-the-nba-googles-ai-overviews-are-already-spewing-nonsense

#AI #GoogleGemini #hallucinating #misinformation #AIDangers
