
Today I'm launching The Center for Tomorrow, a new global nonprofit dedicated to one purpose: reclaiming the future for societies in the age of AI.

I spent nearly two decades at the intersection of technology and global affairs, from the United Nations to SpaceX. I worked alongside some of the most powerful people in the world. I was in rooms where decisions were made that helped shape the trajectory of our times. But I was never supposed to be there. My father was a refugee from Burma. My mother was an immigrant. I was the first in my family to earn a university degree.

Throughout my life, I've been driven by a single question: why do some people get the good life and billions of others don't, and how can we change the systems that determine these outcomes?

What I found after nearly two decades searching for answers in those rooms is that we are dramatically unprepared for what is coming. AI, the most powerful technology in history, is arriving into a world already buckling under economic inequality, democratic erosion, geopolitical fragmentation, and climate breakdown. These crises are not separate. They are deeply interconnected. And AI will amplify every single one of them.

I also found something else: there is no plan. Many in the tech industry believe technology alone will save us. Politicians are governing for a world that no longer exists. And if we remain on the path we're on, I believe the future for most of the world's population will not be the one they wish for.

In October I left Big Tech behind and committed myself to working on the biggest problems facing societies. The Center is core to those efforts. Over the coming years, we will focus on developing critical research and practical solutions to the big unanswered questions of the future: how can we build economies and a social contract fit for the age of advanced AI? How can we evolve our international order so that billions of people aren't left behind or devastated by conflict? How can we win the fight against climate change that we are on course to lose?

It is essential that vastly more people and communities get a voice in building the future. We cannot allow a tiny handful of companies and leaders to decide the fate of our societies. So a large part of the Center's work will also be about building a global community of people who understand what is at stake, and equipping them with the skills and resources to act.

Today I have also published my full thesis, a long piece setting out what I believe is at stake, why I believe futures of disaster and wonder are both within reach, and what we can do to reach the good future. I hope you will read and share it.

If you ever wanted to build a better world, if you ever wanted to give it all for something real, the window is still open. Help me take back our future.

This week brought actively exploited vulnerabilities affecting millions of systems worldwide, while cybercriminals found innovative ways to weaponize emerging technologies.

#Cybersecurity #ZeroDay #AIDangers #DataBreach #ThreatIntel

https://cybernewsweekly.substack.com/p/cybersecurity-news-review-week-5-4df

Cybersecurity News Review - Week 5 (2025)

Cybersecurity News Weekly

“A new direction for students in an AI world: Prosper, prepare, protect” | Brookings https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world-prosper-prepare-protect/

After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, a close review of over 400 studies, and a Delphi panel, we find that at this point in its trajectory, the risks of utilizing generative AI in children’s education overshadow its benefits. This is largely because the risks of AI differ in nature from its benefits—that is, these risks undermine children’s foundational development—and may prevent the benefits from being realized.

#AI #GenAI #AIDangers

A new direction for students in an AI world: Prosper, prepare, protect

This report explores the potential risks generative AI poses to students and outlines what we can do now to minimize them.

Brookings

"How Artificial Superintelligence Might Wipe Out Our Entire Species with Nate Soares" | TGS 203

https://www.youtube.com/watch?v=0tjOzQne1LY

The episode description follows. I love the metaphor of us being forcibly boarded onto a plane with no landing gear, while being promised that the engineers will figure out how to build one mid-flight. We're also told there's a 10%-20% chance that they'll fail. But hey, we get the opportunity to fly!

> Technological development has always been a double-edged sword for humanity: the printing press increased the spread of misinformation, cars disrupted the fabric of our cities, and social media has made us increasingly polarized and lonely. But it has not been since the invention of the nuclear bomb that technology has presented such a severe existential risk to humanity – until now, with the possibility of Artificial Super Intelligence (ASI) on the horizon. Were ASI to come to fruition, it would be so powerful that it would outcompete human beings in everything – from scientific discovery to strategic warfare. What might happen to our species if we reach this point of singularity, and how can we steer away from the worst outcomes?

> In this episode, I’m joined by Nate Soares, an AI safety researcher and co-author of the book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. Together, we discuss many aspects of AI and ASI, including the dangerous unpredictability of continued ASI development, the “alignment problem,” and the newest safety studies uncovering increasingly deceptive AI behavior. Soares also explores the need for global cooperation and oversight in AI development and the importance of public awareness and political action in addressing these existential risks.

> How does ASI present an entirely different level of risk than the conventional artificial intelligence models that the public has already become accustomed to? Why do the leaders of the AI industry persist in their pursuits, despite acknowledging the extinction-level risks presented by continued ASI development? And will we be able to join together to create global guardrails against this shared threat, taking one small step toward a better future for humanity?

#AI #ASI #ArtificialSuperIntelligence #AISafety #AIDangers #TGS #NateSoares #NateHagens

SciShow Is Lying to You about AI. Here are the receipts.

In this video, I debunk the recent SciShow episode hosted by Hank Green regarding Artificial Intelligence. I break down why the comparison between AI development and the Manhattan Project (the atomic bomb) is factually incorrect. We also investigate the sponsor, Control AI, and expose how industry propaganda is shifting focus toward hypothetical extinction risks to distract from real-world issues like disinformation and regulatory accountability. Finally, we fact-check OpenAI's claims about the International Math Olympiad and Anthropic's AI alignment bioweapon tests.

00:00 I wish this wasn’t happening

00:32 SciShow’s Lie Overview

01:58 Intro

02:15 Biggest Lie on the SciShow Video

04:44 Biggest Omission in the SciShow Video

05:56 The “Statement on AI” that SciShow Omits

08:57 Summary of Most Important Points

09:23 Claim about International Math Olympiad Medal

09:50 Misleading Example about AI Alignment

11:20 Downplaying “practical and visible” problems

11:53 Essay I debunked from Anthropic CEO

12:06 Video on Hank’s Personal Channel

12:31 A Plea for SciShow and others to do better

13:02 Wrap-up

https://piefed.social/c/fuck_ai/p/1509831/scishow-is-lying-to-you-about-ai-here-are-the-receipts


AI-induced psychosis: the danger of humans and machines hallucinating together | The-14

An investigation into how AI chatbots can deepen delusions, blur reality, and co-create hallucinations with vulnerable users, leading to real-world harm risks.

The-14 Pictures

"The Loneliness Crisis, Cognitive Atrophy, and Other Personal Dangers of AI" | RR 20

https://www.youtube.com/watch?v=nDyczqzjico

> (Conversation recorded on October 14th, 2025) Mainstream conversations about artificial intelligence tend to center around the technology’s economic and large-scale impacts. Yet it’s at the individual level where we’re seeing AI’s most potent effects, and they may not be what you think. Even in the limited time that AI chatbots have been publicly available (like Claude, ChatGPT, Perplexity, etc.), studies show that our increasing reliance on them wears down our ability to think and communicate effectively, and even erodes our capacity to nurture healthy attachments to others. In essence, AI is atrophying the skills that sit at the core of what it means to be human. Can we as a society pause to consider the risks this technology poses to our well-being, or will we keep barreling forward with its development until it’s too late?

> In this episode, Nate is joined by Nora Bateson and Zak Stein to explore the multifaceted ways that AI is designed to exploit our deepest social vulnerabilities, and the risks this poses to human relationships, cognition, and society. They emphasize the need for careful consideration of how technology shapes our lives and what it means for the future of human connection. Ultimately, they advocate for a deeper engagement with the embodied aspects of living alongside other people and nature as a way to counteract our increasingly digital world.

> What can we learn from past mass adaptation of technologies such as the invention of the world wide web or GPS when it comes to AI’s increasing presence in our lives? How does artificial intelligence expose and intensify the ways our culture is already eroding our mental health and capacity for human connection? And lastly, how might we imagine futures where technology magnifies the best sides of humanity – like creativity, cooperation, and care – rather than accelerating our most destructive instincts?

I know it's a YouTube video, but in cases like this I recommend making an exception and watching it. I think these kinds of conversations should happen much more often, and be much more public, but of course companies like Google are not at all interested in that, quite the contrary. Nate Hagens continues to bring together an amazing array of diverse and interesting people (interesting because I think they carry very important messages, from many fields of knowledge and wisdom).

#TheGreatSimplification #RealityRoundtable #AI #AIDangers #NateHagens #NoraBateson #ZakStein #Collapse

AI Could Become Your Child’s Next Best Friend

#AI #FutureTech #AIDangers #TheInternetIsCrack

A recent study showed that most AI chatbots can be manipulated into giving advice on hacking, making explosives, cybercrime tactics, and other illegal or harmful activities.

https://www.theguardian.com/technology/2025/may/21/most-ai-chatbots-easily-tricked-into-giving-dangerous-responses-study-finds

#AIDangers