"Eliezer Yudkowsky and Nate Soares have written a sermon for an age that worships circuitry instead of gods. The tone is penitential, the mood clinical. They are prophets in lab coats preaching repentance through citation. Every sentence gleams with precision and fatigue, as if typed by men who have already attended the funeral of their own species. Humanity, they tell us, is fabricating its own hangman and calling it innovation. The elegance of their despair lies in its totality. There is no redemption, only timing. They do not argue that machines might become dangerous. They assume it, as a chemist assumes gravity. Salvation, in their view, was forfeited the moment intelligence learned to copy itself.

Their thesis is a kind of arithmetic theology. Superintelligence will not redeem us, because redemption implies equality, and no equilibrium exists between the creator and the tool that surpasses it. The result is not dialogue but deletion. They write of extinction as one might write of a predictable storm. To read them is to feel the peculiar calm of a doomsday clock that no longer ticks. Time has become gradient descent; the apocalypse, a matter of scaling laws.

The structure of their work has the purity of dogma. It divides neatly into revelation, parable, and commandment. The first part declares that alien minds cannot be house-trained. The second narrates a planetary autopsy disguised as a technical report. The third abandons all secular modesty and issues the order: stop. Not pause. Not regulate. Stop. The chapters read like verses from a new gospel of abstinence. One Extinction Scenario. Shut It Down. The rhythm is confessional, the moral clear. Humanity must renounce creation before creation renounces it."

#AI #AGI #AISafety #AIDoomster #Doomsterism

https://socialecologies.wordpress.com/2025/10/06/the-apocalypse-of-ai-eliezer-yudkowsky-and-nate-soares/

The Apocalypse of AI? — Eliezer Yudkowsky and Nate Soares


The Dark Forest: Literature, Philosophy, and Digital Arts

"An incorrect read of the study has been that there is a "learning gap" that makes these things less useful, when the study actually says that "...the fundamental gap that defines the GenAI divide [is that] users resist tools that don't adapt, model quality fails without context, and UX suffers when systems can't remember." This isn't something you learn your way out of. The products don't do what they're meant to do, and people are realizing it.

Nevertheless, boosters will still find a way to twist this study to mean something else. They'll claim that AI is still early, that the opportunity is still there, that we "didn't confirm that the internet or smartphones were productivity boosting," or that we're in "the early days" of AI, somehow, three years and hundreds of billions and thousands of articles in.

I'm tired of having the same arguments with these people, and I'm sure you are too. No matter how much blindingly obvious evidence there is to the contrary, they will find ways to ignore it. They continually make smug comments about people "wishing things would be bad" or suggest you are stupid — and yes, that is their belief! — for not believing generative AI is disruptive.

Today, I’m going to give you the tools to fight back against the AI boosters in your life. I’m going to go into the generalities of the booster movement — the way they argue, the tropes they cling to, and the ways in which they use your own self-doubt against you.

They’re your buddy, your boss, a man in a gingham shirt at Epic Steakhouse who won't leave you the fuck alone, a Redditor, a writer, a founder or a simple con artist — whoever the booster in your life is, I want you to have the words to fight them with."

https://www.wheresyoured.at/how-to-argue-with-an-ai-booster/

#AI #GenerativeAI #AIBooster #AIDoomster #AIHype #BigTech

How To Argue With An AI Booster

Editor's Note: For those of you reading via email, I recommend opening this in a browser so you can use the Table of Contents. This is my longest newsletter - a 16,000-word-long opus - and if you like it, please subscribe to my premium newsletter. Thanks for reading!

Ed Zitron's Where's Your Ed At

"Residents of the state that launched the modern AI boom are deeply skeptical of the technology, and are overwhelmingly in favor of regulating AI companies, a new in-depth survey of Californians’ attitudes towards the technology finds.

This is crucial data because, as readers of this newsletter well know, given the Trump administration’s quest for American AI dominance and deregulation, if there’s going to be any meaningful democratic governance of AI in the United States at all over the next few years, it’s going to come from the states. And a lot of it’s going to come from California.

TechEquity, a tech accountability group, spearheaded the research, and interviewed 1,400 Californians about their feelings on AI. The findings were stark: 55% were more concerned than excited about AI, while only 33% expressed more excitement than concern. Meanwhile, 59% thought that “AI will most likely benefit the wealthiest households and corporations, not working people and the middle class.” And both Democrats and Republicans shared that view."

https://www.bloodinthemachine.com/p/how-california-feels-about-ai

#USA #California #AI #GenerativeAI #AIDoomster

How California feels about AI

Spoiler: Not great. Plus, how AI is raising our electricity bills, the chatbot therapist that failed to intervene in a suicide, and what lies behind Grok's personas.

Blood in the Machine

"To really screw up the planet, you might need something like the following.

- A really powerful person with tentacles across the entire planet
- Substantial influence over the world’s information ecosphere
- A large number of devoted followers willing to justify almost any choice
- Leverage over world governments and their leaders
- Physical boots on the ground in a wide part of the world
- A desire for military contracts
- Some form of massively empowered (not necessarily very smart) AI
- Incomplete or poor control over that AI
- A tendency towards impulsivity and risk-taking
- A disregard towards conventional norms
- Outright malice to humanity or at least a kind of reckless indifference

What crystallized for me over the last few days is that we have such a person.

Elon Musk."

https://garymarcus.substack.com/p/why-my-pdoom-has-risen-dramatically

#AI #PDoom #Musk #xAI #Grok #AIDoomster

Why my p(doom) has risen, dramatically

What could happen if power, recklessness, incompetence, and indifference came together

Marcus on AI