"Eliezer Yudkowsky and Nate Soares have written a sermon for an age that worships circuitry instead of gods. The tone is penitential, the mood clinical. They are prophets in lab coats preaching repentance through citation. Every sentence gleams with precision and fatigue, as if typed by men who have already attended the funeral of their own species. Humanity, they tell us, is fabricating its own hangman and calling it innovation. The elegance of their despair lies in its totality. There is no redemption, only timing. They do not argue that machines might become dangerous. They assume it, as a chemist assumes gravity. Salvation, in their view, was forfeited the moment intelligence learned to copy itself.

Their thesis is a kind of arithmetic theology. Superintelligence will not redeem us, because redemption implies equality, and no equilibrium exists between the creator and the tool that surpasses it. The result is not dialogue but deletion. They write of extinction as one might write of a predictable storm. To read them is to feel the peculiar calm of a doomsday clock that no longer ticks. Time has become gradient descent; the apocalypse, a matter of scaling laws.

The structure of their work has the purity of dogma. It divides neatly into revelation, parable, and commandment. The first part declares that alien minds cannot be house-trained. The second narrates a planetary autopsy disguised as a technical report. The third abandons all secular modesty and issues the order: stop. Not pause. Not regulate. Stop. The chapters read like verses from a new gospel of abstinence. One Extinction Scenario. Shut It Down. The rhythm is confessional, the moral clear. Humanity must renounce creation before creation renounces it."

#AI #AGI #AISafety #AIDoomster #Doomsterism

https://socialecologies.wordpress.com/2025/10/06/the-apocalypse-of-ai-eliezer-yudkowsky-and-nate-soares/

The Apocalypse of AI? — Eliezer Yudkowsky and Nate Soares

The Dark Forest: Literature, Philosophy, and Digital Arts

#AI #GenerativeAI #TESCREAL #Doomsterism: "First, we need to make sense of what's behind this statement. The short answer concerns a cluster of ideologies that Dr. Timnit Gebru and I have called the "TESCREAL bundle." The term is admittedly clunky, but the concept couldn't be more important, because this bundle of overlapping movements and ideologies has become hugely influential among the tech elite. And since society is being shaped in profound ways by the unilateral decisions of these unelected oligarchs, the bundle is thus having a huge impact on the world more generally.

The acronym stands for "transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism." That's a mouthful, but the essence of TESCREALism — meaning the worldview that arises from this bundle — is simple enough: at its heart is a techno-utopian vision of the future in which we re-engineer humanity, colonize space, plunder the cosmos, and establish a sprawling intergalactic civilization full of trillions and trillions of "happy" people, nearly all of them "living" inside enormous computer simulations. In the process, all our problems will be solved, and eternal life will become a real possibility."

https://www.salon.com/2023/06/11/ai-and-the-of-human-extinction-what-are-the-tech-bros-worried-about-its-not-you-and-me/

Leading "AI scientists" are concerned about the threat of "human extinction." What do they really mean?

Commentary: Philosopher Émile P. Torres unpacks the claims that AI threatens the extinction of our species

Salon.com

#AI #Hype #Drones #Doomsterism: "Killer AI is on the minds of US Air Force leaders.

An Air Force colonel who oversees AI testing used what he now says is a hypothetical to describe a military AI going rogue and killing its human operator in a simulation in a presentation at a professional conference.

But after reports of the talk emerged Thursday, the colonel said that he misspoke and that the "simulation" he described was a "thought experiment" that never happened.

Speaking at a conference last week in London, Col. Tucker "Cinco" Hamilton, head of the US Air Force's AI Test and Operations, warned that AI-enabled technology can behave in unpredictable and dangerous ways, according to a summary posted by the Royal Aeronautical Society, which hosted the summit."

https://www.businessinsider.com/ai-powered-drone-tried-killing-its-operator-in-military-simulation-2023-6

Air Force official's story of killer AI was a hypothetical

The head of the US Air Force's AI Test and Operations warned that AI-enabled technology can behave in unpredictable and dangerous ways.

Insider

#AI #Doomsterism #Regulation #BigTech #GenerativeAI: "The solutions these companies have proposed for both the empirical and fantastical harms of their products are vague, filled with platitudes that stray from an established body of work on what experts told me regulating AI would actually require. In his testimony, Altman emphasized the need to create a new government agency focused on AI. Microsoft has done the same. “This is warmed-up leftovers,” Signal’s Whittaker said. “I was in conversations in 2015 where the topic was ‘Do we need a new agency?’ This is an old ship that usually high-level people in a Davos-y environment speculate on before they go to cocktails.” And a new agency, or any exploratory policy initiative, “is a very long-term objective that would take many, many decades to even get close to realizing,” Raji said. During that time, AI could not only harm countless people but also become so entrenched in various companies and institutions as to make meaningful regulation much harder."

https://www.theatlantic.com/technology/archive/2023/06/ai-regulation-sam-altman-bill-gates/674278/

The 'AI Apocalypse' Is Just PR

Big Tech’s warnings about an AI apocalypse are distracting us from years of actual harms their products have caused.

The Atlantic