"AI models have one undeniable virtue: the increase in speed and efficiency with which they can carry out tasks that were once the province of human beings. Language models can produce functional text for a wide range of contexts, while image generation models are giving us the capability to render into existence whatever image or video takes our fancy. This is widely taken as clear evidence of the benefits of AI. For Mumford, this type of thinking is precisely the problem. The myth of the machine is dehumanizing because it subordinates human values to machine values: speed and efficiency.

The most striking evidence of the myth’s cultural pervasiveness is that many avid accelerationists do not deny that AI could mean the end of humanity. They merely differ from the doomers in believing that this risk is necessary—even desirable—to achieve the spectacular increases in efficiency and productivity promised by AGI. Mumford foresaw this extreme endpoint. “The myth of the machine,” he wrote, “the basic religion of our present culture, has so captured the modern mind that no human sacrifice seems too great provided it is offered up to the insolent Marduks and Molochs of science and technology.”

Those branded as skeptics or doomers also still accept the premises of the myth of the machine. The stated aim of many organizations concerned with avoiding the worst AI outcomes is that we should “realize the benefits while mitigating the risks” of the technology. Mumford would argue the first half of this statement concedes too much, accepting the basic premise of the myth of the machine while presenting the task as removing the obstacles to realize its benefits. Many skeptics also share a basic misanthropic premise of machine superiority, focusing as they do on the biased, irrational, and flawed nature of human beings that needs machinic augmentation."

https://www.compactmag.com/article/ai-and-the-myth-of-the-machine/

#AI #Neoluddism #AIBoosters #AIHype #AIDoomers #GenerativeAI #Mumford #STS #MediaEcology

AI and the Myth of the Machine

Last April, 600 people gathered for a technology policy conference in downtown Washington, DC.

Compact

Where did 2025 leave the AI doomers? MIT Tech Review talked to 20 people who study or advocate AI safety and governance—including #ACMTuringAward recipients Geoffrey Hinton and Yoshua Bengio—to see if the recent setbacks and general vibe shift had altered their views.

While Geoffrey Hinton is not sure what’s coming, Yoshua Bengio wishes he’d seen the risks sooner.

Learn more: https://www.technologyreview.com/2025/12/15/1129171/the-ai-doomers-feel-undeterred/ #TechNews #AIdoomers

SciShow Is Lying to You about AI. Here are the receipts.

In this video, I debunk the recent SciShow episode hosted by Hank Green about Artificial Intelligence. I break down why the comparison between AI development and the Manhattan Project (atomic power) is factually incorrect. I also investigate the sponsor, Control AI, and show how industry propaganda shifts the focus toward hypothetical extinction risks to distract from real-world issues like disinformation and regulatory accountability. Finally, I fact-check OpenAI’s claims about the International Math Olympiad and Anthropic’s AI-alignment bioweapon tests.

00:00 I wish this wasn’t happening

00:32 SciShow’s Lie Overview

01:58 Intro

02:15 Biggest Lie on the SciShow Video

04:44 Biggest Omission in the SciShow Video

05:56 The “Statement on AI” that SciShow Omits

08:57 Summary of Most Important Points

09:23 Claim about International Math Olympiad Medal

09:50 Misleading Example about AI Alignment

11:20 Downplaying “practical and visible” problems

11:53 Essay I debunked from Anthropic CEO

12:06 Video on Hank’s Personal Channel

12:31 A Plea for SciShow and others to do better

13:02 Wrap-up

https://piefed.social/c/fuck_ai/p/1509831/scishow-is-lying-to-you-about-ai-here-are-the-receipts

Excellent essay by @jamesmeek.bsky.social for @lrb.co.uk on #AI-hype and #AI-doomers: "The current direction of travel puts us on the way to an AGI with superhuman ability to solve problems, but no more than a slave’s power to frame those problems in the first place." www.lrb.co.uk/the-paper/v4...

James Meek · Computers that want things

For all the fluency and synthetic friendliness of public-facing AI chatbots like ChatGPT, it seems important to remember...

London Review of Books

#AI #AIDoomers #AGI #Accelerationism: "[H]istorically speaking, no group has done more to accelerate the race to build AGI than the AI doomers. The very people screaming that the AGI race is a runaway train barreling toward the cliff of extinction have played an integral role in starting these AI companies. Some have helped found these companies, while others provided crucial early funding that enabled such companies to get going. They wrote papers, books and blog posts that popularized the idea of AGI and organized conferences that inspired interest in the topic. Many of those worried that AGI will kill everyone on Earth have gone on to work for the leading AI companies, and indeed the two techno-cultural movements that initially developed and promoted the doomer narrative — namely, “Rationalism” and “Effective Altruism” — have been at the very heart of the AGI race since its inception.

In a phrase, the loudest voices within the AI doomer camp have been disproportionately responsible for launching and sustaining the very technological race that they now claim could doom humanity in the coming years. Despite their apocalyptic warnings of near-term annihilation, the doomers have in practice been more effective at accelerating AGI than the accelerationists themselves."

https://www.salon.com/2024/06/24/ai-doomers-have-warned-of-the-tech-pocalypse--while-doing-their-best-to-accelerate-it/

AI doomers have warned of the tech-pocalypse — while doing their best to accelerate it

Wannabe Cassandras warning of AI's potential for human extinction are some of the biggest champions of it

Salon.com

Influential figures and the media are helping to spread apocalyptic scenarios about Artificial Intelligence, and everyone is taking the bait, playing into their hands and preventing serious debate. But how profitable is "AI catastrophism," and for whom?

#ai #aidoomers #intelligenzaartificiale #transumanismo #microchip #finestradioverton #marketing

https://www.futuroprossimo.it/2023/04/lai-ci-uccidera-tutti-se-il-catastrofismo-diventa-strategia-di-marketing/

"AI will kill us all!": when catastrophism becomes a marketing strategy

Influential figures and the media spread apocalyptic scenarios about Artificial Intelligence. How profitable is "AI catastrophism," and for whom?

FuturoProssimo