"AI models have one undeniable virtue: the increase in speed and efficiency with which they can carry out tasks that were once the province of human beings. Language models can produce functional text for a wide range of contexts, while image generation models are giving us the capability to render into existence whatever image or video takes our fancy. This is widely taken as clear evidence of the benefits of AI. For Mumford, this type of thinking is precisely the problem. The myth of the machine is dehumanizing because it subordinates human values to machine values: speed and efficiency.

The most striking evidence of the myth’s cultural pervasiveness is that many avid accelerationists do not deny that AI could mean the end of humanity. They merely differ from the doomers in believing that this risk is necessary—even desirable—to achieve the spectacular increases in efficiency and productivity promised by AGI. Mumford foresaw this extreme endpoint. “The myth of the machine,” he wrote, “the basic religion of our present culture, has so captured the modern mind that no human sacrifice seems too great provided it is offered up to the insolent Marduks and Molochs of science and technology.”

Those branded as skeptics or doomers also still accept the premises of the myth of the machine. The stated aim of many organizations concerned with avoiding the worst AI outcomes is that we should “realize the benefits while mitigating the risks” of the technology. Mumford would argue the first half of this statement concedes too much, accepting the basic premise of the myth of the machine while presenting the task as removing the obstacles to realize its benefits. Many skeptics also share a basic misanthropic premise of machine superiority, focusing as they do on the biased, irrational, and flawed nature of human beings that needs machinic augmentation."

https://www.compactmag.com/article/ai-and-the-myth-of-the-machine/

#AI #Neoluddism #AIBoosters #AIHype #AIDoomers #GenerativeAI #Mumford #STS #MediaEcology


Well, some coworkers who have spent two years humoring #AIBoosters in the company have now realized that there is no way to placate extremists.

#AIBubble

The ugly pathologisation of ‘AI boosters’

A trend I’m noticing in the online critical discourse about LLMs is increasingly vitriolic accounts of ‘AI boosters’. Consider this recent instance from Audrey Watters, whose work I’m otherwise a huge fan of:

Ed’s piece is titled “How to Argue with an AI Booster,” but honestly (and contrary to what some people seem to believe about me), I’m not interested in arguing with these people. Frankly I don’t think there’s anything that one can say to change their minds. It’s like arguing with addicts or cultists — what’s the point?! Boosters will hear none of it — no surprise, since they’re spending their days basking in the sycophancy and comfort of their machine-oracles.

“Addicts or cultists”… I’ll just leave that line to sit there. This is probably the most explicit example I’ve encountered but I’ve seen increasing amounts of this. It was one of many reasons I got sick of Bluesky and deactivated my account. Ed Zitron offers a quite specific account of what constitutes a booster:

So, an AI booster is not, in many cases, an actual fan of artificial intelligence. People like Simon Willison or Max Woolf who actually work with LLMs on a daily basis don’t see the need to repeatedly harass everybody, or talk down to them about their unwillingness to pledge allegiance to the graveyard smash of generative AI. In fact, the closer I’ve found somebody to actually building things with LLMs, the less likely they are to emphatically argue that I’m missing out by not doing so myself.

No, the AI booster is symbolically aligned with generative AI. They are fans in the same way that somebody is a fan of a sports team, their houses emblazoned with every possible piece of tat they can find, their Sundays living and dying by the success of the team, except even fans of the Dallas Cowboys have a tighter grasp on reality.

However, my fear is that distinctions are getting flattened here, so that ‘AI booster’ will start to slide into anyone who doesn’t entirely share my critique of LLMs, or even anyone who willingly uses LLMs. There’s a terminally online character to the definition here (i.e. many of Zitron’s points ultimately relate to how people relate to him on social media) which suggests how these fault lines are inflected through the argumentative dynamics of social media. I’m sympathetic to Zitron’s post at some points, but at others it feels like it’s one step away from “how to TOTALLY DESTROY AI boosters” in the worst YouTube style. I think he’s explicitly drawing a distinction where ‘AI boosters’ are a specific group, but he’s talking about how you recognise a booster in a way which has a much wider scope in practice.

#AIBoosters #AICritics #AudreyWatters #EdZitron #SocialMedia

The Booster Shot

A couple of weeks ago, Ed Zitron published one of his epic rants, the kind that, as he warned newsletter readers, is probably better read on the web than via email: it’s 16,000 words long, so long that he added a Table of Contents to aid navigation.

Second Breakfast