"AI models have one undeniable virtue: the increase in speed and efficiency with which they can carry out tasks that were once the province of human beings. Language models can produce functional text for a wide range of contexts, while image generation models are giving us the capability to render into existence whatever image or video takes our fancy. This is widely taken as clear evidence of the benefits of AI. For Mumford, this type of thinking is precisely the problem. The myth of the machine is dehumanizing because it subordinates human values to machine values: speed and efficiency.

The most striking evidence of the myth’s cultural pervasiveness is that many avid accelerationists do not deny that AI could mean the end of humanity. They merely differ from the doomers in believing that this risk is necessary—even desirable—to achieve the spectacular increases in efficiency and productivity promised by AGI. Mumford foresaw this extreme endpoint. “The myth of the machine,” he wrote, “the basic religion of our present culture, has so captured the modern mind that no human sacrifice seems too great provided it is offered up to the insolent Marduks and Molochs of science and technology.”

Those branded as skeptics or doomers also still accept the premises of the myth of the machine. The stated aim of many organizations concerned with avoiding the worst AI outcomes is that we should “realize the benefits while mitigating the risks” of the technology. Mumford would argue the first half of this statement concedes too much, accepting the basic premise of the myth of the machine while presenting the task as removing the obstacles to realize its benefits. Many skeptics also share a basic misanthropic premise of machine superiority, focusing as they do on the biased, irrational, and flawed nature of human beings that needs machinic augmentation."

https://www.compactmag.com/article/ai-and-the-myth-of-the-machine/

#AI #Neoluddism #AIBoosters #AIHype #AIDoomers #GenerativeAI #Mumford #STS #MediaEcology

AI and the Myth of the Machine

Last April, 600 people gathered for a technology policy conference in downtown Washington, DC.

Compact
@remixtures That quoted passage goes wrong from the first sentence. Mechanisation and automation have so far been used to increase the speed and efficiency of laborious human tasks. We're being asked to believe that the current tranche of AI models (GenAI, mostly) will do the same for tasks involving reasoning and creativity. I've not seen any real evidence of this. There are certainly further things that can be automated; but how much "speed and efficiency" we will get remains to be seen.
@remixtures Part of the difficulty is that there is clearly already some economic slack in western society, even globally, where productivity is well in excess of the resource demands of the working population, at least in many areas. How to make use of these surpluses was a question long before "AI" came along. Do we distribute them fairly and let everyone have an easier life, or do we hoard them and give the majority of the population barely enough resource to stay alive? What is the aim?
@kbm0 That I do agree with, and it has to do with the need to reduce the average number of working hours per week. But that is something that would require going beyond capitalism. The norm for the last few decades has been understaffing (https://prospect.org/2026/03/19/understaff-workplace-business-covid-cvs-pharmacies-hotels-grocery-stores/), but it's very difficult to solve that problem when all labour has to generate profits for some entity.
Not Enough Workers for the Job

Understaffing has become an epidemic in American workplaces of all kinds.

The American Prospect
@remixtures I guess my suggestion is that this is the coup, the large-scale confidence trick that is being attempted: We have got to a point where "technology" can be introduced that is in the end little more than social engineering. Cryptocurrencies give us a template for this: Profligately wasteful and totally useless, so why do they exist at all and why are they seen as something that can generate wealth? Insane.
@kbm0 I do agree that cryptocurrencies are totally useless. However, having worked with AI-based tools for more than three years now, all I can say is that there are definitely revolutionary effects in terms of what this technology enables for human productivity.
@kbm0 Speaking for myself, personally, using LLMs allows me to be way more productive than before using them. The fallacy is in believing that before the introduction of LLMs, "white-collar" labor required lots of reasoning and creativity. I really don't believe that. On the contrary.
@remixtures Bizarre. I'm curious what type of work you do and how you find your productivity is increased. Sorry if you've posted in detail about this already, I should go and look! 🙂

@kbm0 I'm a technical writer. Based on my experience, using one LLM alone results in mediocre quality. Using two LLMs results in (sometimes very) good quality. When I use three LLMs and fact-check the final output, I'm normally able to achieve great/outstanding results — something that would probably require three or four people and take three or four times as long.

Of course, when I say LLMs, I also include AI routers, SLMs, etc.

@remixtures I think all human communication has a high level of data redundancy and this may be one reason why people seem to find GenAI to be useful in this area. Even if they continue to produce quality content, there is a large volume of pro-forma fluff that they have to weave in with it. I'd ask you though, not to forget the huge environmental cost of the mass deployment of AI models, and the mass plagiarism that has built them. So these ethical considerations remain regardless.

@kbm0 Whenever a new technology emerges there are always tradeoffs. I wrote about this here: https://www.linkedin.com/pulse/ai-autopilot-knowledge-work-miguel-caetano-ptwee/

Regarding plagiarism, I am of the opinion that all great ideas come from imitating several other already existing ideas. So, honestly, I don't see a problem with that. Besides, when you put three different AI tools reviewing each other's output with a human in the loop, I don't really think we can speak of mere plagiarism anymore.

AI as Autopilot for Knowledge Work

Clearly, people have the right to refuse to work for companies that force their employees to use Artificial Intelligence (AI) tools. Similarly, it's clear that these companies are making a mistake if they need to force their employees to use AI.

@remixtures To me this brings to mind Newton's "standing on the shoulders of giants". As a society, we collectively build on the knowledge and creativity left to us by previous generations. People make a living off this social contract in many diverse ways. But GenAI is an attempt by a small class of broligarchs to carpetbag that whole process and obsolete it. They will henceforth "own" all human knowledge and creativity in the sense of being able to undercut any human endeavour. So they hope.
@remixtures Ultimately, as I stated earlier, I think they are on shaky ground, and they are hoping to get everyone hooked on their flawed models in the same way as some of the crypto bros hope to supplant fiat currency. It may be that the models will improve substantially over time. But in the long term they will have to contend with the fact that they are going to be eating their own faeces as the canon of human literature and knowledge fills up with AI slop.
@remixtures this was an excellent read. Thanks for sharing.