Writing this up again so I can pin it: AI is literally a fascist project. Friends don't let friends use it.

Before I go into this, there are two types of responses to this that I have taken seriously so far.

One I'll call HashTagNotAllAI, which yields the obligatory "sure", but has the same smell. I'll leave it at that.

The other is that an anti-AI stance also throws some assistive technology under the bus, making such a stance intrinsically ableist. The easy thing to do is to refer to HashTagNotAllAI above, sans the smell, but I don't think that is fair. To be clear: I don't want those tools to disappear.

My anti-AI stance isn't about tech, or not primarily about tech.

I don't like that it swallows up rainforests and produces unreliable results. Those are valid criticisms, but I agree that they are - in principle at least - solvable problems.

A knife is technology. I can use a knife to cut out a cancer, or to disembowel someone. This makes a knife neither good nor bad; it's the usage of the tool that counts.

That same argument cannot be transferred to a gun. The entire point of a gun is to hurt and kill; its intrinsic purpose is evil. That it can be used to hurt, kill, and potentially deter "baddies" doesn't change that. It may justify its use in highly select circumstances, but doesn't magically absolve it.

Generally, tools are neutral. A weapon is a kind of tool that is intrinsically evil.

Back to AI.

AI is a tool. Even so, the balance of cost vs. benefit must be considered. Clearly the benefit of AI used in assistive tech is worth a much higher cost than when it is applied in many other areas.

But even a high cost doesn't make a tool evil. It just raises the importance of asking questions about the cost/benefit tradeoff.

The thing that bothers me is that some AI is a weapon, and it's a weapon of fascism.

I suppose it's much fairer to restrict this to generative AI/GenAI, but I resist such a restriction, because I just don't know what other AI uses will come around the corner with the same issues. At the same time, it's the pattern that matters more than the tech, so the argument should be applied more broadly than just to AI.

"AI is evil" and "AI is a fascist project" - phrases you'll see me write - are shorthands for this.

What makes GenAI evil?

The intent of GenAI, both implicitly and explicitly, is to replace humans.

Implicitly, because anything that automates does so. This is the more complex part, but not all that complex, either. Automation is great when it automates boring, repetitive or dangerous tasks. It is useful when things need to be replicated precisely over and over.

The problems with GenAI approaches here are a) that they never seem to target the boring, repetitive or dangerous tasks. Generative art? No, that's literally taking the fun out of life.

And b) they're not precise. The whole point of GenAI is that it's a statistical parrot: it produces *likely* results.

Precision simply is not part of the job description, as it were.

So what this does is replace parts of the human experience that should not be replaced, and leaves parts intact that really should go, at least over time.

This should already be enough to make it evil. But what about it is fascist?

Other than the financing? Well, it's how it fits into politics.

A decade or so ago, some folk published a popular science book called "The Dictator's Handbook" (ISBN-13: 978-1610391849). While this gained some immediate notoriety, what fell by the wayside is that it's actually just the popular science *summary* of much deeper work, based on a thorough analysis of as many forms of government across the globe and history as the researchers could manage.

The picture that emerges is this: natural resources beget tyrannies; lack of natural resources begets democracy.

This is, of course, a summary of a summary, and shouldn't be taken without comment. But this here is also a social media thread, so I'll skip the fuller explanation and just provide a sketch.

No ruler exists without support, and support is essentially bought. This means that the question of who is in power largely relates to where they can raise money from, and how much they need to spend to raise more.

When natural resources exist, the number of people needed to extract them is relatively low. You clearly need to pay those people well, as well as the military. The rest of the population is of lesser importance.

When you do not have natural resources, the only sensible source of income is taxation, for which you need a large population earning well, so that the percentage you skim off the top is enough to pay for essential support.

Lack of natural resources tends to produce service economies, which means the population also needs to be healthy, well fed, able to travel, and well educated.

When your population is well educated, it tends to want a say in how things are done, so spending on individual people or groups of people is significantly less effective than spending on the population at large.

The result is that democracies and service-oriented economies go hand in hand, and support each other rather than work in opposition.

Marx would not have used the words "service economy", but would have said "labour". Both are synonyms for "people".

Now cryptocurrencies and AI have one thing in common, other than using insane amounts of resources.

They're supported by the same investors. But actually, that's the same as using insane amounts of resources.

I'll explain.

The thing is this: natural resources in themselves do not matter. Yes, history is clear on where the patterns lie. But "air" is also a natural resource, and so far, there isn't much monetization of that. (Man, was Spaceballs prescient: https://spaceballs.fandom.com/wiki/Perri-Air).

What makes a natural resource monetizable is scarcity. Cryptocurrencies are explicitly systems of artificial scarcity, in which - by whichever proof scheme - those who participate early in the system benefit off those who come later (aka pyramid schemes). The proof algorithm guarantees scarcity; the whole point of blockchain vs. any other distributed system is that there is a chokehold on resource creation somewhere.

AI is doing much the same thing, but it doesn't advertise this artificial scarcity as part of the solution. Instead, it simply guarantees that those who already own the most compute resources have the edge. And that is not you or me.

In short, AI is a system which a) aims to replace human labour, while b) shifting the means of production into the hands of the few.

This would be "fine" if nobody used it. What matters for this to succeed is that everyone depends on it. At that point, "means of production" becomes the digital equivalent of a "natural resource".

Marx matters, folk.

You can still argue that this makes AI a weapon of capitalism or tyranny, but not outright fascism.

Technically, that's kind of true. But it's also missing an important part of the picture. As the infamous Chad C. Mulligan wrote, "COINCIDENCE: You weren't paying attention to the other half of what was going on."

First, note how Hitler's extermination camps were inspired by Henry Ford's assembly line. Capitalism and fascism have always had a close relationship, and it's not really possible to separate the two. It's no coincidence that the Jews of the time were also associated with the Bolsheviks, in order to justify applying the means for dealing with one supposed threat to the other.

But more importantly, Peter Thiel is a literal fascist, and a strong promoter of and heavy investor in AI. The ties are there, right here, right now, and who benefits - and it's not just Thiel, but all of his Epstein ilk - from an AI takeover is abundantly clear.

It's also well documented. This isn't some vague conspiracy shit. They're saying the quiet part out loud.

In short, *as a system* rather than a technology, AI is without any doubt a deeply fascist project. It is a weapon aimed straight at the world population at large.

The caveats - that the tech itself can be seen as neutral, and definitely has good applications - remain unaffected by this.

The survival of our democracies - or sufficiently democratic systems around the world - is the thing that concerns me, though. (Also the environment, but arguably less so overall.)

So as an update/addition:

If proprietary software is the most capitalist-fascist expression of software, then free software is (or should be?) the most democratic/anarchistic expression.

In other words, it's not actually much of a matter of preference whether one uses LLMs in FLOSS: you cannot square the circle and make anything free or libre with them.

Consider also that one of the effects of AI usage is *deskilling*. Whereas on a personal level it's an individual choice whether one likes vibe coding, for a commons-oriented community, deskilling is a direct threat to the commons' sustainability. No FLOSS volunteers, no FLOSS.

The linked paper calls this an impediment to "capacity cultivation".

Now ask: why do corporations "donate" LLMs to the FLOSS community?

This is Embrace, Extend, Extinguish, the 2020s edition.

And you're actively aiding and abetting this when you let AI agents into your code base.

You only deserve derision from me.

https://link.springer.com/article/10.1007/s00146-025-02686-z

AI deskilling is a structural problem - AI & SOCIETY

Many artificial intelligence tools replace or stand to replace human activity, via automated decision-making, recommender systems and content generation. The more artificial intelligence (AI) replaces valuable human activity, the more it risks deskilling humans of their human capacities. This paper argues for applying a structural perspective to this phenomenon. It introduces the concept of ‘capacity-hostile environments’ to identify instances where AI mediation impedes human capacity cultivation. The analysis moves beyond individual responsibility that agents have to cultivate their human capacities, demonstrating how AI’s influence creates systemic conditions that could inhibit the development and exercise of human capacities by undermining the process of capacity cultivation. Drawing on the philosophy of skill as well as social epistemology, this paper argues that capacity cultivation (skilling) includes acquiring agential control over the capacities, inculcated through a long, gradual process of habituation. Habituation, in turn, depends on learning from others: the ‘know how’ of the skill, as well as a shared understanding of the value of the skill. AI mediation risks undermining the quality of the conditions for capacity habituation, leading to capacity impoverishment. By exploring the role of AI in mediating human activity, the paper highlights the need to evaluate AI applications based on their conduciveness or hostility to capacity cultivation. Ultimately, it calls for a critical reflection on the values inherent in AI socio-technical systems and emphasizes the societal obligation to foster capacity-conducive environments in the age of AI.
