Writing this up again so I can pin it: AI is literally a fascist project. Friends don't let friends use it.

Before I go into that, there are two types of responses that I have taken seriously so far.

One I'll call HashTagNotAllAI, which yields the obligatory "sure", but has the same smell. I'll leave it at that.

The other is that an anti AI stance also throws some assistive technology under the bus, making such a stance intrinsically ableist. The easy thing to do is to refer to HashTagNotAllAI above, sans the smell, but I don't think that is fair. To be clear, I don't want those tools to disappear.

My anti AI stance isn't about tech, or not primarily about tech.

I don't like that it swallows up rainforests and produces unreliable results. Those are valid criticisms, but I actually agree that those are - in principle at least - solvable problems.

A knife is technology. I can use a knife to cut out a cancer, or to disembowel someone. This makes a knife neither good nor bad; it's the usage of the tool that counts.

That same argument cannot be transferred to a gun. The entire point of a gun is to hurt and kill; its intrinsic purpose is evil. That it can be used to hurt, kill, and potentially deter "baddies" doesn't change that. It may justify the use in highly select circumstances, but doesn't magically absolve it.

Generally, tools are neutral. A weapon is a kind of tool that is intrinsically evil.

Back to AI.

AI is a tool. Even so, the balance of cost vs. benefit must be considered. Clearly the benefit of AI used in assistive tech is worth a much higher cost than when applied in many other areas.

But even a high cost doesn't make a tool evil. It just raises the importance of asking questions about the cost/benefit tradeoff.

The thing that bothers me is that some AI is a weapon, and it's a weapon of fascism.

I suppose it's much fairer to restrict this to generative AI/GenAI, but I resist such a restriction, because I just don't know what other AI use will come around the corner with the same issues. At the same time, it's the pattern that matters more than the tech, so it should be more broadly applied than just to AI.

"AI is evil" and "AI is a fascist project", things you'll see me write, are shorthands for this.

What makes GenAI evil?

The intent of GenAI, both implicitly and explicitly, is to replace humans.

Implicitly, because anything that automates does so. This is the more complex part, but not all that complex, either. Automation is great when it automates boring, repetitive or dangerous tasks. It is useful when things need to be replicated precisely over and over.

The problems with GenAI approaches here are a) that they never seem to target the boring, repetitive or dangerous tasks. Generative art? No, that's literally taking the fun out of life.

And b) they're not precise. The whole point of GenAI is that it's a statistical parrot: it produces *likely* results.

Precision simply is not part of the job description, as it were.

So what this does is replace parts of the human experience that should not be replaced, and leaves parts intact that really should go, at least over time.

This should already be enough to make it evil. But what about it is fascist?

Other than the financing? Well, it's how it fits into politics.

A decade or so ago, some folks published a popular science book called "The Dictator's Handbook" (ISBN-13: 978-1610391849). While this gained some immediate notoriety, what fell by the wayside is that it's actually just the popular science *summary* of much deeper work, based on a thorough analysis of as many forms of government across the globe and history as the researchers could manage.

The picture that emerges is this: natural resources beget tyrannies; lack of natural resources causes democracy.

This is, of course, a summary of a summary, and shouldn't be taken without comment. But this here is also a social media thread, so I'll skip the fuller explanation and just give the short version.

No ruler exists without support, and support is essentially bought. This means that the question of who is in power largely relates to where they can raise money from, and how much they need to spend to raise more.

When there exist natural resources, the number of people needed to extract them is relatively low. You clearly need to pay those people well, as well as the military. The rest of the population is of lesser importance.

When you do not have natural resources, the only sensible source of income is taxation, for which you need a large population earning well, so that the percentage you skim off the top is enough to pay for essential support.

Lack of natural resources tends to produce service economies, which means the population also needs to be healthy, well fed, able to travel, and well educated.

When your population is well educated, it tends to want a say in how things are done, so spending on individual people or groups of people is significantly less effective than spending on the population at large.

The result is that democracies and service oriented economies go hand in hand, and support each other rather than work in opposition.

Marx would not have used the words "service economy", but would have said "labour". Both are synonyms for "people".

Now cryptocurrencies and AI have one thing in common, other than using insane amounts of resources.

There's an aside here that I sometimes find worth pointing out: "replacing people" doesn't necessarily mean firing people.

It may simply mean lowering their "worth" in salary negotiations, because you can use the threat of replacement by AI.

Sometimes chains of logic are as simple as "A because B", and sometimes there are several intermediary steps.

You can go a step further: even if YOUR job is not threatened by AI takeover, if the average salary drops (locally), you're also affected.