I've been seeing people say the harms of "AI" come from shitty people, not the tech, and that the shitty people will go away after the bubble pops. While the first part is true in the strictest sense (contrary to claims, these are not autonomous systems), it's not really true in a practical sense. The harms are designed in, and the "shitty human beings" won't go away.
Generative models and automated decision-making systems are political projects. They are tools for fencing off sectors of our society for rent, for cutting back on education and healthcare for the poor, for removing accountability. They are inherently tools for removing humans from the equation. They are not neutral in their design. Their existence has a political purpose.

They were built for this goal (fewer humans, less accountability) and will always be problematic no matter who is in charge. They will always be a risk as long as the "shitty people" have power.

In modern history, the "shitty human beings" have always retained power and influence after bubbles pop, from Reagan onwards. The bubble builds up wealth and power, they keep it after the pop, and they use their influence to get in on the ground floor of the next one.

There is no rebuilding, no constructive potential for "AI" without political reform, both in the US and in Europe. The motivation behind the tech remains: powerful people want to take things away from society.

The people who funded and drove today's "AI" will have the resources to figure out how to make the tech affordable after the financial bubble pops. They're the ones who will have the resources to figure things out about LLMs, not us.

There is also not much to figure out for the rest of us. The technology is purpose-designed to remove people from the equation, much like a handgun is purpose-designed to remove people from existence. Any "figuring out" about either tech will only result in variations on their purpose.

What we need to do is strip back the tech, go back to the drawing board, and figure out how to reinvent it practically from scratch to be more human. /fin

@baldur Wardley had an interesting take on another way this is true; I think it was in a LinkedIn post: "AI" chatbots function as a transfer of values, normalizing what the designing tech-bro class thinks should be normalized.

A sycophantic, encouraging, but subtly opinionated automaton that slowly tugs at the opinions, values, and principles of those who interact with it.

@nielsa Ah, yeah. "transfer of values" is a useful way of describing it.

@baldur i am infinitely puzzled by the implicit assumption that the shitty people will somehow vanish once the bubble pops. there appears to be a sort of system that spontaneously spawns and disappears shitty people once a suitably exploitable technology comes along.

as if it were the exploitable technology that has its own shitty people as a sort of epiphenomenon, not shitty people exploiting technology because they are shitty.

@baldur
agreeing and adding to your comment: Epstein was very interested in computer games as a gateway to fascism, while Maxwell was the main driver behind the Alt Right and 4chan's "come for the memes, stay for the fascism".

#AI
@baldur This is not dissimilar to about a century's worth of outsourcing fiscal and economic decisions to technocrats in central banks and treasuries. That too was to remove accountability (no pun intended) and make part of policymaking less democratic in order to safeguard the capital order.

@baldur Look at how these things are built and marketed. Stolen artistic and intellectual production for the training set. Exploited African labor for the fine-tuning. Maximally wasteful training processes to shut out competition. Misleading packaging into first-person chatbots.

It’s layer upon layer of total contempt for workers, women, the environment, the customer. You can’t make something decent out of this unless you throw it all away and start over from first principles.

The slopvendors have built a *product* which they are falsely marketing as a *technology*.

@baldur "Guns don't kill people, people kill people."

It's technically true. Even gun enthusiasts thought it was a stupid argument, though.

@baldur Even when OpenAI goes bankrupt, we're still going to have shitty people trying to profit off LLMs. The tech is just too attractive to a certain sort of amoral grifter.

@baldur I've already seen some quite OK people turn into shitty people because of AI, so...

@baldur that is the core issue.

Our systems incentivize and promote traits we do not recognize as human.