The LLM tech bros all talk about “frontier models”

and it strikes me that this use of settler colonialist language is surprisingly appropriate.

They’re enclosing the commons of our knowledge and our culture. They’re putting up barbed wire, so they can make us pay rent for everything.

@slothrop The language they use tells you exactly who they are.

I wish I had understood and been able to get more people to understand this when Homesteading the Noosphere was published.

@dalias @slothrop The problem is not AI but who owns it. I think intellectual property is an abomination anyway: everything immaterial that can be infinitely copied at marginal cost should be free for everybody to use, copy, and distribute however they like. All nonconfidential data should be in the public domain, freely available; all software should be free and open source; and all designs for material goods should be in the public domain, so anybody can build a copy of anything without having to obtain a license.

In fact, private property should not exist. Everything should belong to everybody: no rich or poor people anymore, no countries or borders, just a planet of eight billion equals. The very idea that anybody can own anything they can't carry around in a bag needs to die. The very Earth on which we live doesn't belong to any of us; we belong to her, we're part of her, just like all the other organisms since the very first cell.
The invention of property in the late Stone Age was a step in the wrong direction, and intellectual property is where it all becomes completely absurd: how can you own something that anybody can copy? The current AI hype is just a bunch of techbros trying to seize all the I.P. and make it theirs forever, in order to become the Cyberlords of Technofeudalism now that the end of Capitalism is already on the horizon.

Using their own laws to defend the I.P. of citizens against the corporations is not going to work, since I.P. has always been a tool of the capitalists. Instead, we need to dismantle even physical property: nobody gets to own anything but a few personal possessions, and everything else belongs to everybody. We need to restore the commons, abandon forced competition, and create cultures of cooperation and sharing where everything is free, both free as in speech and free as in beer, and where money and markets don't rule our lives anymore.
There aren't any moderate futures left, the only futures still open to us are radical.
@LordCaramac @slothrop I don't read textwalls, but no, the problem is not just who owns it. The problem is fundamental. "AI" is a machine for deception and destruction of knowledge. That's literally the optimization problem it's solving: how to rapidly generate content that humans are likely to misinterpret as the product of intelligence and creativity. I do not want a world where this is ok because the "good guys" are the ones doing it. I want anyone doing it to be perceived as the scum they are.
@dalias @slothrop AI is just a research field of computer science, the part where we try to get computers to solve complex problems that would otherwise require human intelligence. There has been a lot of progress lately in one small part of AI research, Machine Learning (ML), not because of any fundamentally new breakthroughs but because computers are finally powerful enough and the available data collections finally big enough.

All those huge proprietary models are quite impressive but not actually that useful, and there are much smaller, often more specialised, open source AI projects that anybody can run on a big PC or workstation at home or in the office, without the need for a computing centre. Even AI training doesn't necessarily need a big computing centre: it's possible to use systems like BOINC instead, where anybody who's interested in seeing a certain open source AI model finished could donate CPU and GPU cycles, running a tiny part of the training process on their own computer.

@dalias @slothrop What's currently marketed as "AI" is just a tiny fraction of what computer scientists and software engineers are doing with ML, and ML is just a tiny fraction of the entire research field of AI. There is also symbolic AI, which is human-readable right from the start, like systems designed for natural language processing using carefully built knowledge graphs instead of automatic data mining. There's the field of evolutionary algorithms, where programs are mutated and selected for how good they are at solving problems. And there's autonomous robotics. All that kind of stuff.
Generative AI, like diffusion-based models and GANs (e.g., image generators, 3D model generators, video generators, audio generators) or LLMs (chatbots etc.), can also be quite useful for many things. It's just that the sheer scale at which it's pushed into everything is completely bonkers, and that's only because a whole bunch of investors have sunk immense sums of money into it thinking they were going to get immensely rich, and now they're desperate to sell it.

We just need to take the big money out of AI research and take the AI models out of the hands of the rich. And we will have to live with the fact that it is now just as impossible to trust anything you haven't witnessed yourself as it was before the invention of photography and audio recording. I totally expected something like this to happen sooner or later, ever since I got my first multimedia upgrade for my 286 back in 1989. It used to be hard and take a lot of work and expertise to modify or fabricate natural-looking digital images, audio, and video; now it's all automated, and that technology isn't going to vanish as long as there are computers. Everything a large computing centre can do within seconds, a desktop PC can do within minutes to hours, just like back in the day my 286 could render a photorealistic 3D scene in a day or two. Nobody can squeeze the toothpaste back into the tube.

@LordCaramac @dalias You would just like to interject for a moment, wouldn't you?

Bye.