The LLM tech bros all talk about “frontier models”

and it strikes me that this use of settler colonialist language is surprisingly appropriate.

They’re enclosing the commons of our knowledge and our culture. They’re putting up barbed wire, so they can make us pay rent for everything.

@slothrop We mustn't let them!

@slothrop The language they use tells you exactly who they are.

I wish I had understood and been able to get more people to understand this when Homesteading the Noosphere was published.

@dalias @slothrop The problem is not AI but who owns it. I think intellectual property is an abomination anyway: everything immaterial that can be infinitely copied at negligible marginal cost needs to be free for everybody to use, copy, and distribute however they like. All nonconfidential data should be in the public domain, freely available; all software should be free and open source; all designs for material goods should be in the public domain so anybody can build a copy of anything without having to obtain a license. In fact, private property should not exist. Everything should belong to everybody: no rich or poor people anymore, no countries or borders, just a planet of eight billion equals. The very idea that anybody can own anything they can't carry around in a bag needs to die. The very Earth on which we live doesn't belong to any of us; we belong to her, we're part of her, just like all the other organisms since the very first cell.
The invention of property in the late Stone Age was a step in the wrong direction, and intellectual property is where it all becomes completely absurd: how can you own something that anybody can copy? The current AI hype is just a bunch of techbros trying to seize all the I.P. and make it theirs forever in order to become the Cyberlords of Technofeudalism, since the end of Capitalism is already on the horizon. Using their own laws to defend the I.P. of citizens against the corporations is not going to work, since I.P. has always been a tool of the capitalists. Instead, we need to dismantle even physical property: nobody gets to own anything but a few personal possessions, and everything else belongs to everybody. We need to restore the commons, abandon forced competition, and create cultures of cooperation and sharing where everything is free, both free as in speech and free as in beer, and where money and markets don't rule our lives anymore.
There aren't any moderate futures left; the only futures still open to us are radical.
@LordCaramac @slothrop I don't read textwalls, but no, the problem is not just who owns it. The problem is fundamental. "AI" is a machine for deception and the destruction of knowledge. That's literally the precise optimization problem it's solving: how to rapidly generate content that humans are likely to misinterpret as the product of intelligence and creativity. I do not want a world where this is ok because the "good guys" are the ones doing it. I want anyone doing it to be perceived as the scum they are.
@dalias @slothrop AI is just a research field of computer science, the part where we try to use computers to solve complex problems that normally require human intelligence. There has been a lot of progress lately in one small part of AI research, Machine Learning (ML), not because of any fundamentally new breakthroughs but because computers are finally powerful enough and the available data collections finally big enough. All those huge proprietary models are quite impressive but actually not that useful, whereas there are much smaller, often more specialised, open source AI projects that anybody can run on a big PC or workstation at home or in the office without needing a computing centre. Even AI training doesn't necessarily need a big computing centre: it's possible to use systems like BOINC instead, where anybody who's interested in seeing a certain open source AI model finished could just donate CPU and GPU cycles, running a tiny part of the training process on their own computer.
@LordCaramac @slothrop At best you're an obnoxious mansplainer, and likely you're copy&pasting slop emitted from the stuff you're simping for. Not reading that. Bye.

@dalias @slothrop What's currently marketed as "AI" is just a tiny fraction of what computer scientists and software engineers are doing with ML, and ML is just a tiny fraction of the entire research field of AI. There is also symbolic AI, which is human-readable right from the start, like systems designed for natural language processing using carefully built knowledge graphs instead of automatic data mining. There's the field of evolutionary algorithms, where programs are mutated and selected for how well they solve a problem. Also autonomous robotics. All that kind of stuff.
Generative AI, like diffusion-based models and GANs (e.g., image generators, 3D model generators, video generators, audio generators) or LLMs (chatbots etc.), can also be quite useful for many things. It's just that the sheer scale at which it's being pushed into everything is completely bonkers, and that's only because a whole bunch of investors have sunk immense sums of money into it thinking they were going to get immensely rich, and now they're desperate to sell it.

We just need to take the big money out of AI research and take the AI models out of the hands of the rich. And we will have to live with the fact that it is once again just as impossible to trust anything you haven't witnessed yourself as it was before the invention of photography and audio recording. I've expected something like that to happen sooner or later ever since I got my first multimedia upgrade for my 286 back in 1989. It used to be hard and take a lot of work and expertise to modify or fabricate natural-looking digital images, audio, and video; now it's all automated, and that technology isn't going to vanish as long as there are computers. Everything a large computing centre can do within seconds, a desktop PC can do within minutes to hours, just like back in the day my 286 could render a photorealistic 3D scene in a day or two. Nobody can squeeze toothpaste back into the tube.

@LordCaramac @dalias You would just like to interject for a moment, wouldn't you?

Bye.

@LordCaramac @dalias I'm going to need a very good reason to read anything longer than 500 chars, and you're not giving me one.
@slothrop I agree about enclosure, same with the walled gardens and app stores. The internet should be free to publish, free to consume, free to share. It is absolutely a digital commons and we should protect it. I see AI as a way to poison the well: anything they can't control in a walled garden, they pollute to the point of not being usable. It's really gross and also quite scary, but I am optimistic that there are ways to circumvent it.
@tiny_m @slothrop Ironically, the tool might end up being more AI: recognizers keyed to identify "AI-like" output to act as filters.
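A minimal sketch of what such a recognizer-as-filter could look like, assuming you already have labelled examples of human and machine text (the classifier, the toy corpus, and the `keep()` helper below are hypothetical illustrations, not an existing tool, and real-world detection is famously unreliable):

```python
# Toy "AI-likeness" filter: a text classifier trained on labelled examples,
# then used to score incoming posts. Illustrative sketch only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in corpus; a real filter would need large, well-labelled samples.
texts = [
    "delve into the rich tapestry of synergistic opportunities",
    "in today's fast-paced world, it is important to note that",
    "anyway the bus was late again so i missed the gig, furious",
    "my cat knocked the router off the shelf mid-upload, classic",
]
labels = [1, 1, 0, 0]  # 1 = reads machine-generated, 0 = reads human

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

def keep(post: str, threshold: float = 0.8) -> bool:
    """Let a post through unless the model scores it as very likely machine-made."""
    return detector.predict_proba([post])[0][1] < threshold
```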
@mark @slothrop Perhaps. I also think there is and will be a greater desire for things which are genuinely human-made, or which only a human can create. Perhaps a rise in people wanting more real-world interaction, or things which you can only experience in a way that isn't automated. I don't know. It's hard to be optimistic, but I think it's also important to imagine positive narratives, and part of the way out of the dystopia is thinking of ways to make the world better.
@tiny_m @slothrop Agreed. I suspect we're going to see a new version of the wood-paneling fad from the '70s: "We could have made this with a fully-automated process, but this one was hand-carved by a chainsaw artist in Smalltown, Iowa..."

@slothrop Not just extract rent: They're building slot machines where you pay by the pull:

https://social.bau-ha.us/@raganwald/114710551392685418

Reg Braithwaite 🍓 (@raganwald@social.bau-ha.us)

Shot: Everything about “The Man Who Killed Google Search” is fantastic except for the title. Under blamelessness, we recognize that given the systemic incentives and affordances, if it wasn’t Prabhakar Raghavan, it would have been someone else: https://www.wheresyoured.at/the-men-who-killed-google/ Given Google and Capitalism the enshittification of search to incrementally goose revenues was inevitable. 👇🏽

@slothrop How does one put up barbed wire around the primary source training material?
Metaphor - Wikipedia

@slothrop Sure, but I don't get the metaphor. Settlers took the commons and parceled it up for exclusive use; "You can't use these woods, they're my woods now." How does one do that with LLMs when the primary data is still there, reachable the way they reached it? The fact that Anthropic read a bunch of books didn't make the books disappear, nor did image generators being trained on DeviantArt's contents make DeviantArt go away.

I don't get what the fence represents here.

(Sidebar: wow, but how timely and appropriate is that image Wikipedia has right now to visualize "metaphor?" Democratic party can't even catch a break historically!)

@slothrop

Accurate and insightful.

Clearing the commons.

@slothrop > Space: the final frontier. These are the voyages of the starship Enterprise.

Star Trek brainwashed nerds into being colonizers
@slothrop they shoot the buffalo and enslave the natives who are already there.
@slothrop That's the techbro way, carve up the commons and sell it back to the people at a price.
@slothrop someone wrote an entire book with this as the premise; lately they've gone on every podcast to promote it. so i think you got scooped

@slothrop

Agreed — which is why we need to actively resist the enclosure. Here's a solid example of how to do that: https://tldr.nettime.org/@asrg/114742667183459482 via @asrg

ASRG (@asrg@tldr.nettime.org)

**"Trapping AI" – Slight Update!** 🌀 Activity in the **"Trapping AI"** project is accelerating: in just under a month, over **26 million requests** have hit our tarpit URLs 🕳️. Vast volumes of meaningless content were devoured by AI crawlers — ruthless digital leeches that relentlessly scour and pillage the web, leaving no data untouched.

In the coming days, we’ll roll out a new layer of complexity — amplifying both the *intensity* and *offensiveness* of our approach. This escalation builds on `fakejpeg`, a tool developed by @pengfold@social.ty-penguin.org.uk. 🖼️ `fakejpeg` generates fake JPEGs on the fly. You "train" it with a collection of existing JPEGs, and once trained, it can produce an arbitrary number of things that *look* like real JPEGs — perfect for feeding aggressive web crawlers junk 🗑️.

Explore `fakejpeg`: [https://github.com/gw1urf/fakejpeg](https://github.com/gw1urf/fakejpeg)
Learn more about *"Trapping AI"*: [https://algorithmic-sabotage.github.io/asrg/trapping-ai/#expanding-the-offensiveness](https://algorithmic-sabotage.github.io/asrg/trapping-ai/#expanding-the-offensiveness)
See the tarpit in action: [https://content.asrg.site/](https://content.asrg.site/)

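I haven't read fakejpeg's source, but going by the description above, the general idea could be sketched roughly like this (a toy illustration under my own assumptions, not the actual tool; all names below are hypothetical): harvest the headers of real JPEGs at "training" time, then splice them onto random scan data whenever a crawler asks for an image.

```python
# Toy sketch of the concept described above, not the real fakejpeg:
# collect headers from genuine JPEGs, then emit arbitrary numbers of
# files that superficially look like JPEGs but contain only junk.
import os
import random

SOI = b"\xff\xd8"  # JPEG start-of-image marker
SOS = b"\xff\xda"  # start-of-scan marker
EOI = b"\xff\xd9"  # end-of-image marker

def train(paths):
    """Keep everything up to the scan data from each real JPEG."""
    headers = []
    for path in paths:
        with open(path, "rb") as f:
            data = f.read()
        cut = data.find(SOS)
        if data.startswith(SOI) and cut != -1:
            headers.append(data[:cut])
    return headers

def generate(headers, size=50_000):
    """Stitch a harvested header onto random bytes and close the file properly."""
    return random.choice(headers) + SOS + os.urandom(size) + EOI

if __name__ == "__main__":
    model = train([f for f in os.listdir(".") if f.lower().endswith(".jpg")])
    if model:
        with open("decoy.jpg", "wb") as out:
            out.write(generate(model))
```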
@slothrop what do they even mean by words any more?! I was witness to a couple tech execs blithering on Twitter about how such and so new ChatGPT thingummy was going to be "agentic". WHAT DOES THAT MEAN IS THAT A WORD ~Chara

@kris_of_pnictogen @slothrop Oh wow. Yeah, taught me a new word today.

Agentic as in "agent." This is describing AI that will be able to do its own planning and execution, whereas the current generation just responds to input. So it comes up with its own questions, creates its own sub-goals, and acts on them.

Oh good. CEOs are trying to automate VPs. Nobody tell them that a CEO is just a fancy kind of VP, yeah?
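For what it's worth, the loop people usually mean by "agentic" can be sketched in a few lines; `call_llm` and `run_tool` below are hypothetical placeholders, not any particular vendor's API:

```python
# Minimal sketch of an "agentic" loop: the model plans sub-goals, picks
# actions, sees the results, and decides what to do next, instead of just
# answering a single prompt. Placeholders only, no real model behind it.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real model call")

def run_tool(action: str) -> str:
    raise NotImplementedError("stand-in for executing a step in the world")

def agent(goal: str, max_steps: int = 5) -> list[str]:
    """Plan, act, observe, repeat until the model says it is done."""
    history: list[str] = []
    plan = call_llm(f"Break this goal into sub-goals: {goal}")
    for _ in range(max_steps):
        action = call_llm(
            f"Goal: {goal}\nPlan: {plan}\nDone so far: {history}\n"
            "What single action comes next? Reply DONE if finished."
        )
        if action.strip() == "DONE":
            break
        history.append(f"{action} -> {run_tool(action)}")
    return history
```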

@mark @slothrop ahh okay that makes a little sense anyway. I find myself thinking that there must surely be a better, more sensible word for this sort of thing but...I don't suppose it really matters now ~Chara

@slothrop

Absolutely. "Frontier" means 'the place where no rule or force can stop us'. That's all that word has ever meant.