I have some mixed feelings on the commons, LLMs, ownership and economics. Would love some input.

I find this hard to navigate so I hope you all can extend me some grace if I mess up. Happy to read and engage, please send links. So... here goes:

I'm seeing a lot of reactions to LLM value extraction that stand on copyright, or where people are reducing their contribution to the commons as a response. This feels like throwing the game to me: the worst move in a hard situation.

#noAI #ai #llm

I think people should be compensated for their work, in this capitalist world where compensation means survival.

I also think knowledge, culture and technology should be freely available, and gatekeeping any of them feels fundamentally anti-human.

This schism already existed before the current wave of LLMs extracting information in new ways from the commons, but it is certainly worse now.

I've seen people react to the value extraction by refusing to open source their code or widely share their art, because it—to them—feels like producing fodder for the extraction machine, more than anything else.

And, like... I can empathize with that feeling. But I don't think it's true...

First, corporations were already extracting undue value from the commons, and ascribing some sort of special status to LLMs doing it feels like buying in to AI exceptionalism.

LLMs are perhaps a bit more efficient than previous technology, but the game we are playing is the same. The value of investing in a knowledge commons is not diminished by people exploiting it. We are building something stronger than siloed capitalistic corporations.

An LLM or tool accessing the commons does not make it less available to real people who could benefit from it.

The whole idea is to build a shared pool of knowledge and technology that makes decentralized construction of dual power structures easier, so they're ready when the over-leveraged hypercapitalistic institutions that currently hold the reins to the world start failing.

At least that's how I see it.

Second, what's the danger? LLMs will never be able to competently adapt what you have built. Their ability to produce art or code that looks right at a glance may improve marginally, but the result will not feel human-made.

And I am certain that we will see the cultural feedback loop play out in favor of things that feel genuine, that have a personal touch or vision.

Honestly, if we don't, we're kinda fucked either way.

As I see it, the real harm happening is a short-term diversion of resources from already struggling artists to LLMs and other machine-learning systems sloppily imitating their work. That harm is real.

Unfortunately, I don't really think we have any levers for this short term. And on a human level that's truly upsetting.

But it leaves me with this: we have to weather the storm unaffected by LLMs, continuing to share. Easier said than done, sure.

Am I way off base? Am I missing things?

My suggestions for constructive short-term work would be encouraging support of real human artists and developers, and making it socially fraught to use extractive technologies as long as they mostly benefit a few corporations... maybe there are also ways to better pool resources?

Long term feels more important: prepare for the economic and social collapse of LLMs, and be ready to welcome people to something better.

And to be clear: a strong commons, kept as alive as possible by those who can still contribute, is a big part of that long term goal. It *is* the alternative to slop we can offer.