I have some mixed feelings on the commons, LLMs, ownership and economics. Would love some input.

I find this hard to navigate so I hope you all can extend me some grace if I mess up. Happy to read and engage, please send links. So... here goes:

I'm seeing a lot of reactions to LLM value extraction that stand on copyright, or where people are reducing their contribution to the commons as a response. This feels like throwing the game to me: the worst move in a hard situation.

#noAI #ai #llm

I think people should be compensated for their work, in this capitalist world where compensation means survival.

I also think knowledge, culture and technology should be freely available, and gatekeeping any of them feels fundamentally anti-human.

This schism already existed before the current wave of LLMs extracting information in new ways from the commons, but it is certainly worse now.

I've seen people react to the value extraction by refusing to open source their code or widely share their art, because it—to them—feels like producing fodder for the extraction machine, more than anything else.

And, like... I can empathize with that feeling. But I don't think it's true...

First, corporations were already extracting undue value from the commons, and ascribing some sort of special status to LLMs doing it feels like buying in to AI exceptionalism.

LLMs are perhaps a bit more efficient than previous technology, but the game we are playing is the same. The value of investing in a knowledge commons is not diminished by people exploiting it. We are building something stronger than siloed capitalistic corporations.

An LLM or tool accessing the commons does not make it less available to real people who could benefit from it.

The whole idea is to build a shared pool of knowledge and technology that makes it easier to construct decentralized dual power structures, ready for when the over-leveraged hypercapitalist institutions that currently hold the reins of the world start failing.

At least that's how I see it.

Second, what's the danger? LLMs will never be able to competently adapt what you have built. Their ability to produce art or code that looks right at a glance may improve marginally, but the output will not feel human-made.

And I am certain that we will see the cultural feedback loop play out in favor of things that feel genuine, that have a personal touch or vision.

Honestly, if we don't, we're kinda fucked either way.

As I see it, the real harm happening is a short term diversion of resources from already struggling artists to LLMs and other machine learning sloppily imitating their work. That harm is real.

Unfortunately, I don't really think we have any levers for this short term. And on a human level that's truly upsetting.

But it leaves me with this: we have to weather the storm unaffected by LLMs, continuing to share. Easier said than done, sure.

Am I way off base? Am I missing things?

My suggestions for constructive short-term work would be: encouraging support of real human artists and developers, and making it socially fraught to use extractive technologies as long as they benefit a few corporations... maybe there are ways to better pool resources?

Long term feels more important: prepare for the economic and social collapse of LLMs, and be ready to welcome people to something better.

And to be clear: a strong commons, kept as alive as possible by those who can still contribute, is a big part of that long term goal. It *is* the alternative to slop we can offer.

@nielsa
> people are reducing their contribution to the commons as a response

Because the Commons is no more.

What you contribute with the intent that it be in the commons is instantaneously grabbed by the #siliconiac monster, to be rehashed and served as pulp to the serfs. You can keep none of the digraphs of the CC license now. Not even BY (Attribution), as your contribution to the "Commons" becomes an oligarch's asset the second it joins the heap of stolen work they own.

Soon #darknet will be the only place where your work could technically be recognized as yours.
#noai

@ohir I cover my take on this in replies to the thread I posted after you wrote this reply. Would love your input on where you think I'm wrong.

Your position is exactly what I'm responding to. I empathize with it, but I don't think it is constructive, and I think it veers into AI exceptionalism... would love to flesh out my thoughts, so please tell me where/if you think I'm wrong.

#noAI

@nielsa
> I think it veers into AI exceptionalism
What do you mean by "exceptionalism"?

Because to me this reads like the "I" in that sentence was used for "Intelligence", while it should read "Illusion".

Nope. #siliconiac is not intelligent. It mimics being intelligent. And it does this emulation seemingly well – it sounds so knowledgeable and authoritative.

For all, that is, but the rare recipient who happens to have basic knowledge of the field the output babble is about. Basic knowledge allows one to spot that one in ten sentences is a lie. Expert knowledge of the field allows us to spot the distortions and misrepresentations sprinkled over the rest.

What is really exceptional in current takeover is how many human beings are ready and willing to have their brains sucked out.

Now make a good use of #siliconiac translating services and read
https://en.wikipedia.org/wiki/Limes_inferior


@ohir I don't really understand your tone and the level of fervor you're coming at me with here... if you read anything else I write on LLMs you'll quickly see I agree with you on those points, e.g.
https://mas.to/@nielsa/116155256730780974
and https://mas.to/@nielsa/116205874563036815

...but you are not at all engaging with what I'm actually writing in the thread you are responding to?

From the first linked post (Niels Abildgaard, @[email protected]):

> Every time I try to replicate or create useful cases for LLMs in technical areas I understand (including code/software development, corroborating various claims, summarizing text, etc.) I have such a terrible experience convincing it to actually answer what I ask, without hallucinating or letting its clear training biases shine through. I have yet to see anyone's shared "useful output" (non-trivial tasks) without the same flaws. I'm not surprised, but confused: people keep claiming it works??

@nielsa My position is, roughly:

  • copyright has always been a dodge and a smudge, a truce of sorts, because nobody creates anything ex nihilo and disentangling the web of influence and cross-pollination is impossible, but people gotta get paid
  • as a legal construct, copyright may or may not protect any given form or piece of art, but most people aren't talking about the legal construct, but the terms of the above truce
  • ingestion is usually what's mentioned WRT copyright (see the Anthropic book settlement), but reproduction is the legal problem, and the AI companies have spent a lot of time suppressing regurgitation of the training data (to some effect but not complete elimination)
  • I think the moral case is easy to make and easy to understand, but the legal one is a minefield, and the real solution is some more explicit legislation and/or regulations
@nielsa (all that said, I'm quite sympathetic to just outright banning image and video generation models entirely - they have absolutely no use case that's worth the utter shitpile they cause)
@delta_vee I think I broadly agree with your takes on all of these things. I always find legal solutions hard to argue about meaningfully, because I don't really respect the current global legal system. I don't think legislation will ever be the final solution - I think we should be preparing as much as we can for more structural change, rather than relying on existing (e.g. legal) structures.

@delta_vee Do you have any thoughts on my take on strategy? Concretely, that self-sabotaging the commons because it is exploited by LLM-creators ends up doing more long-term harm to the anti-slop "movement"? (idk which words to use to describe this exactly)

Tried to summarize my main point: https://mas.to/@nielsa/116237619916131781

@nielsa Depends quite a lot on what you count as "sabotage" -- I think trying to squirrel information away is doomed, but adding more gatekeepers isn't necessarily terrible. Encouraging more highly-curated corners is probably the best bet long-term; I'm not sure there's much hope for generalist or especially universalist knowledge commons. At least, not until well after this bubble pops, and the incentives change dramatically.
@nielsa I think the structural change is going to have to be to some extent within the structures of the legal system, because for all its faults, that's ultimately the mechanism we have that's anything close to both a) democratically (ish) controlled, and b) plausibly effective at restraining corporate machinery