If AI/LLMs are cut off from stolen human labour, their outputs turn to meaningless gibberish due to model collapse.

If AI/LLM owners are forced to pay for human labour instead of stealing it, they go bankrupt.

If AI/LLM data centres are forced to stay within energy consumption levels compatible with fighting climate change, they have to close down the overwhelming majority of their capacity.

AI/LLMs are just not sustainable; they can only function through labour theft and burning the planet. They're not so much a new technology as a new type of Ponzi scheme.

@FediThing very well spoken! So what? Who cares? Who's in charge?
@FediThing unfortunately, before anything stops them, they will already have destroyed the environment and many lives with it...

@FediThing There are three things that have been put forward as ways to stop the AI onslaught.

1. Technology: Poisoning tools, tarpits, and access denial to training data. These are good ideas and throw up short-term roadblocks but are doomed to fail.

2. Law: There are a few pending cases (such as the one involving the NYT and another involving Anthropic) that have the potential to do serious damage...but in a global legal environment this is a tough sell.

(1/2)

@FediThing
3. Economics: The market wheels grind slow, but they grind mighty fine. Investors are starting to get impatient, these companies are burning cash at a prodigious rate, and none of them have figured out how to be operationally profitable.

I know which of these three I'm betting (and hoping) on.

(2/2)

@elengale @FediThing The Big Short guy weighing in recently has to be on the minds of at least some investors. Same with Thiel divesting.
@FediThing Yup. It's just good old colonialism turned on ourselves through our data.

@FediThing

It's worth pointing out that this has pretty much been the capitalist model from the beginning. Capitalist systems famously can't launch without a massive act of larceny.

@RustyRing

Some would argue that AI is just the tip of the capitalist iceberg, others that it's simply today's crown of capitalist creation, the logical next step in its evolution.

Capitalism, the right of the strongest to exploit, has always lived off exploiting the commons and breaking "things": common agreements never signed into written law.

@FediThing

@FediThing If you substitute “human” for “AI/LLM” in that text, it’s still true.
@adrianco @FediThing Humans can only function through theft and destroying the planet? Really?

@adrianco

No, it's not.

Humans can come up with new ideas on their own; they aren't dependent on just recycling other people's ideas.

Where would new ideas come from in the first place if humans were just recycling them?

AI/LLM comes up with zero; it has no intelligence at all. It's just looking at what humans say and spitting it back at humans based on statistical probability. That's why its answers are contradictory and make no sense, and why it is totally dependent on being fed ever larger amounts of human data.
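The "statistical probability" point can be made concrete with a toy sketch. This is a deliberately minimal bigram model, nothing remotely like a production LLM, and the corpus is invented for illustration: it can only ever recombine words it has already been fed.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for "what humans say" (invented for illustration).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram frequency table.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Pick the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    if not counts:
        return None  # dead end: this word was never seen with a successor
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# "Generate" text by repeated sampling: pure recombination of the corpus.
out = ["the"]
for _ in range(5):
    nxt = next_word(out[-1])
    if nxt is None:
        break
    out.append(nxt)
print(" ".join(out))
```

Every word it emits was already in its training data; it can never produce a word (or idea) it wasn't fed, only statistically plausible reshuffles.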

@FediThing If humans are cut off from education, their output turns into meaningless gibberish.
If humans can't exploit labor they go out of business. Humans use too many resources and are killing the planet.
@FediThing Give me an example of a new idea that a human had, that isn’t a product of cross domain synthesis, random guessing, adjacency or other emergent properties that LLMs already do.
@FediThing I ran the innovation program at eBay research labs in 2005 and have spent a lot of time trying to help companies innovate, and the sources of innovation aren't that special in my experience. Most companies systematically shut down innovation, but the new ideas themselves are easy to generate.

@FediThing I agree with a lot of this.

But I don't think you understand what a Ponzi scheme is.

@FediThing @emaytch they're like the distilled essence of every hazard of capitalism

@FediThing I don't think the TECHNOLOGY is the problem. The first time I heard of LLMs was about a decade ago, when they were used to help doctors detect early signs of cancer, and I think in situations like that they are perfectly fine.

As with many things the problem is capitalism.

@ErictheOrange

I think you might be discussing a different kind of AI? As far as I know LLMs are not used to detect cancer?

LLMs are just meant to simulate how language is used (https://en.wikipedia.org/wiki/Large_language_model).

But I take your point, the term "AI" covers a wide range of technologies some of which have been around a while and are legitimate. Unfortunately the bad stuff is totally dominating the term now.

Perhaps the term AI needs to be abandoned.


@FediThing I may be mistaken, this was a long time ago. But from what I remember, they trained the model on early scans of patients who were later confirmed to have cancer, then told it to find similarities in them, and then fed new scans in and asked whether it found any of those similarities.

That may not be an LLM but I seem to remember them calling it a large language model.

@ErictheOrange

That sounds like pattern recognition or something similar?

@FediThing and from the Empire of AI book- it's a new form of colonialism ☹️

@patrickleavy

Yup, it definitely feels related. Everything is easier when you're allowed to steal other people's work.