If AI/LLMs are cut off from stolen human labour, their outputs turn to meaningless gibberish due to model collapse.

If AI/LLM owners are forced to pay for human labour instead of stealing it, they go bankrupt.

If AI/LLM data centres are forced to stay within energy consumption levels compatible with fighting climate change, they have to close down the overwhelming majority of their capacity.

AI/LLMs are just not sustainable: they can only function through labour theft and burning the planet. They're not so much a new technology as a new type of Ponzi scheme.

@FediThing I don't think the TECHNOLOGY is the problem. The first time I heard of LLMs was about a decade ago, when they were used to help doctors detect early signs of cancer, and I think in situations like that they are perfectly fine.

As with many things the problem is capitalism.

@ErictheOrange

I think you might be discussing a different kind of AI? As far as I know LLMs are not used to detect cancer?

LLMs are just meant to simulate how language is used (https://en.wikipedia.org/wiki/Large_language_model).

But I take your point, the term "AI" covers a wide range of technologies, some of which have been around for a while and are legitimate. Unfortunately the bad stuff is totally dominating the term now.

Perhaps the term AI needs to be abandoned.


@FediThing I may be mistaken, this was a long time ago. But from what I remember, they trained the model on early scans of patients who were later confirmed to have cancer, then told it to find similarities in them, and then fed in new scans and asked whether it found any of those similarities.

That may not have been an LLM, but I seem to remember them calling it a large language model.
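(The workflow described here, train on labelled scans, find shared features, then classify new scans, is classic supervised pattern recognition rather than language modelling. A toy sketch with a nearest-centroid classifier on made-up numbers; real medical imaging systems use convolutional neural networks on actual scan data:)

```python
# Toy sketch of the supervised workflow described above: learn from scans
# labelled cancer/healthy, then classify new scans by similarity.
# All data here is synthetic and illustrative only.

def centroid(vectors):
    """Mean feature vector of a group of scans."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(labelled_scans):
    """labelled_scans: list of (features, label). Returns one centroid per label."""
    groups = {}
    for features, label in labelled_scans:
        groups.setdefault(label, []).append(features)
    return {label: centroid(vs) for label, vs in groups.items()}

def classify(model, features):
    """Assign the label whose centroid is nearest to this scan's features."""
    return min(model, key=lambda label: distance(model[label], features))

# Toy "scans": two numbers standing in for measurements taken from an image.
training = [
    ([0.9, 0.8], "cancer"), ([0.8, 0.9], "cancer"),
    ([0.1, 0.2], "healthy"), ([0.2, 0.1], "healthy"),
]
model = train(training)
print(classify(model, [0.85, 0.90]))  # lands near the "cancer" centroid
print(classify(model, [0.15, 0.10]))  # lands near the "healthy" centroid
```

(No text involved anywhere, which is the giveaway that it isn't a language model.)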

@ErictheOrange

That sounds like pattern recognition or something similar?