AI Models Get 'Brain Rot' From Social Media Training Data

New study reveals LLMs suffer cognitive decline when trained on viral social content

Will the popping of the so-called "AI" bubble have any long-term effects? Discuss.

The late-90s dotcom bubble failed to kill what we used to call the World Wide Web. Instead it kickstarted the transformation of the small-scale web into the monetized, financialized, gamified, pornified and enshittified behemoth we are all forced to use every single day. LLMs and generative so-called "AI" are the latest products of this ongoing process.

"AI"'s sole innovation is that it steals all it needs to generate it's gibberish when it should be paying hundreds of trillions to the owners and creators of all the worlds artistic endeavors. It legalizes theft, or rather legislators look the other way when LLMs admit to committing wholesale theft of artistic works.

When model collapse finally happens, or something else pops the bubble, the shit will really hit the fan. Vast numbers of businesses and individuals will discover that they spent huge sums of money on hot air and marketing hype. The resulting backlash will bankrupt all the "AI" peddlers overnight.

None of this will have a long-lasting effect on the global economy. The dotcom crash staggered the world economy, and the 2008 financial crisis dealt it a severe blow. So did the COVID pandemic. Wars in Ukraine and elsewhere caused many severe problems. The world economy carried on through all of them. By the standards of economists it is stronger and healthier than it has ever been.

It's this insane global economy that creates things like the "AI" bubble. The economy is a vast machine for extracting money from human endeavor and from natural resources of all kinds. Furthermore, it concentrates all that money in the hands of a vanishingly small number of people. The rest of humanity simply starves in freezing hovels.

The "AI" bubble is not the problem. To rewrite and probably ruin an old American campaign slogan, "Its the global, neo-liberal, finacialized, economy stupid".

#AI #LLMs #Economics #GlobalEconomy #AIBubble #EconomicBubble #ModelCollapse

"The co-degeneration thesis is not a prediction about distant futures. It describes dynamics already in motion, already documented in peer-reviewed research, already observable in the declining quality of online discourse and the increasing unreliability of AI systems that should, by simple scaling laws, only be improving.

The feedback loops are active. Engagement-optimized content degrades training data. Degraded models produce degraded outputs. Humans consuming and delegating to these systems experience cognitive effects that reduce their capacity to recognize and correct the degradation. The cycle continues.

But this is not a counsel of despair. The research also suggests intervention points. Model collapse can be prevented through data accumulation strategies that preserve genuine human content. Cognitive debt can be mitigated through usage protocols that maintain human engagement. Platform incentives can be restructured through regulation, competition, or user demand.

The question is whether institutional actors—corporations, governments, investors, educators—recognize the dynamics in time to intervene effectively, or whether they continue optimizing for metrics that accelerate the degradation."

https://substack.com/inbox/post/180851372?r=6p7b5o&utm_medium=ios&triedRedirect=true

#AI #GenerativeAI #Chatbots #LLMs #ModelCollapse

The Cognitive Collapse Thesis: How Polluted Information Loops Are Degrading Both Machine and Human Intelligence—And What It Means for Capital, Power, and Civilization

A Strategic Intelligence Assessment for Decision-Makers Navigating the Most Consequential Technological Inflection Since Electricity

Solution: a proposed fix for model collapse, combining an Evolving Prompt Architecture with an Expert in the Loop. #ModelCollapse #EvolvingPromptArchitecture #ExpertInTheLoop

https://github.com/jeanstef974/Prompt-evolutif.git

. @glitter mentioned a few days ago that AI-generated images are becoming more and more yellow as the LLMs are trained on the output of other LLM runs. #ModelCollapse #AI #LLMs
#HoloWrites 1200-odd words today! I'm finding it super difficult to write fake LLM output in a way that's engaging, funny, and obvious to the reader, but I think I'm getting there with the last chapter of #ModelCollapse. Shouldn't keep my audience of three waiting too long :D

I've read that LLMs and other generative models will eventually collapse if they are trained on their own output. I did a search and found, for example, this paper: https://www.nature.com/articles/s41586-024-07566-y . Shouldn't this problem affect humans as well? Humans "generate" books which other humans use to "train" themselves. Then these trained humans generate new books, and the cycle continues. What prevents the quality and diversity of human output from collapsing in the same way that LLM output collapses?

My guess is that there are indeed cases where the quality of human thought decreases over time; groupthink comes to mind. In science, experimental work helps keep theory grounded. Also, humans live in the real world, so they suffer if their internal world model diverges from the real world.
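The statistical effect in that Nature paper can be felt with a toy simulation. The sketch below is my own illustration, not code from the paper: each "generation" is a Gaussian model fitted only to a finite sample drawn from the previous generation's model. Every fit carries sampling error, and with no fresh data the errors compound: the mean drifts and the standard deviation decays toward zero, so the tails of the original distribution (the rare, diverse content) are lost first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the original "human" distribution, a standard normal.
mean, std = 0.0, 1.0
n = 25  # a small sample per generation makes the effect visible quickly

for gen in range(301):
    if gen % 50 == 0:
        print(f"generation {gen:3d}: mean = {mean:+.3f}, std = {std:.3f}")
    # Each new model sees only data sampled from the previous model...
    samples = rng.normal(mean, std, n)
    # ...and is fitted to that data (maximum-likelihood Gaussian fit).
    mean, std = samples.mean(), samples.std()
```

The exact numbers depend on the seed, but the trend doesn't: the variance shrinks generation after generation. The grounding point above is plausibly why humans escape this loop: experiments and lived experience keep injecting fresh samples from reality, restoring the variance that pure self-training destroys.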

#LLM
#modelCollapse
#machineLearning

AI models collapse when trained on recursively generated data - Nature

Analysis shows that indiscriminately training generative artificial intelligence on real and generated content, usually done by scraping data from the Internet, can lead to a collapse in the ability of the models to generate diverse, high-quality output.

Nature

In big news overnight, #Anthropic have made a major change to their user data retention and training policy - giving customers until September 28th to opt out, or have their chats, code sessions and other artefacts used for training for up to five years.

This is a major departure from their previous privacy-first stance.

But what's really behind this change? As Connie Loizos points out in this @Techcrunch article, it's all about the #data.

As I've spoken about recently, we've passed #PeakToken - the point in history where we have the maximum amount of authentic, human-generated data available. Now, the internet is polluted with synthetically-generated #AIslop. If you're an #AI company scraping the web for new data to train on, that's bad news, because you also scoop up the AI slop. If models are trained on AI slop, they're likely to encounter #ModelCollapse - like a bad photocopy.

Anthropic's play here is all about the #TokenCrisis - the voracious appetite for new, authentic, human-generated data to train on - part of a broader phenomenon I've termed the #TokenWars.

As new data becomes scarcer and more valuable, it will be more sought after and contested. We're still in the early days of the #TokenWars, and we should expect to see more moves like this to secure more data for AI training.
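To see why the fraction of authentic data matters so much, here is a toy sketch. It is my own illustration in the spirit of the Gaussian example from the Nature model-collapse paper, not anyone's actual training pipeline: each generation is fitted to a mix of fresh "human" samples and synthetic samples from the previous generation's model. With no fresh data the distribution collapses; even a modest share of authentic data anchors it.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(human_fraction, generations=300, n=25):
    """Fit each generation to a mix of fresh 'human' data, N(0, 1),
    and synthetic samples drawn from the previous generation's model."""
    mean, std = 0.0, 1.0
    n_human = round(n * human_fraction)
    for _ in range(generations):
        human = rng.normal(0.0, 1.0, n_human)           # fresh, authentic data
        synthetic = rng.normal(mean, std, n - n_human)  # scraped model output
        data = np.concatenate([human, synthetic])
        mean, std = data.mean(), data.std()
    return mean, std

for h in (0.0, 0.2, 0.5):
    mean, std = simulate(h)
    print(f"human fraction {h:.0%}: mean = {mean:+.3f}, std = {std:.3f}")
```

If that toy picture is even roughly right, the economics follow: once the open web is mostly synthetic, the scarce input is the authentic fraction, which is exactly what a move like Anthropic's is reaching for.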

https://techcrunch.com/2025/08/28/anthropic-users-face-a-new-choice-opt-out-or-share-your-data-for-ai-training/

Anthropic users face a new choice – opt out or share your chats for AI training | TechCrunch

Anthropic is making some major changes to how it handles user data. Users have until September 28 to take action.

TechCrunch

#ModelCollapse is not inevitable, but together we can make it happen    

#WHY2025 #Kugelmugel #TetrapodCult #Tetrapod

"Für den eGovernment Podcast von Torsten Frenzel habe ich einige Begriffe rund um #AI #KI erklärt, beispielsweise warum mehrere aktuelle Studien u.a. von #goldmansachs vor #PeakAI warnen, was #slop, #autophagy, #enshitification und #modelcollapse sind und warum wir dem gerade zuschauen. Abermilliarden werden da gerade investiert, Zerstörung des Klimas inbegriffen (aber das war hier gar nicht Thema). Ansonsten ging's ums Zentrum Digitale Souveränität (ZenDiS)...."
https://www.linkedin.com/posts/markusfeilner_monatsschau-0824-activity-7235961839652601856-Log9
Für den eGovernment Podcast von Torsten Frenzel habe ich einige Begriffe rund um #AI #KI erklärt, beispielsweise warum mehrere aktuelle Studien u.a. | Markus Feilner

Für den eGovernment Podcast von Torsten Frenzel habe ich einige Begriffe rund um #AI #KI erklärt, beispielsweise warum mehrere aktuelle Studien u.a. von #goldmansachs vor #PeakAI warnen, was #slop, #autophagy, #enshitification und #modelcollapse sind und warum wir dem gerade zuschauen. Abermilliarden werden da gerade investiert, Zerstörung des Klimas inbegriffen (aber das war hier gar nicht Thema). Ansonsten ging's ums Zentrum Digitale Souveränität (ZenDiS) und ein Kudo für den scheidenden Andreas Reckert-Lodde, #opendesk und B1 Systems und mehr. Anhören! https://lnkd.in/duCC8xWz