As predicted, ML models suffer irreversible damage when you train them on generated data, a phenomenon these researchers are calling "model collapse". Uncurated datasets are effectively poisonous.

Who, apart from anyone who thought about it for a few seconds, could have predicted this.

https://arxiv.org/abs/2305.17493v2

The Curse of Recursion: Training on Generated Data Makes Models Forget

Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.
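The core mechanism the abstract describes, tails of the distribution disappearing under recursive training on generated data, can be sketched with a toy experiment. This is my own illustration, not the paper's code: a "model" that simply memorises the empirical distribution of its training set and generates by resampling from it. Rare items vanish first and can never come back, so the distribution collapses generation by generation.

```python
import random

def resample(data, n):
    """'Train' a toy generative model by memorising the empirical
    distribution of data, then 'generate' n samples from it
    (i.e. bootstrap resampling with replacement)."""
    return [random.choice(data) for _ in range(n)]

random.seed(0)
# Generation 0: 1000 distinct "human-written" items, each appearing once.
data = list(range(1000))

distinct_counts = [len(set(data))]
for generation in range(50):
    # Each new generation is trained only on the previous generation's output.
    data = resample(data, len(data))
    distinct_counts.append(len(set(data)))

# The distinct count can only shrink: anything a generation fails to
# sample is gone from every later generation. The rare tail goes first.
print("distinct items per generation:", distinct_counts)
```

Under these assumptions roughly a third of the remaining distinct items disappear in each generation early on, which is the feedback-loop version of sampling error: the model can only ever reproduce what made it through the previous round.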


@mhoye

It's the oldest rule in computer science -- "Garbage in, garbage out" -- implemented as a feedback loop.

And yes, this was easy to predict as soon as LLMs became generally available. A coworker of mine pointed out that the Google search corpus from late 2022 has become a crucially important resource, as the last record of an infosphere not polluted by LLM output. There's a rather disturbing analogy here to the high market value of scrap steel from before 1945.

https://en.wikipedia.org/wiki/Low-background_steel


@isomeme I understand the idea, but I think it's pretty optimistic to believe the '22 Google corpus is somehow usefully pure. SEO content farms have been around for almost as long as search engines themselves.

@mhoye

Of course. Still, I think LLMs are already proving to be a vastly more damaging kind of pollution.