As predicted, ML models suffer irreversible damage when you train them on generated data, a phenomenon these researchers are calling "model collapse". Uncurated datasets are effectively poisonous.

Who, apart from anyone who thought about it for a few seconds, could have predicted.

https://arxiv.org/abs/2305.17493v2

The Curse of Recursion: Training on Generated Data Makes Models Forget

Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, data collected from genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.
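
To make the abstract's claim concrete, here's a minimal toy sketch (mine, not the paper's code) of the single-Gaussian intuition: each generation refits a model on samples generated by the previous one, and with a finite sample the fitted variance drifts and shrinks, so the original tails disappear. Sample size and generation count are arbitrary choices.

```python
# Toy illustration of model collapse (not the paper's code):
# repeatedly fit a 1-D Gaussian to samples drawn from the previous
# generation's fitted model. With finite samples the fitted variance
# drifts downward, so the tails of the original distribution vanish.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0      # generation 0: the "real" data distribution
n = 100                   # finite sample size per generation (arbitrary)

for gen in range(1, 51):
    samples = rng.normal(mu, sigma, n)            # generate with current model
    mu, sigma = samples.mean(), samples.std()     # refit on generated data only
    if gen % 10 == 0:
        tail = np.mean(np.abs(samples) > 2.0)     # mass beyond the original 2-sigma
        print(f"gen {gen:2d}  mu={mu:+.3f}  sigma={sigma:.3f}  tail>2sigma={tail:.3f}")
```

Same mechanism the paper reports for VAEs, GMMs and LLMs: every generation re-learns from a finite, slightly narrowed sample of the last one, and the rare events go first.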

@mhoye @futurebird Doesn't this assume new models are going to continue using indiscriminately scraped training data, like most of the current generation? For a lot of (technical, ethical, legal) reasons imo that trick was only ever going to work once.
@n1ckfg @mhoye @futurebird what other options are there if the goal is cranking cost "efficiency"?

@kunev @futurebird @n1ckfg For a couple of decades there was a very valuable market for pre-WW2 salvaged shipwreck metal, to use in high-sensitivity radiological instruments; surface metals had been poisoned by nuclear testing, so untainted material was suddenly rare, expensive and necessary.

This is what we're going to see with training data sets; the highest-value work will be in exceedingly careful, tedious curation for date-range authentic, spam-filtered, human-sourced raw material.

@mhoye @kunev @futurebird @n1ckfg And I guess this will be called "Handcrafted LLMs" and sold at a premium by companies run by tattooed dudes with a ponytail. 😉

@random_musings @kunev @futurebird @n1ckfg

"Young people trying to build a meaningful future out of the world we've given them" aren't a class of people I'm inclined to condescend to, and computing as a field could stand to have a lot more curated craft and opinionated artisanalism in it than it does now.

@mhoye @kunev @futurebird @n1ckfg I agree, but simply can't see that happening with training data sets.

This is extremely tedious work that makes Wikipedia moderation look like a fun gig. So, frankly, I can't see open-source / open-knowledge communities evolving around this topic (but admittedly, I wouldn't have bet on Wikipedia working out before it came along).

Hence why I think that such handcrafted data sets and LLMs will end up being expensive artefacts sold at a premium.

@random_musings @kunev @futurebird @n1ckfg I think the opposite is going to be the case: that we're going to see communities emerge as people move away from centralized services, where those communities find value in humanity but seek consensus around choosing to opt into participatory modelling.
@mhoye @kunev @futurebird @n1ckfg I like your optimism and hope the future proves me wrong. 😊
@kunev @futurebird @mhoye I think the costs are going to change, though: legal precedents on copyrightability, infringement, etc. will roll out around the world; "folks who can't rent a thousand H100s" form a very large market for efficient local compute solutions; and so on. Firefly is an interesting data point for an attempt to conventionally license an entire corpus (Adobe Stock).

@n1ckfg @kunev @futurebird

I think that the computational costs of incremental improvement will get driven close enough to zero that assistive localhost stacks and personal model maintenance will become just another system-level background task, like filesystem maintenance. Start with a decent model, train it on a year of your own chat and sent-mail logs, here you are. Autotune for your own narrative voice.
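
For what it's worth, a hypothetical sketch of what that personal-model step could look like today: a LoRA adapter trained on a plain-text dump of your own writing, on top of a small open base model. The model name, file path and hyperparameters below are placeholders, not recommendations.

```python
# Hypothetical sketch of "train it on a year of your own logs":
# fine-tune a small LoRA adapter on one-message-per-line text you exported yourself.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "gpt2"  # stand-in for whatever local base model you actually start from
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         target_modules=["c_attn"],  # gpt2 attention proj
                                         task_type="CAUSAL_LM"))

# A year of chat / sent mail, exported to plain text (placeholder filename).
data = load_dataset("text", data_files={"train": "my_sent_mail_2024.txt"})["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                remove_columns=["text"])
data = data.filter(lambda ex: len(ex["input_ids"]) > 0)  # drop blank lines

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-voice-adapter",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
model.save_pretrained("my-voice-adapter")  # small adapter, reusable on the frozen base
```

The adapter approach is part of why the incremental cost can stay low: the base model is frozen, and the personal increment is a few megabytes you could retrain on a schedule like any other background maintenance task.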