In 2020 I blogged about large language models and the unstoppable tide of uninformation, saying "all ingredients for an information heat death are on hand": https://ideophone.org/large-language-models-and-the-unstoppable-tide-of-uninformation/
Now there are ways to test the consequences of uninformation feeding on uninformation, and the results do not look good. arXiv preprint: Self-Consuming Generative Models Go MAD*
https://arxiv.org/abs/2307.01850
*Where MAD stands for Model Autophagy Disorder