Training large language models on the outputs of previous large language models leads to degraded results. Less diversity, more bias, and …jackrabbits?

https://www.scientificamerican.com/article/ai-generated-data-can-poison-future-ai-models/

#IsThisTheSingularity

AI-Generated Data Can Poison Future AI Models

As AI-generated content fills the Internet, it’s corrupting the training data for models to come. What happens when AI eats itself?

Scientific American
@janellecshane At least it's not giraffes…
@sol_hsa I am disappointed it’s not giraffes
@janellecshane @sol_hsa No, no. Giraffes are a unit of measure for asteroids.
Expert Insight: Dangers of Using Large Language Models Before They Are Baked

Today's LLMs pose too many trust and security risks.

Dark Reading
@janellecshane is this a way of getting the monster to eat itself? So it will vanish in a cloud of meaninglessness in the end?
@janellecshane Relying on LLMs to write or create art is a pathway to mediocrity.