[Random Thought] LLMs push any singularity further into the future: they depend on human-generated input data, and reliance on LLMs degrades the quality of the human-created data available for training both AIs and humans. Over a long enough time, LLMs tend to flatten the slope of advancement.
@ee Flatten, or degrade? Several studies have shown that LLMs trained on LLM-generated content produce worse results: each generation regresses toward a statistical average and loses the tails of its original input distribution. That will make it harder for them to ingest or reproduce newer concepts. At some point the distribution narrows so far that all inputs look equivalent, and the output becomes garbage too. 🤷
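To make that concrete, here's a toy simulation of the mechanism (a hypothetical sketch, not a real training pipeline): fit a Gaussian to a dataset by maximum likelihood, sample a same-sized dataset from the fit, and repeat. Each fit-and-sample round slightly underestimates the spread and under-represents the tails, so the distribution collapses toward its mean:

```python
import random
import statistics

# Toy model of recursive training ("model collapse"), not a real LLM
# pipeline: each generation fits a normal distribution to the previous
# generation's samples, then emits a finite dataset drawn from that fit.
# The MLE variance estimate is biased low and finite samples miss the
# tails, so the spread shrinks generation after generation.

def next_generation(data):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # MLE stdev (divides by n, biased low)
    return [random.gauss(mu, sigma) for _ in data]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(20)]  # the "human" data

for gen in range(101):
    if gen % 20 == 0:
        print(f"gen {gen:3d}: stdev = {statistics.pstdev(data):.4f}")
    data = next_generation(data)
```

In a typical run the stdev drifts from ~1.0 toward ~0 within a hundred generations, which is the "all inputs look equivalent" end state: the effect the literature calls model collapse.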

@zimzat I'm suggesting a flattening of the progress of humanity as a whole as LLMs degrade from recursive input.

Humans rely on the tools of their era, be they oral tradition or printed books. The transition from oral tradition to print brought more permanence. We're now in a vulnerable position where human knowledge sits in increasingly precarious repositories. If the generational passing-on of skills is handed over to those fragile tools and they fail, that human slope will also trend downward.

@zimzat This is something groups like the Internet Archive have long recognized. My observation is not so much that this fragility is accelerating, but that offloading cognitive skills to tools is fine (make a list if your memory is bad!) unless those tools become unavailable after the skills that predated them have stopped being passed on.