@dingemansemark @dunhamsteve

I love #uninformation as a word for this (have been trying non-information, but yours is better).

Mused about the latest Google nonsense here:

https://buttondown.email/maiht3k/archive/information-is-relational/

Information is Relational

Google's AI Overviews' Fails Helpfully Highlight a Source of Danger. By Emily. "The local volcano that did erupt during my lifetime: Mt St Helens on May 18,..."

Another one for the #uninformation files: of course Google 'AI' summaries merrily regurgitate their own bullshit (ht @dunhamsteve). And remember, even if single cases like this may be zapped once they reach Google's attention, the root problem fundamentally cannot be addressed. As I wrote in 2020 (see upthread), all ingredients for an information heat death are on hand.

@emilymbender's metaphor of this as the equivalent of an oil spill in our information system is so painfully apt

In 2020 I blogged about large language models and the unstoppable tide of uninformation, saying "all ingredients for an information heat death are on hand": https://ideophone.org/large-language-models-and-the-unstoppable-tide-of-uninformation/

Now there are ways to test the consequences of uninformation feeding on uninformation, and the results do not look good. arXiv preprint: Self-Consuming Generative Models Go MAD*
https://arxiv.org/abs/2307.01850

*Where MAD stands for Model Autophagy Disorder

#LLMs #uninformation

Large language models and the unstoppable tide of uninformation – The Ideophone

@dingemansemark
Can you imagine a situation where people whose #livingSpace, or #breathingSpace, or #ThinkingSpace has been flooded by #uninformation have to apply the solution given at the end of this #Märchen?
«... wer wieder in die Stadt wollte, der mußte sich #durchessen» ("... whoever wanted to get back into the city had to eat their way through")
My (mental) stomach would not resist that poison.

New blog post to register a #prediction: OpenAI and other purveyors of stochastic parrots are keeping the receipts to monetize #uninformation detection https://ideophone.org/monetizing-uninformation-a-prediction/

First flood the zone with bullshit (using that term technically: text produced without commitment to truth), then monetize detection of said bullshit.

In the wake of all this, my hope is that scholars will devote more time to what Ivan Illich called counterfoil research.

Monetizing uninformation: a prediction – The Ideophone

The onslaught of #chatgpt output (even here, but more so on birdsite) reminds me that I wrote, back in 2020 when #gpt3 was still fresh, on the unstoppable tide of #uninformation: https://ideophone.org/large-language-models-and-the-unstoppable-tide-of-uninformation/

With LLMs training on scraped web data, increasingly including their own output, they're all set for an information heat death. Counterintuitively, this also means there has never been a better time to be a scholar: a dealer in high-quality, human-curated information and other scarce goods.
