Yesterday I had a number of conversations with people working in the scholarly publishing sphere about what happens when AI chatbots pollute our information environment and then start feeding on this pollution.

As is so often the case, we didn’t have to wait long to get some hint of the kind of mess we could be looking at.

https://www.theverge.com/2023/3/22/23651564/google-microsoft-bard-bing-chatbots-misinformation

Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow

Microsoft’s AI chatbot Bing incorrectly reported the demise of Google’s AI chatbot Bard. It’s an early warning sign that this technology is fueling a massive game of misinformation telephone.

The Verge
My fear is that we’ve created an information ecosystem that is uniquely susceptible to the perversions of these AI tools. Fifty years ago, had they existed, they would’ve been mere curiosities because we lacked the information infrastructure for their output to swamp more trusted forms of information. Even twenty years ago there would have been substantially less opportunity for them to have cause harm.

The confluence of this technology with the information ecosystem that we described in our paper from a couple of years ago could be an epistemic catastrophe.

https://www.pnas.org/doi/10.1073/pnas.2025764118

I’m coming to think that releasing these tools was a reckless act with the potential to generate negative externalities we have barely started to imagine.

The threat isn’t rogue superintelligence. It’s bullshit at unprecedented scale, reflected back upon itself and iteratively amplified.

I’m certainly not saying that the deployment of these systems will suddenly make it impossible to find and build upon trusted and vetted sources of information. Those aren’t going to magically disappear.

My bigger fear is that rather than making a blunder in tying their infotech empires to automated bullshit generation, Microsoft and Google have correctly anticipated demand. My fear is that people might want what Bing and Bard are selling.

On the other hand, if I dial up the cynicism just a little bit more, maybe it doesn’t matter much. One view is that by spewing bullshit into the information ecosystem, generative AI is poisoning the well from which it drew life and ensuring that future generations of such technologies will produce garbage.

Another is that the training set was never a pure wellspring. It was already the town cesspool—and even massive quantities of additional bullshit will barely be noticed.

@ct_bergstrom If bullshit is what’s produced by someone who doesn’t care whether they’re telling the truth or not (Harry Frankfurt),

then perhaps bullshit generators are what’s produced by someone who doesn’t care whether they cause harm or not.

@fivetonsflax I almost agree. I think bullshit generators are produced by someone who doesn't care whether their system produces true or logically coherent output.

@ct_bergstrom That’s true of many artists; compare @jwz’s “dadadodo”.

LLMs produce, not just nonsense, but nonsense which wears the clothes of sense, and therefore enables a particular species of harm.