Yesterday I had a number of conversations with people working in the scholarly publishing sphere about what happens when AI chatbots pollute our information environment and then start feeding on this pollution.

As is so often the case, we didn’t have to wait long to get some hint of the kind of mess we could be looking at.

https://www.theverge.com/2023/3/22/23651564/google-microsoft-bard-bing-chatbots-misinformation

Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow

Microsoft’s AI chatbot Bing incorrectly reported the demise of Google’s AI chatbot Bard. It’s an early warning sign that this technology is fueling a massive game of misinformation telephone.

The Verge

My fear is that we’ve created an information ecosystem that is uniquely susceptible to the perversions of these AI tools. Fifty years ago, had they existed, they would’ve been mere curiosities, because we lacked the information infrastructure for their output to swamp more trusted forms of information. Even twenty years ago there would have been substantially less opportunity for them to cause harm.
@ct_bergstrom Fifty years ago in the US, propaganda purveyors like Walter Cronkite had vast audiences while providers of true information like I.F. Stone did not. I'm not convinced this is worse.
@ct_bergstrom Anecdote: As a child I was in BC during the 1968 Democratic convention and read in the local papers how the Chicago police rioted. When I got back home to Oregon the papers from the same dates all said the police were attacked by rioters. Today, we would have access to both sets of accounts, which doesn't mean we would all use them wisely, but at least it's possible.