Yesterday I had a number of conversations with people working in the scholarly publishing sphere about what happens when AI chatbots pollute our information environment and then start feeding on this pollution.

As is so often the case, we didn’t have to wait long to get some hint of the kind of mess we could be looking at.

https://www.theverge.com/2023/3/22/23651564/google-microsoft-bard-bing-chatbots-misinformation

Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow

Microsoft’s AI chatbot Bing incorrectly reported the demise of Google’s AI chatbot Bard. It’s an early warning sign that this technology is fueling a massive game of misinformation telephone.

The Verge
@ct_bergstrom @jadeforrest the whole thing with these AIs is that they aren’t even trying to answer with the truth. They can’t tell what is true. They can only generate output that appears plausible.
@relistan @ct_bergstrom yes for sure. The feedback loop where generative AI output starts being used as its own training data is something I think has been underreported.
@relistan @jadeforrest @ct_bergstrom can’t wait to see all these horrible second order effects of this 🫤