As someone who has participated in multi-year edit wars over, yes, Nazi shit, I will say that my biggest concern here isn’t about unedited LLM text hitting Wikipedia articles—that’s very bad but probably largely fixable—but about the way Talk page sophistry is about to become absolutely fucking unmanageable as malicious editors set chatbots to do their infinite argumentation for them

To generalize: LLMs on the web’s surfaces are bad. LLMs in the backstage are much worse.

https://www.vice.com/en/article/v7bdba/ai-is-tearing-wikipedia-apart

AI Is Tearing Wikipedia Apart

Volunteers who maintain the digital encyclopedia are divided on how to deal with the rise of AI-generated content and misinformation.

@kissane
This is related to a problem I've mentioned elsewhere. I am (still, for now) making my living as a magazine editor, and my immediate problem isn't that the CEO believes ChatGPT can do the writing part of my job better... it's that my inbox is filling up with story pitches that are obviously AI generated.

They don't look like spam or press releases at first glance, but at second glance, they either get basic facts dangerously wrong or are weirdly incomplete and non-specific.

@grantimatter: So, let's figure out a way to make LLMs measure the specificity of a claim.

Near-fully automatic fact-checking, unfortunately, will require more advanced AI than LLMs can deliver.

@kissane