As someone who has participated in multi-year edit wars over, yes, Nazi shit, I will say that my biggest concern here isn’t about unedited LLM text hitting wikipedia articles—that’s v bad but probably largely fixable—but with the way Talk page sophistry is about to become absolutely fucking unmanageable as malicious editors set chatbots to do their infinite argumentation for them

To generalize: LLMs on the web's surfaces are bad. LLMs backstage are much worse.

https://www.vice.com/en/article/v7bdba/ai-is-tearing-wikipedia-apart

AI Is Tearing Wikipedia Apart

Volunteers who maintain the digital encyclopedia are divided on how to deal with the rise of AI-generated content and misinformation.

@kissane Dumb question, is there a way to give non-human editors a (technically speaking) scarlet letter and therefore segregate them from making edits?
@smokler I don't know! I think the problem is that it would be very easy to use a chatbot part of the time and write as yourself the rest of the time.
@kissane @smokler there are bot labels that can be applied (and various limits/controls on bot accounts) but they mostly only make sense for good-faith bots, which often these won’t be.
@luis_in_brief @kissane What is a good-faith bot? (I was with you up until that point).
@smokler @luis_in_brief Helpful reminder bots, things like that, IIRC. Super simple old school bots.
@smokler @kissane They also, among other things, help with various anti-vandalism work, and (in some languages, generally not English) create very short articles from databases of verified facts. The Wikipedia article is pretty good! https://en.wikipedia.org/wiki/Wikipedia_bots?wprov=sfti1