As someone who has participated in multi-year edit wars over, yes, Nazi shit, I will say that my biggest concern here isn’t about unedited LLM text hitting wikipedia articles—that’s v bad but probably largely fixable—but with the way Talk page sophistry is about to become absolutely fucking unmanageable as malicious editors set chatbots to do their infinite argumentation for them

To generalize: LLMs on the web’s surfaces are bad. LLMs in the backstage are much worse.

https://www.vice.com/en/article/v7bdba/ai-is-tearing-wikipedia-apart

AI Is Tearing Wikipedia Apart

Volunteers who maintain the digital encyclopedia are divided on how to deal with the rise of AI-generated content and misinformation.

@kissane
The only thing worse than trolls is automated trolls.

The developers of LLMs pretend that they have no control over whether or not their LLM tells the truth. I think that claim will eventually be tried in court, and they will be found liable, because they could have trained their models to recognize patterns of ethical and unethical behavior and to distinguish between fact and fiction before producing results.

I believe that #ChatGPT and others of this generation of AI will eventually become a very expensive cautionary tale.

The Ford Pinto or lawn darts of #AI: a danger known and acknowledged by the developers, and ignored by their corporate sponsors in order to make a quick buck.

@eggmont @kissane They've done much better at creating bullshit machines than at making bullshit-detecting machines, that's for sure. It's all down to priorities.