[Opinion] AI finds errors in 90% of Wikipedia's best articles

https://blackneon.net/post/72051

How could you do this to me? - BlackNeon.net

> For one month beginning on October 5, I ran an experiment: Every day, I asked ChatGPT 5 [https://en.wikipedia.org/wiki/ChatGPT_5] (more precisely, its “Extended Thinking” version) to find an error in “Today’s featured article [https://en.wikipedia.org/wiki/Wikipedia:About_Today%27s_featured_article]”. In 28 of these 31 featured articles (90%), ChatGPT identified what I considered a valid error, often several. I have so far corrected 35 such errors.

A tool that gives at least 40% wrong answers, used to find 90% errors?
Bias needs to be reinforced!

If you read the post, it’s actually quite a good method: having an LLM flag potential errors and then reviewing them manually as a human is quite productive.

I’ve done exactly that on a project that relies on user-submitted content; moderating submissions at even a moderate scale is hard, but having an LLM look through them for me is easy. I can then check anything it flags and moderate manually. Neither the accuracy nor the precision is particularly high, but it’s a low-effort way to find a decent number of the things you’re looking for. In my case I was looking for abusive submissions from untrusted users; in the OP author’s case they were looking for errors. I’m quite sure this method would never find all errors, and as per the article the “errors” it flags aren’t always real either. But the reward-to-effort ratio is high.
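The workflow described above can be sketched in a few lines. This is a minimal illustration, not the commenter’s actual code: the LLM call is stubbed out with a trivial keyword check so the sketch runs standalone, and in practice `llm_flag()` would call whatever model and prompt you use.

```python
# Flag-then-review moderation: an LLM does a cheap first pass,
# and a human makes every actual moderation decision.

def llm_flag(text: str) -> bool:
    """Stand-in for an LLM classifier: does this submission look abusive?
    (Here it's just a keyword check so the example is runnable.)"""
    suspicious = ("free money", "idiot")
    return any(phrase in text.lower() for phrase in suspicious)

def triage(submissions: list[str]) -> list[str]:
    """Keep only what the model flags. Nothing is blocked automatically;
    flagged items go to a human, and false positives are ignored there."""
    return [s for s in submissions if llm_flag(s)]

queue = triage([
    "Great write-up, thanks!",
    "Click here for FREE MONEY!!!",
    "You absolute idiot.",
])
# A human now reviews `queue` and acts only on the genuine hits.
```

The point is exactly what the comment says: the flagging step can be mediocre, because a human filters the output and only the review effort saved matters.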

Accuracy and precision - Wikipedia

But we don’t know what the false positive rate is either. How many submissions were blocked that shouldn’t have been? It seems like you don’t even have a way to measure that, unless somebody complained about it.

> I can then check through anything it flags and manually moderate.

It isn’t doing anything automatically. It’s just flagging submissions for human review: “hey, maybe have a look at this one”. So if it falsely flags something it shouldn’t, which is common, I simply ignore it. And as I said, the false-flag rate is moderate, but it still surfaces enough real problems to be quite useful.
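The asymmetry the two comments above are circling can be made concrete: once a human reviews every flagged item, precision comes for free, but the false-negative side (good-faith flags aside, how much abuse slips through unflagged) can only be estimated by auditing a sample of the unflagged pool. All numbers below are made up purely for illustration.

```python
# Illustrative only: 50 items flagged, human review confirms 30 as real.
flagged, true_hits = 50, 30
precision = true_hits / flagged  # knowable, since every flag is reviewed

# The miss rate is NOT knowable from the flags alone; estimating it
# requires manually auditing a random sample of unflagged submissions.
sample_size, misses_in_sample = 200, 4
estimated_miss_rate = misses_in_sample / sample_size

print(precision)            # 0.6
print(estimated_miss_rate)  # 0.02
```

So the skeptic’s question is fair for the misses, but since nothing here is blocked without a human looking at it, a false flag costs a few seconds of review time rather than a wrongly blocked submission.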

“90% errors” isn’t accurate. It’s not that 90% of all facts on Wikipedia are wrong; rather, 90% of the featured articles contained at least one error, so the articles were still mostly correct.

And the featured articles are usually quite large. As an example, today’s featured article is on a type of crab - the article is over 3,700 words with 129 references and 30-something books in the bibliography.

It’s neither unreasonable nor surprising to be able to find a single error in articles that complex.