
Wikipedia's AI can automatically spot bad edits

Say bye-bye to (some) trolls and spammers.

Wikipedia has a new artificial intelligence service, and it could make the website a lot friendlier to newbie contributors. The AI, called Objective Revision Evaluation Service (ORES), scours newly submitted revisions to spot additions that look potentially spammy or trollish. Its creator, the Wikimedia Foundation, says it "functions like a pair of X-ray specs" (hence the image above) because it highlights anything that seems suspicious and sets that particular article aside for human editors to look at more closely. If the editors decide to pull a revision down, the contributor will get notified -- a lot better than the site's current practice of deleting submissions without any explanation.

The team trained ORES to differentiate between unintentional human errors and so-called "damaging edits" by using Wikipedia editors' article-quality assessments as training examples. The AI can now score any new edit on how likely it is to be damaging.

One example shows what the human editors see on the left and what ORES sees on the right. The AI's "false" (not damaging) probability for the edit is 0.0837, while its "true" (damaging) probability is 0.9163. As you can see, "llamas grow on trees" isn't exactly helpful or accurate.
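If you want to poke at those scores yourself, here's a minimal sketch of what a query might look like, assuming the public ORES v3 scoring endpoint and Python's requests library; the revision ID below is a placeholder, not the llama edit shown above.

```python
import requests

# Placeholder revision ID -- swap in any English Wikipedia revision ID.
REV_ID = 642215410

# Ask ORES to run its "damaging" model on that revision.
url = f"https://ores.wikimedia.org/v3/scores/enwiki/?models=damaging&revids={REV_ID}"
response = requests.get(url, timeout=10)
response.raise_for_status()
data = response.json()

# Drill down to the model's prediction and probabilities for this edit.
score = data["enwiki"]["scores"][str(REV_ID)]["damaging"]["score"]
print("prediction:", score["prediction"])                 # True or False
print("p(damaging):", score["probability"]["true"])       # e.g. 0.9163
print("p(not damaging):", score["probability"]["false"])  # e.g. 0.0837
```

The two probabilities always sum to one, so an edit scored 0.9163 "true" is exactly the 0.0837 "false" case described above.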

As the Wikimedia Foundation pointed out in its announcement, this isn't the first AI designed to help human editors monitor the site's content. However, those older tools can't tell the difference between a malicious edit and an honest mistake, which makes ORES the better choice if Wikipedia doesn't want to lose even more contributors.

[Image credit: MGalloway (WMF)/Wikimedia]