[Image] Are you a troll? Answer in the comments. Photograph: Alamy

Algorithm 'identifies future trolls from just five posts'

A website commenter who will end up being banned for antisocial behaviour can be spotted with 80% accuracy simply by examining their first five posts, claim researchers

It is possible to tell comment trolls apart from other users simply by looking at the way they write, researchers have found.

Studying the comments on three sites – CNN, Breitbart and IGN – over an 18-month period, the researchers at Cornell and Stanford universities found that users who went on to be banned wrote differently from other users in the same comment thread, using fewer words indicative of positive emotion.

Future banned users also tended to write comments that were more difficult to read than those of typical users, the researchers found.

“We find that such users tend to concentrate their efforts in a small number of threads, are more likely to post irrelevantly, and are more successful at garnering responses from other users,” the researchers add, in a pre-publication paper titled Antisocial Behavior in Online Discussion Communities.

“Studying the evolution of these users from the moment they join a community up to when they get banned, we find that not only do they write worse than other users over time, but they also become increasingly less tolerated by the community.”

The researchers also discovered that antisocial behaviour was exacerbated when moderation appeared to be overly harsh.

The researchers studied more than 35m posts from almost 2m users on the three websites under investigation, and found nearly 50,000 individual users who had been banned over the 18-month period. They also examined the number of individual comments that had been deleted or reported to the sites’ moderators, with all the data provided to the researchers by Disqus, the commenting platform used by all three sites.

They focused their investigation on the 50,000 users banned over the period under examination, and attempted to find tell-tale signs in their earlier posts that indicated their later behaviour.

They discovered that users who would end up being banned from the site often wrote noticeably differently from the main bulk of commenters. “Users can stay on-topic or veer off-topic; prior work has also shown that users tend to adopt linguistic conventions or jargon in a community … and that they also unconsciously mimic the choices of function-word classes [of those] they are communicating with.” Sure enough, they found that the “text similarity” of banned users was significantly lower than that of non-banned users.
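The article does not spell out the exact similarity measure the researchers used, but the idea can be sketched with a common stand-in: represent each comment as a TF-IDF vector and take the cosine similarity between a user’s post and the rest of its thread. The function below is an illustrative approximation, not the paper’s actual metric.

```python
# Illustrative sketch only: TF-IDF cosine similarity is one plausible
# stand-in for the paper's "text similarity" measure, which the article
# does not specify.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def thread_similarity(user_post: str, other_posts: list[str]) -> float:
    """Mean cosine similarity between a post and the rest of its thread."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([user_post] + other_posts)
    # Row 0 is the user's post; compare it against every other row.
    sims = cosine_similarity(matrix[0], matrix[1:])
    return float(sims.mean())

# An off-topic post scores lower than an on-topic one.
thread = ["The new GPU benchmarks look strong.",
          "Agreed, the GPU results beat last year's benchmarks."]
print(thread_similarity("GPU benchmarks are impressive this year.", thread))
print(thread_similarity("This thread is a waste of everyone's time.", thread))
```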

Additionally, the posts of banned users had word counts similar to those of non-banned users, but when tested against a standard readability index they were revealed to be significantly harder to read.
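The article does not name the index, but the Automated Readability Index (ARI) is one standard choice for this kind of analysis. Assuming something like it was used, the calculation is simple: it combines average word length and average sentence length into a single difficulty score.

```python
# Minimal sketch of the Automated Readability Index, a common readability
# measure; the article does not confirm which index the researchers used.
import re

def automated_readability_index(text: str) -> float:
    """ARI = 4.71*(characters/words) + 0.5*(words/sentences) - 21.43.
    Higher scores indicate harder-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    # Count alphanumeric characters only, ignoring punctuation.
    characters = sum(len(re.sub(r"\W", "", w)) for w in words)
    return (4.71 * characters / max(len(words), 1)
            + 0.5 * len(words) / max(len(sentences), 1)
            - 21.43)

# A short, plain sentence scores low; dense prose scores much higher.
print(automated_readability_index("The cat sat on the mat."))
print(automated_readability_index(
    "Notwithstanding aforementioned considerations, heterogeneous "
    "stakeholder constituencies necessitate comprehensive recalibration."))
```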

On top of the information found in the actual posts, the authors also found that users who would go on to be banned interacted differently with the community at large. “For instance, [future banned users] tended to spend more time in individual threads than [users who weren’t banned],” they write.

With all this information together, they created a prediction model that can guess with 80% accuracy, from just their first five posts, whether or not a user will go on to be banned. Looking at the first 10 posts raises the accuracy of the model by a further two percentage points, raising the possibility of automatically highlighting potentially problematic users to moderators so that antisocial behaviour can be dealt with more quickly.
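The article does not describe the model itself. Purely as a sketch, a simple classifier trained on per-user summaries of the first five posts might look like the following; the feature choices (readability, thread similarity, fraction of posts deleted) are hypothetical and the data is synthetic, so this is not the researchers’ pipeline.

```python
# Hypothetical sketch of the prediction task, not the researchers' pipeline:
# each row summarises a user's first five posts; the label is whether the
# user was eventually banned. Features and data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users = 1000
X = np.column_stack([
    rng.normal(10, 3, n_users),   # mean readability score (higher = harder)
    rng.uniform(0, 1, n_users),   # mean similarity to thread
    rng.uniform(0, 1, n_users),   # fraction of first five posts deleted
])
# Synthetic labels loosely tied to the features, for demonstration only.
y = (0.2 * X[:, 0] - 3 * X[:, 1] + 4 * X[:, 2]
     + rng.normal(0, 1, n_users) > 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```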

But the authors warn that overzealous moderation can have its own downside: “Taking extreme action against small infractions can exacerbate antisocial behaviour (e.g. unfairness can cause users to write worse) … Whereas trading off overall performance for higher precision and [having] a human moderator approve any bans is one way to avoid incorrectly blocking innocent users, a better response may instead involve giving antisocial users a chance to redeem themselves.”
