Good Behavior: How Riot Games is Using Psychology to Stop Online Harassment 

Gentle Readers,

I don’t know if any of you play League of Legends, a game I insist on calling “lol”, much to my older brother’s chagrin. I’m sure that by now, though, you’ve at least heard of it. League is a MOBA (Multiplayer Online Battle Arena) published by Riot Games, in which teams of five players choose characters with specific abilities, called “champions”, and do battle against other teams. League of Legends is a community of millions, with as many as 7.5 million people playing at any one time; daily counts run in the high twenty millions. For perspective, that concurrent figure is more people than live in Massachusetts, or in all of Bulgaria. It is a truly massive collection of people interacting, often as strangers to one another. And in any community of reasonable size, some portion thereof are assholes.

The Champion Select screen.

While I’m not incredibly invested in the game itself (I played for a while, found it to be a lot like the Warcraft III mod Defense of the Ancients that inspired it, and moved on), attempts to corral, quarantine, or reform these assholes are compelling object lessons in how one might manage a massive digital community. Over the past year, Riot Games has made well-publicized efforts to bring some of this behavior under control, considering their previous systems too lenient. As Jeffrey Lin, lead social systems designer for Riot, put it:

By giving the worst 2% so many chances, we’re actually letting them ruin a lot more games and players’ experiences, and that’s something we want to try to reduce… What we’re hoping to address with our systems is that some players understand what’s crossing the line and believe it’s ok, because other games never punished it in the past.

Riot acknowledges that what it has is a relatively small problem, but given the sheer number of games played, even a small share of negative experiences adds up to something unacceptable. Thus, they are taking proactive steps to make their corner of the internet a little less like Lord of the Flies.

It is perhaps remarkable that they are using a carrot-and-stick approach. The carrot? One example, announced just after the new year, was a Mystery Gift: typically a new champion, or a new skin (costume) for a champion already available to the player. These were sent to any and all players who were at least level 5, hadn’t received any bans or restrictions in 2014, and had at least ten skins they didn’t own.
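
For concreteness, those criteria amount to a simple eligibility check. Here’s a minimal sketch in Python, assuming a hypothetical Player record; the field names and skin catalog are my inventions, since Riot hasn’t published its implementation:

```python
# Hypothetical sketch of the Mystery Gift eligibility rules described above.
# The Player structure, field names, and skin catalog are invented;
# Riot hasn't published its implementation.
from dataclasses import dataclass, field

ALL_SKINS = set(range(1, 500))  # stand-in for the full skin catalog

@dataclass
class Player:
    level: int
    bans_or_restrictions_in_2014: int
    owned_skins: set = field(default_factory=set)

def eligible_for_mystery_gift(player: Player) -> bool:
    """Apply the three published criteria: level, clean record, unowned skins."""
    unowned = ALL_SKINS - player.owned_skins
    return (player.level >= 5
            and player.bans_or_restrictions_in_2014 == 0
            and len(unowned) >= 10)

# A clean level-12 player who owns only three skins qualifies.
print(eligible_for_mystery_gift(
    Player(level=12, bans_or_restrictions_in_2014=0, owned_skins={1, 2, 3})))  # True
```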

We hear a lot about methods and calls to block, report, or avoid players (or Twitter users, or what have you) who exhibit toxic behavior, and that matters in League and in online gaming generally, because it can get pretty fucking toxic. We hear much less often about strategies for rewarding good behavior in online communities. If you want people to behave well, it is important to model and reinforce positive actions alongside confronting negative ones. It’s a psychologically savvy approach to moderation.

In fact, Lin has a Ph.D. in cognitive neuroscience and heads a team of thirty researchers who conduct experiments and analyze the behavior of League players. Why go to so much effort?

Riot’s stake in reducing toxic player behavior goes well beyond the simple virtue of sportsmanship—the company’s “free-to-play” business model of selling non-essential game items depends on keeping players happy and invested in the game.

This isn’t just true of League of Legends; many social media and app companies focus on a positive user experience in everything from interactions to interface. Lin and his team are able to perform large- and small-scale experiments and collect vast quantities of data.

In one such study, labelled the “Optimus Experiment”, Lin et al. found that reprimands for bad behavior were more effective when colored red than when colored white, and that messages encouraging good behavior were likewise more effective when colored blue than in the white baseline. Beyond the stated results, this carries the implication that online communities can be improved with small changes that take human behavior into account.
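
To make that kind of experiment concrete, here’s a rough sketch of how one might bucket players into message-color variants and test whether report rates differ between two of them. The variant names and every number are fabricated for illustration; this is not Riot’s design or data:

```python
# Minimal A/B-style sketch of a message-color experiment like "Optimus."
# Variant names and every number below are invented for illustration;
# none of this is Riot's actual design or data.
from math import sqrt

VARIANTS = ["white_reprimand", "red_reprimand", "blue_encouragement"]

def assign_variant(player_id: int) -> str:
    """Deterministically bucket each player into one message variant."""
    return VARIANTS[player_id % len(VARIANTS)]

def two_proportion_z(reports_a: int, games_a: int,
                     reports_b: int, games_b: int) -> float:
    """z-score for the difference in report rates between two variants."""
    p_a, p_b = reports_a / games_a, reports_b / games_b
    pooled = (reports_a + reports_b) / (games_a + games_b)
    se = sqrt(pooled * (1 - pooled) * (1 / games_a + 1 / games_b))
    return (p_a - p_b) / se

# Fabricated counts: the red reprimand shows a lower report rate than white.
z = two_proportion_z(reports_a=480, games_a=100_000,   # white baseline
                     reports_b=400, games_b=100_000)   # red reprimand
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
```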

Now, the stick. As I mentioned, Riot has recently been endeavoring to react to negative or unwelcoming behavior more quickly, particularly racist, homophobic, or sexist behavior. Last year, they began rolling out a system of instant fourteen-day or permanent bans for the most toxic players:

https://twitter.com/RiotLyte/status/491270902554689537

This week, they’re introducing a new system for cases of reported toxic behavior: it verifies the behavior in question, then sends the offending player a message that describes the behavior, displays chat logs, and follows through with the appropriate punishment, all within fifteen minutes of the game’s end. The system is automated, but Lin’s team will be “hand-reviewing the first few thousand cases the instant feedback system sorts through.” More features are upcoming.
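
In outline, that’s a four-step pipeline: verify the report, assemble the evidence, notify the player, apply the penalty. Here’s a hedged sketch of what such a flow could look like; every class and function below is hypothetical, since Riot hasn’t published this system’s code:

```python
# Hypothetical sketch of the instant-feedback flow described above:
# verify the report, message the offender with the evidence, apply the penalty.
# None of these names come from Riot; they are stand-ins for illustration.
from dataclasses import dataclass

SLURS = {"example_slur"}  # placeholder word list; a real system would be far broader

@dataclass
class Report:
    player_id: int
    game_id: int
    chat_log: list[str]

def verify(report: Report) -> bool:
    """Confirm the reported chat actually contains punishable language."""
    return any(word in SLURS
               for line in report.chat_log
               for word in line.lower().split())

def send_message(player_id: int, body: str) -> None:
    print(f"[to player {player_id}] {body}")

def apply_ban(player_id: int, days: int) -> None:
    print(f"[ban] player {player_id} for {days} days")

def handle_report(report: Report) -> None:
    """Verify, explain with chat logs, then punish, all shortly after game's end."""
    if not verify(report):
        return
    evidence = "\n".join(report.chat_log)
    send_message(report.player_id,
                 f"Your chat in game {report.game_id} broke the rules:\n{evidence}")
    apply_ban(report.player_id, days=14)  # or a permanent ban for the worst cases

handle_report(Report(player_id=42, game_id=1001,
                     chat_log=["you are an example_slur"]))
```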

Now, this change is still in testing, and it is not without its share of naysayers, who can be found in the comments of any article on the subject, such as this one at Forbes or this one at Kotaku. I don’t imagine it will be a trouble-free rollout, but the idea of rapidly showing players exactly what was objectionable about their behavior can foster a sense of consistency and help improve the community.

The big takeaway here is not about League of Legends, though I think this is good news for League players generally. It’s an example of broad community research being used to design the policies that will determine the future of our online communities. Our society as a whole has mostly taken a deer-in-headlights approach to the changes that come with a parallel digital society: those who attempt to police or regulate the internet tend to wield ineffective methods with a heavy hand, and it is only recently that we’re starting to have a productive debate on the subject. If what Riot is planning works, League of Legends players will have the ability to say, as a community, “We won’t tolerate slurs or death threats. Not in our house.” That, and whatever else Lin’s research team is driving at, could have big implications for the rest of the internet.



1 thought on “Good Behavior: How Riot Games is Using Psychology to Stop Online Harassment”

  1. I like the idea of rewarding players who behave in a civilized manner, but I also like the idea that when a player is toxic, they should lose levels or anything else they’ve built up in the game. The creators could try that instead of banning, and fall back to bans if that doesn’t work.

    One of the reasons I don’t play online games is that so many of them are just too toxic. I’m a WoC who has enough sh** to put up with without subjecting myself to more drama while seeking entertainment. This sounds like a step in the right direction.

    You’ve really only got a small number of people spoiling it for everyone else, but that’s enough.
