Can We Build a Safer Internet?

Photo credit: Kristian Hammerstad

We often take it as a given that the Internet is a cruel place, a natural haven for those who seek to harass and threaten others. But to some people, social networks are not mere conduits for our worst impulses. They’re structures whose design can influence how we behave, for good as well as for ill.

Right now, having a social media account can mean facing down a torrent of harassment — including, for some, attacks that are misogynist, racist or both. “Just as you create a space for people to use something in innovative, creative ways, there are also people who will use it for other means,” Moya Bailey, a postdoctoral fellow at Northeastern University who writes about race, gender and media, told Op-Talk. She mentioned Anita Sarkeesian, the video game critic who has faced harassment for critiquing the portrayal of women in games.

“Because she is doing that work, she becomes a target of a lot of violence and hate,” said Ms. Bailey. The rise of online communication is “a gift and a curse always. It’s always both/and.”

And the way we behave online may depend on which site we’re using. Ms. Bailey cited Tumblr as an example. “I think there’s something about Tumblr that is really attractive to social-justice folks, and the kinds of conversations that people have on Tumblr are very different from what’s possible on Facebook,” she explained. “The platforms themselves help shape the kind of content that people post to those different sites.”

The design of those platforms can also determine who sees what we post. Kate Losse, a writer on technology and culture and a former product manager at Facebook, told Op-Talk that Facebook has widened the scope of some of our conversations.

“Pre-Facebook there would be all these different kinds of interactions you might have socially,” she said. “You might talk to one person, you might talk to three people, you might talk to a hundred people. But Facebook’s interesting because you’re always talking to a hundred people when you post, or more.”

“You have to look at something like Facebook as structuring social interactions,” she added. And interacting via what Ms. Losse called “large-scale announcements” can introduce problems. “The Internet is the classic case of tragedy of the commons,” she said. “If something that’s important to me gets viewed by someone across the world, who has no attachment to me, doesn’t care about me at all, doesn’t have any reason to know me or have empathy for me, it’s much easier for that person to do something hateful with the content than to be respectful of it.”

But if platforms can structure our interactions, can they steer us toward kindness rather than toward bile? Batya Friedman, a professor at the University of Washington’s Information School who studies the relationship between technology and human priorities, thinks it’s possible. “Any time people talk to each other,” she told Op-Talk, “we have all kinds of social norms that check how we say things to each other. We give each other social cues, we tell each other when somebody’s starting to go too far.”

The question for designers of online communities, she said, is “how do we either create virtual norms that are comparable, or how do we represent those things so that people are getting those cues, so they modulate their behavior?”

Mihaela van der Schaar, an electrical engineering professor whose research deals with online social norms, has some ideas. “One method that I’ve been looking at is to build some form of reputation,” a way for people in a social network to rate how well their fellow users are following the norms. But it’s not enough to allow ratings — users need to be able to “rate the person that is rating you,” so that nobody can “hijack this idea of a norm and then try to expel or ostracize a particular person by wrongly posting such information.”
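Her description amounts to a two-layer mechanism: ordinary ratings adjust a user’s reputation, while meta-ratings adjust how much weight that user’s own ratings carry. Here is a minimal sketch of the idea in Python; the names, scales and update rules are invented for illustration and are not drawn from Ms. van der Schaar’s research:

```python
from dataclasses import dataclass

@dataclass
class User:
    """A member of a hypothetical social network."""
    name: str
    reputation: float = 0.5   # how well this user follows community norms (0..1)
    credibility: float = 0.5  # how much weight this user's own ratings carry (0..1)

def rate(target: User, rater: User, score: float, weight: float = 0.1) -> None:
    """Fold a new rating (0..1) into the target's reputation.

    The update is scaled by the rater's credibility, so accounts whose
    ratings have been judged unfair barely move the needle.
    """
    influence = weight * rater.credibility
    target.reputation = (1 - influence) * target.reputation + influence * score

def rate_the_rater(rater: User, fair: bool, weight: float = 0.1) -> None:
    """Meta-rating: others judge whether one of this user's ratings was
    fair, and the rater's credibility drifts up or down accordingly."""
    rater.credibility = (1 - weight) * rater.credibility + weight * (1.0 if fair else 0.0)
```

Because a rating’s influence is scaled by the rater’s credibility, a coordinated group of bad-faith raters loses its power to ostracize a target once its own ratings are judged unfair.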

If a user gets a bad reputation, she said, the social network could kick him or her out, but that’s not the only option. The user could get a chance to stay if he or she behaves better — young people, especially, may realize their mistakes with time.

“These reputation mechanisms don’t need to be binary,” she said. “They could be more gradual.” And just as a bad reputation could trigger punishment, a good one could unlock rewards, like having your posts recommended to more users or getting access to a service not available to people with lower scores.
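One way to read “more gradual” is as a ladder of privileges keyed to the score, rather than a single keep-or-ban threshold. A sketch, continuing the toy model above; the thresholds and privilege names are purely illustrative:

```python
def privileges(reputation: float) -> dict:
    """Map a continuous reputation score (0..1) to graded consequences
    instead of a binary keep-or-ban decision."""
    if reputation < 0.2:
        # suspension rather than permanent expulsion: the score can recover
        return {"can_post": False, "recommended": False, "extra_features": False}
    if reputation < 0.5:
        # probation: posting allowed, but nothing is amplified
        return {"can_post": True, "recommended": False, "extra_features": False}
    if reputation < 0.8:
        return {"can_post": True, "recommended": True, "extra_features": False}
    # good standing unlocks rewards: wider recommendation, access to extra services
    return {"can_post": True, "recommended": True, "extra_features": True}
```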

One service that already uses a reputation system is eBay, which, she pointed out, has fairly simple norms: “Sell a good product. Don’t sell a broken product.” Other networks might have more complex rules, like “communicate politely, don’t give bad advice, don’t post a particular type of content, promote some type of behavior that this particular type of community may like.”

In the future, she believes such norms will shape our online lives much as they already shape our offline ones. “It’s just that Internet is, right now, highly unpoliced, and policing the Internet needs to be done indirectly and in a kind of distributed way,” she said. “People need to kind of police each other.”

A reputation system isn’t the only possible fix — changing the format of people’s interactions might help, too. “When you’re using text to communicate, you can’t see people’s faces,” said John Suler, a psychology professor who has written about online behavior. He thinks systems that integrate images and text may be ideal, and that this may be part of Instagram’s popularity. An image “can capture a lot about who you are, and where you are, and what you’re doing,” he told Op-Talk. “There’s a lot of power in the image, but sometimes, pictures have to be explained. You have to provide the context for why this photo was taken.”

Social networks could also introduce more tools for managing who can see and interact with your posts. Mr. Suler would like to see networks implement tools that let users “shape our audience exactly the way we want them.” N. K. Jemisin, an author and blogger who has written about on- and offline harassment, told Op-Talk that the network she’d felt safest on was LiveJournal, because “you had complete control over who could follow you; who could see your stuff; who could comment on your stuff; if they commented, what they said and so on.” And, she said, “that level of control made a huge difference.”
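What Ms. Jemisin describes is, in effect, per-post access control with separate rules for visibility and for commenting. A rough sketch of how a network might model it, with all type and field names hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class Audience(Enum):
    PUBLIC = "public"        # anyone on the network
    FOLLOWERS = "followers"  # approved followers only
    CUSTOM = "custom"        # an explicit allow-list chosen by the author
    ONLY_ME = "only_me"      # visible to the author alone

@dataclass
class Post:
    author: str
    body: str
    visible_to: Audience = Audience.FOLLOWERS
    comments_from: Audience = Audience.FOLLOWERS  # may be stricter than visibility
    allow_list: set = field(default_factory=set)  # consulted when a rule is CUSTOM

def _in_audience(viewer: str, post: Post, audience: Audience, followers: set) -> bool:
    """Check a single rule. Authors always pass their own checks."""
    if viewer == post.author:
        return True
    if audience is Audience.PUBLIC:
        return True
    if audience is Audience.FOLLOWERS:
        return viewer in followers
    if audience is Audience.CUSTOM:
        return viewer in post.allow_list
    return False  # ONLY_ME

def can_see(viewer: str, post: Post, followers: set) -> bool:
    return _in_audience(viewer, post, post.visible_to, followers)

def can_comment(viewer: str, post: Post, followers: set) -> bool:
    # Commenting requires visibility first, then the (possibly stricter) comment rule.
    return can_see(viewer, post, followers) and _in_audience(
        viewer, post, post.comments_from, followers
    )
```

The design point is that the commenting rule can be stricter than the visibility rule, so a post can be widely readable while only trusted people may reply.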

Of course, different users have wildly different experiences on social media — there may not be a single modification platforms can make to address all users’ concerns. Creating safer social media sites will require “understanding how different communities want to be respected,” Ms. Friedman argued. “It’s not one-size-fits-all.” She mentioned homeless people and people who have experienced domestic violence as two of the many groups that might have specific concerns about interacting online.

And whether social media companies are interested in taking action to create and enforce new norms is an open question. Ms. Losse said owners of social-media platforms might not yet understand how important it is to curb threats and harassment — these owners “are usually men, and usually in the least vulnerable position, because they have power, they have the money to protect themselves from any kind of physical threat.” Still, she said, “they’ve got all these users who are being abused and harassed, and if it gets bad enough, they’ll leave.”

More diversity in management might help matters, said Ms. Jemisin. She cited Twitter, which she said made no distinction in its reporting system between racist comments and expressions of anger over those comments. “The problem is that the people who are running Twitter are not diverse enough to think outside their own box,” she said, and they’d do well to hire people with different perspectives who can see what they may not be able to. “You can’t ask people to know more than they know. You can ask people to surround themselves with other folks who know different things.”

Of Twitter’s process for evaluating reports, a company spokesperson told Op-Talk in an email, “When content is reported to us that violates our Rules, which include a ban on targeted abuse, we suspend those accounts. We evaluate and refine our policies based on input from users, while working with outside organizations to ensure that we have industry best practices in place.”

Mr. Suler fears companies may not have a financial incentive to make their social networks more congenial. “They want to encourage as much interaction as possible,” he told Op-Talk. “They want to see as many ads fly by, as many eyeballs as possible. So they’re always pushing us in the direction of contacting more people, communicating with more people.” He sees an “intrinsic conflict between what the company needs to make money and what the community needs to have decent relationships with people.”

Ms. Friedman is more optimistic. “When people can choose,” she said, “they will choose environments where they feel safer, where they feel respected, where they feel better about their information being there. And I think the companies that do that and offer that are going to find people attracted to those places.”