Marjory Stoneman Douglas high school student Emma Gonzalez speaks at a rally for gun control. Photograph: Rhona Wise/AFP/Getty Images

How can we regulate our savage market for instant news?

Emily Bell

Without consensus on how to cover events such as the Florida massacre, publishers are stuck

Bullet holes in a computer screen, filmed by a cowering high school student sheltering in a classroom where a gunman was on the loose: just one of the images in the visual market created for those watching the news of the shooting at the Marjory Stoneman Douglas high school in Florida. A map on the social messaging app Snapchat displayed pictures and videos located at the school as the shooting took place. Grabbed and recycled from mobile phone screens, they circulated through television channels, radio and websites almost instantaneously.

Much of what we know about the shooting, both in terms of how the events unfolded and what led up to them, was learned through social media channels. The perpetrator, Nikolas Cruz, a 19-year-old former student of the school, had seemingly been flagging his disturbing interest in mass killing for some time through Instagram posts and even a comment on a YouTube video about becoming “a professional high school shooter”. Last week the FBI admitted it had failed to investigate a tipoff in January. With each iteration of gun crime in the US, there is an accompanying debate about the role played by social and traditional media in amplifying or controlling the narratives that seem to feed an escalating cycle of violence.

Within 24 hours of the Florida shooting another story broke that centred on the use of social media as a platform to spread chaos and undermine security. The US Justice Department indicted 13 Russians who it said were seeking to defraud the American public by interfering in the 2016 presidential election. The use of Facebook and its photo-sharing site Instagram was central to their campaign. Facebook is mentioned more than 30 times in the 37-page indictment.

The DoJ document and the evidence from the Florida shooting highlight the same dilemma: there are no readily agreed, or perhaps even possible, standards for how we want to control the information that shapes society.

The Russian interference engine, the Internet Research Agency, is a sophisticated and well-run organisation that stokes division and fear through cleverly crafted campaigns, social interactions and a blend of advertising and “organic” material which can flow freely into anyone’s feed. Its tactics are exactly the same as those used by alt-right groups in the wake of the Florida shooting to amplify untrue stories about the shooter’s political affiliations, with similar effect.

Striking examples of how a sprawling information infrastructure creates effects with real consequences come at a time when the platform companies are debating how they can moderate the trillions of pieces of content that their users initiate every day. It is not just a problem for social platforms either. News and media outlets face a similar challenge in understanding how they can – or should – report a world where we are inching towards both the possibility of complete surveillance and the threat of continual information manipulation.

At the beginning of the month, a high-level group of content policy employees from social media companies, along with lawyers, academics and others, attended a conference on large-scale moderation techniques at Silicon Valley’s Santa Clara University. Facebook’s head of policy, Monika Bickert, laid out a detailed picture of how the 7,500 content moderators at the company work, saying there was a long way to go before machine learning and artificial intelligence could take on the task at the scale humans currently handle it. Describing the rule-setting at Facebook as “mini legislative sessions”, Bickert was not the only technology company employee who likened what the companies do to sitting as a governance structure over speech.

But the terrain these platforms – and all news publishers – inhabit at the moment lacks the characteristics of a governable territory. It is not at all clear that we have a consensus in society about what kind of media standards we want, or whether we can create them separately from the existing commercial models of communications companies.

Technology firms are piling on content moderators, or members of “trust and safety” teams as they are sometimes called – Google alone employs 10,000 – but this is happening despite a lack of agreement about what kind of societal norms we want for this new world of information. Snapchat has decided that we do want to see children terrified and cowering in their classrooms, as have CNN, Fox and every other news outlet that used the same images. As trolls and bots crowded the online conversations and fabricated reporter accounts covered the event, Twitter said it did not “want to be the arbiter of truth”, when in fact the rest of us wish it would at least try to have some aspirations of that sort.

It is no longer a debate just about graphic images, moments of death and personal privacy, but also about whether different contexts demand different standards. It is a question of whether newsrooms now have to spend as much time explaining the false mechanics of stories as they do reporting the facts as they happen. And it is a question of whether the platforms are even capable of managing what has been asked of them.

Facebook finds it relatively difficult to decide which incidents deserve to be included in its “mark myself safe” feature, which grew out of large-scale terrorist attacks, including those in Paris and Brussels, over the past couple of years. Applying a different set of moderation or editorial standards to each sizeable event that might precipitate danger is an impossibility, yet this is often what the culture demands. Sometimes we need to see or experience reality to understand what has happened, but not always. News organisations make those decisions one way or another every day, in large and small ways, and often get them wrong; each set of decisions differs according to publication, brand, market and story.

In summarising the arguments around large-scale content moderation, Irina Raicu, the director of the internet ethics programme at the Markkula Center at Santa Clara University, raises the most uncomfortable prospect of all.

“To do this well, consistently, at the scale at which some of these platforms operate, is a gargantuan task. Is it outright impossible? And, if so, should we accept the idea of ungovernable channels that allow for the instantaneous distribution and amplification of information around the world?”

If we do accept this as the natural state of being, then it consigns us further to a world where we abandon standards of a civil society to a savage and sometimes dangerous market.
