T-shirts on sale from a street vendor bear the message ‘Hands up, don’t shoot’, in reference to the shooting of 18-year-old Michael Brown by a police officer in Ferguson, Missouri, on 9 August 2014. Photograph: James Cooper/Demotix/Corbis

When algorithms rule our news, should we be worried or relieved?


Facebook and Google use systems to curate what appears on our screens – but one sociologist calls this ‘algorithmic censorship’

The killing of an unarmed teenager in Ferguson, Missouri, led to an explosion of protest, both on- and offline. On the streets, campaigners were corralled by police into “First Amendment areas”, while on the internet, a similar divide grew up more organically.

Not long after the first protests, people following the #ferguson hashtag noticed a sharp divide in what was being broadcast on Twitter and what made it on to their Facebook feeds. The former was bustling with news, commentary and anger about the brutal response from the police, while the latter was strangely quiet. If any one story was dominant on Facebook, it was the Ice Bucket Challenge.

To a certain extent, that difference is down to the different ways each site handles “friendship”. Facebook’s symmetrical model, where a friendship exists only through mutual consent, means that networks stay limited in scope, largely to people you’ve met in real life. In contrast, Twitter’s asymmetric model (you can follow the comments of people who don’t follow you) lets relationships sprawl across the world in just a few hops.
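To make the distinction concrete, here is a minimal sketch in Python of the two models – the names and structures are illustrative, not either platform’s actual data model:

```python
# Illustrative sketch of symmetric vs asymmetric relationship models.
# Hypothetical structures, not either platform's real schema.

class SymmetricNetwork:
    """Facebook-style: a friendship exists only if both sides consent."""
    def __init__(self):
        self.friends = {}  # user -> set of confirmed friends

    def add_friendship(self, a, b):
        # Both edges are created together: the relationship is mutual.
        self.friends.setdefault(a, set()).add(b)
        self.friends.setdefault(b, set()).add(a)


class AsymmetricNetwork:
    """Twitter-style: following does not require being followed back."""
    def __init__(self):
        self.following = {}  # user -> set of accounts they follow

    def follow(self, follower, followee):
        # Only one directed edge is created; no consent needed.
        self.following.setdefault(follower, set()).add(followee)
```

In the first model, every edge requires two people who have agreed to connect; in the second, a single directed edge lets a stranger’s posts reach you – which is why news can sprawl across the network in just a few hops.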

But there’s another difference which is germane: Twitter displays everything tweeted by everyone you follow, while Facebook serves a “curated” news feed, showing you the posts it thinks you’ll like most and hiding the rest.
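The contrast can be sketched as two feed-building strategies; the scoring function here is a hypothetical stand-in for Facebook’s undisclosed ranking model:

```python
# Sketch only: a chronological "firehose" feed vs a curated one.
# predicted_engagement() stands in for an undisclosed ranking model.

def firehose_feed(posts):
    """Twitter-style (at the time): show everything, newest first."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def curated_feed(posts, predicted_engagement, limit=50):
    """Facebook-style: rank by predicted interest and hide the rest."""
    ranked = sorted(posts, key=predicted_engagement, reverse=True)
    return ranked[:limit]  # posts below the cutoff are never shown
```

Everything below the cutoff simply never appears – and because the user never sees what was removed, the filtering itself is invisible.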

Sometimes, this curation works flawlessly, hiding boring posts about a distant relative’s lunch plans while prominently showing the news that a close friend has got engaged. But when it reaches into the realm of politics, sociologist Zeynep Tufekci calls it “algorithmic censorship”.

As Tufekci points out, Twitter has had a similar problem in the past with the way its trending topics are decided: the algorithm rewards sudden spikes in volume rather than sustained conversation. Famously, #OccupyWallStreet never once trended in the US, and #Ferguson only briefly hit the top ten nationally. But in the intervening two years, the response that “it’s not being deliberately censored, it’s just how the algorithms work” has stopped being acceptable.
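A toy version of that spike-favouring logic – purely illustrative, not Twitter’s real algorithm – shows why a long-running protest tag might never trend:

```python
# Toy illustration: trends reward acceleration, not sheer volume.

def trending_score(tweets_this_hour, tweets_last_hour):
    # A tag tweeted heavily but steadily scores near 1; a tag whose
    # volume suddenly jumps scores high, even at lower absolute volume.
    return tweets_this_hour / (tweets_last_hour + 1)

# A sustained protest hashtag: huge volume, flat trajectory.
print(trending_score(50_000, 49_000))   # ~1.02 - never "trends"
# A novelty meme: modest volume, sharp spike.
print(trending_score(5_000, 100))       # ~49.5 - trends instantly
```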

Now, when we face algorithmic censorship, we have started to ask why. What do the algorithms change? Why do the algorithms choose what they do? And how does that affect our culture?

Of course, many users may not even know those questions are there to be asked. In July, Facebook revealed it had been experimenting on users, tweaking the content of their news feeds to see whether it could nudge them into posting more positively or negatively. The reaction to that revelation was swift, and negative.

But it’s becoming increasingly clear that for many users, the problem wasn’t that Facebook was “experimenting”, or that it was affecting their emotions, but that it was manipulating their news feed at all.

What people who work in and around technology treat as a fact of life – the algorithmic filtering of the news feed – many others don’t know about at all. When the filtering is made visible, reactions are often vehemently negative. An MIT study lifted the veil on Facebook’s curation, and two-thirds of the participants were surprised by what they saw.

“We showed users how many posts they saw from each user – seeing which people appear commonly on their feed and whose posts are hidden. Often, people became very upset when they discovered posts from family members and loved ones were hidden.”

Google alters your search results; Amazon changes the products it shows you; the UK online supermarket Ocado even alters the prices it charges you, lowering shipping charges if you’re a new customer.

The myth is that these hyper-competent algorithms are improving life for all of us – that Amazon takes the hassle out of shopping, and Google puts what you’re looking for on the first page.

But actually, they’re dumb. Buy a tarp on Amazon, and it advertises other tarps. Watch a Taylor Swift video on YouTube, and the advert is for a Taylor Swift single.

Even when they get it right, we have to remember that “right” might not mean what we want it to mean. Some of Facebook’s algorithmic tweaks have goals that are clear, articulated and inoffensive: the site has managed to increase the number of organ donors; it has boosted turnout at elections in the US, India and Brazil; and, obviously, it has made a few billion dollars from advertising.

But when it comes to its news feed algorithm, the goals are less clear. It wants, ultimately, to keep you on the site – not just in the short term, but in the long term too. So it wants the “best” content. But how it measures “best” has huge implications: does it want posts that get clicked on? That get read? Or even posts that make the user come back feeling happier – a goal that would explain the emotion experiment.
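Each definition of “best” corresponds to a different objective the ranking model could optimise. A purely illustrative sketch – the signal names are assumptions, not Facebook’s actual metrics:

```python
# Entirely illustrative: three objectives a feed could rank by.
# These signal names are assumptions, not Facebook's real metrics.

def score_by_clicks(post):
    """Optimising for clicks tends to surface clickbait."""
    return post["click_through_rate"]

def score_by_reading(post):
    """Optimising for attention favours longer, engrossing posts."""
    return post["median_seconds_on_screen"]

def score_by_happy_returns(post):
    """A proxy for 'comes back feeling happier': how strongly seeing
    this post correlates with the user returning in the next week."""
    return post["next_week_return_lift"]
```

A feed tuned to the first objective surfaces clickbait; one tuned to the third might quietly bury distressing news such as Ferguson – which is why the choice of objective matters so much.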

The lack of transparency around this isn’t just worrying for media types: it should concern everyone. In May, a good study with a bad press release argued that Google could rig elections, if it so decided, by altering search results to be more or less favourable to particular candidates. Facebook could do the same, and it’s doubtful anyone would even notice: a 1% increase in articles favourable to one party might have a strong effect nationally, yet be indistinguishable from normality in any individual user’s feed.
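A back-of-envelope simulation makes the point; all the numbers here are invented for illustration:

```python
# Back-of-envelope illustration with invented numbers: a one-point
# shift in favourable coverage is invisible per user, huge in aggregate.
import random

def favourable_posts_seen(share, feed_size=100):
    """How many favourable political posts land in one user's feed."""
    return sum(random.random() < share for _ in range(feed_size))

random.seed(0)
baseline, boosted = 0.50, 0.51  # a 1-percentage-point nudge

# One user's feed: roughly 50 vs 51 favourable posts out of 100 -
# well inside the natural variation (standard deviation ~5).
print(favourable_posts_seen(baseline), favourable_posts_seen(boosted))

# Across 1m users seeing 100 political posts each, the same nudge
# adds about a million extra favourable impressions.
users, feed_size = 1_000_000, 100
print(f"{users * feed_size * (boosted - baseline):,.0f} extra impressions")
```

No individual could spot the difference in their own feed, but the aggregate shift runs to a million extra favourable impressions.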

Of course, to do so would be unethical. And we all know how Silicon Valley feels about ethicists: it already knows what’s ethical and what’s not, so why hire someone “to wring his hands for $100,000 a year”, as OKCupid’s Christian Rudder put it. Just trust them. What could go wrong?
