How Much Should You Know About the Way Facebook Works?

An author of the infamous emotional-contagion study says it might not be reasonable to expect informed consent for social experiments online.

Every semester, Cornell professor Jeff Hancock asks his students to complete an experiment. First, he has them all Google the same search term. Then, he asks each student to turn to the right or left and compare the results on their screens.

What his students inevitably find, and what stuns many of them, he says, is how feeding Google an identical phrase can yield wildly different results. "They think your Google search is an objective window into the world," Hancock told me. "And they don't have a sense that they're algorithmically curated."

Daily life is filthy with algorithms. Online, personalized ads and filtered streams of information change based on where we go and what we click. Offline, we receive coupons tied to our past shopping behaviors and credit-card offers based on our financial histories. The invisible systems built around our mined personal information and tracked activities are everywhere, generating data doppelgangers that can look frighteningly like us or diverge hilariously from the versions of ourselves we think we know.

But the traceless infrastructure that hides and surfaces the information all around us is only as scary as any major shift in technology that came before, argues Hancock, who found himself at the center of a research controversy earlier this summer. Hancock co-authored the now-infamous paper on a secret Facebook experiment he and other researchers constructed to study emotional contagion. The work involved changing what users saw in their News Feeds as a way to manipulate their emotional states.

When news of the study spread in June, people were outraged. (Hancock says he was still receiving physical threats in response to the study as recently as last week.) Hancock now says he's prioritizing conversations—with academics, policy makers, and others—to move forward so that people "don't feel wronged or upset" by this kind of work. It's a process he expects will take years. The emails he gets these days are still angry, but the stream of them has slowed.

I first asked Hancock to talk about the experiment back in June, but he wanted to wait until some of the media attention waned. We spoke for the first time this week.

* * *

"You have this algorithm which is a weird thing that people don't really understand," Hancock told me. "And we haven't discussed it as a society very much."

Hancock is still reluctant to draw too many conclusions about what he learned in the aftermath of the Facebook study. He declined to talk on-record about what he might do differently next time, or to detail advice he'd have for someone conducting a similar experiment.

One of Hancock's main areas of research has to do with "deception and its detection," according to his university website, a detail that people have asked him about, he says. "'You study deception and obviously you were super deceptive in this study'—That has come up in a few emails," he said.

"There is a trust issue around new technologies," he continued, "It goes back to Socrates and his distrust of the alphabet, [the idea that] writing would lead to us to become mindless ... It's the same fear, I think. 'Because I can't see you, you're going to manipulate me, you're going to deceive me.' There could be a connection there where there's a larger trust issue around technology." For now, Hancock says, he just wants to better grasp the way that people are thinking about algorithms. Understanding expectations will help him and others figure out the ethical ways to tinker with the streams of information that reach them.

"For me, since the Facebook study controversy and the reaction, we've just started asking in the lab, 'Well, what are people's mental models for how a News Feed is created? How a Google search list is created? How iTunes rankings for songs work?'" he said. "When you step back, it's almost like every large company that is consumer-facing has algorithms working to present its data or products to the users. It's a huge thing."

So huge, and so much a part of the way the Internet works, Hancock suggests, that we may have passed the point where people can reasonably expect to be asked for consent before a corporation messes with the algorithmic filters that shape the information they see online.

That issue of consent was one of the biggest questions to emerge from the Facebook study. Is it enough to notify users only in a terms-of-service agreement that they might be subject to a company's experimental whims? (It appears the Facebook study may not have even done that.)

"It's the trickiest question in some ways because informed consent is a really important principle, the bedrock of a lot of social science, but it can be waived when the intervention, the test, is minimally risky and below a threshold risk," Hancock told me. "Informed consent isn't the be-all end-all of how to do social science at scale, especially when corporations are involved."

Instead, Hancock suggests, perhaps it makes more sense to debrief users after an experiment has taken place. Such notice might link to more information about the study, and offer contact information for researchers or an ombudsman rather than "bombarding people upfront with requests for consent."

But beyond that, Hancock insists, opting out ahead of time simply may not be an option. When algorithms are everywhere, and when companies are constantly refining them in different ways and for various reasons, what would obtaining consent even look like?

"If you think about Google search and you say, 'I don't want to be experimented on,' then the question is, well, what does that mean?" Hancock said. "Google search is presumably being tested all the time ... They're constantly needing to tweak their algorithm. If I say, 'I want to opt out of that,' does that put me back to Google search 2004? Once you say 'I'm opting out,' you're stuck with that version of that algorithm.

"And that's the thing," he continued. "Once you start thinking about it, how does an opt-out situation even work? These companies are innovating on a weekly basis, maybe faster, and how do we allow people to opt out? Do we allow an inferior product to go out?"

* * *

Of course, there is a clear difference between the mysterious mechanics of Google's search algorithm and the deliberate filtering of a News Feed to manipulate emotions. How much does algorithmic intent matter?

"No, not all algorithms are equal," said Nicholas Diakopoulos, a computational journalism fellow at the Tow Center for Digital Journalism. Diakopoulos says he didn't have a problem with the Facebook study—the outrage was over the top, he says—but he sees "some things they could have done better."

"If we can agree that we're beyond the state where we can expect people to have informed consent for every little A/B test, I would like to see people have debriefing ... so that they know they were manipulated," Diakopoulos told me.

There is little consensus about this in the scientific community. Last month, Kate Crawford—a principal researcher at Microsoft—argued in these pages that users should be able to opt in to experimental groups. "It is a failure of imagination and methodology to claim that it is necessary to experiment on millions of people without their consent in order to produce good data science," Crawford wrote.

And arguments about the Facebook study have ranged far beyond questions of consent. Some of the research's defenders have said that insisting social publishers ask permission before showing you different content betrays a misunderstanding of how the Internet actually works. Everyone knows that filters are imposed on information streams online, the argument goes. Indeed, filters are part of the Internet. To alter them is to alter the web.

Writer Tim Carmody pushed back against these ideas in a blog post earlier this month:

[Arguments like this are] all too quick to accept that users of [Facebook and OK Cupid] are readers who've agreed to let these sites show them things. They don't recognize or respect that the users are also the ones who've made almost everything that those sites show. They only treat you as a customer, never a client. […] Ultimately, [they] ought to be ashamed to treat people and the things they make this way.

It's not A/B testing. It's just being an asshole.

If people can't agree on what constitutes harmless A/B testing versus a serious breach of basic human rights, how do we move forward?

Determining a workable ethical framework is complex enough in a single industry, let alone across several disciplines interacting within algorithmic systems that are hidden from almost everyone who encounters them.

"I just think it's fascinating, the conflation of all these different areas of society," Diakopoulos said. "Whether it's business, or data journalism, they're colliding with the scientific method and quantification. What are the boundaries? When do we have to respect one set of ethics or set of expectations over another?"
