Humans Are Tech's Next Big Thing—And That Could Be Risky

People, it turns out, might be the best at figuring out ways to capture their fellow humans' attention. But people are also far from perfect.

Internet companies make billions of dollars by capturing one of the world's most precious commodities: your attention. They need to amuse, amaze, entice, and intrigue you---and millions of users like you---to stay afloat and profit.

But figuring out what you want to read, watch, and see is harder than it looks. At Facebook, serving your wants and needs comes down to algorithms---click on something, and you'll see more of that thing, and things like it. At Twitter, your desires are met via your choices---follow certain people, and you'll see updates from them. But when it comes to channeling the most attention-grabbing content, it turns out that automation, or users left to their own devices, might not be enough.

In recent weeks, several tech companies have made it clear that they plan to use real, live humans to curate the news, entertainment, and content they'll deliver via their platforms. People, it turns out, might be the best at capturing the attention of fellow members of the species. And while Flipboard, Beats Music, and Facebook's Paper app have used human curators to help showcase the best content in the past, it seems a few major tech giants are starting to remember the value of humans, too.

The job descriptions posted recently for these new positions sound strangely familiar to those of us who work in traditional newsrooms. Apple is hiring editors "to help identify and deliver the best in breaking national, global, and local news" for its upcoming Apple News app. Twitter reportedly will have teams of editors around the world selecting tweets, photos, and videos for its upcoming curated events-based feeds. And Snapchat is hiring "content analysts" to evaluate submitted snaps for its live stories.

These companies also seem to be poaching people with serious backgrounds in traditional journalism---like Snapchat's recent hire of top CNN political reporter Peter Hamby to lead its news coverage, and the mysterious move of The New Yorker's creative director Wyatt Mitchell to Apple.

All of which sounds a lot like what news organizations have always done---and the kinds of skills they've long demanded. After years of undermining the traditional business model for journalism, it seems tech companies want to get in on the act. The question is whether they know what they're getting into.

“The amazing thing for all of us that come out of the journalism world is that curation is just another word for editing,” says Ken Doctor, a longtime analyst of the business of news. “Surprise, surprise! Human beings like editing.”

Platform Pitfalls

Tech companies such as Google and Facebook have long eschewed direct human judgment in the content they serve up. Instead, the biggest online companies have sought to position themselves as platforms---value-neutral venues for other people's content rather than their own.

“Eric Schmidt used to say that all the time, ‘Google is not an editorial company. We don't want to be in the business of deciding what's news,’” says Jay Rosen, a journalism professor at New York University, of the search giant's former CEO.

If users aren't happy with what they see on Google, it's not, say, Eric Schmidt's fault, but the search engine's---computer error, not human. At least, that's the kind of plausible deniability companies have sought. Technology offers a shield against the direct responsibility that accompanies human editorial judgment.

"You see the same thing now with Facebook saying the News Feed is not an editorial product, it's an algorithm, which I've written about myself," Rosen adds. "They say, ‘We don't control News Feed, users control News Feed.' But it's a very dubious thing to be saying."

These platforms, after all, are built by humans---and algorithms follow the rules set by those humans. What's more, as sophisticated as these algorithms may be, machines often have a tough time judging context and social norms. Take, for example, the jarring juxtapositions that showed up in Facebook's automated "Year in Review" videos, or the racist ads that once appeared in Google search results when users searched for names more often associated with African Americans.

So while algorithms have allowed online companies to scale massively---serving personalized Facebook feeds based on your activity or accurate search results based on keywords---their inhumanness comes at the cost of transparency. And the results can be anything from clumsy to offensive.

"These systems are black boxes," says Christian Sandvig, a professor at University of Michigan's School of Information. "If you see a story in the newspaper, you have a reasonable idea where it came from. But if you see something in your News Feed, there are all kinds of reasonable scenarios for how it ended up there."

The Learning Curve

Adding humans to the mix could solve some of those problems. Humans, after all, are great at placing ideas in larger contexts, understanding social norms, and selecting stories that will resonate with others.

But while we might not fault an algorithm for showing us more of our friend's baby photos---how could it know how annoyed we'd get?---we expect humans to get it. And the act of "getting it" is also where bias and subjectivity come into play. If Apple, Twitter, and Snapchat plan to depend on humans, even in part, to serve up content, they'll also have to be prepared for the problems that come with that bias and subjectivity---as well as the higher standards we may expect from humans than from machines.

“Companies that are not journalism companies have a big learning curve if they want to assume the role or pretend to be journalistic companies,” Doctor says.

For instance, newsrooms have editorial standards to ensure that fair, accurate, and complete stories are told. Will these tech companies, which are dependent on both advertisers and third-party publishers, establish ethics policies to keep editorial judgments distinct from business decisions? The questions that arise naturally in a newsroom will need to be answered once tech companies abandon the platform pretense and start acting as real publishers.

Dear Machine, Meet Man

While Twitter, Apple, and Snapchat would not share their specific plans for editorial policies with WIRED, a few tech companies already use editors to help serve up the best content for users. Since 2011, LinkedIn has employed humans to help determine what content should be highlighted on its platform. Editors today are tasked with selecting the best content from other sources, encouraging LinkedIn users to write posts, and sharing the right stories with the right audience.

What's made all the difference for LinkedIn is augmenting human editorial judgment with the long-tail reach of the company's massive data trove, says Dan Roth, executive editor of LinkedIn, whose resume is packed with journalism experience (including a few years as a WIRED staffer).

"From the very beginning, it was designed to be both editor and machine driven," Roth says. "If we use the best of algorithms and the best of editors, we felt we’d end up with a winning composition." To determine what stories should be shared with certain audiences, for example, Roth explains that sometimes it's up to an editor's gut. "But most of the time there's data too," he says. For example, editors may make a call on the quality of a post, but they'll also look at likes, comments, shares, or early indications of interest to decide whether to share it with a broader audience.

And while editors are needed to decide what counts as urgent breaking news (the machines just don't get it), Roth says LinkedIn's algorithm will also help surface stories that appeal to certain niche audiences---say engineers in Estonia---to whom a small team of editors wouldn't easily be able to cater.

The Whole Truth

LinkedIn still considers itself a platform, Roth says, so it's up to publishers---be they established ones or users---to verify information in posts, not LinkedIn editors. But a startup called Storyful is working to commodify veracity in the age of social media. The company uses proprietary technology alongside in-house journalists to find, verify, and share tweets, photos, and videos from social media---a service that Google-owned YouTube recently asked Storyful to perform on its behalf for a new YouTube Newswire.

"Algorithms are great," says Aine Kerr, managing editor of Storyful, "but you still need editorial judgment."

Storyful journalists are tasked, for example, with tracing a video back to its original source. The company reviews videos frame by frame and corroborates the content. While confirming some of this information can take no more than a few minutes, Kerr says it can also take hours.

Among the questions yet to be answered about the planned editorial teams at Twitter, Snapchat, and Apple News is whether they'll take that kind of time to do true fact-checking in a breaking news environment. Will they be prepared to call out or quash hoaxes, propaganda, and campaigns of misinformation? It's possible that a combination of smart humans plus good data may make them as good as any newsroom at doing just that. But knowing that humans are behind the news could also saddle tech companies with more criticism and blame. As smart as some algorithms are, they're still just the instruments of their creators. If there's a person behind the screen, we know we can talk back.