Scientists Are Wrong All the Time, and That's Fantastic

On February 28, 1998, the eminent medical journal The Lancet published an observational study of 12 children: Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. It might not sound sexy, but once the media read beyond the title, into the study's descriptions of how those nasty-sounding symptoms appeared just after the kids got vaccinated, the impact was clear: The measles-mumps-rubella vaccine can cause autism.

This was the famous study by Andrew Wakefield, the one that many credit with launching the current hyper-virulent form of anti-vaccination sentiment. Wakefield is maybe the most prominent modern scientist who got it *wrong*: majorly wrong, dangerously wrong, barred-from-medical-practice wrong.

But scientists are wrong all the time, in far more innocuous ways. And that’s OK. In fact, it’s great.

When a researcher gets proved wrong, that means the scientific method is working. Scientists make progress by re-doing each other's experiments, replicating them to see if they can get the same result. More often than not, they can't. “Failure to reproduce is a good thing,” says Ivan Oransky, co-founder of Retraction Watch. “It happens a lot more than we know about.” That could be because the research was outright fraudulent, like Wakefield's. But there are plenty of other ways to get a bum result, as the Public Library of Science's new collection of negative results, launched this week, will highlight in excruciating detail.

You might have a particularly loosey-goosey postdoc doing your pipetting. You might have picked a weird patient population that shows a one-time spike in drug efficacy. Or you might have just gotten a weird statistical fluke. No matter how an experiment got screwed up, “negative results can be extremely exciting and useful—sometimes even more useful than positive results,” says John Ioannidis, a biologist at Stanford who published a now-famous paper suggesting that most scientific studies are wrong.
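To make the “statistical fluke” point concrete, here's a minimal back-of-the-envelope sketch in the spirit of Ioannidis's positive-predictive-value argument. The numbers below are our hypothetical assumptions, not figures from his paper: when only a small fraction of tested hypotheses are true and studies are underpowered, a large share of “significant” results are flukes.

```python
# Rough sketch of the arithmetic behind "most studies are wrong"
# (hypothetical numbers, chosen for illustration only).

alpha = 0.05   # false-positive rate per test (the usual p < 0.05 bar)
power = 0.50   # assumed chance a study detects a genuinely real effect
prior = 0.10   # assumed fraction of tested hypotheses that are true

true_hits = power * prior            # real effects correctly flagged
false_alarms = alpha * (1 - prior)   # flukes that still clear p < 0.05

ppv = true_hits / (true_hits + false_alarms)
print(f"Share of 'significant' findings that are real: {ppv:.0%}")
# -> about 53% under these assumptions; the rest are statistical flukes
```

Lower the power or the prior and the fluke share climbs, which is why a single flashy positive result deserves less trust than a replication, and why the negative result that knocks it down is doing real work.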

The problem with science isn't that scientists can be wrong: It's that when they're *proven* wrong, it's way too hard for people to find out.

Negative results, like the one that definitively refuted Wakefield’s paper, don’t make the news. Fun game: Bet you can't name the lead author of that paper. (It's okay, neither could we. But keep reading to find out!) It's way easier for journalists to write a splashy headline about a provocative new discovery (guilty) than a glum dismissal of yet another hypothesis, and scientific journals play into that bias all the time as they pick studies to publish.

"All of the incentives in science are aligned against publishing negative results or failures to replicate,” says Oransky. Scientists feel pressure to produce exciting results because that’s what big-name journals want—it doesn't look great for the covers of Science and Nature to scream "Whoops, we were wrong!"—and scientists desperately need those high-profile publications to secure funding and tenure. “People are forced to claim significance, or something new, extravagant, unusual, and positive,” says Ioannidis.

Plus, scientists don’t like to step on each other’s toes. “They feel a lot of pressure not to contradict each other,” says Elizabeth Iorns, the CEO of Science Exchange. “There’s a lot of evidence that if you do that, it’ll be negative for your career.”

When the politics of scientific publishing prevent negative results from getting out there, science can’t advance, and potentially dangerous errors—whether due to fraud or an honest mistake—go unchecked. Which is why lots of scientific publications, including PLOS, have recently begun to emphasize reproducibility and negative results.

Big-name journals have said they want to make data more transparent and accessible, so scientists can easily repeat analyses. Others, like the Journal of Negative Results in BioMedicine, are devoted to publishing only negative results. *PLOS One*'s collection of negative, null, and inconclusive papers, called The Missing Pieces, is now putting the spotlight on papers that contradict previous findings. PLOS thought, and we agree, it's time to give them the attention they deserve. Negative results, step up:

Vaccines and Autism. Wakefield’s 1998 study reported a possible link between the measles-mumps-rubella vaccine and the onset of autism in children with gastrointestinal problems. More than 20 studies have since ruled out any connection, but they didn’t focus on children with gastrointestinal problems. So in 2008, researchers led by Mady Hornig conducted a case-control study that did. Again, they found no evidence linking the vaccine with autism.

Psychic Ability. In 2011, Daryl Bem, a psychologist at Cornell, conducted nine experiments that seemed to suggest people could be psychic. Extraordinary claims require extraordinary evidence, so researchers replicated one of the experiments three times in 2012. As the newer paper states, “all three replication attempts failed to produce significant effects and thus do not support the existence of psychic ability.” Bummer.

Priming and Performance. In a highly cited study from 2001, John Bargh, a psychologist at Yale, found that people who were exposed to words like “strive” or “attain” did better on a cognitive task. Researchers ran two experiments in 2013 to reproduce the original findings. They could not.

Running Out of Self-Control. Some research has suggested that when you try to exercise self-control, you really are exercising, like flexing a muscle. After a while, you get too tired and can no longer control yourself. But a 2014 study wasn’t able to reproduce this effect at all.

Buddies and Memory. In 2007, Sid Horton, a psychologist at Northwestern University, found that people who associated an object with a specific person were able to name a picture of that object faster when the person was present. But in a valiant display of self-abnegation, Horton tried to reproduce his results in 2014—and failed.