

Do prestigious science journals attract bad science?


A few weeks ago, I explored some of the problems with peer review in academic publishing — noting that even top scientific journals have run seriously flawed papers.

But perhaps that "even" was unwarranted. What if the top journals are more likely to run flawed research compared with other outlets?

Björn Brembs, a neurogenetics researcher at the University of Regensburg in Germany, recently reviewed the research on the reliability of published studies, focusing in particular on which journals ran flawed papers that were later debunked or retracted. Surprisingly, he found that these erroneous papers were more likely to appear in the more prestigious journals.

"If you take all journals and rank them according to prestige," he wrote in an email, "the most prestigious journals publish the least reliable science (at least when you look at the available evidence from experimental fields)."

He attributed this to a combination of factors, which suggests that the pressure to publish in a top-ranked journal might fuel bad science:

1. The 'top' journals explicitly attract "too good to be true" results.

2. The 'top' journals make careers, so people do anything to get published there.

3. The professional editors may be worse than scientists at reviewing the articles.

(It also might be due to the fact that papers in top journals get more scrutiny than elsewhere. More on this below.)

Top journals seem to have a higher rate of retractions, too

There's other evidence of a correlation between flawed papers and journal prestige. In 2011, two American microbiologists, Ferric Fang and Arturo Casadevall, compared retraction rates across journals and found a strong correlation between a journal's impact factor and its rate of retractions. You'll notice that some of the highest-impact journals (the New England Journal of Medicine, Science, and Cell) score highest on the "retraction index," a measure of the rate of retractions:

(Chart: Infection and Immunity, via Retraction Watch)

This chart, according to the authors, "revealed a surprisingly robust correlation between the journal retraction index and its impact factor. Although correlation does not imply causality, this preliminary investigation suggests that the probability that an article published in a higher-impact journal will be retracted is higher than that for an article published in a lower-impact journal."
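To make the "retraction index" concrete: in the 2011 paper it is, roughly, a journal's retractions per 1,000 published articles over a fixed window. Here's a minimal sketch of that calculation and the correlation check in Python — the journal figures are made-up placeholders, not data from the paper:

```python
# Sketch of the "retraction index" (retractions per 1,000 published
# articles over a window) and its correlation with impact factor.
# All journal figures below are hypothetical, for illustration only.
from statistics import correlation  # Pearson's r; Python 3.10+

journals = {
    # name: (articles published, retractions, impact factor)
    "Journal A": (5_000, 10, 30.0),
    "Journal B": (20_000, 12, 10.0),
    "Journal C": (40_000, 8, 3.0),
}

def retraction_index(articles: int, retractions: int) -> float:
    """Retractions per 1,000 published articles."""
    return 1_000 * retractions / articles

indices = [retraction_index(a, r) for a, r, _ in journals.values()]
impacts = [jif for _, _, jif in journals.values()]

# A positive r mirrors the paper's finding: higher impact factor,
# higher retraction index.
print(f"Pearson r = {correlation(impacts, indices):.2f}")
```

A rank correlation such as Spearman's would be a natural alternative here, since impact factors are heavily skewed toward a few elite journals.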

Do top journals publish more flawed papers — or do they just get more scrutiny?

The "post-publication" peer review website PubPeer has other interesting, albeit limited, data on this question. The site is designed as a forum for researchers to critique studies that have just been published. These comments are generally negative and specific (i.e., focused on picking apart flaws in data or the author's chosen methodology). Here, too, top journals were among those that attracted the most comments, according to one of the site's founders, Brandon Stell.

Here's the data on the number of PubPeer comments in the top 10 journals (ranked according to Google Scholar's impact measure):

(Chart: PubPeer comments in the top 10 journals, via Brandon Stell)

So what's going on here? There are many potential explanations. Since PubPeer's data offers only the raw number of comments per journal, not the rate of comments per paper published, it's possible that higher-output journals attract more comments simply because they publish more.
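To see why the distinction between counts and rates matters, here's a toy comparison — the journals and their numbers are invented:

```python
# Why raw comment counts can mislead: normalize by publication volume.
# Journals and figures below are invented, for illustration only.
journals = {
    # name: (papers published, PubPeer comments)
    "Big Journal": (30_000, 300),
    "Small Journal": (1_000, 50),
}

for name, (papers, comments) in journals.items():
    per_thousand = 1_000 * comments / papers
    print(f"{name}: {comments} comments, {per_thousand:.0f} per 1,000 papers")

# Big Journal:   300 comments, 10 per 1,000 papers
# Small Journal:  50 comments, 50 per 1,000 papers
```

By raw count the big journal looks six times worse; per paper published, it looks five times better.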

It's also possible that people pay more attention to and care more about the quality of data published in top journals and are therefore more likely to scrutinize and comment on it. It's the same way a small community newspaper might get away with publishing an error while the New York Times wouldn't.

As for retractions, Ivan Oransky — one of the founders of the site Retraction Watch — thinks the best publications could actually attract fraudulent authors. But, he added, "It's also true those journals are read by the most people because they're cited by the most people. Having more eyeballs on top-ranked journals may be a bigger explanation for the fact there are more retractions there."

The higher rates of retraction could also suggest that top journals are more accountable than others. We know that retractions are rare overall — not all journals bother with them. So perhaps journals like the New England Journal of Medicine are simply more responsive to criticism, and their higher retraction counts are a good thing. (Indeed, some have argued that more retractions are actually a good sign for science.)

But there were some potentially revealing gaps in the data. At PubPeer, the No. 1 journal, Nature, got the most comments. It received twice as many comments as the third-ranked journal, Science, and more than 10 times as many as the medical journal the Lancet. This may reflect PubPeer's pool of commenters: if they skew toward basic science and away from clinical research, medical journals like the Lancet would fly under the radar.

Or maybe it suggests that Nature is an outlier. Nature and Science are comparable journals, after all, so is Science doing something other journals could learn from? 

For now, the data raises more questions than answers. I'd like to know what you think.

Email me at julia.belluz@vox.com.
