Improving reproducibility: What can funders do? Guest post by Dorothy Bishop

We’re pleased to present a guest post from Dorothy Bishop, a researcher who focuses on neurodevelopmental disorders at Oxford University, and is also heavily involved in efforts to improve reproducibility in science, including chairing the steering committee of a recent symposium on the topic organised by the Academy of Medical Sciences. Here, she talks about one of the themes that emerged from that symposium – the crucial role of funders in boosting reproducibility.

Dorothy Bishop. Credit: Robert Taylor

Look at the selection criteria for any major funding agency, and you will find it aims to support research that is “ground-breaking,” “innovative,” “high-risk,” and “at the frontiers of knowledge.”

But are these criteria delivering the best science? Think about the “reproducibility crisis,” familiar to many Retraction Watch readers: Evidence is growing that a high proportion of published research findings are not robust. This is bad news for funders; irreproducible research is a waste of money, and it actually impedes scientific progress by filling the literature with false-positive findings that, once published, never die.

A major source of irreproducibility comes from research that is funded but never reported. As I have noted previously, many researchers have a backlog of unpublished findings. All too often, they sit on a mountain of data that is unpublished simply because it is not the most exciting thing on their desk, and they need to be working on a new project in order to remain competitive. Negative results – e.g. where a promising treatment shows no effect, or an anticipated association between a genotype and phenotype fails to emerge – are likely to end up in the file drawer. By lingering in obscurity, they contribute to publication bias and the consequent distortion of the truth.

In October, the Academy of Medical Sciences (AMS) published a report considering reasons for irreproducibility in biomedical research and ways to overcome them. It was clear that the problem was not down to any one cause, and that a range of solutions needed to be considered — some bottom-up (such as better training of researchers), and some top-down, driven by institutions, publishers and, the focus of this post, funders.

To my mind, the most important thing that funders could do is to treat reproducibility as a key criterion for funding research. Here are some specifics:

  1. Put far greater emphasis on methodology. At least in the UK, space is at a premium in grant proposals and there is often an implicit assumption, especially for awards to senior investigators, that they know how to do research and can be relied upon to get the details of experimental design and analysis correct. Unfortunately, this does not seem to be a safe assumption. Consider, for instance, a recent paper by Macleod et al (2015), analysing over 1,000 reports of animal experiments, which found a woefully low level of reporting of key factors related to bias, i.e. randomisation, blinding, conflict of interest, and power calculations. If funders required applicants to provide information on such aspects of research, this would weed out those exciting-looking studies that are likely to be irreproducible because of methodological weaknesses and biases. (A minimal sketch of the kind of power calculation at issue appears just after this list.)
  2. Do not penalise proposals where reviewers give constructive suggestions for improvement. In a highly competitive field, funding panels are often looking for reasons to reject a proposal. Wise reviewers learn that any criticism can be a death sentence, and so they hold back if they want to see the work funded. The only way reviewer comments get taken on board is if a rejected proposal gets resubmitted in a new round. This is inefficient. It would be far better if the assumption was that all proposals can be improved, and if reviewers were actively encouraged to suggest modifications; where there are good suggestions, funding could be conditional on their implementation. Quite simply, we are missing an opportunity to improve research quality, and hence reproducibility, by deterring reviewer input into study design.
  3. Require registration of detailed protocols for funded studies. Study registration has become standard in the field of clinical trials, after it became evident that too much flexibility in selection of participants, variables, and analytic methods meant that any study could show positive results. Registration allows one to check whether the researcher did what they planned. There may be good reasons for them to change course, but this needs to be transparent. Some funders post abstracts of funded proposals online (e.g. the NIH Report website); this is better than nothing, but the level of detail is often inadequate.
  4. Check whether researchers are doing what they said they would do. Funders are starting to introduce reporting requirements, but it tends to be the smaller funders, such as charities, that are most stringent in this regard. I remember being quite shocked and even resentful a few years ago when asked for an annual report by a charity funding a small project – at the time it seemed like an unnecessary burden. Yet, of course, it is entirely reasonable for a funder to scrutinise whether their funds are being used for the purpose intended. Researchers can face all kinds of unexpected obstacles that prevent them completing a study; they may also have positive reasons for changing what they do in the course of a project in order to chase a promising lead. However, at present, some funders have little defence against the cynical scientist who promises the earth in order to make a proposal sound attractive, only to end up doing something much more limited and methodologically weaker – e.g., with far fewer participants and hence much less likely to be reproducible.
  5. Require open data, materials and analysis scripts. Many funders have policies encouraging researchers to make data available in an archive, but these are seldom enforced, and many researchers are reluctant to make data open. Having data available allows others to check that findings are reproducible, to do alternative analyses, or to run replication studies using the same methods. Computer science has been at the forefront of developing standards for open research, recognising that this is a key component of reproducibility.
  6. Tie future funding to a track record of publishing previously funded work – or at least of depositing the data in an archive. This could help avoid the huge waste of unreported research described above. It may be hard to achieve without making grants of longer duration; one factor limiting the write-up of research is the need to apply for new funding before the current grant has expired.
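
To make the power-calculation point in (1) concrete, here is a minimal, illustrative sketch in Python (using the statsmodels library). It is not taken from the Macleod et al analysis or from any funder’s requirements; the effect size, significance level and power target are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: the kind of a priori power calculation a funder
# might ask applicants to report. All numbers below are assumed for the example.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
n_per_group = power_analysis.solve_power(
    effect_size=0.5,          # assumed standardised difference (Cohen's d)
    alpha=0.05,               # assumed two-sided significance level
    power=0.80,               # assumed chance of detecting the effect if it is real
    alternative='two-sided',
)
print(f"Animals or participants needed per group: {n_per_group:.0f}")  # ~64
```

Reporting even this much (the assumed effect size and the resulting group size) lets reviewers judge whether a proposed study can plausibly detect the effect it sets out to find, which is exactly the kind of methodological detail that point 1 argues should be weighed in funding decisions.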

There is a big downside to all these suggestions: They make life more difficult for hard-pressed scientists, and also run the risk of stifling exploratory research and pursuit of unexpected observations – which are also vital for scientific progress. Like most researchers, I hate bureaucracy and dislike being told how to do my job. I get particularly cross if I think that time that could be spent on the creative, fun parts of science is instead wasted on form-filling. What I have suggested here is at odds with a move towards a more liberal approach adopted by some funders, such as the Howard Hughes Medical Institute, which funds “people, not projects”, with the goal of finding the best scientists, supporting them generously, and trusting them to do outstanding research, without having their creativity confined by too much regulation and scrutiny. It’s a great vision, but hard to square with a drive for greater reproducibility.

Some funders are already taking steps to address reproducibility issues. The CHDI Foundation, a funder focused on Huntington’s Disease, was an early adopter of ideas to enhance reproducibility (Munafo et al, 2013). And in early November, a mental health charity, MQ, was represented at a small consortium that I chaired to discuss ways of implementing recommendations from the AMS report. At that meeting, we had a sense that we were seeing the first signs of a major reform in how research is funded; the challenge now is to introduce measures to enhance reproducibility without stifling creativity.

The symposium was supported by the Biotechnology and Biological Sciences Research Council, the Medical Research Council, the Wellcome Trust, and the Academy of Medical Sciences.


4 thoughts on “Improving reproducibility: What can funders do? Guest post by Dorothy Bishop”

  1. These are generally good suggestions. Regarding #4, NIH/NIMH already requires annual reporting of progress on funded projects. For over 25 years I dutifully sent in these reports as principal investigator on many grants. Problem was, I never got the sense that anybody read them. An administrative or even just a clerical person within NIH would have checked a box for receipt of the report and would have recorded the number of publications, but no scientist applied discriminating judgment to my submitted reports. Moreover, the possibility of bias is large, as is the possibility of mischief, however well intentioned. Better not to micro-manage the progress of a grant but to take the broad view of the investigators’ progress when renewal time comes around.

  2. It would also be nice to mention the potential for independent verification of key experimental results. For example, Science Exchange, in partnership with the Center for Open Science (COS), is replicating the key experimental findings from each of the 50 most impactful cancer biology studies published between 2010 and 2012, thanks to funding from the Laura and John Arnold Foundation.

    This reproducibility service is available to anyone willing to allocate a portion of their funding to ensuring their findings can be independently reproduced, and it should really be considered a compulsory step in any large-scale research initiative. Millions could be saved if inaccurate results were uncovered earlier in the research pipeline, but there is very little incentive on the part of the researcher for this to happen – and, in fact, arguably a substantial disincentive.

    The Reproducibility Initiative is not designed to point fingers. It is simply a basic quality-control mechanism that reduces the amount of precious funding wasted on failed clinical trials.

    1. Independent replication is key. The other ideas here are good, but it is not necessary to directly incentivize them. Researchers will quickly learn to avoid weak methodology sections, p-hacking, etc. when they experience a few non-replications. For example, it will become desirable to preregister the design/analysis so that this is less likely to happen. They will want to share the data because this will help in figuring out what caused the different results. Then the methods can be improved. Progress will be indicated by increasingly precise descriptions of the phenomenon under study. This will then constrain theorizing, which will lead to better ideas for new experiments.

      That said, a non-replication should not be considered a bad thing or black mark. It is normal to require a long series of experiments/observations to really understand what is going on.

  3. Great suggestions! Irreproducibility’s turning out to be the soft underbelly of science, and funders can take definitive steps towards tackling this problem. Journals and publishers too have joined the efforts by agreeing on a common set of guidelines to promote reproducibility. I feel that lack of transparency lies at the heart of this problem. So above everything else, all major stakeholders of science should push for transparency in conducting and disseminating research.
