
Researchers, Metrics and Research Assessment

30th September 2016 | Phill Jones

Summer is over – it’s official. You can tell because the weather has changed, and also because the ALPSP annual awards dinner and conference was last week (or perhaps two weeks ago by the time I finish writing this post). For me, ALPSP kicks off the fall conference season and provides a great opportunity to gauge the mood of the industry after everybody has had a chance to clear their head during the summer break.

This year, two sessions in particular stood out for me. The first, moderated by Isabel Thompson of Oxford University Press, was on the subject of academic engagement and what it means today.

In her introduction, Thompson quoted an anonymous ex-researcher who currently works in publishing as saying that researchers think of publishers as sitting somewhere on a ‘spectrum that ranges from pure evil on one side, to a necessary evil on the other!’ To my surprise, Isabel later confessed that she was quoting me. Admittedly, I can’t remember making that joke, although it does sound like something I’d say (I hasten to add that I don’t personally think publishers are evil at all).

The take home message? From Isabel Thompson

“…he gets the impression that researchers think about publishers as sitting somewhere on a spectrum that ranges from pure evil on one side, to a necessary evil on the other!”

To drive home that point, the opening speaker, Philippa Matthews, an academic medic from Oxford University, summed up many of the complaints that researchers have about the process of publishing. It is telling that most of the complaints she raised were familiar: lengthy review processes, onerous submission requirements, and editors who don’t screen manuscripts properly before review. We’ve heard these complaints before, but it’s worth being reminded of their importance.

It wasn’t all negative: Matthews praised open science platforms like F1000, and she also reported on her own positive experience getting a non-standard output published. In her case, it was a live, interactive database of functional biological data. In that vein, she called for greater flexibility on the part of publishers with respect to non-standard publication types. In the same session, Emma Wilson, Director of Publishing at the Royal Society of Chemistry, gave an excellent presentation describing how RSC communicate and engage with their editorial boards as a way to stay in touch with their community – they also consider young researchers to be a valuable asset.

Another stand-out session was Beyond Article Level Metrics, moderated by Melinda Kenneway. Ben Johnson of HEFCE was the first to speak; he gave a high-level summary of HEFCE’s Metric Tide report, asking to what extent funders should be using metrics to aid, or even replace, qualitative assessment of research outputs. Jennifer Lin of Crossref also spoke in favour of the metricization of research assessment; she pointed out that metrics have the potential to reduce conflict and quantify decision making. Finally, Claire Donovan, who is a Reader in Science and Technology Studies at Brunel, suggested that metrics have the potential to replace narrative in research assessment.

Obviously, this is a complex issue. Digital Science’s consulting group has warned in the past that the overuse of metrics can inhibit proper decision making because people tend to alter behaviour to fit the metrics.

A slide from Ben Johnson on the responsible use of metrics

You’ll be pleased to read that all who spoke at the ALPSP session called for the responsible and appropriate use of metrics. Donovan reminded us of Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure.

Without delving too deep into the debate, what these discussions show is that various stakeholders in scholarly communication, including institutions and funders, are taking the measurement of research outputs increasingly seriously. This is an important signal for publishers because it points to a new area of value that they can provide.

The lifeblood of any publisher is the community that they serve. Publishing has always been about helping researchers communicate and disseminate their work. While that hasn’t changed, the mechanisms have altered drastically since the popularisation of the internet. Today, a new research communication infrastructure is being developed, which, in part, is being used to underpin new research evaluation frameworks. Those frameworks are being employed by funders and institutions alike to make strategic investment decisions – such as who to hire and what to fund.

Projects like ORCID and Crossref mark an important turning point. As decision makers in academia increasingly move towards metrics, or at least automatic tracking of research outputs and impact, through mechanisms like current research information systems (CRIS), publishers will find that it’s increasingly important to participate. By providing metadata and coordinating with institutions and funders, publishers will be able to help make sure that the communities they serve get the credit (and continued financial support) they deserve for the work that they do.
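To make that a little more concrete: the metadata publishers deposit with Crossref is exactly what downstream evaluation systems pick up. Here’s a minimal sketch in Python against the public Crossref REST API (api.crossref.org) showing how deposited author records, including ORCID iDs where the publisher supplied them, can be retrieved for a given DOI. The DOI in the usage example is a hypothetical placeholder, not a real reference.

```python
# Minimal sketch: pull author/ORCID metadata for a DOI from the
# public Crossref REST API. Assumes the `requests` package is installed.
import requests

def fetch_author_orcids(doi: str) -> list[dict]:
    """Return a list of {name, orcid} dicts for the authors of a DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    # Crossref wraps the record in a "message" envelope; "author" is
    # only present if the publisher deposited author metadata.
    authors = resp.json()["message"].get("author", [])
    return [
        {
            "name": f"{a.get('given', '')} {a.get('family', '')}".strip(),
            # ORCID appears only when the publisher included it in the deposit.
            "orcid": a.get("ORCID"),
        }
        for a in authors
    ]

if __name__ == "__main__":
    # Hypothetical DOI for illustration; substitute any real one.
    for author in fetch_author_orcids("10.1000/example.doi"):
        print(author)
```

The point of the sketch is simply that richer deposits (ORCID iDs, funder identifiers, and so on) flow straight through to whoever queries this infrastructure, which is why participation matters.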

You can see a video of your humble narrator (jump to 36 mins) saying something similar during a panel discussion at ALPSP about the future of digital publishing!