
UCL Department of Science, Technology, Engineering and Public Policy


What's the Impact of the Research Impact Agenda?

27 May 2014

Article by Dr Kathryn Oliver


At the recent Circling the Square conference, the “Pathways to Impact” agenda came up several times. Many academics regard the ‘impact agenda’ as a form of performance management – and it is certainly true that academics are subject to more performance management than ever before. Until relatively recently, academic contracts didn’t even specify annual leave allowances, let alone quarterly career objectives to meet. Many senior staff I know look back on that era with fondness, remembering it as a time of mutual trust and respect between academics and their employers.

How times have changed. In today’s “publish or perish” culture, academics are expected to write more, win more grants, engage more – and are judged against these outputs. This has made the scientific community very uneasy, as summed up in this passage (courtesy of David Colquhoun):

“Modern science, particularly biomedicine, is being damaged by attempts to measure the quantity and quality of research. Scientists are ranked according to these measures, a ranking that impacts on funding of grants, competition for posts and promotion. The measures seemed, at first, rather harmless, but, like cuckoos in a nest, they have grown into monsters that threaten science itself. Already, they have produced an “audit society” in which scientists aim, and indeed are forced, to put meeting the measures above trying to understand nature and disease”.

Clearly, there are legitimate fears about ‘gaming’ (publishing papers likely to have high impact rather than addressing substantive gaps), excessive publication (slicing results up to increase the number of papers), authorship manipulation and other corruptions of the scientific process. It seems likely, however, that the impact agenda is here to stay, and indeed will become more intensive, rather than less.

The logical endpoint of this process is a discipline-specific ‘best practice’ metric of productivity and impact: for instance, one could calculate the value of each paper from each grant in pounds per paper, or make a more impact-oriented assessment of how societal, physical or environmental outcomes have changed. Many academics would regard this as an Orwellian intrusion into their academic freedom and a gross statement of mistrust by an officious state or employer. And indeed, the evaluation of research impact has led to the creation of a new, and probably costly, bureaucracy.

But was this the intention of HEFCE when it made ‘impact’ a criterion in the last round of the Research Excellence Framework (REF)? Aren’t there, in fact, good arguments that publicly funded academics (often trained and supported by public money) should use their training and activities for the good of society? Engaging publics in scientific processes, and promoting better understanding of science, are surely part of any scientist-citizen’s role. The moral and ethical arguments around public engagement and impact, alongside financial accountability, all support the use of an impact metric.

Undoubtedly, both sides have valid points. The question is whether the processes around the REF can be used to support scientists and improve the quality of science and scientific engagement – not just increase the volume of outputs. This will require thought and care, as issues like “scientist as advocate versus honest broker” arise.

Both positions seem to rest on a rather simplistic idea of the evidence-into-policy process, and of what ‘impact’ may be. As many have pointed out, expecting every piece of research (let alone every paper) to yield economic impact is unrealistic – and does not reflect how political or societal change actually happens. Scientific engagement and impact can be about ongoing work and ongoing debates – not only about patents and inventions. I can understand why scientists would reject any such simplistic valuing of their activities. But what if “impact” were agreed to include “contributing to academic debate”, or “developing new research questions”?

At the conference I also asked – playing devil’s advocate – whether engagement of this type (using social media, writing blogs, media engagement) would be likely to help early-career researchers, who are under increasing pressure to publish papers. Sadly, the consensus was that it is the published work that counts. But is this a false dichotomy? Open access publishing, post-publication review and the wider use of blogging and commentary have certainly taken hold of the academic imagination, in ways likely to improve the quality of science and the quality of debate around it.

The fundamental point is that we do not understand well how scientific evidence contributes to societal, political, physical or economic outcomes – and acknowledging that exposes the direct impact metrics so many rail against as a straw man. To me, this argues that we need a better understanding of these processes, and a greater, more nuanced range of methods for recognising what valuable, high-quality science looks like. At STEaPP, several new projects aim to address exactly this gap – watch this space.