Why has no other European country adopted the Research Excellence Framework?

In 1986 the United Kingdom pioneered the development of performance-based research funding systems (PRFS) for universities with the introduction of the Research Assessment Exercise, now called the Research Excellence Framework (REF). Most European countries have since introduced PRFS for their universities, but not by adopting the REF. A large group of countries use indicators of institutional performance (“metrics” in UK terminology) for funding decisions rather than panel evaluation and peer review. The few countries to have chosen the latter approach either do not use evaluation results for funding allocation or have at least partly replaced the assessment procedures with metrics.

This blog post was first published on the LSE Impact Blog.

This situation should probably be understood from at least two sides. Seen from the UK perspective, the rest of Europe seems to disregard what is probably the most developed model of best practice when it comes to national research assessment exercises. The two major alternatives used for PRFS in Europe, indicators of institutional performance versus panel evaluation and peer review of individual performances, were discussed in The Metric Tide report (Wilsdon et al. 2015), an independent review of the use of metrics in research evaluation. The review convincingly concludes that, within the REF, it is not currently feasible to assess research quality using quantitative indicators alone; peer review is needed. The review also warns that the use of indicators may lead to strategic behaviour and gaming. One of its main recommendations is that:

“Metrics should support, not supplant, expert judgement. Peer review is not perfect, but it is the least worst form of academic governance we have, and should remain the primary basis for assessing research papers, proposals and individuals, and for national assessment exercises like the REF.”

This recommendation could be interpreted as a formulation of best practice for other countries as well, particularly since it is aligned with the first of the ten principles of the Leiden Manifesto for Research Metrics (Hicks et al. 2015): “Quantitative evaluation should support qualitative, expert assessment.” The implication would then be that most other countries ought to change their PRFS. But before reaching this conclusion, let us first consider the challenges from the other side.

Why not the REF?

Denmark, Finland, Norway, and Sweden belong to the majority of countries with indicator-based PRFS for their universities. The tradition in Scandinavia, however, is to look to the UK for inspiration. Sweden did so three years ago. A new model for research assessment and institutional funding, called FOKUS, was designed as an adaptation of the REF. The government decided not to implement it, mostly for reasons of cost, but also because the universities were concerned about their institutional autonomy and preferred to organise research evaluations themselves. Sweden decided to continue with the approach it has used since 2009: a small part of the resource allocation for research is based on indicators of external funding and of productivity and citation impact within Web of Science.

Sweden’s choice can only be understood if we separate the two main purposes of a PRFS: research evaluation and funding allocation. The two can be difficult to distinguish. Hicks (2012) defines PRFS as related to both purposes; they are “national systems of research output evaluation used to distribute research funding to universities”. The understanding in Sweden is now that the purpose of research evaluation must be achieved by means other than the indicator-based PRFS. The emerging alternative is that each university runs a research assessment exercise by itself, with the help of international panels of experts. As an example, Uppsala University is currently running a research evaluation named “Quality and Renewal”, whose overall purpose is to “analyze preconditions and processes for high-quality research and its strategic renewal”.

Sweden is thereby following the model of the Netherlands with regard to research evaluation. The national research assessment exercise in the Netherlands has no funding implications; it is self-organised at regular intervals by each of the universities and coordinated at the national level by a Standard Evaluation Protocol (SEP). With this autonomous self-evaluation system in place, there is an agreement with the government that performance indicators representing research should not be part of the PRFS. Norway and Portugal also have national research assessment exercises that may look like the REF but in fact mainly have a formative and advisory function. Without the link to funding, there is flexibility for evaluations to have a thematic rather than an institutional focus (e.g. climate research in Norway), and for the units of assessment to be self-organised units representing collaboration across several universities in a given field (as in Portugal).

Indicators rather than evaluation

Italy has so far come closest to adopting the REF as a model for its PRFS, but since its first version in 2003, a semi-metric solution has been developed that differs considerably from the REF. Most other countries have chosen indicator-based models directly, not because they do not observe the scholarly standards and fundamental principles of research evaluation, but because they do not see direct institutional funding as the appropriate place for conducting research evaluation. The indicators are not replacing peer review; they are used for purposes other than peer review. While direct institutional funding may be modified by performance indicators, proper use of peer review is instead installed in procedures for competitive third-stream funding or in assessment exercises whose main purpose is to support strategic development.

The method that became the purpose

In 1986, research assessment based on peer review was the chosen method for institutional funding allocation in the UK. Funding allocation was the main purpose. Growing constraints on public funding and the prevailing political ideology resulted in policies aimed at greater accountability and selectivity. Gradually, the method became the more important purpose. The REF is now officially “the UK’s system for assessing the quality of research in UK higher education institutions”. Seen from the inside, there seems to be no better solution. Seen from the outside, the REF is unique as a combination of performance-based institutional funding and research evaluation. Most countries do both, but in independent setups and with different, less expensive methodologies.

This blog post is based on the author’s article, “Unique, but still best practice? The Research Excellence Framework (REF) from an international perspective”, published in Palgrave Communications (DOI: 10.1057/palcomms.2017.78).

About the author

Gunnar Sivertsen is Research Professor at the Nordic Institute for Studies in Innovation, Research and Education (NIFU) in Oslo. His expertise is in policy-oriented studies of research related to statistics, performance indicators, evaluation, funding, and science policy. He has advised on the development of new systems for research assessment and monitoring in the Czech Republic, Denmark, Finland, Flanders (Belgium), Norway and Sweden.

Tags: Research Excellence Framework, Methods, Evaluation
By Gunnar Sivertsen
Published Apr. 27, 2018 3:46 PM - Last modified Mar. 31, 2023 11:34 AM

