The phrase ‘further research is needed’ appears frequently in both research articles and policymaking, where the quest for more ‘evidence’ has become a mantra. But is research really lacking, or are other forces behind policymakers’ requests for more research?
In 2018, the OSIRIS project tested a new method for mapping how practitioners and policymakers in the public sector use research. The results show great diversity in how research is accessed and used. In general, informal practices such as “asking a colleague” and “googling” are more common than formal ways of searching for research-based knowledge.
Public research and development (R&D) subsidies are costly. In our article “Public R&D support and firms’ performance” we show that such subsidies do have a positive effect on Norwegian firms. However, the effect varies across subsidy programmes and differs between start-ups and incumbents.
Altmetrics track and count mentions of scholarly outputs in social media, news sites, policy papers, and social bookmarking sites. To what extent are they used and valued as measures of impact in research funding? This post was originally published by the Europe of Knowledge blog.
Environmentally friendly technologies are an important example of an area where innovations have high social value, but where markets would be scarce – or even absent – without public intervention. In our article “Can direct regulations spur innovations in environmental technologies? A study on firm-level patenting” we address this timely question and find that such public policies indeed encourage innovation in environmentally friendly technologies.
In 1986 the United Kingdom pioneered performance-based research funding systems (PRFS) for universities with the introduction of the Research Assessment Exercise, now called the Research Excellence Framework (REF). Most European countries have since introduced PRFS for their universities, but not by adopting the REF. A large group of countries base funding decisions on indicators of institutional performance (“metrics” in UK terminology) rather than on panel evaluation and peer review. The few countries that have chosen panel-based assessment either do not use the evaluation results for funding allocation or have at least partly replaced the assessment procedures with metrics.
What are the main approaches to measuring the impact of research, and what are the most important methodological challenges in such measurements? In this blog post, Magnus Gulbrandsen and Richard Woolley discuss the main methods, historical examples and interesting recent developments.
The effects of research are uncertain and disputed — and efforts to evaluate them must take this into account.
On the OSIRIS blog, the members of the project team write about the impact of research as our work on this topic progresses.
We aim for a collection of posts that represent preliminary and conceptual findings and ideas, discussions from meetings and seminars, shorter analyses of empirical data and brief summaries of the vast literature on impact. Some of the posts will be shared with the Impact Blog at the London School of Economics, the most comprehensive web page devoted to this topic and a great source of interesting ideas about many topics within science policy and science in practice.
The blog is also open to contributions from people outside the OSIRIS team. Send us an email if you have a text that would fit the blog.