Nick Hillman, director of the UK’s Higher Education Policy Institute, says: “The longer I work at the borders of higher education and policymaking, the more I think the biggest difference between (a) academic output and (b) policy output is the degree to which the former focuses on the author and people like them and the latter focuses above all on readers.”
Hillman is a champion of higher education, but the criticism is clear. Academics are writing for other like-minded academics, while policy reports are written for readerships that may be far from like-minded. Hillman believes that this explains “the sometimes huge gap between the impact an author would like their research to have and the impact it will actually have”.
My own work as an academic has been policy-related, but I agree with this analysis. The problem, however, arises not from individual failings but from the different environments in which we work as either academics or policy practitioners.
Many years ago, I moved from my lecturing job to head a local government research unit. I wanted to use research to inform decisions by officers and local politicians. What I found was that it was very hard to find usable evidence in the journal articles thrown up by keyword searches. It was often impossible to connect the content to how policy is developed and implemented, and expert insight was sometimes missing because providing it would have required familiarity with practice or policy documents.
Moreover, the all-important journal abstracts, especially in the social sciences, often did not report substantive findings but were instead a short introduction to the article and its structure. Articles frequently concluded with explicit or implicit recommendations for more funding when we were working with fixed or shrinking budgets and needed evidence on how to establish priorities.
Things have improved since then. “Impact” now features in the Research Excellence Framework (REF), which determines UK universities’ block grants for research, and research users are involved in these assessments alongside academics. Grants often require evidence of practical benefit, which is just as relevant for “pure” as for “applied” research because public understanding and engagement should be an expected outcome for all publicly funded research.
The main issue behind Hillman’s comment is a paradox about refereeing. Research commissioned or undertaken by policymaking bodies is not refereed and, partly for that reason, much of it can be criticised for a framing and approach that are biased by political agendas; while it is easy to digest, it rarely has a long shelf life. Academic outputs, importantly, appear in searchable databases, so their value can extend well beyond short-term and immediate audiences, becoming part of a shared body of knowledge that can be cited and built upon. And while academics do indeed focus on themselves when they write, this is largely because they will be the subject of detailed attention by referees (or, more accurately, their work will be – double-blinding is typical in the social sciences).
This can ensure that their work has more rigour but can also make it less useful to policy practitioners. It means that word count is taken up by showing how well the authors know the field’s existing canon. Methodological details are presented at length to head off criticisms about validity or reliability, and findings are caveated, often with the conclusion that further research is needed. For the policy practitioner, most of this needs to be in appendices. What they want explained in the article are the findings, stated with a level of confidence that is “good enough” to act on, and related to the type of tools they have for implementation.
For example, quantitative research may present statistical correlations that are impossible to translate into decisions to do x rather than y. A statistically significant correlation can in reality mean a very small effect. Qualitative research, meanwhile, is often enlightening and its “real world” accounts are valued by politicians, but it may say little about the causes, extent and variability of an issue.
Although these drawbacks can be addressed by mixed-method research designs, they are best addressed by bringing academics and policy practitioners into dialogue with each other about common research problems.
At the Open University, we have recently brokered a new collaboration between academics and policy professionals in the four nations of the UK, plus the Republic of Ireland. We hope that our project can leverage the comparative policy insights that are made possible by these nations’ economic and cultural ties. An important impetus was the extraordinary lack of academic work that uses policy comparisons across two or more of these countries.
Such initiatives can also widen the pools of policy practitioners familiar with the academic world and academics familiar with the policy world. I hope this will enable and prompt academic journals to follow the example of the REF and include research users as referees. In that way, academic papers in the social sciences can achieve the impact of policy papers, full of takeaways for practitioners but, crucially, takeaways that stand up to rigorous scrutiny and add to the world’s knowledge.
Tim Blackman is vice-chancellor of the Open University.
Postscript
Print headline: Research users should be referees in order to boost academic papers’ impact