
Mix-and-match approach muddles the results of the REF

Richard Black on data manipulation and the problems with multidisciplinary subpanels

December 4, 2014


The wait is nearly over. On 18 December we will all know the results of the research excellence framework. League tables will be produced – including by Times Higher Education. Departments will celebrate, start an inquest, or possibly put their hands over their ears and chant “la la la” in the hope that the result will go away. University finance directors will start to calculate the costs in pounds and pence. But what will the results really tell us about our collective and individual research reputations?

Don’t get me wrong – I am a fan of the REF. I had the privilege to serve on one of its subpanels and to read some of the truly superb research conducted in my discipline over the past six years, much of which I had not seen before. Some academics will already have used the perceived strength of their submissions to negotiate pay rises or better conditions for research, or to secure advantageous moves to stronger or better-resourced universities. Overall, the pressure on academic salaries, and probably also on research quality, will have been upward and substantial. Yet there are many reasons for caution on results day. Not everything will be quite what it seems.

A first point is that, in most subjects, there will be a quick dash to find the most advantageous league table for each institution. The Higher Education Funding Council for England (Hefce) plans to publish “quality profiles” for each unit of assessment – the proportion of work that is 4*, 3*, 2*, 1* and unclassified. But, for understandable reasons, it will not turn these into any sort of ranking.

The THE plans, as usual, to turn the quality profiles into grade point averages. Yet, for some institutions, focusing on their proportion of 4*, or of 4* plus 3*, research will tell a different and more positive story, which they will surely tell if they can. Furthermore, the fact that grade profiles will also be published in disaggregated form – for outputs, environment and impact – multiplies the possible league table permutations by a factor of four. It is entirely possible that, within a discipline, 15, 20 or even 25 institutions might be able to claim to be in the “top 10”. And who is to say that a top 10 for “outputs” or “impact” is any less powerful than one based on overall GPA – although one might question any university that presents data on a different basis across different subjects just to show each in the best light.
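To see how easily the orderings can diverge, consider a hypothetical pair of profiles (the GPA is simply the weighted average of the star ratings). A department rated 20 per cent 4*, 60 per cent 3* and 20 per cent 2* has a GPA of (4 × 20 + 3 × 60 + 2 × 20)/100 = 3.00, while one rated 35 per cent 4*, 30 per cent 3*, 30 per cent 2* and 5 per cent 1* scores (4 × 35 + 3 × 30 + 2 × 30 + 1 × 5)/100 = 2.95. The first wins on GPA; the second, with a far larger share of world-leading work, tops any table sorted by 4* alone. Both can plausibly claim the better result.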

It is even less clear how we ought to interpret Hefce’s published results in the case of subpanels that cover two or more distinct disciplines, such as anthropology and development studies (subpanel 24). These are particularly common within the humanities and social sciences: five of the 10 humanities subpanels and three of the 11 social science subpanels cover more than one discipline. And while most of the science subpanels involve a single discipline, a single set of results will nonetheless be published for dentistry, nursing and pharmacy, even though these are quite different fields, with different institutions submitting to them.

These multidisciplinary subpanels were created partly to minimise the cost of the REF, which has 36 subpanels in total, compared with 67 for the 2008 research assessment exercise. But they are highly problematic. To take anthropology and development studies as an example: on the 2008 results, the top development studies department would have come joint second by GPA in a combined table, the second-ranked development studies department would have come seventh, the third-ranked joint ninth, and the fourth and fifth 21st and 22nd respectively.

Although what appears to be a quality difference between anthropology and development studies may have been ironed out by academics from these two disciplines working together on the 2014 subpanel, it is still not of much interest to development studies researchers to read that their department has been ranked ninth nationally if six of the eight departments that did better were not part of their discipline.

There is no sign that this problem will be resolved. Not only has Hefce said that it will not publish subject-specific tables for such important disciplines as music, history of art or geography, but it has not even revealed which institutions have submitted to each subpanel – information that would allow an intelligent observer to work out which institutions would need to be included in a ranking for an individual discipline. Nor do learned societies appear to be coordinating such an effort for their disciplines. When I asked one, I was told it would be left to journalists to “league table the landscape and flush out the game players”. Yet nobody seems to want to say how.

It seems pretty certain that someone – whether individuals, institutions or commercial organisations – will construct subject-specific rankings in due course. But what a shame that these are unlikely to appear on results day itself, emerging instead piecemeal, contested and at a point when the attention of the wider world has moved on.
