
Don’t throw out the baby with the metrics bathwater

Peer review is the least bad system we have for assessing quality, but metrics can help to determine attention and impact, says Euan Adie

November 27, 2015

The government’s Green Paper has revived the idea – thought to have been killed off by the recent Wilsdon review – that metrics could play a bigger role in the research excellence framework. There is a suggestion that the full exercise might take place less frequently than it currently does, with a metrics-based version being used to periodically update results.

Presumably that interim exercise would also be required to address impact. When I was a researcher, not so long ago, if you had asked me what impact was, I would have pointed to a high-impact journal. But impact factors look at something very specific. A high-impact journal may promise scholarly impact, but it has little directly to do with any wider, real-world outcomes.

Equally, something like the “t factor” – proposed as one possible measure of Twitter activity around a publication – reflects, arguably, attention. It’s clear that it has nothing to do with quality.

These examples show how easy it is to conflate different aspects of research assessment at the level of a single work. Perhaps at this point it is worth us, as the research community, addressing that and pushing a model in which we’re clear about how quality, attention and impact are different – if interconnected – things.

Quality seems fairly straightforward: does the work stand up to scrutiny? Is it replicable? Does it contribute to a body of knowledge and to what degree?

Attention, equally, is fairly simple: was the work disseminated? Did other researchers find, read and cite it? Did it reach other specific audiences of interest?

Impact is the real-world outcome of the work. Did it influence policy? Did it change the way people thought about something? Did it bring some small benefit to society?

A piece of research will have these characteristics to different degrees. Some combinations are obviously more desirable than others: in particular you would hope that, at a minimum, all the research from an institution was of good quality.

When we talk about a “high-impact” journal, we blur the distinctions between quality, attention and impact. You want your work to get published in Nature both because it implies a high level of quality and because many people read Nature, so it is going to garner scholarly attention. Impact case studies for the REF, in theory, reflect both quality and impact because they have to meet a minimum quality threshold.

Citation-based metrics are a measure of attention. They don't measure quality or impact, although they may reflect them to varying degrees. Peer review remains the least bad system we have for assessing quality. Where metrics and indicators, including altmetrics data, can help is in the areas of attention and impact.

Research managers should care about the attention paid to their research: it’s an important step between publication and impact, as well as something critical to an institution’s reputation. If no one hears or reads about your research, then no one will use it. Online attention is something that can be quantified, even if understanding what it means requires human interpretation. We can see how many people are downloading, sharing and talking about work, where they are doing it from and what they are saying. This is what authors are interested in when they look at altmetrics data on publishers’ websites. It is also what communications offices can benchmark and monitor.

Equally, there is a role for indicators in finding and elucidating impact, highlighting potential pathways to impact that authors and institutions may not even have been aware of.

It is vital that we keep a human element involved in all aspects of research assessment, and when it comes to metrics and altmetrics, good practice is at least as important as good data. The Leiden Manifesto is a great basis for this. However, we can and should make it easier for people to do a thorough and accurate job of assessing different aspects of research with the metrics tools we have at our disposal.

Euan Adie is founder of Altmetric.
