
Where next for world university rankings?

In the second blog from the opening seminar of the Centre for Global Higher Education, Robert Tijssen says league tables are meeting growing demand from institutions as well as students

March 23, 2016

World university rankings are multi-purpose tools. Rankings are primarily intended to promote transparency across higher education institutions and create global comparability. That information may contribute to evidence-informed decision making and facilitate institutional learning processes. However, the boundaries of their usefulness are not always clear. How credible are rankings as a source of valid information and meaningful knowledge? Should they be accompanied by detailed guidelines for proper usage, with disclaimers to make users more aware of pitfalls and caveats?

Rankings are, of course, crude reflections of complex realities. Every university has unique features. And trying to compress a large diversity of information into a single overall rank score, or into a few metrics associated with "key performance indicators", is simply an impossible task. Reducing organisational features to a few numbers, and presenting the outcome as ranking positions, has been criticised by academics from the very start. Nonetheless, for all their obvious shortcomings, rankings have risen in prominence over the past 10 years; they have proved to be valuable information tools for some managerial purposes, especially marketing and promotion. Ranking-based information has started to drive the performance of higher education institutions and to affect institutional missions and functions. Such impacts on organisational behaviour and institutional structures pose managerial risks: rankings tend to misrepresent universities' performance levels and distort their specialisation profiles.

The level of risk depends on the kind of ranking system. Some university ranking systems compute multi-indicator “overall scores” to position universities in top-down “league tables” (such as the Times Higher Education World University Rankings and the QS World University Rankings); others prefer to present their data for each indicator separately in “scoreboards” (for example, U-Multirank and the Leiden Ranking). Some high-profile rankings have become global business enterprises in this competitive arena; others are produced by academics and operate mainly within the public sector.
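To make that contrast concrete, the sketch below (with invented scores and weights, not those of any real ranking) shows how a league-table "overall score" is computed, and how the resulting order can flip under a different but equally plausible weighting scheme:

```python
# Hypothetical per-university indicator scores on a common 0-100 scale.
indicators = {
    "University A": {"teaching": 90, "research": 60, "citations": 70},
    "University B": {"teaching": 65, "research": 85, "citations": 80},
}

def overall(scores, weights):
    """Weighted composite score, as league tables compute it."""
    return sum(scores[k] * w for k, w in weights.items())

# Two defensible but different weighting schemes (both invented).
weights_teaching_heavy = {"teaching": 0.5, "research": 0.3, "citations": 0.2}
weights_research_heavy = {"teaching": 0.2, "research": 0.4, "citations": 0.4}

for weights in (weights_teaching_heavy, weights_research_heavy):
    ranking = sorted(indicators,
                     key=lambda u: overall(indicators[u], weights),
                     reverse=True)
    print(weights, "->", ranking)
# A leads under the first scheme, B under the second: the league-table
# order depends on the weights, not only on the underlying data.
# A scoreboard (as in U-Multirank or the Leiden Ranking) would instead
# publish each indicator separately and leave any weighting to the user.
```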

Rankings, in their capacity as knowledge brokers and information tools, aim to reduce complexity in a user-friendly fashion, but oversimplification may prove to be counterproductive and sometimes even confusing. As a result, many important conceptual and technical issues are not sufficiently addressed: notably the ambiguous relationship between what is measured and the actual characteristics of university performance, and the seemingly arbitrary weighting systems applied to indicators in the league table rankings. Although the scoreboard rankings make for more transparent tools than their league table counterparts, all ranking systems are affected by a scarcity of high-quality information and the inherent limitations of suboptimal indicators. Ideally, high-quality indicators and associated metrics should be:

  • precise and unbiased (the measurement is generally seen as a valid representation of the underlying concept or empirical phenomenon);
  • transparent and verifiable (users can fully understand how the measurement is produced and can use it properly for analysis and decision-making);
  • robust and versatile (the measurement can be aggregated and disaggregated to different levels of analysis and supports fair comparisons across a diversity of institutions); and
  • independent and difficult to manipulate (users and stakeholders cannot influence data selection or processing in undesirable ways).

Today’s rankings are still far removed from achieving these methodological “gold standards”. In fact, one can argue that these standards will not be realised in the near future: metrics will suffer from errors and missing data, and indicators are by definition approximations. Moreover, ranking systems are “social technologies” subject to unpredictable consumer behaviours, erratic market forces and ever more sophisticated digital information infrastructures. Their adaptability to users, in the face of constant change, determines their chances of longer-term sustainable development.

The future of rankings lies in how they are used in practice and in the ranking-related information products they offer. All prominent ranking systems are supported by expanding international data infrastructures. Several systems have created a following of universities that are willing to feed the ranking systems with self-reported information, either from their administrative databases or from internal surveys. Some of those large, high-profile rankings now seem to have entered a phase of increasing returns: through adoption and use in a global community, they have diversified and launched more spin-off rankings (regional, thematic) as well as complementary activities (training) and services (statistics provision). Having enrolled a critical mass of adopters, these systems enjoy competitive advantages that will attract additional users and open up new user communities: not only prospective students seeking the most appropriate higher education institution, but also those institutions themselves, eager to become more visible at the global level. This ongoing trend towards expansion and growth sets the agenda for the next stage of development: not only to capture the dynamics of institutional change currently sweeping through national and global higher education systems, but also to live up to the expectations of a wider range of ranking users and stakeholders.

Current ranking systems, however, appear to be stuck between a limited supply of reliable data and a growing demand for high-quality customised information. To become generally accepted information tools, they will need to upgrade and upscale their operations, keep up with the digital “open data” revolution, access a wider range of information sources, impose even better data quality standards and develop widely applicable metrics. Which of the currently available rankings – or radically new ones yet to emerge – will be able to meet these challenges and survive in the long run is anyone’s guess.

Robert Tijssen is professor of science and innovation studies at Leiden University and an extraordinary professor at Stellenbosch University. He is a contributor to U-Multirank and the Leiden Ranking.
