
Tables turned on critics

September 25, 1998

The authors of The Times league tables are fed up with insults and want more constructive criticism

Speakers at a Committee of Vice-Chancellors and Principals conference felt The Times league tables were "derived from a particular concept of a university" and "purveyed a traditional, conservative, elitist view of the purposes of higher education".

It is true that the traditional universities dominate the upper reaches of the tables. It could be replied that teaching quality is the most heavily weighted element in the tables, and that these are the same universities that are accused of being too research-orientated to be interested in teaching. Many universities also seem happy to use league tables as part of their marketing effort when it suits them.

It would be possible to address the criticism that universities are too diverse to be compared in a single table by dividing universities into groups of similar ilk and creating a separate table for each.

Such groups could be existing self-selected clusters (such as the Russell Group and the Coalition of Modern Universities) or groups derived statistically from universities with similar characteristics. It would also be possible to use different measures and weights that were arguably more appropriate for each group (perhaps a higher weight for research for the Russell Group?).

The danger would be that this approach might reinforce notions of a first division, second division etc. However, such a discussion of the missions of universities would actually miss the point. The tables (at least The Times tables) are not designed to measure universities, they are designed to answer applicants' questions.

Applicants do not ask questions like: is a university fulfilling its mission to the local region?

They ask questions like: where is a good place to study archaeology or zoology? Do they teach well? Do they have good facilities? Will they help me to get a job?

The columns in the league tables are an attempt to address these types of question. Universities may be very different and claim very different missions, but the questions asked by applicants are rather more similar. This is reflected in the similarity of the information and claims in undergraduate prospectuses. Those who genuinely do ask rather different types of question (such as applicants for part-time courses) tend to be those who are much more restricted in their choice of university and for whom national comparisons have less relevance.

Even if applicants ask basically the same questions, they may differ in which they regard as most important. In other words, different applicants would attach different weights to the columns of a table, making any one chosen set of weights rather arbitrary.

This could be addressed in several ways. An empirical approach would be to ask a large number of applicants for their personal weights and use the average outcome. A better approach (though one hard to include in a daily newspaper) would be to have interactive tables where each applicant fed in their own weights and so created a personal table.
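By way of illustration only, the short Python sketch below shows how such a personal table could work: re-rank a handful of universities by a weighted average of their column scores, with the weights supplied by the applicant. The university names, column scores and weights are entirely hypothetical and are not drawn from the actual Times data.

```python
# A minimal sketch of a "personal" league table: re-rank universities by a
# weighted average of column scores, using weights chosen by the applicant.
# All names and figures below are hypothetical.

SCORES = {  # column scores, each on a 0-100 scale
    "University A": {"teaching": 88, "facilities": 72, "jobs": 81, "research": 90},
    "University B": {"teaching": 79, "facilities": 85, "jobs": 74, "research": 65},
    "University C": {"teaching": 83, "facilities": 68, "jobs": 86, "research": 58},
}

def personal_table(weights):
    """Return universities ranked by the weighted average of their scores."""
    total = sum(weights.values())
    scored = [
        (name, sum(cols[col] * w for col, w in weights.items()) / total)
        for name, cols in SCORES.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# An applicant who cares mostly about teaching and job prospects:
for name, score in personal_table({"teaching": 3, "jobs": 2, "facilities": 1, "research": 0}):
    print(f"{score:5.1f}  {name}")
```

A different applicant, caring more about research or facilities, would simply supply different weights and obtain a differently ordered table from the same underlying data.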

Another criticism of league tables is that the measures used are inappropriate, or that other, more appropriate, measures should be used. For example, the use of A levels as a measure of entry standards is criticised on the grounds that students with A levels form a relatively small proportion of the intake to some universities. However, for anything to be included in the tables, the data must be available. A levels are the only qualification, apart from Scottish Highers, for which average scores are available in national data sets.

In Australia an annual survey of graduates provides information about their satisfaction with their university and their graduate starting salary. Both of these measures could be very useful in a league table, but at present the data is not available here.

Perhaps the most vexed question of measures concerns the notion of value added. The concept is fine: look at a university's input and its output and compare the two as a measure of the value added by the university experience. The problems come with the implementation.

A-level scores provide a reasonable input measure, though their use has been criticised on the grounds of low relevance to some universities. The output measure is also problematic. Degree classifications and graduate destinations could be used, but in both cases there are difficulties with the data. A value-added measure would be artificially inflated if lower academic standards produced more generous degree classifications, or if graduate destination figures were boosted by students taking low-level jobs (neither of which represents genuine added value).

Then there is the relationship between input and output. Earlier attempts to measure added value produced an outcome that was inversely correlated with teaching quality (and, indeed, with all the other measures used in the league table). If this were really true, it would be very surprising. Clearly, these simple attempts to relate A-level scores to degree classifications and graduate destinations are unsatisfactory.
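To make the difficulty concrete, here is a sketch, in Python and with made-up figures, of the kind of simple calculation described above: fit a straight line predicting an output measure (say, the percentage of good degrees) from an input measure (average A-level score) across universities, and treat each university's residual as its "value added". It illustrates the naive approach, not a recommended measure.

```python
# A minimal sketch of a naive value-added calculation: ordinary least-squares
# fit of output on input, with each university's residual taken as its
# "value added". All figures below are hypothetical.

DATA = {  # hypothetical: (average A-level score, % firsts and 2:1s)
    "University A": (26.0, 72.0),
    "University B": (19.5, 58.0),
    "University C": (14.0, 55.0),
    "University D": (22.0, 61.0),
}

def naive_value_added(data):
    xs = [inp for inp, _ in data.values()]
    ys = [out for _, out in data.values()]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Ordinary least-squares slope and intercept.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    # "Value added" = actual output minus the output predicted from intake.
    return {name: out - (intercept + slope * inp)
            for name, (inp, out) in data.items()}

for name, va in sorted(naive_value_added(DATA).items(), key=lambda kv: -kv[1]):
    print(f"{va:+6.1f}  {name}")
```

Because the residual depends entirely on how well the fitted line captures the true relationship between intake and outcome, any distortion in either measure feeds straight into the "value added" figure, which is precisely the weakness described above.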

Generating a reliable measure of value added is genuinely difficult. There is no doubt that The Times would welcome such a measure.

League tables may not be perfect but they are here to stay so it makes more sense to work with them. Rather than hurl insults about elitism, it would be better to offer constructive criticism and seek ways of creating tables that are genuine, if inherently approximate, reflections of reality.

Andrew Hindmarsh, Bernard Kingston and Bob Loynes compile the data for the annual The Times league tables.
