
If teaching needs fixing, is TEF the right tool for the job?

Giving evidence to a BIS committee inquiry on assessing the quality of higher education provoked questions about how teaching might be measured, writes John Gill

December 1, 2015

How do we solve a problem like suspect teaching quality?

It’s a question that was posed to a series of vice-chancellors, teaching and learning experts, union bosses and student leaders who trooped into Committee Room 6 in Parliament today.

The MPs on the Business, Innovation and Skills Committee are digging into the government’s plans for a teaching excellence framework – the details of which were sketched out in the recent higher education Green Paper.

For once, I was on the other side of the fence, giving evidence to the committee rather than reporting on it, because the committee wanted to understand how established ranking systems deal with teaching (and, if you’ll permit a little immodesty, the MPs noted that the Times Higher Education World University Rankings are now the “gold standard”).

For us at THE, it’s vital that our ranking looks at teaching and learning alongside the mainstay of research performance (we also look at universities’ international outlook and links with industry), because we’re ranking universities as rounded institutions, not simply as research institutes.

But there’s no doubt that teaching excellence is a much harder area of activity to measure – or, indeed, define – than research excellence. A comparison I’ve used before is with the US Supreme Court’s struggle in the 1960s to define what constituted obscene material. The justice in the case concluded that he might not be able to come up with a definition, but “I know it when I see it”. We probably all feel the same about great teaching: we know it when we see it, but how do we define and measure it?

Read more: Drop link between TEF and fees, vice-chancellors urge MPs

In the case of the THE World University Rankings, our focus is on the learning environment, which measures a basket of metrics related to the Humboldtian idea that research-led universities (which constitute the world’s top 800 institutions) should be providing a learning environment that is intimately linked to the research effort. These include the doctorate-bachelor’s ratio and the ratio of doctorates awarded to academic staff.

We also use measures such as institutional income, staff-to-student ratio and reputation (which, despite being qualitative, is an important part of the mix). These all contribute to a fertile learning environment, but they do not measure what the teaching excellence framework has set out to measure: the actual quality of teaching at a granular level.
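To make the idea of a “basket of metrics” concrete, here is a minimal illustrative sketch in Python. The metric names, benchmark ranges and weights are my own assumptions for the purpose of the example, not THE’s published methodology: each indicator is normalised onto a common scale and then combined into a single weighted score.

# Illustrative sketch only: hypothetical metrics, ranges and weights,
# not THE's actual rankings methodology.

def normalise(value, lo, hi):
    """Scale a raw metric onto 0-1 within an assumed benchmark range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

# Hypothetical raw metrics for one institution
raw = {
    "doctorate_bachelor_ratio": 0.35,       # doctorates awarded / bachelor's awarded
    "doctorates_per_staff": 0.60,           # doctorates awarded / academic staff
    "staff_student_ratio": 0.08,            # academic staff / students
    "income_per_staff": 0.50,               # already scaled 0-1 for this sketch
}

# Assumed benchmark ranges and weights (illustrative only)
ranges = {
    "doctorate_bachelor_ratio": (0.0, 1.0),
    "doctorates_per_staff": (0.0, 1.0),
    "staff_student_ratio": (0.0, 0.2),
    "income_per_staff": (0.0, 1.0),
}
weights = {
    "doctorate_bachelor_ratio": 0.3,
    "doctorates_per_staff": 0.3,
    "staff_student_ratio": 0.2,
    "income_per_staff": 0.2,
}

score = 100 * sum(weights[k] * normalise(raw[k], *ranges[k]) for k in raw)
print(f"Learning environment score: {score:.1f} / 100")

The point of the sketch is simply that any composite indicator of this kind inherits the benchmark ranges and weightings baked into it, which is exactly why the choice of metrics matters so much in what follows.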

Another difference is that the THE World University Rankings focus on the world’s top 4 per cent of institutions – 800 out of about 20,000 globally – and as such they tend to compare universities with similar, research-led missions.

The danger of applying a “one-size-fits-all” basket of metrics to a national system is that there is great diversity within that system, diversity that is of huge value and needs to be preserved. There must be a risk of universities being led down a path to homogeneity if a small set of (flawed) metrics is employed and directly linked to the financial reward of a higher tuition-fee cap. It’s interesting that a united front appeared to be forming among others giving evidence to the committee today: that the TEF should be decoupled from permission to raise fees if the worst of the unintended consequences are to be avoided.

These are, of course, very complex issues and this barely touches on the questions of those perverse incentives, other metrics that might be developed (such as student engagement and learning gain), and whether or not it’s right that a greater focus is brought to bear on university teaching quality (although the consensus is that done the right way, this would indeed be beneficial).

Some of these topics were discussed in greater detail in the BIS evidence session (the relevant session starts at 10.08am).

For our part, we’re very proud that ours is the only world ranking to measure the learning environment, helping to guide university strategy and performance among the world’s top institutions not only in research but across all their key missions – teaching and learning included.


Reader's comments (2)

Isn't there a problem that in purporting to measure the 'learning environment' the Times Higher has just joined in with those who've encouraged the metric view of measuring teaching excellence? We've had 20 years of league tables and have grown accustomed to them. They are actually a reason to dispute the Minister's overall thesis that because there isn't a REF for teaching, universities don't value it. Universities value it a lot, inherently. But also because manifestations of metrics about teaching appear in league tables, this is a concern to universities. Find a university governing body that doesn't have a league table report – and particularly one that doesn't get a report on the metrics that appear in them – think NSS or employability. But that doesn't mean the THE rankings measure the quality of teaching – your presumption that measuring aspects of the 'Humboldtian' ideal actually measures teaching is entirely wobbly – and so (as you note) inappropriate for other kinds of university. It's not actually doing the job for research universities anyway. The TEF, even in its mature form, is simply going to be unable to make valid judgements on 'teaching excellence' between universities – and so obviously it should be uncoupled from money. But any graded judgement coming out of the TEF, whether Pass/Fail, Bronze/Silver/Gold or Unsatisfactory/Satisfactory/Excellent, is going to be invalid as well.
I watched the meeting, and wrote this partly in response, on the myth of the level playing field. https://stumblingwithconfidence.wordpress.com/2015/12/01/higher-education-and-the-myth-of-the-level-playing-field/ Together, I thought the two panels presented a reasonably united front, and put forward some strong arguments against a simple, one-size-fits-all TEF. But as we are in the realm of policy-based evidence, I have little faith in any meaningful or useful outcome.