Teaching intensity is the new frontier in the quest to demonstrate teaching excellence in the UK. By midwinter, senior managers will be descending upon departments across the country bearing spreadsheets and bad news: “We need to talk about your gross teaching quotient”.
The quotient – and let’s call it GTQ, because that’s what it will become – is the most eye-catching aspect of the specification published last month for the subject-level teaching excellence framework. It is just a pilot at this stage, but the full version is coming at us fast, scheduled to be in place in 2019-20.
I’m broadly supportive of the institution-level TEF. Some aspects of it, such as the Olympic-style medals, are manifestly crazy, but there’s no avoiding the push to demonstrate teaching excellence. The TEF uses reasonable proxies for quality – and I haven’t heard any better suggestions, despite all the noise – while the written submission encourages innovative actions and reflective thinking.
But I’ve had my head firmly in the sand on the subject-level TEF. On practical grounds, the concept seemed too cumbersome, while I’ve also wondered about the relation between cost and value. If the institution-level TEF is already focusing managerial minds and driving reform – as I think it is, although nobody has bothered to wait long enough to assess this – where is the added value of a subject-level TEF?
Jo Johnson, the minister for universities and science, would respond that it’s all about the consumer. Potential students generally choose course first and university second, so they will want evidence of quality at that more granular level. In my experience, applicants are already overwhelmed with evidence – from league tables and the government’s website, for instance – but perhaps that might equally support Johnson’s argument. Some people, he might argue, just want to see a gold medal.
The subject-level pilot, which will involve only a handful of universities, will mostly use the same metrics as the institution-level TEF and also require a written submission, stripped back to five pages. And there will be 35 subjects or subject groupings, to avoid the metrical muddle that can be caused by small disciplines.
There are two pilot models. The “by exception” model will simply give subjects the same rating as their institutions unless the metrics indicate a need for closer investigation. By contrast, the “bottom-up” model will assess each subject fully, and build towards an institution-level award from this basis. I propose to label these, respectively, “the sane model” and “brace yourself, it’s coming”.
Then it starts to get really interesting. All the well-meaning complaints from across the sector that the TEF merely measures proxies have got the TEF team thinking. But they’re not for turning: they’re marching ever onward towards the holy grail of quantifiable teaching excellence. And this brings them inevitably to the GTQ.
The GTQ is a measure of “teaching intensity”. But the specification document insists about 15 times that teaching intensity is not all about contact hours – so they must mean it. Kicked about in last year’s White Paper – which drew heavily on Graham Gibbs’s 2010 report – the concept concerns the relationship between the quantity and quality of teaching.
Teaching intensity will be measured in part by a student survey. Think about that for a minute: students will be asked questions about whether they’re getting enough teaching. And then there will be the calculations, weighting “the number of hours taught by the staff-student ratio of each taught hour”. Got it? The GTQ is then “calculated by multiplying the taught hours by the appropriate weighting and summing the total across all groups, followed by multiplying by 10 to arrive at an easily interpretable number”. And then it’s divided by the square root of staff days lost due to stress. Or something like that.
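To see how that might work in practice – with weightings I have invented purely for illustration, since the pilot will set its own – imagine a subject delivering eight taught hours a week in groups of 20 and two hours in seminars of ten. If the larger-group hours carry a weighting of 0.25 and the seminar hours 0.5, the sum is 8 × 0.25 + 2 × 0.5 = 3, and multiplying by 10 gives a GTQ of 30. The precise weightings matter less than the principle: bigger groups and fewer hours pull the number down.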
It’s worth noting just what a restricted reading of Gibbs this actually is. In the desperation to create a new metric, many of Gibbs’s valuable “dimensions” have been set aside: from the critical questions of who does the teaching and how well they have been trained (puzzlingly ignored by the TEF so far), through assessment and feedback, and beyond. I guess we get to brush up on this stuff when we’re preparing our written submissions, but, for all the rhetoric to the contrary, there’s a curious narrowing of vision.
My greatest concern about all of this is that GTQs will evidently be produced as comparative measures, driven by an underlying assumption that more intensity is always going to be better. Practice in my department may be perfectly sound from all sorts of perspectives; however, as I understand the proposals, if our GTQ is weaker than a competitor’s, we may be heading for a silver rather than a gold award. Admittedly, it’s only one metric, but from a management perspective it will attract attention as the newest and perhaps the easiest to manipulate. The subject-level TEF will understandably instil anxiety in all sorts of people in management positions; not all of them can be relied upon to respond reasonably.
Much may change between now and full implementation. Crucially, the review of the first round of the institution-level TEF will affect anything that happens thereafter at subject level. But the subject-level TEF may affect departments pretty much immediately. Much of that change will be for the good. But some will be more questionable – and an awful lot could increase workloads and stress levels for students and staff alike.
Andrew McRae is dean of postgraduate research and of the doctoral college at the University of Exeter.
Print headline: Madness in the method