When the data behind the UK’s teaching excellence framework (TEF) were released alongside the announcement of who had got gold, silver or bronze, it quickly became clear which universities had outshone others on the metrics underpinning the exercise.
Times Higher Education tried to aid this process by publishing its own ranking, further differentiating institutions within the three main award bands according to their metrics scores, based on aspects of student satisfaction, retention and graduate employment.
However, universities’ performance on the six “core” metrics – which reflect their scores in each measure for all their students against a “benchmark” figure – tells only part of the story. The data also reveal how well universities did for different types of students, whether that be male and female, black and white or young and old: the so-called “split” metrics.
Many of the characteristics of these split groups – and their potential influence on a university’s core results – are accounted for in the process of benchmarking: for instance, the mix of young and mature students at an institution is reflected in its benchmark on each metric, while gender and ethnicity are reflected in five of the six metrics.
However, one group is notably absent from much of the benchmarking – students from less advantaged backgrounds, as defined by higher education participation data. Only one of the six metrics includes this group.
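As a rough illustration of the principle (the figures below are invented, and the official TEF/HESA methodology is more elaborate), a benchmark is essentially the rate an institution would be expected to achieve if each group of its students performed at the sector-wide rate for that group, weighted by the institution’s own intake mix. If disadvantage is not among the characteristics used, an institution’s share of disadvantaged students does not alter its expected rate at all.

```python
# Illustrative sketch only: invented figures, not the official TEF/HESA method.
# A benchmark here is a weighted average of sector-wide rates for each student
# group, weighted by that group's share of the institution's own intake.

# Sector-wide continuation rates for two student groups (hypothetical)
sector_rate = {"young": 0.93, "mature": 0.85}

# One institution's intake mix (hypothetical)
intake_share = {"young": 0.60, "mature": 0.40}

# Expected (benchmark) continuation rate given this intake mix
benchmark = sum(sector_rate[g] * intake_share[g] for g in sector_rate)

# The institution's observed continuation rate (hypothetical)
actual = 0.90

print(f"benchmark  = {benchmark:.3f}")            # 0.898
print(f"difference = {actual - benchmark:+.3f}")  # +0.002, just above benchmark
```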
According to Paul Youngson, head of planning and timetabling at the University of Huddersfield and an expert on the TEF data, this absence from the benchmarking could have been significant for an institution’s core score, especially on measures where there is evidence that social background can affect outcomes, such as dropout rates.
“There are clear correlations between non-continuation and the measures of social disadvantage: our analysis of the national benchmarking data for non-continuation shows that students from disadvantaged backgrounds are about 25 per cent more likely to drop out, and yet this difference is not reflected in the benchmarking used in the TEF,” he said.
He added that in his view there was “no barrier” to adding benchmarking for social background “from a statistical point of view” to the non-continuation data in future.
Given this absence of social disadvantage from most benchmarks, what do the split metrics say about how universities have performed for these students?
A ranking of English universities by their average Z-scores – which show how far an institution sits above or below its benchmark on each metric – for full-time students from disadvantaged backgrounds in England is illuminating, especially when compared with their Z-score ranking on the core metrics.
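For readers unfamiliar with the measure: a Z-score expresses the gap between an institution’s indicator and its benchmark in units of the estimated standard error of that gap, so larger positive values mean performance further above benchmark, and the ranking averages those scores across the metrics. A minimal sketch with invented figures (the published TEF workings use their own standard errors) might look like this:

```python
# Minimal sketch with hypothetical figures, not the published TEF workings.
# z = (indicator - benchmark) / standard error of the difference;
# the ranking then averages an institution's z-scores across its metrics.

metrics = [
    # (indicator, benchmark, standard_error), all hypothetical
    (0.92, 0.89, 0.010),  # e.g. continuation
    (0.88, 0.85, 0.012),  # e.g. satisfaction with teaching
    (0.75, 0.73, 0.015),  # e.g. graduate employment
]

z_scores = [(ind - bench) / se for ind, bench, se in metrics]
average_z = sum(z_scores) / len(z_scores)

print([round(z, 2) for z in z_scores])  # [3.0, 2.5, 1.33]
print(round(average_z, 2))              # 2.28
```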
Rank by core metrics | Institution | Rank by split metric for disadvantaged students | Difference in ranking | Average Z-score: England disadvantaged | Average Z-score: core |
--- | --- | --- | --- | --- | --- |
1 | Coventry University | 1 | 0 | 6.95 | 16.52 |
2 | University of Surrey | 4 | -2 | 3.49 | 8.66 |
3 | Nottingham Trent University | 2 | 1 | 3.89 | 8.40 |
4 | Loughborough University | 3 | 1 | 3.85 | 8.36 |
5 | De Montfort University | 7 | -2 | 3.23 | 8.13 |
6 | University of Kent | 8 | -2 | 2.91 | 7.87 |
7 | University of Cambridge | 18 | -11 | 1.73 | 7.25 |
8 | Keele University | 5 | 3 | 3.45 | 7.21 |
9 | Conservatoire for Dance and Drama | 10 | -1 | 2.75 | 6.58 |
10 | University of Lincoln | 9 | 1 | 2.88 | 6.32 |
11 | University of Bath | 15 | -4 | 2.21 | 6.16 |
12 | University of Portsmouth | 11 | 1 | 2.60 | 5.93 |
13 | University of Exeter | 29 | -16 | 1.15 | 5.91 |
14 | Aston University | 6 | 8 | 3.33 | 5.87 |
15 | University of Buckingham | 32 | -17 | 1.04 | 5.67 |
16 | Falmouth University | 17 | -1 | 2.12 | 5.47 |
17 | University of Derby | 21 | -4 | 1.57 | 5.20 |
18 | University of Huddersfield | 12 | 6 | 2.57 | 5.10 |
19 | Arts University Bournemouth | 14 | 5 | 2.34 | 5.02 |
20 | University of Birmingham | 22 | -2 | 1.48 | 4.58 |
Note: only institutions in England with majority full-time provision are included
Many of the top 10 performers on the split metric also have among the highest average Z-scores on the core metrics: institutions such as Coventry, Nottingham Trent and Loughborough universities.
But elsewhere there is evidence of some universities doing better for disadvantaged students or other cohorts from widening participation backgrounds, and some doing worse.
Newcastle University is one example of an institution performing better, ranking 28th for its core average Z-score but 19th for disadvantaged students. The TEF panel’s outcome statement for the university draws attention to its “outstanding outcomes, especially with regard to mature, disadvantaged and disabled students”, which may have been one reason why the institution moved from an initial silver award, based on core metrics, to gold overall.
Another example is Brunel University London, 56th for its core average Z-score but 24th for the disadvantaged split.
Mariann Rand-Weaver, pro vice-chancellor (quality assurance and enhancement) at Brunel, said that although the university’s extra support for students was aimed at helping everyone, “using data we will identify whether specific groups would benefit from additional initiatives”.
This includes professional mentoring for black and ethnic minority students, who make up about two-thirds of those studying at Brunel, and for students from other widening participation backgrounds, such as economically deprived areas.
But despite Brunel’s apparently strong performance for groups under-represented in higher education, Professor Rand-Weaver said that there was not necessarily a need to change the way the TEF metrics worked.
“The metrics are already sufficient as they stand, in my opinion, and the narrative allows institutions to expand on what they do. Our focus is on supporting all students to achieve,” she said.
Among the institutions that tumble down the Z-score ranking when looking at the split for disadvantaged students is the University of Buckingham: 15th on overall average Z-score but 32nd for disadvantaged students.
Buckingham is also flagged as significantly above benchmark far less often for disadvantaged students: just once, compared with the six “double” flags it scores on the core metrics, which place it at the top of THE’s TEF metrics ranking.
However, delving into the numbers reveals the danger of relying too heavily on the split data: in Buckingham’s case, just 60 students counted towards its score for the split metrics informed by the National Student Survey. In other words, a handful of students answering differently in the NSS could have a huge influence on the results.
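To see why such a small cohort matters, consider a hypothetical NSS-based indicator built on 60 responses: each response carries roughly 1.7 percentage points of weight, so a handful of changed answers can move the indicator by several points. The figures in the sketch below are invented purely to show the arithmetic.

```python
# Rough illustration with hypothetical figures: how a few changed NSS
# responses move a satisfaction indicator when only 60 students count.

n_students = 60
satisfied = 51                       # hypothetical number agreeing
indicator = satisfied / n_students   # 85.0%

# If three of those students had answered negatively instead:
indicator_alt = (satisfied - 3) / n_students  # 80.0%

shift = indicator - indicator_alt
print(f"{indicator:.1%} -> {indicator_alt:.1%} (a {shift:.1%} swing from 3 responses)")
```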
For Alan Palmer, head of policy and research for the MillionPlus group of modern universities, this raises questions about the reliability of such small sample sizes, questions that also bear on plans to extend the TEF to subject level.
“As we unpeel the layers of the TEF results and understand the detail, it highlights issues that arise when we look at the data at a granular level and should give us pause for thought about how a subject-level TEF is implemented,” he said.
Nevertheless, he also pointed out that if the split metrics do provide robust evidence that some universities perform better when their intake is skewed towards certain groups of students, questions will be raised about the incentives this creates.
“The fallout since the results were published shows that it is clearly a high-stakes game for institutions, so we need to be very careful that the focus on ratings does not lead to inadvertent recruitment decisions being taken by universities to ‘de-risk’ their student populations,” he said.
“We also need to be sure that potential students have enough information and advice to be able to put the TEF results into context so that they don’t make choices based purely on the rating a university received.”
Postscript:
Print headline: Did TEF get a full picture of participation?