TEF and the vagaries of its metrics

The current state of play

At the time of writing – April 2019 – a number of separate strands of activity are underway that will shape the final design of subject-level TEF, which becomes compulsory for all providers in 2020. These are: a final year of provider-level TEF (TEF4); an independent review; and a second year of subject-level pilots.

In this post I will consider what changes have been made to the metrics, in particular those used in the second pilot year, which is thought to be illustrative of the final model for subject-level TEF.

The format of the pilots taking place this year represents a consolidation of elements of the two models trialled in the first pilot year. It’s unusual for a national assessment regime to run through two pilots before implementation, which reflects the sheer scale of the exercise the OfS is taking on. That said, the decision to require written submissions for every subject group in addition to one at provider level – which differs from both year 1 pilot models – makes the process about as onerous as it could possibly be. Perhaps we can hope that the review will, at the very least, lessen the burden of TEF.

It’s a numbers game

Changes have been made to the metrics used in the initial hypothesis (i.e. a first calculation of a possible outcome) and these are expected to have a significant impact on future TEF outcomes.

From an OfS blog, here’s a summary of the changes this year:

– A revised model that will involve comprehensive assessment of all subjects and a separate provider level assessment.

– Student involvement and the student voice will be more prominent. Two new metrics proposed by students in last year’s pilot will be added: on learning resources and student voice. The criteria will include a focus on student partnership.

– Other revisions to the basket of metrics that informs the assessment will be tested including: a different combination of employment-related metrics; refined data on grade inflation; and new data focusing on gaps in attainment for disadvantaged student groups. Measures of ‘Teaching intensity’ will no longer be included.

Source: https://www.officeforstudents.org.uk/news-blog-and-events/press-and-media/work-to-develop-subject-level-tef-continues/

This has resulted in the inclusion of 9 metrics instead of 6, with 5 metrics derived from the NSS; 1 from HESA continuation data; and 3 employment-related metrics. The weightings have been adjusted accordingly, with NSS metrics now only carrying a 0.5 weighting each and the continuation metric worth 2.0.
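The weighting arithmetic above can be sketched as follows. The NSS weighting of 0.5 per metric, the continuation weighting of 2.0, and the 7.5 total are stated in the pilot guidance; the per-metric employment weighting of 1.0 is an assumption on my part, chosen simply because it makes the weights sum to the stated total.

```python
# Sketch of the pilot-year metric weightings. NSS (0.5 each) and
# continuation (2.0) weights are as described above; the employment
# weighting of 1.0 each is an ASSUMPTION consistent with a 7.5 total.
weights = {
    "NSS": [0.5] * 5,        # five NSS-derived metrics
    "continuation": [2.0],   # one HESA continuation metric
    "employment": [1.0] * 3, # assumed: three employment-related metrics
}

total = sum(w for group in weights.values() for w in group)
nss_share = sum(weights["NSS"]) / total

print(f"Total weight: {total}")       # 7.5
print(f"NSS share: {nss_share:.1%}")  # 33.3%
```

On these figures the NSS share falls from 3/6 (50%) in the previous basket to 2.5/7.5 (33.3%) now, which is the downgrading discussed below.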

The upshot of these adjustments is that the NSS has been downgraded: its metrics are now worth 2.5 out of 7.5 (33.3%), whereas previously they were worth 3 out of 6 (50%). The official line is that the additional NSS metrics were included in response to student requests, yet the student voice now carries less overall weight. Might this change – and the inclusion of certain salary-contingent measures – be an effort to tilt the playing field back in favour of Russell Group institutions? The ‘lessons learned’ document (DfE, 2017) stated that:

The NSS remains a key component of TEF but, in order to achieve a more balanced assessment, the weight of each NSS metric will be halved.

DfE, 2017, 5

This was apparently due to concerns about the overuse of NSS metrics in the TEF, and a judgement that, while the metrics should continue to be used, it was appropriate to reduce their weighting.

Employment metrics
With the DLHE graduate survey being replaced by the Graduate Outcomes survey, there was a potential data gap to cover.

Utilisation of the previously supplementary LEO metrics will help provide continuity in light of the discontinuation of the DLHE survey and its gradual replacement with data from HESA’s Graduate Outcomes survey

OfS, 2018a, 16

Some technical yet significant adjustments have, however, been made to these metrics in terms of what they measure.

The proportion of graduates in “highly skilled employment or further study” (taken from DLHE) has changed to “highly skilled employment or higher study”. And “Employment or further study” is now “sustained employment or further study”.

The first change, i.e. to use higher study instead of further study, limits the definition of a successful outcome to those studying a qualification at a higher FHEQ level than the one they have just completed. This may seem reasonable given the expectation of moving on to postgraduate study at Level 7 after completing an undergraduate qualification. This, after all, is a climate in which funding rules preclude support for undertaking another course at an equivalent or lower level (the so-called equivalent or lower qualifications (ELQ) rules). The impact in some subject areas is significant, however. A typical path for Law graduates is to proceed to a qualifying law degree, such as a Graduate Diploma conversion course, which is studied at the same level as the final year of an undergraduate degree (Level 6). Students following this route would not be counted under the ‘higher study’ measure, which would appear to be an oversight.

DLHE to LEO

The use of ‘sustained’ employment signals a shift in the student cohorts within the scope of TEF. DLHE was a graduate survey undertaken 6 months after graduation, close to when a graduate’s studies took place. This had the advantage of reflecting the current quality of provision at their alma mater; however, it was self-selecting (it included only those who chose to respond) and took place before many graduates had found their long-term employment path.

The inclusion of LEO data ensures the sample encapsulates all graduates in paid work in the UK three years after graduation. The downsides are that it does not indicate the nature of the work being done (e.g. full-time or part-time); it is not benchmarked by region; and it omits international students, those in self-employment, and those working abroad.

The disparities between the student cohorts within the scope of each metric are illustrated here (based on subject-level TEF in 2020):

This shows that whilst some metrics relate to current or recently graduated students, the LEO data refers back to cohorts studying up to a decade previously. This must undermine the relevance of the data in relation to how teaching is, or has recently been, delivered at the provider (to say nothing of the lack of any relationship between such data and teaching excellence in the first place).
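The timeline arithmetic behind that decade can be sketched. Only the “three years after graduation” figure comes from the discussion above; the degree length, tax-data lag, and number of aggregated cohort years are illustrative assumptions, not figures from the TEF specifications.

```python
# Hedged sketch: how far back the LEO data can reach for a 2020 assessment.
# Only YEARS_AFTER_GRAD is stated above; the other parameters are ASSUMPTIONS.
ASSESSMENT_YEAR = 2020
DATA_LAG = 2          # assumed: years before tax records become available
YEARS_AFTER_GRAD = 3  # LEO measures earnings three years after graduation
DEGREE_LENGTH = 3     # assumed: typical full-time undergraduate degree
COHORT_WINDOW = 3     # assumed: number of cohort years aggregated per metric

latest_tax_year = ASSESSMENT_YEAR - DATA_LAG            # 2018
latest_graduation = latest_tax_year - YEARS_AFTER_GRAD  # 2015
earliest_graduation = latest_graduation - (COHORT_WINDOW - 1)  # 2013
earliest_entry = earliest_graduation - DEGREE_LENGTH    # 2010

print(f"Earliest entry cohort reflected in LEO: {earliest_entry}")
```

Under these assumptions, a 2020 judgement can rest partly on students who began their studies around 2010 – roughly a decade earlier.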


References

Department for Education (DfE) (2017) Teaching Excellence and Student Outcomes Framework: lessons learned from Year Two. Available at:
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/651157/DfE_TEF_Year_2_Lessons_Learned-report.pdf [Accessed 4 April 2019]

Office for Students (OfS) (2018a) Teaching Excellence and Student Outcomes Framework: Guide to subject-level pilot data. Available at: https://www.officeforstudents.org.uk/media/ea9f3f58-00b6-45ef-bd00-1be029eb7114/ofs-201844a.pdf [Accessed 4 April 2019]

OfS (2018b) Teaching Excellence and Student Outcomes Framework: Subject-level pilot guide. Available at: https://www.officeforstudents.org.uk/media/57eb9beb-4e91-497b-860b-2fd2f39ae4ba/ofs2018_44_updated.pdf [Accessed 4 April 2019]
