Apples and Pears: cross-sector comparisons of academic standards

The Quality Assessment Review discussion document (HEFCE, 2015), published in January by the group tasked by HEFCE with reviewing quality assurance arrangements in England, Wales and Northern Ireland, includes a question about the comparability and consistency of quality and standards:

  • Question 7: Should we seek to demonstrate to stakeholders that academic standards are comparable between providers? And between subject areas? If so, what assurances should be sought about such comparabilities?

The document states that the review is “seeking to re-test assumptions about the importance or otherwise of demonstrating a reasonable degree of comparability of standards in an increasingly diverse system with different types of provision, and, if desired, the mechanisms that might be appropriate to achieve this” (HEFCE, 2015: 6).

Although the document is deliberately broad in scope, the inclusion of this topic reflects, as it states, the lack of a settled view inside and, perhaps more pertinently, outside the sector. I make this comment against the backdrop of the increased attention given to the sector by consumer groups, the continuing focus on league-table performance and the expansion of the types of provider allowed access to student loan funding. It can reasonably be predicted that attention from outside the sector will turn to how well degree outcomes can be compared between universities and across subject areas (a possible corollary of consumer contracts and of students being able to move more easily between institutions).

Back in July 2009, the Innovation, Universities, Science and Skills Select Committee questioned the vice-chancellors of Oxford Brookes University and the University of Oxford (and the then Secretary of State, John Denham) about the comparability of degrees between the two institutions (IUSSC, 2009a: 113). Their answers, which the committee deemed wholly unsatisfactory, sought to explain how standards were broadly comparable by reference to the external examining system and the academic infrastructure published by the Quality Assurance Agency (QAA).

The committee were seeking assurances that could not be given, not categorically at least.

Three years earlier the QAA itself had issued a briefing note to the ‘Burgess Group’ (investigating alternative methods to the degree classification) with the following statement:

“The class of honours degree awarded to a graduating student by an institution reflects the academic achievements of that student, the marking practices inherent in the subject(s) studied and the rule(s) authorised by that institution for determining classification, on the basis of marks obtained in components of the programme of study. Therefore, it cannot be assumed (a) that students graduating with the same classified degree from different institutions, having studied different subjects, will have achieved similar academic standards; (b) that students graduating with the same classified degree from a particular institution, having studied different subjects, will have achieved similar academic standards; and (c) that students graduating with the same classified degree from different institutions, having studied the same subject, will have achieved similar academic standards.” (QAA, 2006)

Along similar lines, it is also worth noting the detailed research undertaken by the Student Assessment and Classification Working Group (SACWG), a group of academics and administrators with an interest in assessment who, over the past 20 years, have investigated issues relating to assessment regulations, quality and standards. Their written submission to the select committee (IUSSC, 2009b) is a useful summary of some of the institution-level issues that bear on the notion of wider comparability across the sector:

  1. Assessment regulations and practices (“practices” is taken to include not only the rules and conventions that complement the published regulations, but also assessment methods) across the higher education sector are quite varied.
  2. The profiles of honours degree classifications in different subject areas are varied (“One cannot therefore with confidence interpret classifications without an appreciation of the norms pertaining to the particular subject(s) involved”).
  3. The type of assessment task set for students influences the grades that they receive for their work.
  4. Assessment criteria are, in practice, fuzzier than is often acknowledged.

Returning to the vice-chancellors’ citing of the external examining system and the academic infrastructure: the language used in the various parts of the academic infrastructure may have helped to create, for those wishing to make a case for the comparability of standards, the obfuscation the select committee identified in the vice-chancellors’ answers. Subject benchmark statements were to be ‘considered and taken account of’; the framework for higher education qualifications was to ‘assist higher education providers to maintain academic standards’; codes of practice provided ‘system-wide principles (precepts) covering matters relating to the management of academic quality and standards in higher education’.

In the intervening years the QAA replaced the academic infrastructure with the UK Quality Code, whose expectations all providers of UK higher education are required to meet. In ‘How did it come to this?’, Raban and Cairns (2014) argue that, by changing its mode of engagement with institutions, the agency has moved the sector away from ‘self-regulation and academic freedom… (to) an external agency prescribing standards’. The Quality Code is designed, and operates in institutional reviews, as a list of requirements to investigate; if reviewers are expected to work through the indicators for evidence that the expectations are being met, a process-driven approach is likely.

My supposition is that the agency’s shift in approach, and its establishment (whether or not intended for use in this way) of a set of common parameters for comparison purposes, came down to a combination of (a) responding to external influences and (b) reacting to the difficulties in matching academic standards across degrees that it had itself identified.

The other aspect referenced by the Vice-Chancellor of Oxford University in 2009 was external examining. HEFCE is undertaking, in addition to the consultation on quality assessment arrangements, further work to support the review, with one strand announced in a letter to Vice-Chancellors (HEFCE, 2014) dated 18th November 2014:

“We will be commissioning a piece of work to review the extent to which the reforms proposed by the 2011 Finch Review on external examining arrangements have been implemented across the sector. We will be seeking views on whether the current arrangements will remain fit for purpose in the changing higher education environment to 2025, or whether the sector feels that some strengthening of these arrangements, or indeed supplementary models, would be desirable for the assurance of standards.”

What the review is likely to find is a model that provides a useful network of critical friends, but one whose assurances about equivalence to sector standards are limited by the personal experience of the examiner. The Higher Education Academy’s Handbook for External Examining states: “The idea that a single external examiner could make a comparative judgement on the national, and indeed international, standard of a programme has always been flawed. Current trends are to identify and use threshold standards” (HEA, 2012: 29).

What this shows is that current quality assessment arrangements do not allow definitive conclusions to be drawn about cross-sector comparability of standards, at least not in the way that consumer groups, comparison websites or the select committee might wish. This remains the case despite the shift in approach adopted by the QAA through the changes made to the review method and related reference documents. It remains to be seen what approach HEFCE takes once the review has run its course.


Brown, R. (2010) Comparability of degree standards? [online]. Available at: [Accessed 8 February 2015]

Higher Education Academy (HEA) (2012) Handbook for External Examining [online]. Available at: [Accessed 26 January 2015]

Higher Education Funding Council for England (HEFCE) (2015) The Future of Quality Assessment in Higher Education [online]. Available at: [Accessed 19 March 2015]

Higher Education Funding Council for England (HEFCE) (2014) Letter to Vice-Chancellors, 18 November 2014 [online]. Available at: [Accessed 1 February 2015]

Innovation, Universities, Science and Skills Committee (IUSSC) (2009a) Students and Universities. Eleventh Report of Session 2008–09, Volume I [online]. Available at: [Accessed 26 January 2015]

Innovation, Universities, Science and Skills Committee (IUSSC) (2009b) Memorandum 16: Submission from the Student Assessment and Classification Working Group (SACWG) [online]. Available at: [Accessed 26 January 2015]

Quality Assurance Agency for Higher Education (QAA) (2006) Background briefing note: the classification of degree awards. Higher Education Empirical Research Database [online]. Available at: [Accessed 8 February 2015]

Raban, C. and Cairns, D. (2014) ‘How did it come to this?’, Perspectives: Policy and Practice in Higher Education, 18(4), pp. 112–118.

Rust, C. (2014) ‘Are UK degree standards comparable?’, Times Higher Education [online], 13 November 2014. Available at: [Accessed 26 January 2015]

