NB: this post relates to the consultation on Higher Education Review, April 2013
The Government’s Higher Education white paper, ‘Students at the heart of the system’ (June 2011), has been, in lieu of the promised Higher Education bill, the instruction manual for recent reforms in the sector. We have already seen changes in areas such as funding arrangements and published information, with the introduction of mandatory Key Information Sets (KIS) and the relaunched Unistats website.
The latest development is to introduce a more ‘risk-based approach to quality assurance’, through further revisions to the review methodology used by the Quality Assurance Agency (QAA) when scrutinising the operations and standards of all higher education institutions.
In June 2012 HEFCE was asked by the Government to consult on a new approach, which put the following parameters in place:
- A two-tier arrangement for the period of time between reviews: 6 years for most providers and 4 years for those without a track record;
- The consolidation of higher and further education reviews, and collaborative provision into a single method;
- The tailoring of the scope of reviews to providers, moving away from a one-size-fits-all system.
The QAA was then set the task of designing the new system within these constraints and has launched its own consultation called ‘Higher Education Review’ (HER). The deadline for responses to the consultation is 22nd April 2013, and if you have an interest in this area do take part. [Link to QAA site] Here’s why:
Burden on review teams
Review teams were already under pressure following the introduction of Institutional Review (IRENI) in 2011/12, which says nothing of the extra load now being proposed in HER. Whilst it has not been explicitly stated, financial imperatives are surely paramount in this further redesign of the review methodology, so soon after the last. There will be less time available to review teams to carry out their role; fewer reviewers involved in a review from start to finish; and the removal of the review secretary role, which played a key part in co-ordinating the process.
The most striking change is the replacement of the first team visit with a desk-based appraisal. This is carried out by a portion of the review team in order to establish a provisional level of ‘confidence’ based on a set evidence base. Because the members of the review team who visit the institution to conduct the review won’t have been involved in the initial appraisal, they will have only a limited opportunity to familiarise themselves with the context of the institution prior to the single review visit.
Another aspect of the initial appraisal is its role in determining the length of the visit and the size of the review team. The structure is such that most reviews will be of ‘medium intensity’, with a review visit of three days. Whilst this may lower the stress levels of V-Cs and fulfil the brief of a more tailored process, it does add to the risk that conclusions may be reached without fully exploring the issues at hand.
More worrying might be the production of a short appraisal report, which is intended to provide the institution with transparent reasoning as to how the level of ‘confidence’ in their provision has been reached. The fact that this would likely be subject to a Freedom of Information (FOI) request may be music to the ears of news-hungry HE journalists, and the use of language such as ‘confidence’ at a point when the review team have yet to set foot on the premises opens the way for misrepresentation.
Direct student contributions
A further initiative is to invite the entire student body to contribute to the review by raising issues with the QAA via email in advance of the visit. Apart from the potential problem of this undermining and bypassing student representation systems, it adds a further layer to review teams’ scrutiny, and one which surfaces only issues of an individual nature that may have to be deciphered through further investigation. The notion of student involvement in the review process is in itself to be encouraged, but it should happen in a properly considered and defined way.
More for less
All in all, the new method fits with the government mantra of ‘more for less’, but is this appropriate for quality assurance? I am hugely sympathetic to the legion of academic and professional support staff, and experienced student reviewers, who will now find themselves having to give only a cursory glance at some aspects of provision due to time constraints. This may seem a step removed from the system they signed up to support, and time will tell whether some cease to serve as reviewers. By 2013/14 there will have been four different review structures across four consecutive years, and with a rate of change as fast as this, who can say where it will go next?