The Times Higher Education article (limited access/paywall) on its Student Experience Survey, published on 9 April 2015, has prompted the usual raft of press releases from university marketing departments highlighting the good news story for their institution.
Loughborough University: “In total Loughborough was ranked in the top ten in 14 of the survey’s 22 categories.” [Link]
University of Portsmouth: “This result puts the University in the top half of UK universities nationally for best student experience.” [Link]
University of Leeds: “… Other highlights include being voted joint-second for centralised and convenient facilities and joint-third for extra-curricular activities and societies.” [Link]
University of Leicester: “We’re proud to have risen 16 places in the Times Higher Education Student Experience Survey” [Link]
The annual survey of full-time undergraduate students at 113 UK universities received a total of 14,697 responses, with each respondent answering 22 questions covering aspects of university life such as quality of staff, social life, extracurricular activities, the students’ union, accommodation and library facilities.
Whilst the overall response rate is statistically meaningful, in that broad conclusions can be drawn about the aspects of university life students are more or less happy with, the reporting of the survey, and those press releases, have neglected to mention the proportionately low response numbers at individual institutions. This undermines the representativeness of institution-level results, because the margins of error are relatively high.
By way of an example, the universities quoted above achieved the following response rates and associated margins of error in the sample:
| University | Student numbers* | Responses | % response | Margin of error** |
| --- | --- | --- | --- | --- |

\* Full-time undergraduates in 2013/14. Source: HESA, 2013/14 students by HE provider; all figures are rounded to the nearest five.

\*\* Assuming a 95% confidence interval.
The polling firm YouthSight recognises that “The difference in scores of similarly ranked institutions will not be statistically significant. When results are based on samples of about 100, we have to accept that some imprecision will arise from sampling variability”. It follows that no valid conclusions can be drawn about individual institutions’ results, nor is it valid to use the data to construct a league table.
The minimum threshold for inclusion in the published results was 50 responses, which, based on a mean cohort size of 9,300 full-time undergraduates (taken from HESA data), would give a margin of error of a staggering +/- 13.82%. This is the territory of the on-screen small print of shampoo TV ads. At least in the case of the National Student Survey, the minimum response thresholds are set at a level that supports the validity of the results used as comparator data (at least 23 respondents and a 50% response rate).
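The figures above can be checked with the standard formula for the margin of error on a proportion, applying a finite population correction for the cohort size. This is a minimal sketch (the function name and the worst-case assumption p = 0.5 are mine, not the survey’s), reproducing the +/- 13.82% quoted for 50 responses and the roughly +/- 10% imprecision YouthSight describe for samples of about 100:

```python
import math

def margin_of_error(n, population, z=1.96, p=0.5):
    """Margin of error for a sample proportion at a 95% confidence
    interval (z = 1.96), assuming the worst case p = 0.5 and applying
    the finite population correction for the cohort size."""
    standard = z * math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((population - n) / (population - 1))
    return standard * fpc

# 50 responses from a mean cohort of 9,300 full-time undergraduates
print(round(margin_of_error(50, 9300) * 100, 2))   # -> 13.82

# a sample of about 100 from the same cohort
print(round(margin_of_error(100, 9300) * 100, 2))  # roughly +/- 9.75
```

At cohort sizes in the thousands the finite population correction barely moves the result, which is why the response count, not the response rate, drives the margin of error.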
YouthSight do make the case that consistency over time provides some reassurance: universities that consistently appear at the top of the rankings are likely to be achieving genuinely better results. Taken together, the results also support some broad analysis of general trends at the sector level, such as a hierarchy of the attributes covered by the survey questions.
[In a previous blog post I’ve set out in general terms the issues affecting whether surveys can be considered representative]
Note: all external links accessed on 9 April 2015
HESA data available at: https://www.hesa.ac.uk/index.php?option=com_content&view=article&id=1897&Itemid=634
Times Higher Education, 2015 THE Student Experience Survey supplement