Meritocracy or reputation? The role of rankings in the sorting of international students across universities

Pages 252-263 | Received 24 Apr 2021, Accepted 20 Apr 2022, Published online: 02 May 2022

ABSTRACT

University rankings have gained prominence in tandem with the global race towards excellence and as part of the growing expectation of rational, scientific evaluation of performance across a range of institutional sectors and human activity. While their omnipresence is acknowledged, empirically we know less about whether and how rankings matter in higher education outcomes. Do university rankings, predicated on universalistic standards and shared metrics of quality, function meritocratically to level the impact of long-established reputations? We address this question by analysing the extent to which changes in the position of UK universities in ranking tables, beyond existing reputations, impact on their strategic goal of international student recruitment. We draw upon an ad hoc dataset merging aggregate (university) level indicators of ranking performance and reputation with indicators of other institutional characteristics and international student numbers. Our findings show that recruitment of international students is primarily determined by university reputation, socially mediated and sedimented over the long term, rather than universities’ yearly updated ranking positions. We conclude that while there is insufficient evidence that improving rankings changes universities’ international recruitment outcomes, they are nevertheless consequential for universities and students as strategic actors investing in rankings as purpose and identity.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 According to a 2013 survey by the European University Association, sixty per cent of the European universities surveyed report that rankings play a role in their institutional strategies, eighty-six per cent monitor their position in rankings, and sixty per cent dedicate human resources to monitoring rankings through dedicated units or staff (EUA Citation2013).

2 Even when rankings are limited to specific groupings, on the basis of age and region for example (as in QS Top 50 Under 50, THE Asia University Rankings), the same quality conventions and benchmarks apply.

3 This is crucial as we are interested in operationalising reputation and rankings independently. The CUG criteria are: student-staff ratio, academic services expenditure, facilities expenditure, entry qualifications, degree classifications, degree completion (operationalised from the Higher Education Statistics Agency, HESA), graduate prospects, student satisfaction (operationalised from the National Student Survey, NSS), as well as research intensity and research quality (operationalised from the Research Excellence Framework, REF).

4 Note that the earlier ranking tables by the Times and the CUG ranking tables display significant overlap in terms of the indicators used, ensuring comparability of our two variables. Similar to the CUG, the Times indicators included research and development (R&D) income; scores from the research assessment exercises conducted by the Higher Education Funding Council; teaching assessments conducted by the funding councils for England, Wales and Scotland; an employment ranking based on measures such as the proportion of graduates going on to permanent employment; as well as student-staff ratio, completion rates, proportion of first-class degrees awarded, and expenditure on services such as library spending. The criteria display some variation from one year to another (e.g., in 1994 an indicator for completion rates was introduced; in 1995 the indicator for R&D income was dropped), but overall the measures remain consistent over time.

5 The Russell Group, established in 1994, is a self-selected association of (initially 17) universities in the United Kingdom. Although the group comprises universities with rather mixed outcomes in terms of teaching and research, the label has nevertheless come to be perceived as a ‘distinctive elite tier’ (Boliver Citation2015).

6 Outliers have been excluded, e.g., the Open University, where, owing to the nature of its distance education provision, the total number of students is about ten times larger than that of the average university in our sample.

7 The Breusch and Pagan Lagrangian Multiplier test confirmed that observations are more similar within universities (χ² = 160.24, p < .001).

Additional information

Funding

Soysal and Cebolla-Boado acknowledge funding from the UK Economic and Social Research Council [grant number ES/L015633/1].