
No time for improvement? The chronopolitics of quality assurance

Abstract

Time is an omnipresent key dimension in everyone’s life, yet academic time has only recently found scholarly attention. The temporal aspects of quality assurance, in particular, are basically uncharted territory. Taking a chronopolitical perspective, this article aims to close the gap by critically examining how temporalities are firmly embedded in many quality assurance schemes and routines. Using various examples from internal and external quality assurance, the author demonstrates and discusses how such mechanisms are not only binding time but regulating and governing it, imposing temporal norms regarding tempo, rhythm, time-spans, time-scales and time ownership on higher education institutions and the people working and learning there. In conclusion, the article advocates a more reflective approach towards the notion of time in quality assurance, as latent temporalities appear to be far more consequential for the effectiveness of quality assurance than methodological micro-differences.

‘The tempo of the university has changed and so has the temporality’ (Murphy, Citation2014, p. 151)

Time is an omnipresent force or entity in our daily lives: we exist in time, we are structured by time, we measure time, organise time—and constantly bemoan how time flies. On the one hand, we regard time as a physical entity, which can be measured, put into equations and appears to be an unchangeable environmental constant. On the other hand, we experience time as something subjective and relative. We all live with multiple temporalities or multiple frames of time.

Acceleration and ‘lack of time’ have become dominant social mechanisms and interpretive patterns that pervade our societies in many aspects (Rosa, Citation2005) and can, to a certain degree, be linked to the omnipresent globalisation processes, not least in the area of education (Buddeberg & Hornberg, Citation2017). In essence, ‘time is at the core of any form of education’ (Decuypere & Vanden Broeck, Citation2020, p. 603).

Academic time, or the temporal patterns of higher education, has only recently gained increased scholarly attention (Guzman-Valenzuela & Di Napoli, Citation2014), also because institutional time as a concept is still rather new (Murphy, Citation2014). The impact of time, and of how it is structured, organised and managed, on core processes and activities in higher education is indisputable, though: as Felt (Citation2017) argued, the different temporalities of academia cannot be separated from the indicators that govern academic life and define success. She deliberately highlighted how the temporal construction of academic contracts, research projects and funding periods requires knowledge production (and publication) to be typically packaged in three-year units. Similarly, Murphy (Citation2014) argued how delivery has become the dominant mode of operations in universities, diminishing the time at hand for discovery. Spurling (Citation2015) drew attention to the important effects that the organisational structuring of time and the intrinsic rhythms of practices have for organisations and individuals.

Within the slowly emerging discourse on academic time, even less has been published on the temporalities of quality assurance. Typically, if a temporal perspective is applied to quality assurance, it is more about how perceptions, instrumental approaches or even systems change over a certain period of time (Shin, Citation2018; Seyfried et al., Citation2022; Overberg & Ala-Vähälä, Citation2020). Or, alternatively, time is indirectly invoked by pointing out the quality assurance related workload for academic staff (Newton, Citation2002, Citation2007). A rare exception is the paper by Clegg (Citation2014), who discussed latencies in the United Kingdom (UK) system and argued that the temporal elements embedded in the recently introduced Personal Development Plans as a mandatory curricular element (for example, projecting someone’s possible future self) might exclude students from disadvantaged backgrounds. Yet despite the lack of attention, the temporal régime(s) embedded in quality assurance seem to be of high relevance. Vostal (Citation2014, p. 82), though not explicitly addressing quality assurance, discussed the specific temporal connotations of the concept of excellence and the related imperative of competition, again drawing on UK experiences. He made a case for differentiating between what he calls the ‘growing net of surveillance mechanics’ linked to the institutional discourse on excellence and the less time-sensitive notions of excellence inherent to scholarly work. Overall, however, as stated above, the notion of time is strangely absent in scholarly work on quality assurance.

Based on the concept of chronopolitics, this article aims to close this gap, by critically examining and discussing how the temporal dimension is firmly embedded in many quality assurance schemes and routines. Examples are drawn from the rhythm and regularity of external review cycles, as well as from selected internal quality assurance mechanisms employed by many higher education institutions (such as course evaluations of teaching, graduate surveys and key performance indicators). By extrapolating how quality assurance processes and instruments organise, structure and bind time, the paper shows the risks of underestimating time as a key variable for institutions and individuals alike. After a brief introduction of chronopolitical perspectives in current higher education research, three dimensions for analysing temporalities in quality assurance are introduced: periodicity and rhythm; timescales and timebudgets; and regularity and continuity. The article concludes with a reflection on the latent temporal norms embedded in quality assurance and argues that the issue deserves far more attention.

Chronopolitics in higher education

Time very much functions as a constitutive contextual dimension for people’s sensemaking of their lifeworld (Compton-Lilly, Citation2016) but, even more importantly, time from a sociological perspective can be regarded as multidimensional and performative (Adam, Citation2004). There is not just one time which human beings perceive as linearly passing; rather, they orient themselves along different overlapping timescales (Lemke, Citation2000).

Chronopolitics in general describes the relation of temporality to a broader (political) context. With regard to higher education, ‘Chronopolitics refers to the politics of time governing academic knowledge generation, epistemic entities, and academic lives and careers, as well as academic management processes more broadly speaking’ (Felt, Citation2016, p. 54). For Adam (Citation2003), the political use of clock time accompanying modernisation includes regulating time as well as defining and imposing time norms. Economically, this is complemented by commodifying time and using time as a resource. Multiple temporalities form a dense infrastructure (Felt, Citation2017) that guides, frames, and regulates academic work and knowledge generation as well as teaching and learning. Similarly, and drawing on the works of Marx, Leathwood & Read (Citation2020) argued that time can be seen as a technology of governance. From such a perspective, time is far from neutral but rather ‘embedded in the social and cultural dynamics of power and inequality’ (Bennett & Burke, Citation2018, p. 915). Bennett & Burke (Citation2018, p. 914) regarded higher education in general as a ‘timescape in which participants manage their own and others’ time according to normative frameworks’. In this, they follow the influential conceptual work of Adam (Citation1998, Citation2004, Citation2008), who coined the notion of timescapes for clusters of temporal features, including time-frames, temporality, timing, tempo, duration, sequence and past, present and future as temporal modalities. Adam (Citation2008) also pointed out the epistemological importance of such timescapes, that is, how the temporal frameworks we impose (and which, vice versa, are imposed on us) determine what we are able to see.

Examples in recent higher education research linked to the notion of chronopolitics include the exploration of acceleration in academic life (Vostal, Citation2014), the allocation of time to different types of activities (Guzman-Valenzuela & Barnett, Citation2013) or even the chronopolitics of peer reviews (Kaltenbrunner et al., Citation2021). Felt (Citation2017) explored the links between temporal régimes in academia and the performance indicators in which such régimes manifest themselves. Hayes and Findlow (Citation2020) examined the role of time in higher education policymaking based on the case of Bahrain, arguing that connecting a spatial with a temporal lens allows for a better understanding of which global policies get prioritised in a specific context. Also on a macro-level, Lumino & Landri (Citation2020) examined the temporal politics inherent in the standardisation schemes within the European Higher Education Area (EHEA), concluding that the EHEA’s time space is essentially interfering with the multiple time spaces of higher education institutions.

Leathwood & Read (Citation2020) conducted a qualitative study to examine the negative effects of the increasingly short-term nature of academic contracts in the UK system on teaching preparation and pedagogical relationships, and thus ultimately on educational quality. Bennett & Burke (Citation2018) described how certain assumptions about time are intricately linked with notions of student capabilities and prospects of success, and thereby also reinforce structural inequality. Rodgers et al. (Citation2022) came to similar findings regarding time-induced barriers that disabled academics face in the Australian university system. Manathunga (Citation2019) explored the equity angle with a focus on doctoral education, on the one hand showing how doctoral candidates and their supervisors in general are increasingly subjected to time pressures, while on the other hand showing how these pressures take a particular toll on indigenous, migrant, refugee and international doctoral candidates.

Whereas most papers stress negative effects of standardised time and expectation structures, Vostal (Citation2014) tried to juxtapose the dominant interpretation of oppressive acceleration with some positive motivating and energising effects of acceleration in academia. Wallin (Citation2020) reported on student reflections on contemporary (neoliberal) timescapes in higher education in the context of an interdisciplinary course.

As mentioned in the introduction, the chronopolitical perspective on quality assurance in higher education is as yet rather underdeveloped, with hardly any scholarly research conducted in this area. All the more reason, then, to be reflective of the fact that quality assurance, like any other scheme for coordinating, governing or controlling higher education, imposes specific temporalities on higher education and its actors and institutions. Such temporalities include the speed and rhythm embedded in particular practices (for example, teaching evaluations, external reviews), temporal regulations (for example, student and teaching workload schemes, standardised deadlines for follow-up actions) or temporal standards (as embodied by the link between academic career paths and certain types of contracts).

For this article, three different lenses were chosen to examine the temporal dimensions of and within quality assurance: the periodicity and rhythm lens focuses on different review and strategy cycles and how they govern institutional improvement opportunities. Within the timescales and time budgets lens, the view shifts to the relationship between temporal resources and improvement actions, touching upon ownership of time. The regularity and continuity lens is applied to the impact of trajectorial thinking (manifesting in most ideas of continuous improvement) and repeated quality assessments on processes and outcomes. Examples are drawn from internal as well as external quality assurance schemes, with a huge debt owed to the Country Information Knowledge Base maintained by the European Quality Assurance Register for Higher Education (European Quality Assurance Register for Higher Education (EQAR), Citationn.d.).

Periodicity and rhythm: cycle-driven improvement

In this sub-dimension, periodicity and rhythm refer to the frequency with which institutions (have to) undergo external quality reviews, as well as the way progress and change are structured via temporal packaging in strategy papers and action plans. Looking at external reviews within the European Higher Education Area, differences are quite striking. Standard 1.10 of the Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG) (European Association for Quality Assurance in Higher Education (ENQA), European Students’ Union (ESU), European University Association (EUA) and European Association of Institutions in Higher Education (EURASHE), Citation2015) states that institutions should go through ESG-compatible external quality assurance on a cyclical basis. Details are, as always, left to national legislation, yet two things catch the eye from a chronopolitical perspective: first, the explanatory guideline only mentions how the external quality assurance might differ regarding forms and organisational level (programme, faculty, institution) but does not even touch upon the possibility of different cycles. Part 2 of the ESG is stricter here, as quality assurance agencies need to be reviewed every five years. Second, reviewers and institutions are urged to follow up on recommendations (or check on them) between reviews.

This is quite relevant, as the length of a cycle differs quite considerably across the European higher education landscape. In Slovakia, institutions need to undergo accreditation at least once every ten years, similarly in the Czech Republic (though what is called an institutional accreditation there mostly covers a specific field of studies within an institution). In Germany, a system accreditation seal is valid for eight years, before it needs to be renewed. The same holds true for institutional audits in Norway. In Estonia, Lithuania or Austria the cycles for institutions last for seven years, in Portugal, Georgia, Finland and Denmark for six years. In France, Hungary, Italy, Romania and Slovenia it is five years between cycles, which basically means that higher education institutions in these five countries have to shoulder twice as many mandatory external reviews as their counterparts in Slovakia. Within the UK, review cycles even differ between the four nations. Dissimilarities do not end here, though. Not all countries have institutional and programme level approaches in parallel; but if they do, cycles can either be congruent, as in the case of Portugal (where cycles are six years for institutional and programme level approaches) or incongruent, as in Italy (where institutional cycles take five years, but programme cycles three years). In Poland, programme level evaluations may lead to an outstanding assessment (awarded for 8 years), a positive assessment (valid for 6 years), a conditional assessment (a date of follow-up assessment will be given) or a negative assessment. The follow-up issue will be discussed further in the next section but one to three years is not a lot of time to fix anything major.

Internally, strategies and action plans are also often structured via three- to five-year cycles. In some cases, such a strategic plan might coincide with a change of leadership, yet in other cases (such as Austria, for example), renewal cycles are regulated by law. Incoming new leaders might be bound to strategic programmes they have not developed (or even bought into) themselves. The supranational accreditation scheme EQUIS of the European Foundation for Management Development (European Foundation for Management Development (EFMD), Citation2021) requires a clear definition of the main strategic objectives for the next five years, starting from the review date. It stands to reason that these differences in cyclicality will have quite an impact on institutions’ own internal quality assurance systems and on what they feel able to declare as an improvement goal within the respective cycle. In addition, an aspect that appears to be regularly overlooked is the fact that different disciplines have their own distinct temporal structures (Clegg, Citation2014) but, in the seemingly fair same-rules-for-everyone logic of most review schemes, such differences are invisibilised and ignored.

Timescales and time budgets: ownership of time

The timespan (duration) and time budget lenses are used to highlight how temporal resources are allocated to improvement actions and in what ways institutions thereby lose ownership of time. It is quite striking how rarely the time it takes to implement a specific improvement action appears to be taken into account when devising a quality assurance mechanism. As a result, institutions sometimes have too much time (leading to a situation where initiatives lie dormant) or too little time, because the task is too demanding and complex. Action plans need due dates, and higher education institutions themselves need to determine how long they will need to tackle a certain problem, yet they are not completely free in this. Apart from any considerations linked to internal politics, available resources or management cycles, higher education institutions also need to heed the expectations embedded in the periodicity of quality assurance schemes, as well as potential reviewer expectations. It can be quite challenging to explain why some actions take longer than reviewers are used to from their own contexts.

Transferring Felt’s (Citation2016) considerations on the ownership of time in the area of research to quality assurance shows the inherent power dynamics: recommendations for improvement, in particular in their binding form as conditions, impose on the time of organisations and individuals. Per se, such a statement might not raise many eyebrows: following Morley’s (Citation2003) observation of quality having become a universalising metanarrative, Vettori (Citation2018) found in his research that arguments that contain positively imbued references and terms and carry the notion of improvement are rarely put to the test or even contradicted. In this regard, it seems more than acceptable that improvements require time and other resources. The actual problem lies in the fact that neither reviewers nor quality assurance agencies seem to factor the time needed into their suggestions for improvement. A quite striking example of this deficit can be found in Cirlan & Gover’s (Citation2019) analysis of topics addressed by recommendations in the reports of the Institutional Evaluation Programme (IEP), an ESG-compatible supranational peer assessment of higher education institutions run by the European University Association. Examining a sample of just 25 review reports written between 2014 and 2018, the authors counted a staggering total of 580 recommendations, with one institution receiving 44 alone. Higher education institutions are expected to report a year later on the progress made so far and can undergo a voluntary follow-up evaluation between one and three years after the initial evaluation. It is not mandatory to work on all recommendations but reports are published and, in several cases, IEP evaluations have been conducted as system-level evaluations in close cooperation with national authorities. Overall, the number and compulsoriness of recommendations may vary depending on the methodology at hand, but experience suggests that tempo, time budgets or sequencing are not exactly a priority concern for anyone involved.

In most national systems, follow-up regulations are stricter and, again, there are considerable differences in the time span granted for resolving an issue. In Switzerland, for example, the Swiss Accreditation Council decides on a case-by-case basis how much time an institution gets to fulfil a condition defined during the procedure. In Sweden, the time span is the same for all, with a one-year follow-up phase for higher education institutions with programmes under review and two years in the case of an institutional audit. In Latvia, where within one type of procedure an entire field of study is accredited, possible outcomes are a refusal of the accreditation or a positive accreditation for either two or six years. The two-year follow-up is reserved for cases with the more serious concerns, which leads to an almost paradoxical situation: less time to fix the bigger issues. These three examples are not only intended to show the differences in time allowed for starting or even finalising an improvement process across the European higher education area. They also demonstrate how a seemingly minor point within the overall design of a quality assurance scheme, the time span for follow-ups, explicitly and implicitly imposes temporal norms on the institutions. This has considerable implications for the use of resources or, even more importantly, for the actual improvement achievable in a certain timeframe. In other words, the often criticised ‘window dressing’ in quality assurance (Sziegat, Citation2021; Teelken & Lomas, Citation2009) might have strong roots in the underlying temporal régimes.

Relatedly, the time budget lens also highlights time as a resource that flows into quality assurance processes in abundance. How much time is spent assembling reports and documenting activities, events and processes? Even more importantly, who actually assesses the workload or time constraints of the processes they are creating, while at the same time linking this to effectiveness? The related question of quality assurance-related workload for academics has been raised for a long time, mostly in the UK context (Newton, Citation2002, Citation2007), but the overall time spent on quality assurance processes in organisations goes far beyond that. Some organisations may react by creating shortcuts (Murphy, Citation2014, for example, discusses the bureaucrazy element) but this seems to be rather a defensive strategy.

Regularity and continuity: repeating ineffectiveness?

From a chronopolitical perspective, the notion of continuity (in quality assessment and improvement efforts) is particularly interesting, as two different logics come into play: with regard to the assessment part, continuity is basically a synonym for regularity and stability: measurements are repeated over a certain period of time (implications touched on below). With regard to improvement, however, continuity becomes associated with growth and advancement. Very often the logic of a review, be it internal or external, requires the (graphical) depiction of quantifiable developments over time: research output, third-party funds or graduate numbers are just some examples here (see, among others, the EQUIS Standards & Criteria, European Foundation for Management Development (EFMD) (Citation2021)). If, however, the trajectory does not point upwards, assessors are usually prone to identifying a potential problem. The very logic of continuous improvement, wherever quantitative measures are involved, is infused by the problem of trajectorial thinking, where the past, present and future are aligned in the service of achieving progress via following a specific path (Felt, Citation2016). Kaltenbrunner et al. (Citation2021, p. 262) briefly discussed this aspect in their chronopolitical analysis of peer reviews of academic curricula vitae, touching upon how ‘quantitative comparison is adopted for its ability to radically break down evaluative judgment processes in settings without widely agreed-upon definitions of quality’. This seems to be quite fitting for quality assurance in higher education in general.

This type of thinking can also be found in internal quality assurance, for example, when teachers are expected to improve their course evaluation scores or when institutions take efforts to raise overall student satisfaction. Most internal KPIs are infused with this kind of logic. In light of the popular Likert-type scales used for measuring student satisfaction, the notion of continuous improvement is quickly confronted with statistical realities, though: there is an absolute limit to top scores and, even more importantly, a relative one, as the top 10% of any population can never comprise more than one tenth of it. The key problems of trajectorial thinking, and how developments over time are easily misinterpreted, are firmly embedded in the practice of international rankings, too. While discussing the relationship between rankings and quality assurance, Hauptman Komotar (Citation2020) gives the example of the University of Ljubljana, which dropped from the 401–500 group in the Academic Ranking of World Universities (ARWU) in 2018 to the 501–600 group in 2019, which suggests a completely unrealistic decline in quality in a very short period. Part of the chronopolitical challenge that needs to be overcome in quality assurance is thus the way that time coordinates are used and misused, without reflection, to depict, assess and demonstrate changes in quality.

Coming back to the continuity in assessment schemes: many instruments used in internal quality assurance, such as course evaluations of teaching, student surveys, graduate surveys or employer and stakeholder surveys, are built around the notion of regularity. Data is collected and processed repeatedly (for example, semester-wise, annually, bi-annually), with the underlying assumption being that longitudinal analysis can detect changes in quality. Yet, methodologically, this makes little sense, as reports typically do not compare developments over a long period of time but mainly year by year, or for three- or five-year observation periods, even if just for the fact that report templates become unwieldy if the period is longer. In view of how most surveys are constructed (very often using nominal or ordinal scales and aiming for descriptive statistics) and how stable the response behaviour of most groups tends to be, there are hardly any major changes to detect, leading to a situation where micro-changes (if observed) are vastly overrated. Considering how many (time) resources are used for operating all those much too dense collection cycles, it might be advisable to reconsider. Yet the more crucial aspect, arguably, is how the inherent value placed on regular data collection (and the confusion of regularity with frequency) drives interpretation, and thereby ultimately the definition of what is considered a quality problem and/or an improvement. Quality notions such as transformation (Harvey & Green, Citation1993; Harvey & Knight, Citation1996) would require quality assurance instruments with a rather different relationship with time.

Last but not least, a focus on continuity might even thwart the intended effects: from a psychological point of view, improvement framed as a continuous (trajectorial) process can arguably even be seen as a devaluation of previous achievements (Weick, Citation2000). Temponi (Citation2005) has also described some key academic concerns regarding the application of continuous improvement schemes originating in business to higher education, which in essence leads back to Roffe’s (Citation1998) pioneering reservations, based on higher education being people-oriented rather than process-oriented.

The unreflected temporal norms embedded in quality assurance

Derived from the previous observations, a cautious, but consequential, conclusion: quality assurance mechanisms, internal and external ones, are not only binding time but regulating and governing it, imposing temporal norms regarding tempo, rhythm, time-spans, time-scales and time ownership on higher education institutions and the people working and learning there. This might not come as a big surprise, given how quality assurance has long been associated with new public management governance and managerialism (Enders & Westerheijden, Citation2014; Davis, Citation2017) and described as a bureaucratic beast to be fed (Newton, Citation2000). For Guzman-Valenzuela and Di Napoli (Citation2014), managerialism in the bureaucratic university is very much based on mechanisms of time-control and resource-control. This is mirrored in the view of Murphy (Citation2014, p. 145), for whom universities do not resemble business corporations but municipal corporations, with their ‘ever increasing micro-regulation of staff and student time’.

However, there are also less obvious, far more latent norms at play: concepts of time also manifest themselves in the standardised ways in which curricula and programmes are built (Leathwood & Read, Citation2020). How long are students allowed to take for a programme? How many hours is an average course allocated in order to achieve the teaching goals and learning outcomes? How does this impact on our very understanding of learning (and teaching)? In this context, it is important to note again that temporal patterns might not be completely standardised or homogeneous, but they are also far from being created and managed by individuals. They are, to a large degree, mediated by the organisational (Spurling, Citation2015), or even institutional, structures of time. As one effect of this, challenges that would need to be tackled at the level of the organisation (such as the tension between fulfilling more and more externally imposed bureaucratic requirements while also ramping up research output and teaching quality) become a problem of the individual.

Quality assurance-related practices are deeply entangled with time, but this fact seems to be largely undervalued, as is its impact on quality, however it is defined. As Felt (Citation2017) has aptly stated, the temporal construction of academic contracts, research projects and funding periods requires knowledge production (and publication) to be typically packaged in three-year units. Similarly, Murphy (Citation2014) argued how delivery has become the dominant mode of operations in universities, diminishing the time at hand for discovery. Such aspects might be immediately convincing and obvious, but the real challenge lies in identifying the more latent levels where academic work and quality work (Elken & Stensaker, Citation2018) are influenced and governed by unreflected temporal structures.

There are various problems at hand that deserve more scholarly attention and professional reflection, inter alia:

  • The problem of temporal inconsistencies within and between different national and institutional quality assurance systems as mentioned above.

  • The problem of decoupled temporalities, for example, the differences in tempo and rhythm of external requirements and organisational processing speed.

  • The problem of conflicting requests on the time resources of academic and administrative staff, resulting in overload and window dressing strategies.

  • The problem that the way time is embedded in and governed by quality assurance mechanisms and processes may actually deter improvement, rather than encourage it.

This list of problems, as well as the facets and temporal features highlighted in this article, is far from complete, with each item deserving further attention and scrutiny. In closing, though, there is a clear case to be made for more reflexivity regarding the temporalities of quality assurance in higher education and for a more conscious treatment of time in all its dimensions when devising or implementing quality assurance mechanisms, internal as well as external ones. A first step towards such awareness could be to invest in more in-depth research examining the chronopolitics of quality assurance, and to add this as a research strand to the suggestions by Harvey & Stensaker (Citation2022). In essence, the field would even profit from a growing awareness that some constituents of our daily practices are far more important and consequential for the effectiveness of quality assurance than methodological micro-differences.

Disclosure statement

No potential conflict of interest was reported by the author.

References

  • Adam, B., 1998, Timescales of Modernity (London, Routledge).
  • Adam, B., 2003, ‘Reflexive modernization temporalized’, Theory, Culture & Society, 20(2), pp. 59–78.
  • Adam, B., 2004, Time (Cambridge, Polity Press).
  • Adam, B., 2008, ‘Of timescapes, futurescapes and timeprints’, paper presented at Leuphana University of Lüneburg, Germany, 17 June 2008.
  • Bennett, A. & Burke, P.J., 2018, ‘Reconceptualising time and temporality: an exploration of time in higher education’, Discourse: Studies in the Cultural Politics of Education, 39(6), pp. 913–25.
  • Buddeberg, M. & Hornberg, S., 2017, ‘Schooling in times of acceleration’, British Journal of Sociology of Education, 38(1), pp. 49–59.
  • Cirlan, E. & Gover, A., 2019, ‘An analysis of topics addressed by recommendations in the reports of the institutional evaluation programme’ (Brussels, Institutional Evaluation Programme (IEP)). Available at https://www.iep-qaa.org/downloads/publications/iep%20study_topics%20addressed%20by%20reports%20recommendations.pdf (accessed 4 February 2023).
  • Clegg, S., 2014, ‘Temporality, curriculum and powerful knowledge’, in Gibbs, P., Ylijoki, O.-H., Guzmán-Valenzuela, C. & Barnett, R. (Eds.), 2014, Universities in the Flux of Time: An exploration of time and temporality in university life, pp. 168–81 (London, Routledge).
  • Compton-Lilly, C., 2016, ‘Time in education: intertwined dimensions and theoretical possibilities’, Time & Society, 25(3), pp. 575–93.
  • Davis, A., 2017, ‘Managerialism and the risky business of quality assurance in universities’, Quality Assurance in Education, 25(3), pp. 317–28.
  • Decuypere, M. & Vanden Broeck, P., 2020, ‘Time and educational (re-)forms—inquiring the temporal dimension of education’, Educational Philosophy and Theory, 52(6), pp. 602–12.
  • Elken, M. & Stensaker, B., 2018, ‘Conceptualising ‘quality work’ in higher education’, Quality in Higher Education, 24(3), pp. 189–202.
  • Enders, J. & Westerheijden, D.F., 2014, ‘The Dutch way of New Public Management: a critical perspective on quality assurance in higher education’, Policy and Society, 33(3), pp. 189–98.
  • European Association for Quality Assurance in Higher Education (ENQA), European Students’ Union (ESU), European University Association (EUA), European Association of Institutions in Higher Education (EURASHE), 2015, Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG) (Brussels, EURASHE). Available at https://www.enqa.eu/wp-content/uploads/2015/11/ESG_2015.pdf (accessed 1 March 2023).
  • European Foundation for Management Development (EFMD), 2021, ‘EFMD quality improvement system: EQUIS standards & criteria'. Available at https://efmdglobal.org/wp-content/uploads/2021_EQUIS_Standards_and_Criteria.pdf (accessed 17 February 2023).
  • European Quality Assurance Register for Higher Education (EQAR), n.d., ‘Country information'. Available at https://www.eqar.eu/kb/country-information/ (accessed 1 March 2023).
  • Felt, U., 2016, ‘Of time-scapes and knowledge-scapes: re-timing research and higher education’, in Scott, P., Gallacher, J. & Parry, G. (Eds.), 2016, New Languages and Landscapes of Higher Education, pp. 129–48 (Oxford, Oxford University Press).
  • Felt, U., 2017, ‘Under the shadow of time: where indicators and academic values meet’, Engaging Science, Technology, and Society, 3, pp. 53–63.
  • Guzman-Valenzuela, C. & Barnett, R., 2013, ‘Marketing time: evolving timescapes in academia’, Studies in Higher Education, 38(8), pp. 1120–34.
  • Guzman-Valenzuela, C. & Di Napoli, R., 2014, ‘Competing narratives of time in the managerial university: the contradictions of fast time and slow time’, in Gibbs, P., Ylijoki, O.-H., Guzmán-Valenzuela, C. & Barnett, R. (Eds.), 2014, Universities in the Flux of Time: An exploration of time and temporality in university life, pp. 168–81 (London, Routledge).
  • Harvey, L. & Green, D., 1993, ‘Defining quality’, Assessment & Evaluation in Higher Education, 18(1), pp. 9–34.
  • Harvey, L. & Knight, P., 1996, Transforming Higher Education (Buckingham, Open University Press & SRHE).
  • Harvey, L. & Stensaker, B., 2022, ‘Researching quality assurance: accomplishments and future agendas’, in Huisman, J. & van der Wende, M. (Eds.), 2022, A Research Agenda for Global Higher Education, pp. 81–95 (Cheltenham, Edward Elgar).
  • Hauptman Komotar, M., 2020, ‘Discourses on quality and quality assurance in higher education from the perspective of global university rankings’, Quality Assurance in Education, 28(1), pp. 78–88.
  • Hayes, A. & Findlow, S., 2020, ‘The role of time in policymaking: a Bahraini model of higher education competition’, Critical Studies in Education, 61(2), pp. 180–94.
  • Kaltenbrunner, W., Rijcke, S.D., Müller, R. & Burner-Fritsch, I., 2021, ‘On the chronopolitics of academic CVs in peer review’, in Vostal, F. (Ed.), 2021, Inquiring into Academic Timescapes, pp. 247–64 (Bingley, Emerald Publishing).
  • Leathwood, C. & Read, B., 2020, ‘Short-term, short-changed? A temporal perspective on the implications of academic casualisation for teaching in higher education’, Teaching in Higher Education, 27(6), pp. 756–71.
  • Lemke, J.L., 2000, ‘Across the scales of time: artifacts, activities, and meanings in ecosocial systems’, Mind, Culture, and Activity, 7(4), pp. 273–90.
  • Lumino, R. & Landri, P., 2020, ‘Network time for the European Higher Education Area’, Educational Philosophy and Theory, 52(6), pp. 653–63.
  • Manathunga, C., 2019, ‘‘Timescapes’ in doctoral education: the politics of temporal equity in higher education’, Higher Education Research & Development, 38(6), pp. 1227–39.
  • Morley, L., 2003, Quality and Power in Higher Education (Maidenhead and Philadelphia, Society for Research in Higher Education and Open University Press).
  • Murphy, P., 2014, ‘Discovery and delivery: time schemas and the bureaucratic university’, in Gibbs, P., Ylijoki, O.-H., Guzmán-Valenzuela, C. & Barnett, R. (Eds.), 2014, Universities in the Flux of Time: An exploration of time and temporality in university life, pp. 137–53 (London, Routledge).
  • Newton, J., 2000, ‘Feeding the beast or improving quality? Academics' perceptions of quality assurance and quality monitoring’, Quality in Higher Education, 6(2), pp. 153–63.
  • Newton, J., 2002, ‘Views from below: academics coping with quality’, Quality in Higher Education, 8(1), pp. 39–61.
  • Newton, J., 2007, ‘What is quality?’, in Bollaert, L., Brus, S., Curvale, B., Harvey, L., Helle, E., Jensen H.T., Komljenovic, J., Orphanides, A. & Sursock, A. (Eds.), 2007, Embedding Quality Culture in Higher Education. A selection of papers from the 1st European Forum for Quality Assurance (Brussels, European Universities Association).
  • Overberg, J. & Ala-Vähälä T., 2020, ‘Everlasting friends and enemies? Finnish university personnel’s perceptions of internal quality assurance in 2010 and 2017’, Scandinavian Journal of Educational Research, 64(5), pp. 744–67.
  • Rodgers, J., Thorneycroft, R., Cook, P.S., Humphrys, E., Asquith, N.L., Yaghi S.A. & Foulstone, A., 2022, ‘Ableism in higher education: the negation of crip temporalities within the neoliberal academy’, Higher Education Research & Development (published online 26 October 2022).
  • Roffe, I.M., 1998, ‘Conceptual problems of continuous quality improvement and innovation in higher education’, Quality Assurance in Education, 6(2), pp. 74–82.
  • Rosa, H., 2005, Beschleunigung. Die Veränderung der Zeitstrukturen in der Moderne (Frankfurt am Main, Suhrkamp).
  • Seyfried, M., Döring, M. & Ansmann, M., 2022, ‘The sequence of isomorphism: the temporal diffusion patterns of quality management in higher education institutions and hospitals’, Administration & Society, 54(1), pp. 87–116.
  • Shin, J.C., 2018, ‘Quality assurance systems as a higher education policy tool in Korea: international convergence and local contexts’, International Journal of Education Development, 63, pp. 52–58.
  • Spurling, N., 2015, ‘Differential experiences of time in academic work: how qualities of time are made in practice’, Time & Society, 24(3), pp. 367–89.
  • Sziegat, H., 2021, ‘The response of German business schools to international accreditation in global competition’, Quality Assurance in Education, 29(2/3), pp. 135–50.
  • Teelken, C. & Lomas, L., 2009, ‘How to strike the right balance between quality assurance and quality control in the perceptions of individual lecturers: a comparison of UK and Dutch higher education institutions’, Tertiary Education Management, 15(3), pp. 259–75.
  • Temponi, C., 2005, ‘Continuous improvement framework: implications for academia’, Quality Assurance in Education, 13(1), pp. 17–36.
  • Vettori, O., 2018, ‘Shared misunderstandings? Competing and conflicting meaning structures in quality assurance’, Quality in Higher Education, 24(2), pp. 85–101.
  • Vostal, F., 2014, ‘Academic life in the fast lane: the experience of time and speed in British academia’, Time & Society, 24(1), pp. 71–95.
  • Wallin, P., 2020, ‘Student perspectives on co-creating timescapes in interdisciplinary projects’, Teaching in Higher Education, 25(6), pp. 766–81.
  • Weick, K.E., 2000, ‘Quality improvement. A sensemaking perspective’, in Cole, R.E. & Scott, W.R. (Eds.), 2000, The Quality Movement and Organization Theory (Thousand Oaks, Sage).