
Lessons learned from enabling large-scale assessment change: a collaborative autoethnographic study

Pages 844-858 | Received 24 Mar 2023, Accepted 30 Oct 2023, Published online: 07 Dec 2023

ABSTRACT

The global pandemic prompted universities to rethink how assessment might be reconfigured to better support student learning across different modes of delivery, resulting in unprecedented, large-scale, and rapid institutional change. Significantly, there has been a dearth of empirical studies examining the nuances of staff experiences of managing and negotiating assessment change throughout the pandemic. In this article we aim to bridge this gap. We are four colleagues at different UK higher education institutions who have all been involved in leading the sustained assessment response to the pandemic within our own organisations. We use collaborative autoethnography (CAE) to explore and analyse our approaches to enabling large-scale assessment change, with the aim of locating sharable lessons for educational developers and others leading change efforts in higher education. We articulate key lessons generated from our collaborative exploration that we believe may be useful to other educational developers and academic practitioners who seek to change the assessment landscape within a course, faculty, or institution. The lessons presented offer an alternative frame for managing future-facing assessment change in higher education, one that is sensitive both to the practice realities of practitioners and to the impact subsequent change has on student learning and performance.

Introduction

Responding to the same unprecedented emergency, we (the authors) came together as a group of educational development practitioners working across four different UK-based universities to reflect on our shared experiences of managing and leading large-scale assessment change at our own institutions. Designed as a collaborative autoethnography, this study examines our experiences of assessment change in an attempt to enrich and inform the literature on educational developers’ agentic responses to large-scale change initiatives during and beyond the COVID-19 pandemic. There has been a dearth of empirical studies examining the nuances of staff experiences of assessment change in higher education amid the challenges and opportunities brought about by the pandemic. The current study endeavoured to examine and learn from our experiences of leading and supporting large-scale assessment change. While it provides insights into how assessment change came to be during this time of crisis, and shows something of the nature of change in that moment, our firm focus is on articulating lessons for us as educational development practitioners and change leaders that will help us support assessment transformation in the long term.

The context

Assessment of student learning is a fundamental function of higher education. It is how we assure and express academic standards, and it has a vital impact on student behaviour, staff time, university reputations, league tables and, most of all, students’ future lives. It has been argued that assessment practices in most universities have not kept pace with the vast changes in the context, aims and structure of higher education. The case for assessment reform has been made widely in the literature over decades (see Jessop, 2019), and yet change was slow until the COVID-19 pandemic, when new opportunities were ‘identified and limitations to our existing practices have been exposed’ (Reid & Sam, 2021, p. 130).

A more diverse student body, in terms of achievement, disability, prior education, and expectations of higher education, was already reshaping our focus on retention and standards. At the same time, promoting student-centred learning has become an increasing focus. Against this backdrop, assessment must also remain valid, reliable, transparent, and aligned to regulatory codes (such as those of the QAA and the Office for Students). Alongside these at times conflicting agendas, we also have to consider the role of organisational culture, history, systems and logistics (Simper et al., 2022) and individual agency (Joughin et al., 2017) as factors shaping practice. The onset of the global COVID-19 pandemic compelled universities to rethink how the significant resources devoted to assessment might be reconfigured to better support student learning across different modes of delivery, with the resultant large-scale and rapid institutional change to university assessments rendering explicit the complex challenges presented by such fundamental changes. Educators were faced with the reality of students not being able to attend in-person practical assessments. Similarly, in-person examinations held in large halls were no longer possible, and preparation for assessments needed to be undertaken remotely. We were working in a time of rapid change, and we all wanted to understand how such change, which prior to this point had seemed out of reach, had suddenly become possible. We wanted to notice what its features were and what the enablers of and barriers to change were in this unique moment, so that we might learn from it for the long term. In this shared endeavour we had become our own sites of research as we sought to understand the process we were in the middle of.

Educational developers are concerned with cultural change (Stensaker, 2018), which is complex and features multiple stakeholders with their associated tensions. Educational developers must navigate disciplinary differences, institutional agendas, and individual academic outlooks to frame and enable practice. The qualities needed for this include specific expertise, but according to Little and Green (2022) there is also a need for trustworthiness, benevolence, and integrity to assist in brokering ways forward amongst the tensions and different value positions. In the assessment landscape of the COVID-19 pandemic, we all sought to navigate and resolve change with the values cited by Little and Green. We were trying to ‘do good’ for both students and colleagues and to offer support through a time of change.

Achieving reform in assessment is especially challenging for educational developers and others. Despite assessment’s relative importance in students’ experience of higher education, change is often resisted (Deneen & Boud, 2014). The underlying reasons for resistance and inertia can be deeply rooted and closely aligned to customs and practice conventions within disciplines (Trowler & Bamber, 2005). Changes to assessment often also require consultation with professional bodies, key stakeholders, and quality assurance partners because of the externally validated judgment processes to which they subscribe. Put simply, change is multifaceted. The relationships that shape the change process are often complex. Kezar (2014) highlighted that a lack of trust in leaders, or cynicism born out of previous failed change initiatives, can become embedded within the fabric of an institution. Additionally, research emerging from the COVID-19 pandemic period highlighted how relationships between colleagues, and the mutual understanding of each other’s skills, can shape how we co-operate and collaborate in times of change (Watermeyer et al., 2022). The struggle of the change process can also be seen to reside ‘within’ the individual. Resistance is often born out of fear or uncertainty and, as Piderit (2000) argued, can be mitigated if such fears are reduced.

During the initial period of the COVID-19 pandemic, when assessment change decisions needed to be made quickly out of necessity, the academic community became more alert to many of the rooted issues and assumptions around assessment change which needed to be revisited. In their reflection, Reid and Sam (2021) recall how they let go of some of their pre-existing scepticism and embraced different ways of assessing their students. Change was happening; we had momentarily become less resistant. We wanted to understand the extent to which staff had become engaged with, and encouraged to inquire into, required change and its implications. We were asking: how was this change happening?

The general mood of UK higher education practitioners, even before the COVID-19 pandemic, was one of tension and unease. For example, Darabi et al. (2017) identified increased stress amongst academics linked to factors including, but not limited to, workload and administrative burdens, as well as student numbers and funding cuts. As far back as 2010, Tight argued that academics were increasingly being asked to do more within their workload, and the subsequent decade only intensified such demands. Academics must balance teaching, research, public engagement, administration, student support and increasing student numbers. It is, therefore, perhaps inevitable that assessment change could be interpreted as yet another thing to add to the mounting pile of work to be done. When COVID-19 became a UK reality in March 2020, the landscape suddenly changed: decisions had to be made quickly, and the consultations, reflections and dialogue surrounding pedagogy often did not occur. Back in 2014, Deneen and Boud argued that resistance to assessment change typically fell into three categories: epistemic (push-back against the knowledge structures underlying change), procedural (administrative procedures acting as perceived and real barriers to change), and pragmatic (logistical and practical challenges around change implementation, i.e., time). In a period of rapid enforced change, typified by the beginning of the COVID-19 period, epistemic (conceptual and theoretical) discussions arguably carried the greatest potential for leveraging change, as the situation was so proximal to assessment periods. The areas of procedural and pragmatic resistance, however, had the potential to highlight unforeseen challenges and to expedite discussion of what could conceivably be achieved within such a timeframe.

Methodology

The aim of our study was to explore the COVID-19 pandemic assessment responses of our own institutions, of which we were each part. Through this research, we drew out lessons regarding how we could better support assessment change in UK higher education in future. The research questions evolved over the course of the study, as did the methods and analytical strategies, yet our commitment to the underpinning approach of collaborative autoethnography was a constant. From the project’s conception we shared the belief of Duffy et al. ‘that closer self-reflection and self-examination of our practices might yield improved understanding of how to improve … thus improving eventual student learning outcomes’ (2018, p. 61). Here ‘our practice’ was facilitating change.

Autoethnography has emerged from a range of disciplines and traditions: from literature studies (see Denshire, 2014), from anthropology (Bayerlein & McGrath, 2018), and from critical movements often associated with struggles and the exploration of difficult experiences (Chang et al., 2016). It is a postmodern approach which embraces the existence of multiple and subjective realities. It is growing in popularity and is being used for pedagogic studies, including to locate lessons from the COVID-19 pandemic period (Jung et al., 2021). Autoethnography puts the researcher at the centre of the process as both the subject and the object of the research (Ngunjiri et al., 2010); they are both the researcher and the site of research. Autoethnography is not a study of self in isolation: it uses self as ‘a window to the world’ (Chang et al., 2016, p. 19), connecting it to, and triggering insights about, culture and context. Some studies are more therapeutic and introspective; others are more analytical, seeking to draw findings.

Autoethnography can be extended such that it becomes ‘multivocal’ as researchers work together to share and interpret their experiences (Lapadat, 2017). Working together as a pair or group of researchers may mitigate some of the known challenges of autoethnography: for example, the literature describes a risk of introspection and self-absorption in solo studies (Wall, 2016), whereas collaboration can give rise to dialogue which is ‘intentional and critical and has the potential to deepen the process of research’ (Blalock & Akehi, 2018, p. 101). Collaborative autoethnography does not offer a single approach to working together; like other aspects of the research approach, there needs to be an element of sensemaking in context to establish what works.

Like others in the COVID-19 pandemic period (see, for example, Markham et al., 2021), we (the four authors) were working through a time of crisis and with a desire to connect, learn, and make sense of the world around us. We all had a prior interest in assessment change, and we all identify as educational developers holding leadership responsibility for overseeing institutional change efforts in assessment. Though our institutional contexts were diverse, in our initial meetings we quickly realised that we were facing similar challenges in facilitating change through the COVID-19 pandemic period. We were colleagues known to each other through networks and loose ties who came together to explore and make sense of a pressing issue: that of assessment change. Our initial focus was, as our meeting records describe: ‘how the experience(s) of [facilitating] change have changed you (in terms of outlook and practice); and how your work with assessment has changed from before the COVID-19 pandemic?’ We began this process to frame and understand our practice and to learn from the moment.

We began by creating narratives to answer the two initial questions posed; each narrative was no more than 1,000 words. We then agreed to read and react to each other’s narratives in turn with follow-up comments and questions, and the original narrative author then came back and responded to the points raised. Throughout this process, we continually checked that our methods were trustworthy and appropriate. For example, we began by agreeing to give feedback only in pairs; however, once we saw the richness of the data, we all felt that we should immerse ourselves in the full process of ‘review and respond’ to allow us to fully appreciate each other’s stories. This proved rich in terms of data generation but also strengthened our relationships as we came to know each other better. We kept detailed notes of what we had agreed, to help ensure that we were managing the process and being clear about our mutual expectations; this provided us with a set of ground rules for engagement and was an important foundation for moving forward in such an approach.

To keep the data manageable, and because we noticed convergence in the reactions and responses, we stopped generating new narrative-related data. We then needed to undertake analysis to enable sensemaking. We held meetings that were approximately bi-monthly, each lasting two hours. The duration is important, as it allowed in-depth discussion. The meetings created realisations and impact beyond the research ‘findings’; in the third meeting our notes say, ‘part of this process is what we’re learning through our own conversations through doing this research; the reflexive process of conducting the research project itself and how it is transforming our outlook and practice’ (meeting 3).

Through our meetings we noticed themes and threads, and we committed to undertaking our own thematic analysis of the narrative data. We did not delegate the analysis to one group member; it was important that we all made our own sense of the data, and having four different analyses offered a form of triangulation and ensured that all voices were equally represented. The thematic analysis was undertaken using an interpretive approach; in Braun and Clarke’s terms, interpretation is ‘essentially human activity working out what is going on’ (2021, p. 198). After close reading and discussion, we each asked ourselves: what were we seeing in the data? This approach was one of noticing. We each came back to the group with a summary of the themes we had generated. We discussed our data and our process of interpretation on an ongoing basis, and this influenced what was noticed. In a collaborative process, these interactions are central to learning and to eventual final theme decisions.

Our meeting notes recall ‘[t]here was a consensus that the process had been tricky in practice. The nuance and layering of earlier processes and discussions had created a tapestry of text-based insights’. At this stage, we found ourselves suffering from data overload and were left questioning the direction of our research. We needed to reduce the 24 themes we had generated to something digestible. Following an in-depth discussion about the overlap and divergence in our analyses, one of the team was nominated to undertake this reduction step. The resulting analysis was then presented back to the group for member checking. Importantly, every point from the individual analyses was carried into the final summary themes: there was sufficient overlap and convergence that it was possible to group themes into categories of similarity, or ‘meta-themes’ that described the sub-points. The result was a final set of six themes. Our ongoing reflections had finally yielded a coherent set of themes.

Findings

In this section, we present and consider the key themes generated through our analysis. We use extracts from our data to illustrate each theme and then attempt to articulate the lessons learnt from our exploration that we believe may be useful to educational developers and academic practitioners alike as they reflect on their practice and the changing assessment landscape.

Wellbeing in focus: from a system of support to a system of compassion

The first theme within our narratives was the ‘wellbeing’ of staff and students, which surfaced around issues relating to workload and the pressure of change in the COVID-19 pandemic period. Almost every discussion had a caveat of ‘yes, but what about my workload’. Our concerns were deep and frequent; as we looked across our data we noted ‘[t]hroughout our narratives is a concern for well-being – ours, students and also colleagues’. Framing the theme of wellbeing, we observed that ‘[s]ometimes [concern for wellbeing] is a driver of action, other times it’s a consideration and sometimes it’s a worry’, but it was always present in our narratives and our discussion. For example:

I have encouraged and advocated for an empathetic approach to negotiating assessment change throughout the pandemic – I can certainly empathise with schools and departments not wishing to create circumstances … that might create even greater anxiety and uncertainty for students and staff.

The COVID-19 pandemic had undoubtedly, for all of us, surfaced our concerns about care and the need for us as individuals to be aware of the need for compassionate practices:

Things have been hard, really hard. I have become more aware of how management requests impact people.

Undoubtedly the biggest change that I have experienced is a greater regard for the burden of work felt by colleagues.

We questioned whether our awakened concerns would be shared by others and, if so, whether this would lead to a greater level of respect for each other’s workload, fears, and anxieties. We are clear that this experience has changed us with such personal commitments as this:

I am embedding wellbeing and mental health in all the thinking of projects.

As an aside, this personal impact is typical of the autoethnographic tradition (see Hernandez et al., 2022).

We also noticed a tension between staff workload and student wellbeing: initiatives that favour student wellbeing often carry a negative impact for academic staff.

colleagues never got a break from marking [with flexible submission deadlines] - this is very tricky and a challenge because students found the flexibility really beneficial (also for wellbeing, mental health) but in practice seems like it is not workable for staff.

The COVID-19 pandemic undoubtedly exacerbated and foregrounded workload concerns, but our accounts indicated that these concerns were not new. Trying to effect positive changes to academic practice in an environment of known and persistent pressure was undoubtedly challenging, as the next theme reveals.

In facing up to the complex realities of large-scale assessment change, staff have understandably felt challenged by the disruption change causes to the core structures, routines, norms, and values upon which they themselves, along with their institutions, rely, creating tangible feelings of anxiety and vulnerability. In this way, our shared experiences have been an expedient reminder of just how challenging implementing change at scale is, and yet how important it is to infuse our work with compassionate practices.

This theme has highlighted the need for decision makers to respect the workload pressures of colleagues working with assessment. In our experience, this requires the creation of a supportive environment for active reflection and open dialogue, in which we commit to noticing and acknowledging one another’s reality and are thus better positioned to empower individuals as they contemplate future actions in the search for equitable, compassionate solutions.

Change processes: (re)framing resistance for sustainable change

The second of our themes was ‘change processes’. We had shone a light on the nature of change in the assessment space during the COVID-19 pandemic period, and particularly on the struggle to change individual practice, which was experienced widely. Here we describe some of our experiences from this time and, in so doing, spotlight the underlying characteristics of higher education that were revealed.

In the change process, we observed a significant degree of resistance from academic staff to fundamentally changing familiar assessment routines and practices. A reluctance or struggle to change practices was often, but not always, related to a move away from examination approaches:

Despite significant effort to provide colleagues with viable models and suitable exemplar formats, there had been considerable staff resistance to developing alternative examination arrangements.

Pragmatic and highly legitimate concerns played a central role in the forms of struggle experienced during the COVID-19 pandemic, wherein academic staff were asked to address diverse and fragmented tasks: teaching approaches, separate from assessment arrangements, separate again from student support mechanisms. It is easy to see how changing patterns of assessment practice could be perceived as contributing to or further fragmenting workloads, or as likely to yield unintended consequences, with resistance a likely response.

I remember one group were discussing workload and a colleague said … if I change that assessment from an exam to an essay … then I have loads more marking to do at Christmas and they will be asking me questions about it throughout the first term too.

Getting things done, making decisions for good or ill or working with what we had at the time (known unknowns and unknown unknowns etc) were very apparent. Time frames meant we simply had to take a pragmatic approach.

Furthermore, when staff perceived change initiatives and processes as a push for accountability, or as a managerialist moment, rather than as genuine efforts to enhance teaching practice and students’ learning, we noticed that these processes could be experienced as threatening (rather than enabling). The second excerpt below illustrates tokenistic approaches to change as one response.

A concern I do have is the manner in which such change has been handled in some departments … and the impact, long-term, that these ‘managerial’ approaches have had, and will continue to have, on the trust, motivation and creativity of academic staff.

There is evidence of push back when change was essential during the emergency phase. It feels like we all experienced situations where the old way of doing things was just simply mapped onto a new situation without much thought about the potential impact.

Quality assurance processes appeared to play a substantive role in the change process. The interface between staff wanting to find practical solutions and the rules of quality management systems was sometimes problematic and gave rise to different responses, including ‘[c]onfrontation, operating outside of regs (or on the edge of regs) and compliance to the letter as well as common sense interpretation’. One of our accounts recalled that ‘people wanting to do something innovative felt that they had to prepare a case’, whilst another highlighted a preference for strict parameters that did not need negotiation as staff sought ‘the safety and security of a rule-based approach’, which may be seen as good or compliant, or equally as ‘a learned helplessness of deferring to the rules’. Procedural changes and concerns caused people to worry about issues of compliance and about understanding their place within the ‘rules’. We reflected that staff had a variable relationship with quality and policy in the process of change during the COVID-19 pandemic, but that this is also true in more normal times. The perceived inflexibility of quality assurance processes presented challenges for assessment change at scale.

As we reflected on the change process we witnessed and were part of, we recognised that fundamental changes to patterns of established assessment practice can be perceived as either contributing to or further complicating staff workloads, with resistance a likely response. Layered on to this, quality concerns can be perceived or experienced as a form of ‘procedural resistance’ (Deneen & Boud, 2014), impairing collaboration and creativity around assessment design options (Joughin et al., 2017).

We came to recognise that our role in educational development performs an important double duty: helping to negotiate resistances and related tensions by providing useful guidance and justification in facilitating required changes, whilst upholding standards and strategic priorities.

We concluded that awareness of our own approaches in this field of change was essential; we needed to be conscious of our own change strategies. To broker manageable pedagogic change, we agreed that we must move away from reductionist, single-problem solving towards holism, where we recognise the barriers and perceived barriers to change, and where we see and respect the real struggles of colleagues. We need relational approaches to work through this complexity, in which we understand not only the problem being solved but also the context in which change is set to occur. This demands participatory and dialogic approaches to change leadership and decision making, in which educational development connects to and facilitates responsive curricula, pedagogy, and structures, through an ethos of partnership, as shaping dimensions of change in learning, teaching, and assessment.

Programme-focused approaches: harnessing holistic thinking and design

The theme of programmatic approaches to assessment was strong across all our accounts and discussions. We noticed, across institutions, an absence of programme-level thinking, designing, and planning of assessment. Individually, we all experienced frustration with this: irrespective of how good an individual assessment was, the whole student assessment journey was not being considered.

We [colleagues] could connect online but much harder to join up for assessment innovation or change at a programmatic level. This was apparent also in the way that modular approaches seemed to still dominate and the overwhelming sense that modifications at modular level would not impinge on the programme LOs.

The majority of staff seemed to be turning their focus on to individual modular assessment practices and not necessarily considering the implications of their decisions or designs at a broader course and student experience level.

We observed that assessment-related decision making was largely at a modular level, with little focus on the experience of a student who may be concurrently studying between three and eight modules (‘module’ here refers to a unit of study taken by students for academic credit within a degree programme). We all found this frustrating in the face of established evidence that students should experience assessment as a valid measure of their programme outcomes using authentic assessment methods (Bearman et al., 2016). Individual teachers can solve few assessment problems at the module level, and yet this was the dominant mode of practice.

In particular, we noticed that student assessment load was not often a key feature in assessment planning discussions. Our four institutional narratives converged in revealing a persistent modular system in which institution-wide rules (e.g., the cancellation of assessments in the first part of the COVID-19 pandemic) or modular changes failed to consider the wider impact on student learning and load. Student assessment load is a programme-level challenge precisely because modules are treated separately, and this phenomenon was seen to link to both wellbeing and the quality environment. However, while programme-level planning is a key theme, it does not exist in isolation.

We have seen student wellbeing compromised at a large scale this year … . Yet, limited measures to reduce load taken, flexibility given with reluctance. In fact perhaps an increase in load has occurred.

It struck me that even without a global pandemic and all the disruption this caused, the vast majority of the students would still have so much assessment due in at the end of the year.

Balancing the student assessment load and supporting formative opportunities was not at the forefront of discussions during the COVID-19 pandemic. Indeed, the pandemic revealed symptoms of a persistent culture of underplaying the value of formative assessment and feedback at a programme level (Irons & Elkington, 2021). The COVID-19 pandemic presented certain challenges and, with them, an opportunity to challenge typically heavy assessment loads and the traditional exam. A sector-wide trend indicates that these opportunities have been missed, with a quick return to ‘normal’ (i.e., the status quo). The cases in point further highlight the disconnect between practice and evidence.

While there have inevitably been disciplinary differences when designing and implementing alternative assessment practice arrangements, the scale and urgency of recent changes have amplified the value and need for assessment change to be considered holistically from the student point of view, seeking to ensure that the overall package a student experiences is manageable and fit-for-purpose. If staff and students are unable to see the links between elements of the programme, and modules and their assessments are treated as a separate item in change conversations, there may be no clear coherence to the resultant assessment experience.

A programmatic view of assessment change requires a strategic, collaborative, and planned approach by programme teams to carefully consider how the elements that comprise the student assessment experience come together and are structured to help support students’ attainment of learning outcomes. Such a programmatic view helps to frame assessment design to fully consider the learning journey and experience of the student and to critically evaluate what needs to be assessed and how. It follows then that such assessment needs to be integrative in nature, bringing together understanding and skills in ways that represent authentic and meaningful achievement of key programme aims.

Culture and structure: establishing a sense of urgency for broad-based action

The next theme highlighted the role of institutional culture and structure. There was a sense that the organisations of which we, too, are part were in fact constraining and frustrating the process of change, but also hope that, by paying attention to systems and cultures, change can be made easier and better.

As we stepped back from our narratives, it was clear that the decision making was very much based on introspection. There was no reference to a coordinated approach between universities – we were each dealing with the same issues, yet in relative isolation.

As I was re-reading the narratives it struck me how introverted HE is. By this I mean the majority of discussions were at an institutional level and I think this is part of the problem. It’s almost seen as we have to solve this ourselves and looking outside to what others are doing down the road is seen as an admission of weakness.

This was quite a contrast to our own shared experiences within the educational development community, which were generous, frequent, and helpful. Whilst colleagues shared and collaborated on issues of practice, the same was not true at the policy level, insofar as each of us could see.

As educational developers, we observed that we were working continually to counter a culture of complacency. The ‘way things were always done’ and ‘normal times’ appeared as key reference points in our narratives and associated discussion meetings. We were individually aware of our language, priorities, guidance, and actions, and of how they could reinforce the status quo – for example, by prioritising the dominant concerns around ‘stability and security of standards over innovative and authentic practices’. Instead, we sought to ‘not allow our actions in response to assessment change to perpetuate / reinforce conventional culture of assessment’. Our strategies to counter complacency are considered under the themes of pragmatism and evidence-based practice presented later.

As we questioned the operating culture, we asked deeper, sometimes frustrating questions about how we found ourselves here, and considered whether some of the national policies and institutional change projects had really been effective at transforming our institutions in the way that perhaps they were conceived to do:

Where has 10 years of creating plans to address NSS got us? – these university exercises what have they achieved?

What have universities learnt? Where is the organisational reflection?

We were undoubtedly frustrated, but questions may now be asked about the extent to which years of initiatives and interventions have actually impacted the deeper culture of higher education institutions. Have decades of high-level change really been as transformative as they could have been?

Despite a high degree of urgency for short-term change in the form of viable alternative assessment arrangements, senior leaders have perhaps overestimated how far they can force significant changes on academic colleagues, whilst underestimating just how hard it is to move people outside of their comfort zones and still effect positive change without sufficient support. The consequence has been that a lack of change to assessment practice is perpetuated. This illustrates the central role educational development teams have in instigating broad-based action by (re)establishing the link between organisational-level pedagogic leadership of large-scale change and the change enacted by academics focused on their own practice development.

It is crucial that we not only actively recognise local context(s) and work to foster staff autonomy, but also facilitate dialogue and development within and between different units across an institution. Such approaches challenge the dominant model of change management in HE: high-quality pedagogic change comes not from the imposition of reductive and performative structures, but from serious attempts to integrate organisational sensibilities and academic processes by making space for productive dialogue within and between academic communities, through which new pedagogic activities and approaches can emerge and thrive.

Fostering evidence-based practice as a basis for lasting assessment change

As we explored our practice, two of the themes generated related to how we as developers were able to effect change in a period of crisis. In analysing our narratives, we first saw the potency of evidence in effecting change. We came to conclude that evidence is needed as a precursor to lasting change and, as such, is a central element of the educational developer role.

Earlier, we recognised that we sometimes saw a culture of complacency which, in our experience, could be countered with evidence. Yet our narratives showed that culture is more powerful than evidence. Maintaining the status quo does not require any evidence (e.g., persisting with exams); evidence, it seemed, is only required when introducing change.

A big part of this [change] is taking steps to anchor assessment change in a culture of evidence-based practice. This requires intentional efforts at and designs for change that show practitioners how specific behaviours, attitudes, choices have improved performance / student learning.

Evidence and ‘what evidence’ and uses of evidence … We have evidence where new assessments were designed to address problem solving this practice has been positive and they are likely to stay … Exams are intrinsically ‘right’ (nobody questions their design but we know they are poorly designed!).

As professionals and as individuals we are already using evidence to underpin practice, for example to demonstrate the link between student wellbeing and assessment strategies. Our accounts demonstrate the need for evidence to inform all elements of the change process in institutions. Paradoxically, many elements of practice that are not evidence-based are likely to remain in place: the need for evidence applies to change, but not to keeping existing practices which are purely customary.

The growing evidence base of research on assessment provides a useful basis on which to build and review policy and practice, but educational research and theory do not easily translate into simple prescriptions for change in relation to established educational practices. Assessment is not pedagogically neutral, and all assessment is situated in the local context and in the traditions, expectations and needs of different universities and academic disciplines (Hanesworth et al., 2019). Theory and evidence must be interpreted and applied within those parameters and cannot be applied simply or uniformly. Accordingly, educational development cannot prescribe standardised changes, but rather needs to drive evidence-informed planning based on knowledge of effective assessment practices that is used to evaluate and benchmark existing approaches and inform future developments.

Given that assessment permeates many areas of institutional life, educational developers have an important role to play in supporting academic colleagues to understand its complexity, as well as be able to live and work with a level of ambiguity and not be averse to a certain level of risk when seeking change. The high-stakes nature of assessment for individual students and institutional reputations means that any level of change can generate anxieties regarding quality assurance and potential negative publicity. Furthermore, many aspects of assessment are mired in traditional approaches that are stubbornly resistant to reforms. Efforts at assessment change need to be sensitive to these anxieties, as well as local needs and context, but also willing to persevere in questioning taken-for-granted assumptions and practices, proactively enquiring into viable alternatives, and paying attention to the impact subsequent change has on student learning and performance.

Concluding remarks

On balance, the COVID-19 pandemic period brought out the pragmatic best in educators and educational developers alike. We have observed, first-hand, how colleagues have navigated multiple variables to forge a way forward in the face of great uncertainty. There are certain compromises inherent in any change practice, and we (the authors) all found ourselves becoming necessarily pragmatic within the educational development and educational leadership spaces we inhabit. Adopting more pragmatic regimes around initial changes to assessment arrangements meant, on one level, that the concepts of practice and action were recast to acknowledge individual and collective agency. From an educational development perspective, the differentiation of such pragmatic regimes illustrates the necessity of moving between modes of intervention and agency oriented to local and individual circumstances and modes oriented towards general practice (i.e., institutional policy). Tuning into this pragmatism is critical because resources are finite, and academics have had to compromise between what might be ideal, what appears plausible and defensible, and what they think students want and need.

Further still, the response to the COVID-19 pandemic has revealed a comparative lack of future perspectives in how assessment change is managed. There is, therefore, a need to frame assessment change in such a way that enables staff to think critically and creatively about their practice, to begin to generate alternative visions of future possibilities for assessment, and to initiate action in pursuit of these. Structural (first-order) changes – i.e., principles to guide alternative assessment arrangements in multi-modal delivery – require leadership and support, both institutionally and locally, to produce the necessary behavioural (second-order) change. As our data demonstrate, change initiatives in assessment also often do not carry beyond the local university context. As a field, we need to address this so that educational developers and disciplinary specialists influence the way assessments function within the sector as a whole. The lesson in this goes beyond the COVID-19 pandemic: to effect real change we need to meet people where they are, recognise the nature of their resistance(s) and struggle, and work empathetically, recognising the complexity and challenge of developing and changing practice for the better.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Bayerlein, L., & McGrath, N. (2018). Collaborating for success: An analysis of the working relationship between academics and educational development professionals. Studies in Higher Education, 43(6), 1089–1106. https://doi.org/10.1080/03075079.2016.1215417
  • Bearman, M., Dawson, P., Boud, D., Bennett, S., Hall, M., & Molloy, E. (2016). Support for assessment practice: Developing the assessment design decisions framework. Teaching in Higher Education, 21(5), 545–556. https://doi.org/10.1080/13562517.2016.1160217
  • Blalock, A. E., & Akehi, M. (2018). Collaborative autoethnography as a pathway for transformative learning. Journal of Transformative Education, 16(2), 89–107. https://doi.org/10.1177/1541344617715711
  • Braun, V., & Clarke, V. (2021). Thematic analysis: A practical guide. Sage.
  • Chang, H., Ngunjiri, F. W., & Hernandez, K. A. C. (2016). Collaborative autoethnography. Routledge.
  • Darabi, M., Macaskill, A., & Reidy, L. (2017). A qualitative study of the UK academic role: Positive features, negative aspects and associated stressors in a mainly teaching-focused university. Journal of Further and Higher Education, 41(4), 566–580. https://doi.org/10.1080/0309877X.2016.1159287
  • Deneen, C., & Boud, D. (2014). Patterns of resistance in managing assessment change. Assessment & Evaluation in Higher Education, 39(5), 577–591. https://doi.org/10.1080/02602938.2013.859654
  • Denshire, S. (2014). On auto-ethnography. Current Sociology, 62(6), 831–850. https://doi.org/10.1177/0011392114533339
  • Duffy, J. O., Wickersham-Fish, L., Rademaker, L., & Wetzler, E. (2018). Using collaborative autoethnography to explore online doctoral mentoring: Finding empathy in mentor/protégé relationships. American Journal of Qualitative Research, 2(1), 57–76. http://www.ejecs.org/index.php/AJQR/article/view/161/104.
  • Hanesworth, P., Bracken, S., & Elkington, S. (2019). A typology for a social justice approach to assessment: Learning from universal design and culturally sustaining pedagogy. Teaching in Higher Education, 24(1), 98–114. https://doi.org/10.1080/13562517.2018.1465405
  • Hernandez, K. A. C., Chang, H., & Bilgen, W. (2022). Transformative autoethnography for practitioners: Change processes and practices for individuals and groups. Myers Education Press.
  • Irons, A., & Elkington, S. (2021). Enhancing learning through formative assessment and feedback. Routledge.
  • Jessop, T. (2019). Changing the narrative: A programme approach to assessment through TESTA. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education (2nd ed., pp. 36–49). Routledge.
  • Joughin, G., Dawson, P., & Boud, D. (2017). Improving assessment tasks through addressing our unconscious limits to change. Assessment & Evaluation in Higher Education, 42(8), 1221–1232. https://doi.org/10.1080/02602938.2016.1257689
  • Jung, I., Omori, S., Dawson, W. P., Yamaguchi, T., & Lee, S. J. (2021). Faculty as reflective practitioners in emergency online teaching: An autoethnography. International Journal of Educational Technology in Higher Education, 18(1), 1–17. https://doi.org/10.1186/s41239-021-00261-2
  • Kezar, A. (2014). Higher education change and social networks: A review of research. The Journal of Higher Education, 85(1), 91–125. https://doi.org/10.1353/jhe.2014.0003
  • Lapadat, J. C. (2017). Ethics in autoethnography and collaborative autoethnography. Qualitative Inquiry, 23(8), 589–603. https://doi.org/10.1177/1077800417704462
  • Little, D., & Green, D. A. (2022). Credibility in educational development: Trustworthiness, expertise, and identification. Higher Education Research & Development, 41(3), 804–819. https://doi.org/10.1080/07294360.2020.1871325
  • Markham, A. N., Harris, A., & Luka, M. E. (2021). Massive and microscopic sense making during COVID-19 times. Qualitative Inquiry, 27(7), 759–766. https://doi.org/10.1177/1077800420962477
  • Ngunjiri, F. W., Hernandez, K. A. C., & Chang, H. (2010). Living autoethnography: Connecting life and research. Journal of Research Practice, 6(1), 1–18.
  • Piderit, S. K. (2000). Rethinking resistance and recognizing ambivalence: A multidimensional view of attitudes toward an organizational change. The Academy of Management Review, 25(4), 783–794. https://doi.org/10.2307/259206
  • Reid, M. D., & Sam, A. H. (2021). Reflections on assessment in the wake of change from the COVID-19 pandemic. Medical Education, 55(1), 128–130. https://doi.org/10.1111/medu.14368
  • Simper, N., Mårtensson, K., Berry, A., & Maynard, N. (2022). Assessment cultures in higher education: Reducing barriers and enabling change. Assessment & Evaluation in Higher Education, 47(7), 1016–1029. https://doi.org/10.1080/02602938.2021.1983770
  • Stensaker, B. (2018). Academic development as cultural work: Responding to the organizational complexity of modern higher education institutions. International Journal for Academic Development, 23(4), 274–285. https://doi.org/10.1080/1360144X.2017.1366322
  • Trowler, P., & Bamber, R. (2005). Compulsory higher education teacher training: Joined-up policies, institutional architectures and enhancement cultures. International Journal for Academic Development, 10(2), 79–93. https://doi.org/10.1080/13601440500281708
  • Wall, S. S. (2016). Toward a moderate autoethnography. International Journal of Qualitative Methods, 15(1), 160940691667496. https://doi.org/10.1177/1609406916674966
  • Watermeyer, R., Crick, T., & Knight, C. (2022). Digital disruption in the time of COVID-19: Learning technologists’ accounts of institutional barriers to online learning, teaching and assessment in UK universities. International Journal for Academic Development, 27(2), 148–162. https://doi.org/10.1080/1360144X.2021.1990064