Research Article

Student views on the assessment medium for General Certificates of Secondary Education in England: insights from the 2020 examination cancellations

Received 01 Jul 2023, Accepted 18 Mar 2024, Published online: 28 Mar 2024

ABSTRACT

In England, Wales and Northern Ireland, the General Certificate of Secondary Education (GCSE) has for the last 35 years been the most common qualification by which students’ attainment at age 16 is measured. The range and balance of processes by which the GCSEs’ programmes of study have been assessed have varied over the decades, to include both teacher-assessed coursework and modular or end-of-programme timed written assessments (examinations). The necessary replacement in 2020 of GCSE written examinations with teacher-led assessment provided a unique opportunity to explore students’ perceptions of the validity and utility of both examination and teacher-led assessment media for this high-stakes award. This research reports part of a survey that explored the experiences of GCSE candidates in 2020, with particular focus upon students’ perceptions of preparing for written examinations, and then for teacher-led assessment. A total of 216 students responded to a mixed quantitative/qualitative methods survey that invited opinion and reflection on the relative affordances and shortcomings of written examinations and teacher-led assessment, both for the individual respondent and the wider student cohort. Responses revealed a balance of positive and negative evaluations of both assessment media, with an almost equal preference for either, although many respondents indicated consideration of some other assessment medium (e.g. modular assessment; combined assessment media). We re-emphasise the attested value of ongoing ‘dialogue’ and consultation with student stakeholders to inform planning for the optimisation of assessment utility and validity.

Introduction

Assessment for the General Certificate of Secondary Education (GCSE)

The General Certificate of Secondary Education (GCSE) is an academic qualification, generally taken at age 16 in England, Wales and Northern Ireland, which historically signalled the completion of compulsory secondary education within those jurisdictions. GCSEs are often considered to signal the start of a student’s entry to competition for further education, employment and training places; consequently, since their inception, GCSEs have been regarded as ‘high stakes’ (Denscombe, 2000; Elwood, 2013; Woods, James, et al., 2018). Originally conceived as a criterion-referenced assessment, a guiding principle at the inception of GCSE was that assessments should be accessible for all candidates (DES, 1985; Woods, 2003). Accordingly, GCSE assessment has previously included a balanced combination of written examinations and coursework; this arguably allows for a wide range of skills/abilities to be demonstrated, improves assessment validity and enhances the student learning/assessment experience (Reid & Jones, 2002). However, previously widely used forms of GCSE (and GCE A-level) assessment, such as modular assessments, controlled assessments, some open-book assessments, and teacher assessments, were more recently reduced, and the role of assessment by timed, end-of-programme, written examination paper was increased in GCSE reforms in England from 2017. The aims of these reforms were to ‘raise educational standards’, make the content of GCSEs more ‘demanding and fulfilling’ for students, better prepare students for future study, and increase the ‘rigour’ of assessment processes in order to ensure GCSEs returned to being ‘highly respected’ qualifications (2017; Department for Education [DfE], 2014a; Gove, 2013; The Sutton Trust, 2019; Torrance, 2018).

The 2017 GCSE and GCE A-level reforms prompted some academic and professional concerns about the consequences of the development of a ‘performative’ school culture, such as increased test anxiety, the ‘memorization’ (rote learning) demands of assessment by examination, and narrowed curriculum options (Ball et al., 2011; Barrance, 2019; Barrance & Elwood, 2018b; Elwood, 2012). In turn, such concerns have been linked with increased risk of detrimental psychological and academic performance consequences for students (Benn, 2017; Marshall, 2017; D. W. Putwain & Symes, 2018; Rodway et al., 2016; von der Embse et al., 2018).

Psychological research on examination stress and test anxiety highlights areas of potential stress relating to high-stakes assessments, and to assessment by examination in particular, arising from factors such as examination preparation/revision (e.g. memorisation demands, procrastination, organisation); social evaluation (e.g. fear of failure, social consequences of failure); and management of examination performance (e.g. emotional self-management, information recall demands, speed production demands) (Barrance, 2019; Buck, 2018; A. Hipkiss et al., 2021; McCaldin, 2019; D. Putwain et al., 2012; D. W. Putwain et al., 2013; Soares & Woods, 2020; Tyrell & Woods, 2018; Woods, 2000; Woods et al., 2010). At the same time, there is also evidence of the variable, or positive, effects of examination pressure (Cassady & Finch, 2020; Elwood, 2012), of a range of both positive and negative assessment-related emotions (McCaldin, 2019; Pekrun et al., 2004), and of the view of significant/high-stakes examinations as constituting a ‘rite of passage’ for students (Stobart, 2008). Taken together, this body of research raises questions about the validity and utility of GCSE assessment by examination in comparison to potential alternative assessment media.

Student voice within the development of processes of GCSE assessment

The process of any student assessment is developed from its stated objectives, usually phrased in terms of knowledge, skills and understanding. In turn, the assessment objectives may indicate a suitable assessment medium (e.g. written, oral, practical) and assessment conditions/arrangements (e.g. time limits, assistive technology). Further influences upon the process of assessment include considerations of feasibility for the examination centre (e.g. school, college), of utility for users of the award (e.g. practical assessments for subjects such as Art and Technology), and of integrity (e.g. through confidence in the standards produced) (Stobart, 2008; Woods, McCaldin, et al., 2018). For GCSE and other high-stakes qualifications, this complex network of factors, and how they are represented, balances differently under different national education policies and philosophies to produce different assessment processes (Isaacs, 2010; Marshall, 2017; Mathews, 1985; Tattersall, 1994; Waters, 2015; Wolf, 2009). Notwithstanding the vagaries of such varying socio-political influences, representation on the range of factors relevant to the development of assessment processes, for both GCSE and other high-stakes awards, has neither systematically nor extensively incorporated information from the student perspective (Barrance, 2019; Barrance & Elwood, 2018a; Benchaim & Zoller, 1994; Brown & Woods, 2022; Eilersten & Valdermo, 2000; El Rassi, 2019; Elwood et al., 2017; Woods, James, et al., 2018; Zeidner, 1994). Elwood (2012), Barrance and Elwood (2018a), Woods, James, et al. (2018) and Barrance (2019) have highlighted and demonstrated the practical and rightful utility of student participation in educational assessment development, though Woods, James, et al. (2018) observe the general, and remarkably anachronistic, lack of feedback mechanisms/communication routes available to GCSE students. Notably, both examination boards and regulators have dedicated online communication routes for teachers/examination centres, but not for students/candidates, and whilst Ofqual’s annual ‘Perceptions’ survey does directly access student opinion, it is very restricted in its focus and sampling (Brown & Woods, 2022; The Office of Qualifications and Examinations Regulation [Ofqual], 2022).

Effects of Covid-19 on GCSE assessment

In light of indefinite school closures under national health emergency measures relating to Coronavirus (‘Covid-19’), the English government’s Department for Education (DfE) announced in March 2020 that the summer 2020 (May–June) GCSE terminal examinations would be cancelled; it was subsequently announced that the examinations would be replaced with teacher-led assessment (McCaldin et al., 2023; The Ofqual Blog, 2023). Guidance produced by the national qualifications regulator (Ofqual) and the individual awarding organisations was the primary reference point for teacher-led assessment, within which schools and colleges developed their own approach. In each school/college, teachers were tasked to work collaboratively to determine the appropriate indicative grade for each student, using evidence from mock examination results, marks from work completed throughout the course, and teachers’ predicted and target grades.

At the time, numerous public and educational commentaries interpreted the replacement of examinations as an opportunity to reflect upon the place of terminal examinations in the system of high-stakes student assessment (The Times Educational Supplement [TES], 2020; The Telegraph, 2020), reviving debate about the integrity of the methods and processes by which students are assessed, which is seen as central to the function of such assessments (Barrance, 2019; DfE, 2014a; Marshall, 2017; Tattersall, 1994; Torrance, 2018). Given the decision to proceed with, rather than defer, certification for the affected GCSE cohort, the question as to whether written examinations are generally the most, or only, fit-for-purpose means of delivering valid student assessment came (back) into sharp focus.

Aims of the present research

The cancellation in 2020 of GCSE examinations in England, Wales and Northern Ireland, and their replacement with teacher-led assessments, provided a unique opportunity to better understand views about the perceived relative affordances of a feasible alternative to assessment by examination; in light of considerations of ‘student voice’, our focus here is upon the views of students/GCSE examination candidates. Furthermore, prior to the inception of ‘Reform GCSEs’ in 2017, there was a significant element of moderated, teacher-assessed coursework within the GCSE assessment structure, highlighting the recent relevance of this particular alternative medium. Whilst it is possible at any time to survey stakeholders about alternative assessment media, the 2020 GCSE examination cohort’s direct experience of preparing for assessment by examination and then receiving grades based on teacher-led assessment provided a very immediate and personally informed perspective: as one survey respondent within the present research commented, ‘it’s the only time I’ve really thought about it’ (Participant 81 (P81)). It is acknowledged, however, that the relatively late-stage and unanticipated notification of the change of assessment medium, for reasons not directly related to assessment integrity, and within a turbulent social-political context, may to some degree affect any views expressed. Therefore, whilst the primary aim of the research reported here was to explore GCSE candidates’ views, generally, upon the relative advantages and disadvantages of assessment by examination and teacher-led assessment respectively, a secondary aim was to differentiate from these the candidates’ views about the assessment media within the very specific context in which they had experienced them.

Methodology

Study design and data collection

Using the QualtricsXM™ survey tool (Qualtrics, 2023), data were collected through a two-part online questionnaire survey aimed at an opportunistic, self-selecting sample of Year 11 students (aged 15/16 years) in England who had been due to sit summer 2020 GCSE examinations (cf. Cohen et al., 2018; Gilham, 2008). Information about the survey, including an invitation to participate, was posted between June and July 2020 on various social media platforms, including Reddit, Facebook, Twitter, and The Student Room, to which students in Year 11 were known to have previously posted. This research was therefore conducted prior to the (later resolved) widespread media attention and public controversy around the algorithmic calculation of student grades.

Findings reported here relate to the second part of this survey, which was used to collect information on participants’ views about the relative validity and utility of the (expected) inscriptive, terminal/linear exam-based GCSE assessments and the (replacement) teacher-led assessments. The questionnaire (available upon request from the first author) was designed through consultation with an assessment research group at the host university, comprising academics and education practitioners with extensive experience in the field. Given the very compressed timescale between examination cancellation and students leaving school, piloting of the questionnaire was not possible. The questionnaire format was designed to be easy to complete, time-efficient and undemanding for respondents, who had already experienced the disruption and confusion of examination cancellations and school closures as part of national COVID-19 containment measures.

The second part of the two-part online survey comprised four closed questions used to prompt participants for their views on the relative validity and utility of inscriptive and teacher-led GCSE assessment for themselves and their peers; each closed question was followed by the open question, ‘can you explain why you have chosen your answer?’, in order to allow participants to elaborate upon their response (Gilham, 2008). Questions One and Two asked students their levels of agreement/disagreement with the statements ‘grades calculated by teachers are better for students than exams are’ and ‘all students will feel a similar way about grades calculated by teachers’. Question Three asked students whether they thought they were likely to get higher, lower or similar grades with teacher-calculated, rather than examination, grades, and Question Four asked, ‘If you could choose, in the future would you rather have: exams; teacher-calculated grades?’

Participants

A total of 216 final-year GCSE students responded to the advert and completed the survey. Of these responses, none had missing data relating to the four closed questions reported here, and 6/848 of the follow-up open questions were left unanswered (99.6% total response rate). Demographic information on respondents was not collected: the survey was exploratory, set within an unprecedented social/educational context, and as such did not test any specific hypotheses about inter-group response differences, focusing instead upon the nature and range of individual responses.

Data analysis

The responses from the questionnaire were downloaded from the online survey system and collated in Microsoft Excel. Descriptive quantitative analysis was carried out for each of the four closed prompt questions, the results of which were summarised in tabular form (see below). Following this, four separate deductive-inductive content analyses were carried out on the data from each of the open-ended response options in order to understand the manifest reasons given for the range of closed-question responses (Erlingsson & Brysiewicz, 2017; Graneheim & Lundman, 2004; Hsieh & Shannon, 2005). All content categories and data categorisations were validated through discussion between six of the seven authors. Following familiarisation with the whole dataset, these six authors collectively generated initial content categories for a defined 5% sample of the dataset. Given the extensive detail of responses both within and between respondents, collaborative category generation across 5% of the data corpus provided reliable analytic sensitisation that was manageable within research team resources. Using these initial categories, each author then completed coding for allocated portions of the dataset, before the first and second authors reviewed the whole coded dataset to produce a complete set of content categories, which was then developed between the six authors into a final category framework.

Table 1. Participants’ level of agreement with the statement ‘grades calculated by teachers are better for students than exams are’.

Table 2. Participants’ level of agreement with the statement ‘all students will feel a similar way about grades calculated by teachers’.

Table 3. Participants’ responses to the question ‘using this year’s teacher-calculated grades rather than exams, do you think students are likely to get … ?’.

Table 4. Participants’ responses to the question ‘if you could choose, in the future would you rather have … ’.

Ethical approval

Ethical approval was granted by the research ethics committee of the host institution. Given the online survey format, consent was implied by respondents proceeding after having read the full participant information. No significant ethical issues were anticipated or encountered. However, it was recognised that some students accessing or responding to the questionnaire may have been experiencing distress, anxiety, or low mood as a result of school, examination, and daily life disruptions associated with the COVID-19 pandemic. Therefore, the questionnaire advert and participant information contained signposting to relevant support agencies. Personal or sensitive data were not gathered, and all gathered data were anonymous.

Limitations

Three significant limitations appertain to the findings of the present study. First, the survey instrument was not piloted, on account of the narrow time frame between the presenting opportunity (the GCSE assessment scheme revision) and the point of school leaving for the GCSE student cohort. The caution attaching to the findings presented here is that data gathering may have, to some extent or other, ‘missed the mark’ of students’ primary concerns and insights. However, the questionnaire was devised by a group of experienced researcher-practitioners, each of whom has expertise in the study topic, and the questionnaire’s ‘open comments’ field attracted a high number of extended written responses, many of which elaborated in a wide variety of ways upon the specific question focus. Second, the data were gathered at a time when all students were affected, generally, by the particular circumstances of social restrictions and, specifically, by a late-notified change of assessment scheme. Whilst these particular contexts will need to be acknowledged in comparing the findings here with any others, we have endeavoured within our analysis and presentation of findings to differentiate student views specifically related to the (circumstances of the) assessment scheme change per se from those relating to the medium of assessment more generally. Third, whilst the effects of a self-selecting respondent sample cannot be predicted with certainty, it is perhaps reasonable to assume that a particular strength of feeling on the issues represented in the survey would be a significant response facilitator. Therefore, it is possible that some aspects of the generalisability of the presented findings are limited on account of this being a more ‘polarised’ sample.

Findings

Question responses

Within the following question response sub-sections, analytic content categories are highlighted by underlining.

As shown in Table 1, rather more students (45%) considered that assessment by examination would be better for students, albeit a significant minority (34%) considered teacher-led assessment the better option. Almost one in five students (17%) had neutral or mixed views on the question. Explanations for these varied views were categorised in three ways: first, as identifying either an advantage or a shortcoming; second, as relating either to examinations or to teacher-led assessment; third, as applying either only to the current unanticipated mid-programme change, or to these two assessment media more generally.

In favour of assessment by examination, some respondents identified examinations as providing a focal point for concerted effort, resulting in a ‘true’ and ‘fair’ assessment. In contrast, other respondents identified assessment by terminal examination as ‘unreliable’ on account of its sampling only a small proportion of the overall programme of study and being limited to one or two relatively short time-points, which could be affected by a range of extraneous factors (e.g. personal/family circumstances):

[Exams are] overall more reliable and fairer as it’ll show our true capabilities and hard work (Participant 22 (P22))

[Exams] provide a motivation to study and work hard (P26)

[you] work really hard all year and then do badly on one day, then your whole year will be wasted. (P33)

Respondents also identified the reliance of assessment by examination upon students’ memorisation capabilities, and considered the [psychological] pressure of examinations, both in preparation and during the sitting itself, to affect well-being and performance, and to be unnecessary and/or ‘unacceptable’. Some participants described physical effects such as ‘not sleeping at night’ (P216) or ‘getting very run down, ill’ (P20):

Exams often stress me out and so I lose concentration and make errors that could’ve been avoided if I wasn’t as stressed and anxious. (P207)

In favour of teacher-led assessment, respondents identified teachers as more accurate sources of knowledge about their capabilities, attainments and ‘work ethic’. In contrast, some respondents identified ‘teacher favouritism’, the overlooked ‘quiet child’, and parental pressure, as factors which would bias teacher assessment, which they felt was insufficiently accountable:

I think my teachers have a better understanding of what I’m capable of (P200)

No way can they [teachers] be impartial (P16)

… although teachers may say they don’t have favourites that is false … teachers seem to have lower expectations of quiet kids and our predicted grades tend to be lower just because we don’t contribute as much (P24)

I hope that my work ethic will play a big part in my teachers determining my grades. (P8)

Specific to the current circumstance of a late change to the expected assessment process, some respondents did not have confidence in a ‘rushed’ system for teacher-led assessment, in which teachers had a vested professional interest to achieve higher student grades. Also, some students felt that mock examination grades, gained at a time when their possibly instrumental role in final grades was not apparent, were not a fair proxy for actual attainment or final examination grades. Some respondents considered that different assessment media might differently suit the attainment demonstration of different students in different subjects.

As shown in Table 2, a clear majority of students (74%) predicted that different students may feel differently about the switch from assessment by examination to teacher-led assessment, though some responses could be categorised as applying more specifically to the current circumstance of a late notification of such a change (e.g. loss of control/feeling ‘cheated’ (e.g. P4, P52, P183, P202); loss of a ‘rite of passage’ (e.g. P35, P116, P149, P191); a smokescreen effect for examination/assessment indifference or for having fallen behind with study):

If you are the sort of person who feels extremely under pressure from exams you would feel happier with teacher assessments but ultimately if that were the established form of assessment that would begin to be stressful too. It is only that the teacher assessments weren’t planned from the beginning that they are less pressure. (P32)

Another overarching category of response was the tendency by a minority of respondents to qualify their response by reference to actual discussions with other students about the assessment scheme change, and in some cases, a recognition of the limited scope of such discussions.

Beyond this category of response, student feeling on the subject of teacher-calculated grades was viewed as a balance between positive and negative factors applying to either one medium of assessment or the other, or to specific students. The removal of exam pressure, removal of the need for ‘cramming’ (subject knowledge memorisation) and consequent opportunities for improved student well-being were seen as potentially positive for students:

I do think that teacher grades are better for people in terms of mental health. The exams take 3+ years of working and fit it all into 2+ hours which is extreme pressure for anyone to go through. (P121)

Similarly, some respondents considered that some students would fare relatively better under a non-examination assessment system, either because they thought examinations inherently unreliable or that some students have better aptitude for some forms of assessment other than examination:

… people are different and some people would do better in their exams, especially people who are good at memorising information. (P195)

Amongst factors potentially contributing to negative perceptions of teacher-led assessment were: mistrust of the teacher grade moderation system; underperformance in mock examinations that were, at the time of sitting, not instrumental to final grade awards; teacher bias towards/against particular students (see above); and anticipation of relatively lower grades for students considered to have particular aptitude for assessment by examination:

Some students feel that teacher assessed grades are biased against them for many reasons, e.g. the teacher is new so doesn’t know the students well enough, teacher doesn’t like a particular student … meaning that teacher assessed grades are biased towards the students that the teacher prefers (P131)

Some students are very happy about it because they don’t do well in exams but have very good classwork, whereas students who are the opposite are very upset. (P164)

As shown in Table 3, participant responses were almost perfectly evenly split between predicting higher, lower or the same grades under the replacement teacher-led system of assessment. As for previous questions, some responses across all three categories can be categorised as applying more specifically to the current circumstance of the late-notified assessment scheme change (e.g. leniency/higher grades awarded to compensate for the disruption experienced by students, or to detract from the controversy about the operation of the outlined teacher-led system of assessment; underperformance in mock examinations that were, at the time of sitting, not thought to be instrumental to final grade awards; untaught parts of the assessment syllabus due to school closures). Interestingly, some factors were identified by different respondents as having either a potentially lowering or an elevating effect on grades. For example, the potential use of mock examination grades within teacher-led assessment was sometimes seen as predicting higher grades, where mock examinations were thought to be ‘easier’ than final examinations because they do not cover all of the syllabus content and teachers wish to be encouraging; and sometimes as predicting lower grades, due to underperformance in mock examinations by students who did not, at the time, recognise their utility or who were pacing themselves towards full effort for final examinations.

Factors predicting higher grades included: teachers being a more reliable source of information about student attainment than is assessment by examination; teachers’ overestimation of student attainment on account of individual, departmental and/or school-level vested interests in student performance; removal of the performance impairment caused by examination stress; and a belief that teacher assessment will also take positive account of student effort and ‘potential’ as well as manifest attainment:

I think the grades will be slightly higher overall as teachers want their students to get good grades as that reflects on their teaching and the school (P93)

I think most teachers will also be taking into mind the students’ behaviour and attitude to learning in class and will probably be more generous with the engaged students (P24)

I think most teachers will … know the student’s full potential so the grade will fully reflect the student’s ability (P130)

The grades given will have much more thought put into them compared to just a couple exams and so students would feel a lot more respected for their work and effort. (P204)

Factors predicting lower grades included: the absence of extra student effort that is engendered by the focal point of terminal examinations; active ‘government’ strategy to counteract assumed teacher overestimation, or perceived general grade inflation:

These months were a time for us to make that final push but this is not being taken into account therefore I believe they will be significantly lower. (P21)

Explanations for predictions of similar grades through teacher-led assessment included: teachers being accurate sources of knowledge about students’ attainments; and a counter-balancing effect of assumed grade underestimation in ‘poorer’ schools with assumed grade overestimation in [more advantaged] independent schools:

I think the exam boards will expect teachers to give students slightly higher grades and will take this into account and therefore lower the overall grades to make them seem fair (P11)

It would also make it easier for private schools to inflate grades over time and widen the gap between private and state students. (P16)

Given the concerns, as outlined above, about the detrimental academic performance and psychological impacts upon students of GCSE examinations, it is perhaps surprising that a slight majority of students in this survey (52%) expressed a preference for future GCSE assessment by examination rather than the presented version of teacher-led assessment (Table 4). It should be considered, however, that this finding arises from a context in which students had experienced a late-notified assessment scheme change, and so it may to some greater or lesser extent reflect a reaction to this change/loss of control per se, rather than a dispassionate evaluation of the two assessment options under consideration.

A small number of responses fell into one of three further categories: first, some respondents acknowledged the hurried and ad hoc development of the present system of teacher-led assessment in order to differentiate it from their views about this assessment medium more generally (e.g. teacher assessment could be better, but not the version presented here); second, some respondents explicitly balanced factors for and against their stated assessment medium preference; third, some respondents contrasted what would be in their individual interests with what might be in the interests of the majority, or of ‘fairness’ (e.g. a respondent considers themselves to be ‘good at exams’, but would consider teacher assessment to be fairer for most):

I would choose exams, as I tend to perform better in exams than in class. However, I know that this is not the case for a lot of people, and that for most people, teacher calculated grades would be much fairer, and a better reflection of their ability. (P68)

More broadly, factors favouring assessment by examination included: greater student control and sense of achievement; the provision of a goal within a programme of study, encouraging concentration and ‘work ethic’; avoidance of teacher bias towards engaged/preferred students; the independence of examination markers, supporting greater reliability and fairness; and familiarity with formal examinations, as useful preparation for later phases of [academic] assessment or as a trial in coping with future performance stress:

I would rather be in control of my own future than have it in the hands of my teachers (P.4)

You cannot expect them [teachers] to be able to monitor every child to such an extent when they are teaching hundreds of kids a year (P.9)

Exams aren’t enjoyable, but they give you something to focus on and work toward which is important for motivation (P.39)

They are important to prepare you for future stressful situations. (P.101)

Factors favouring teacher-led assessment included: a belief that teachers are well/better placed, and able, to make valid judgements by dint of ‘knowing’ students, and also being able to incorporate reward for effort and commitment; the opportunity for students to show performance over time; criticisms of the examination system as more expedient than optimal (‘easier not better’), as being not well suited to the assessment of understanding (rather than knowledge reproduction), and as relying upon a ‘snapshot’ performance and upon a limited sample of the overall programme of study; the avoidance of memorisation/‘cramming’ and handwriting speed and volume demands of examinations; the avoidance of the ‘unnecessary’ stress of examinations and class time spent/wasted on the teaching of examination technique:

A teacher calculated grade is based on all work throughout the course which is a more accurate reflection of ability and also gives incentive for students to work hard all year (P.116)

Most students doing GCSE exams have at least 20 exams to sit … Some students have to sit exams back-to-back since the time slots overlap. The stress is too much for some students to handle … (P.117)

Exams depend on how much you can write down in an exam paper (P.195)

Your best ability isn’t seen in exams as there are time pressures that influence how well you do. (P.210)

Within the various explanations of preferred assessment media, several students referenced possible alternative assessment schemes for GCSE, including: use of a teacher-led grade point average (GPA), as is thought to operate in some other countries; combining teacher-assessed coursework with assessment by examination; using regular in-school tests/exams as the basis for a teacher-assessed grade; offering students a choice of assessment medium; and use of practical, or controlled, assessments:

I wish that they would test us in an environment that is representing the real world, not some over glorified memory test (P.61)

Mock exams could be conducted over the year and then teachers could judge final performance. It wouldn’t then be based on your performance on one day. (P.153)

A combination of both would be great, such as an exam grade, a teacher calculated grade and a final weighted grade that is a combination of both (P.171)

It would be better if we had the choice and then people could be graded in the way that suited them individually but I realise that’s not realistic. (P.180)

Some responses evidenced an awareness of a socio-political dimension to the structuring and aims of educational assessments:

exams … only represent a select few skills, and ones that aren’t vital in the workplace (P.126)

Exams … are just used out of laziness because the government can’t think of another way of dividing people into ultimately what university they should go to and then what job they should get. (P.172)

Discussion

This survey research has revealed, both between and within respondents, a balance of views on the relative advantages and disadvantages of GCSE assessment by examination and by teacher-led assessment. Perceived advantages of assessment by examination included greater reliability of the assessment and a sense of control and motivational focus for the candidate; perceived disadvantages included the reliance upon limited syllabus content sampling and one-off (‘snapshot’) assessment events, and the conflated demands of memorisation, writing speed and examination technique. Perceived advantages of teacher-led assessment included greater accuracy of the assessment, the incorporation of considerations of capability and work ethic, and the reduction/removal of examination stress/test anxiety; perceived disadvantages included potential teacher judgement bias. Interestingly, teacher-led assessment was at once legitimised and de-legitimised by virtue of its assumed ‘personal’ relationship to the student. Most students acknowledged that different students may perceive a different balance of advantages and disadvantages attaching to each assessment medium, possibly across different programmes of study. Many students identified potential alternative assessment schemes, including combined assessment media, modular/continuous assessment, controlled assessments, or assessment according to student preference. Some students also evidenced a socio-political awareness about educational assessment, for example in relation to the attested selective function of high-stakes assessments, grade calibration, and the influence of school resource levels.
There are resonances within these findings to previous research on aspects of GCSE assessment, including: the motivating effects of examinations (Elwood, Citation2012); the disproportionate memorisation demands of linear/terminal examinations compared to modular assessment (Barrance, Citation2019; Elwood, Citation2012); potential teacher judgement bias (Barrance, Citation2019; Elwood, Citation2012); and, the potential affordances of a range of assessment opportunities (Barrance & Elwood, Citation2018b; Elwood, Citation2012).

Whilst all of the views expressed by students may, to some degree or other, have been affected by the particular circumstance and impact of a late-notified change of assessment scheme for GCSE programmes ending in 2020, some of the views expressed related quite specifically to the change per se rather than to the medium of assessment more generally; for example, reactions to the unforeseen possibility that mock exam performance might now influence the final grade awarded, or to the fact that the intended ‘last big push’ in the run-up to exams between March and May was obviated.

Woods, McCaldin, et al. (Citation2018) have linked a mandate for incorporating student views on educational assessment with the progressive promotion of children’s rights within education and the democratic development of educational assessments; as exemplified in UNICEF’s ‘Rights Respecting Schools’ programme (UNICEF UK, Citation2022), the UK government Department for Education’s support for stakeholder participation and involvement (DfE, Citation2014b, Citation2022a) and the UK assessment and qualifications regulator’s (Ofqual) annually commissioned ‘Perceptions’ public surveys relating to GCSE, AS-level, A-level and vocational qualifications (Ofqual, Citation2022). The researchers here argue that the present findings confirm engagement with student stakeholders not only as a rightful, democratic exercise but also one which yields meaningful and insightful considerations that are, necessarily, theirs alone to give (cf. also Barrance & Elwood, Citation2018a; Elwood & Lundy, Citation2010). A further benefit of such student engagement is its significant congruence with students’ development of understandings of governance, power sharing and participation within compulsory citizenship foundation curriculum requirements for this age group (AQA, Citation2022; DfE, Citation2022b).

Woods, James, et al. (Citation2018) call for ‘regular system-wide gathering of feedback from children on their high-stakes educational assessment experiences and needs, and the promotion of their participation in decision-making about the structures and processes of such assessments’ (p. 96); similarly, Barrance and Elwood (Citation2018a) call for student participation in assessment planning. Notably, however, Brown and Woods (Citation2022) documented the remarkably low numbers of GCSE and A-level student respondents (around 0.04% of the candidate population) to Ofqual’s annually commissioned Perceptions surveys 2003–20, in contrast to the response in late 2020 of over 48,000 students to an ad hoc Ofqual consultation on how GCSE and A-level grades should be awarded in 2021 (DfE/Ofqual, Citation2021). Brown and Woods (Citation2022) speculated whether ‘this instance will prompt a “paradigm shift” in student stakeholder engagement in high-stakes educational assessment in the UK’ (p.54); however, the following Ofqual Perceptions survey for GCSEs, AS-levels, A-levels and vocational qualifications in 2021 returned only 309 responses from the 14–18 year age band, representing under 0.05% of the relevant candidate cohorts (Ofqual, Citation2022). So, whilst Woods, James, et al. (Citation2018), and others (e.g. Barrance & Elwood, Citation2018a; Philips, Citation1994), have specifically argued the relevance of Articles 12, 13, 14, and 29 of the United Nations Convention on the Rights of the Child (UN, Citation1989) in relation to making educational assessment processes child-centred, relevant to later life, and promoting the child’s best interests, there is perhaps also a broader relevance of Article 42, which stipulates that all duty bearers should ‘make the principles and provisions of the Convention widely known, by appropriate and active means, to adults and children alike’ (p.12); in short, examination candidates will need to be able to access their right to a say on this aspect of their education in order to exercise that right.

The present findings, however, reveal the possibility that some feedback from students might raise challenging questions for the GCSE assessment system. Woods, James, et al. (Citation2018), Barrance and Elwood (Citation2018a), and Hall (Citation2017) all advocate a more dialogic and transformational approach to student engagement, which moves beyond ‘receiving feedback’, particularly when this is gathered in response to questions most often pre-determined by only one stakeholder group. (Notably, many of the student insights within the present research were offered through the ‘open comments’ section of the survey, and at something of a tangent to what the researchers had thought to ask.) Student respondents in this research raised the matter of time pressures and handwriting speed demands in GCSE examinations, issues that are, arguably, ‘in plain sight’ and that previous research has identified as an extensive problem (e.g. Woods, Citation2000, Citation2003), but that have been addressed neither within GCSE assessment schemes nor through the provisions made for examination sittings. Similarly, student respondents espoused the benefit of teacher-led assessment as flexible enough to incorporate due consideration of their capabilities, commitment and work ethic, all of which might be seen as secondary to subject-based assessment objectives for an academic attainment award. Opening up dialogue between students, awarding bodies, schools and regulators on such issues would likely develop clarified understandings and, in some cases, revised assessment arrangements and provisions.

Encouragingly, there is clear evidence from student responses in this research of students’ well-developed intersubjective perspective, which would be essential to a productively developing dialogue between educational assessment stakeholders. Students understood that their own perspectives might differ from those of other students, perhaps even placing them in a minority; they recognised the realistic constraints upon awarding bodies within a context of unforeseeable school closures (cf. also McCaldin et al., Citation2023); they acknowledged the tension between student support and objective attainment evaluation that is inherent to teacher-led assessment; they anticipated the feasibility considerations that make offering students a choice of assessment medium within a single programme of study unrealistic; and some students even appraised socio-political factors that might have a bearing upon the way in which assessments and awards are structured.

In conclusion, our recommendation is that future research should contribute to the processes supporting dialogue between educational assessment stakeholders by furnishing the views and feedback of GCSE candidates upon their study programmes, support, assessment processes and outcomes, with particular focus upon aspects of such that are known, from previous or preliminary work, to be a priority to them.

Acknowledgments

This research was part-funded by the DfE’s ITEP (Initial Training in Educational Psychology) award 2020–2025.

Thanks go to the students who shared their views with us for the purposes of this study.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The work was supported through England’s government Department for Education ITEP award 2019.

Notes on contributors

Kevin Woods

Kevin Woods, a registered practitioner educational psychologist, works at The University of Manchester Institute of Education as Professor of Educational and Child Psychology and director of initial professional training in educational psychology.

Tee McCaldin

Tee McCaldin, a former high school teacher, works at The University of Manchester Institute of Education as senior lecturer and director of teaching and learning.

Kerry Brown

Kerry Brown, a former secondary science teacher, is a registered practitioner educational psychologist working at Trafford MBC Educational Psychology Service.

Rob Buck

Rob Buck, a former secondary science teacher, works at The University of Manchester Institute of Education as senior lecturer with the secondary science PGCE team.

Nicola Fairhall

Nicola Fairhall, a former primary school teacher, is a registered practitioner educational psychologist working at Trafford MBC Educational Psychology Service.

Emma Forshaw

Emma Forshaw, a former secondary school teacher, is a registered practitioner educational psychologist, working at Catalyst Psychology, Manchester.

David Soares

David Soares, a former high school teacher, is a registered practitioner educational psychologist working as a consultant with DSPsychology, Wirral.

Notes

1. Terminal, or linear, assessment refers to that which takes place at the end of the whole programme of study, in comparison to modular or continuous assessment which takes place at intervals within the programme of study. For GCSE programmes, terminal/linear exam-based assessments are typically standardised at between 1 hour and 2.5 hours duration.

2. Findings from the first part of this survey, relating to students’ feelings about the experience of moving from the expected exam-based assessments to teacher-led assessments are reported elsewhere – McCaldin et al. (Citation2023).

References

  • AQA. (2022). Citizenship GCSE specification. Retrieved June 15, 2023, from https://www.aqa.org.uk/subjects/citizenship/gcse/citizenship-studies-8100/subject-content/politics-and-participation
  • Ball, S., Maguire, M., Braun, A., Perryman, J., & Hoskins, K. (2011). Assessment technologies in schools: ‘Deliverology’ and the ‘play of dominations’. Research Papers in Education, 27(5), 513–533. https://doi.org/10.1080/02671522.2010.550012
  • Barrance, R. (2019). The fairness of internal assessment in the GCSE: The value of students’ accounts. Assessment in Education: Principles, Policy & Practice, 26(5), 563–583. https://doi.org/10.1080/0969594X.2019.1619514
  • Barrance, R., & Elwood, J. (2018a). National assessment policy reform 14–16 and its consequences for young people: Student views and experiences of GCSE reform in Northern Ireland and Wales. Assessment in Education: Principles, Policy & Practice, 25(3), 252–271. https://doi.org/10.1080/0969594X.2017.1410465
  • Barrance, R., & Elwood, J. (2018b). Young people’s views on choice and fairness through their experiences of curriculum as examination specifications at GCSE. Oxford Review of Education, 44(1), 19–36. https://doi.org/10.1080/03054985.2018.1409964
  • Benchaim, D., & Zoller, U. (1994). Examination-type preferences of students and their teachers in the science disciplines. Instructional Science, 25(5), 347–367. https://doi.org/10.1023/A:1002919422429
  • Benn, M. (2017). We are going in the wrong direction and we don’t know how to stop it. Keynote paper presented to the British Psychological Society (BPS) Division of Educational and Child Psychology.
  • Brown, K.-A., & Woods, K. (2022). Thirty years of GCSE – student views. Assessment in Education: Principles, Policy & Practice, 29(1), 51–76. https://doi.org/10.1080/0969594X.2022.2053946
  • Buck, R. (2018). An investigation of attentional bias in test anxiety [PhD thesis]. The University of Manchester (School of Environment, Education and Development).
  • Cassady, J. C., & Finch, W. H. (2020). Revealing nuanced relationships between cognitive test anxiety, motivation and self-regulation through curvilinear analysis. Frontiers in Psychology, 11, Article 1141. https://doi.org/10.3389/fpsyg.2020.01141
  • Cohen, L., Manion, L., & Morrison, K. (2018). Research methods in education (8th ed.). Routledge.
  • Denscombe, M. (2000). Social conditions for stress: Young people’s experience of doing GCSEs. British Educational Research Journal, 26(3), 359–374. https://doi.org/10.1080/713651566
  • Department for Education. (2014a). Assessment curriculum and qualifications: Research priorities and questions. HMSO.
  • Department for Education. (2014b). Special educational needs and disability code of practice. HMSO.
  • Department for Education. (2017). Developing new GCSEs, AS and A levels for first teaching in 2017: Consultation outcome. Retrieved June 16, 2018, from https://www.gov.uk/government/consultations/developing-new-gcses-as-and-a-levels-for-first-teaching-in-2017
  • Department for Education. (2021). Consultation on how GCSE, AS and A level grades should be awarded in summer 2021. UK HM Government (DfE/Ofqual).
  • Department for Education. (2022a). Consultation Hub. Retrieved December 7, 2022, from https://consult.education.gov.uk
  • Department for Education. (2022b). The national curriculum. Retrieved December 12, 2022, from https://www.gov.uk/national-curriculum/key-stage-3-and-4
  • Department of Education and Science (DES). (1985). GCSE national criteria. HMSO.
  • Eilersten, T. V., & Valdermo, O. (2000). Open-book assessment: A contribution to improved learning? Studies in Educational Evaluation, 26(2), 91–103. https://doi.org/10.1016/S0191-491X(00)00010-9
  • El Rassi, M. A. B. (2019). Assessing open-book-open-web exam in high schools: The case of a developing country. Paper presented at the International Association for Development of the Information Society (IADIS) International Conference on e-Learning, Porto, Portugal, July 16–19, 2019.
  • Elwood, J. (2012). Qualifications, examinations and assessment: Views and perspectives of students in the 14–19 phase on policy and practice. Cambridge Journal of Education, 42(4), 497–512. https://doi.org/10.1080/0305764X.2012.733347
  • Elwood, J. (2013). The role(s) of student voice in 14-19 education policy reform: Reflections on consultation and participation. London Review of Education, 11(2), 97–111. https://doi.org/10.1080/14748460.2013.799807
  • Elwood, J., Hopfenbeck, T., & Baird, J. (2017). Predictability in high-stakes examinations: Students’ perspectives on a perennial assessment dilemma. Research Papers in Education, 32(1), 1–17. https://doi.org/10.1080/02671522.2015.1086015
  • Elwood, J., & Lundy, L. (2010). Revisioning assessment through a children’s rights approach: Implications for policy, process and practice. Research Papers in Education, 25(3), 335–353.
  • Erlingsson, C., & Brysiewicz, P. (2017). A hands-on guide to doing content analysis. African Journal of Emergency Medicine, 7(3), 93–99. https://doi.org/10.1016/j.afjem.2017.08.001
  • Gilham, B. (2008). Developing a questionnaire (2nd ed.). Continuum International Publishing Group.
  • Gove, M. (2013). https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/529404/2013-02-07-letter-from-michael-gove-reform-of-ks4-qualifications.pdf
  • Graneheim, U. H., & Lundman, B. (2004). Qualitative content analysis in nursing research: Concepts, procedures and measures to achieve trustworthiness. Nurse Education Today, 24(2), 105–112. https://doi.org/10.1016/j.nedt.2003.10.001
  • Hall, V. (2017). A tale of two narratives: Student voice – what lies before us? Oxford Review of Education, 43(2), 180–193.
  • Hipkiss, A., Woods, K., & McCaldin, T. (2021). Students’ use of GCSE access arrangements. British Journal of Special Education, 48(1), 50–69. https://doi.org/10.1111/1467-8578.12347
  • Hsieh, H.-F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. https://doi.org/10.1177/1049732305276687
  • Isaacs, T. (2010). Educational assessment in England. Assessment in Education: Principles, Policy & Practice, 17(3), 315–334. https://doi.org/10.1080/0969594X.2010.491787
  • Marshall, B. (2017). The politics of testing. English in Education, 51(1), 27–43. https://doi.org/10.1111/eie.12110
  • Mathews, J. C. (1985). Examinations: A commentary. George Allen & Unwin.
  • McCaldin, T. (2019). The experiences of students preparing for GCSE examinations: An interpretative phenomenological analysis [PhD thesis]. The University of Manchester (School of Environment, Education and Development).
  • McCaldin, T., Woods, K., Brown, K., Buck, R., Fairhall, N., Forshaw, E., & Soares, D. (2023). Student experiences of the 2020 cancellation of England’s General Certificates of Secondary Education (GCSEs). The Psychology of Education Review, 47(1), 61–70. https://doi.org/10.53841/bpsper.2023.47.1.61
  • The Office of Qualifications and Examinations Regulation. (2022). Perceptions Survey – Wave 20. Retrieved December 7, 2022, from https://www.gov.uk/government/statistics/perceptions-of-a-levels-gcses-and-other-qualifications-wave-20
  • The Ofqual Blog. (2023). Teacher Judgements in 2021 – What We Can Learn from 2020. Retrieved June 26, 2023, from https://ofqual.blog.gov.uk/2021/05/17/teacher-judgements-in-2021-what-we-can-learn-from-2020/
  • Pekrun, R., Goetz, T., Perry, R. P., Kramer, K., Hochstadt, M., & Molfenter, S. (2004). Beyond test anxiety: Development and validation of the test emotions questionnaire (TEQ). Anxiety, Stress & Coping, 17(3), 287–316. https://doi.org/10.1080/10615800412331303847
  • Philips, S. E. (1994). High-stakes testing accommodations: Validity vs disability rights. Applied Measurement in Education, 7(2), 93–120.
  • Putwain, D., Connors, L., Woods, K., & Nicholson, L. J. (2012). Stress and anxiety surrounding forthcoming standard assessment tests in English schoolchildren. Pastoral Care in Education, 30(4), 289–302. https://doi.org/10.1080/02643944.2012.688063
  • Putwain, D. W., Nicholson, L. J., Connors, L., & Woods, K. (2013). Resilient children are less test-anxious and perform better in tests at the end of primary schooling. Learning & Individual Differences, 28, 41–46. https://doi.org/10.1016/j.lindif.2013.09.010
  • Putwain, D. W., & Symes, W. (2018). Does increased effort compensate for performance debilitating test anxiety? School Psychology Quarterly, 33(3), 482–491. https://doi.org/10.1037/spq0000236
  • Qualtrics. (2023). QualtricsXM Platform. Retrieved December 23, 2023, from https://www.qualtrics.com/uk/lp/uk-ppc-experience-management/?utm_source=google&utm_medium=ppc&utm_campaign=UKI%7CSRC%7CBRD%7CQualtricsPlus%7CBroadMatchTest&campaignid=20837135245&utm_content=&adgroupid=157138495878&utm_keyword=qualtrics&utm_term=qualtrics&matchtype=b&device=c&placement=&network=g&creative=679794367900&gad_source=1&gclid=EAIaIQobChMIk5ir6oOmgwMVhKSDBx2PCAHVEAAYAyAAEgIPofD_BwE
  • Reid, A., & Jones, M. (2002). Learning from GCSE coursework. Teaching Geography, 27(3), 120–125.
  • Rodway, C., Tham, S. G., Ibrahim, S., Turnbull, P., Windfuhr, K., Shaw, J., Kapur, N., & Appleby, L. (2016). Suicide in children and young people in England: A consecutive case series. The Lancet Psychiatry, 3(8), 751–759. https://doi.org/10.1016/S2215-0366(16)30094-3
  • Soares, D., & Woods, K. (2020). An international systematic literature review of test anxiety interventions 2011–2018. Pastoral Care in Education, 38(4), 311–334. https://doi.org/10.1080/02643944.2020.1725909
  • Stobart, G. (2008). Testing times: The uses and abuses of assessment. Routledge.
  • The Sutton Trust. (2019). https://www.suttontrust.com/wp-content/uploads/2019/12/MakingtheGrade2019.pdf
  • Tattersall, K. (1994). The role and functions of public examinations. Assessment in Education: Principles, Policy & Practice, 1(3), 293–304. https://doi.org/10.1080/0969594940010305
  • The Telegraph. (2020). It’s Time to Scrap GCSEs – They Serve No Good Purpose in the 21st Century (Published 1st October). Retrieved June 15, 2023, from https://www.telegraph.co.uk/education-and-careers/2020/10/01/time-scrap-gcses-serve-no-good-purpose-21st-century/
  • The Times Educational Supplement. (2020). Three Reasons GCSEs Need to Change – and Three Alternatives (Published 27/09/20). Retrieved February 15, 2021, from https://www.tes.com/news/gcses-2021-change-alternatives
  • Torrance, H. (2018). The return to final paper examining in English National Curriculum assessment and school examinations: Issues of validity, accountability and politics. British Journal of Educational Studies, 66(1), 3–27. https://doi.org/10.1080/00071005.2017.1322683
  • Tyrell, B., & Woods, K. (2018). Facilitating the involvement of young people with ASD in organising their examination access arrangements. Support for Learning, 33(4), 388–406. https://doi.org/10.1111/1467-9604.12226
  • UNICEF UK. (2022). Rights Respecting Schools. Retrieved June 15, 2023, from https://www.unicef.org.uk/rights-respecting-schools/
  • United Nations (UN). (1989). Convention on the rights of the child. United Nations.
  • von der Embse, N. P., Jester, D., Roy, D., & Post, J. (2018). Test anxiety effects, predictors, and correlates: A 30-year meta-analytic review. Journal of Affective Disorders, 227, 483–493. https://doi.org/10.1016/j.jad.2017.11.048
  • Waters, M. (2015). The Gove legacy: Where policy meets the pupil. In M. Finn (Ed.), The Gove legacy: Education in Britain after the coalition (pp. 63–74). Palgrave Pivot.
  • Wolf, A. (2009, October 19). The role of the state in educational assessment. Address to the Cambridge Assessment Annual Conference, Cambridge, UK.
  • Woods, K. (2000). Assessment needs in GCSE examinations: Some student perspectives. Educational Psychology in Practice, 16(2), 131–140. https://doi.org/10.1080/02667360050122389
  • Woods, K. (2003). Equality of opportunity within the examinations of the general certificate of secondary education. The Psychology of Education Review, 27(2), 3–16.
  • Woods, K., James, A., & Hipkiss, A. (2018). Best practice in access arrangements made for England’s General Certificates of Secondary Education (GCSEs): Where are we 10 years on? British Journal of Special Education, 45(3), 236–255. https://doi.org/10.1111/1467-8578.12221
  • Woods, K., Lewis, S., & Parkinson, G. (2010). Investigating access to educational assessment for students with disabilities. School Psychology International, 31(1), 21–41. https://doi.org/10.1177/0143034310341622
  • Woods, K., McCaldin, T., Hipkiss, A., Tyrell, B., & Dawes, M. (2018). Linking rights, needs and fairness in educational assessment. Oxford Review of Education, 45(1), 86–101. https://doi.org/10.1080/03054985.2018.1494555
  • Zeidner, M. (1994). Reactions of students and teachers towards key facets of classroom testing. School Psychology International, 15(1), 5–95. https://doi.org/10.1177/0143034394151003