
Awarding digital badges: research from a first-year university course

Pages 640-656 | Received 28 Jun 2023, Accepted 24 Jan 2024, Published online: 25 Mar 2024

ABSTRACT

This paper reports on a study that replaced marks with digital badges in an undergraduate Initial Teacher Education course. The aim was to examine the impact of badging on students, and on the quality of university provision. Data collection consisted of surveys and focus groups. The study found that while digital badges had considerable potential to improve the student experience in terms of engagement with feedback, motivation and reducing ‘grade anxiety’, delaying the awarding of marks caused significant student anxiety. The effect on university provision was more positive, and digital badges promoted constructive alignment between university-based assessment tasks and external professional standards frameworks. The student findings pointed to an underlying tension associated with implementing student-centred, outcomes-based assessment methods in contexts designed to accommodate traditional high-stakes assessment. Five recommendations for future practice were made: (1) remove marks altogether where digital badges are used; (2) do not substitute written feedback with digital badges; (3) develop a badge framework for both students and staff; (4) inform students about what to expect; (5) develop a ‘Badge Tree’.

Introduction

Arguably, traditional methods of assessment, such as numeric marks, are unable to assess and reflect the full range of competencies that students hold (Bassett, Citation2015; Robinson & Aronica, Citation2016). Moving away from marksFootnote1 to micro-credentials may help to address this issue. A micro-credential is ‘a certification of assessed learning that is additional, alternate, complementary to or a formal component of a formal qualification’ (Oliver, Citation2019, p. i). Digital badges are electronic symbols used as micro-credentials to document achievement or skills mastered (Stefaniak & Carey, Citation2019), and can be described as ‘a representation of an accomplishment, interest or affiliation that is visual, available online, and contains metadata including links that help explain the context, meaning, process, and result of an activity’ (Gibson et al., Citation2015, p. 404). Following from this, micro-credentials (referred to here as digital badges) can be linked to specific competencies and used to assess skills, dispositions, and applied knowledge. Digital badges can also contribute to a more comprehensive and fine-grained record of students’ achievements: each badge summarises the competencies that have been achieved, and provides a record of the criteria that have been met and the context in which they have been demonstrated (Elliot et al., Citation2014). However, assessment reform within existing institutions takes place within a context of traditional high-stakes assessment. Such assessment is often reductionist, focusing on marks rather than competencies. These existing assessment modes frame both institutional processes and student expectations. Aligning with the aims of this special issue, this project examined the impact of assessment reform in higher education through the awarding of digital badges instead of assessment marks within an undergraduate Initial Teacher Education (ITE) course.Footnote2
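To make the notion of badge metadata concrete, the sketch below shows how a single badge award might be represented as structured data, loosely modelled on the Open Badges metadata format. It is illustrative only: the field values, URLs and identifiers are hypothetical and do not reproduce the badges used in this study.

# A hypothetical badge award represented as metadata, loosely modelled
# on the Open Badges format: the badge is visual, available online, and
# links out to the criteria, context and evidence behind the award.
badge_assertion = {
    "name": "Research and Critical Thinking – Relational",    # badge title (illustrative)
    "description": "Awarded for relational-level reasoning (SOLO) in Assignment 1.",
    "image": "https://example.edu/badges/rct-relational.png", # the visual symbol
    "criteria": "https://example.edu/badges/rct-relational/criteria",
    "issuer": "Example University",
    "recipient": "student-12345",                              # anonymised student identifier
    "issued_on": "2023-04-21",
    "evidence": "https://example.edu/submissions/a1/student-12345",
}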

When presented with marks/grades, students often do not engage with other feedback (Gibbs & Simpson, Citation2005; Wotjas, Citation1998), and when marks are presented on their own, or even with written feedback, they undermine engagement (Black & Wiliam, Citation1998). When marks are withheld, students engage more with written feedback (Black & Wiliam, Citation1998). As such, we envisaged that replacing marks with digital badges mapped to competencies (Figures 2–4) would enhance student motivation and self-efficacy, and provide additional insight into performance, as digital badges can act as guidance mechanisms (representing what learners are working towards) and facilitate psychological flow by showing how performance (indicated by the badge) relates to the competencies achieved within the degree as a whole (Hamari, Citation2017). Digital badges may also have positive effects on motivation and self-efficacy, as they help to anchor performance expectations higher, assist with goal setting, goal commitment and goal completion, provide social proof (a record of achievement that can be shared), and facilitate motivating social comparisons (Hamari & Eranti, Citation2011; Hamari, Citation2017).

With respect to alignment between university assessment and workplace standards frameworks, learning outcomes are often prescribed by professional bodies in the form of professional standards. Within a degree that spans multiple years and multiple courses (developed and taught by different staff) there is a risk that standards may not be interpreted consistently, and the program may lack continuity. While curriculum mappings against professional standards may help students know which courses address which standards, digital badges and associated frameworks help students know which competencies they have actually demonstrated. Developing a digital badge framework aligned with these standards may also help address the issue of cross-program consistency by providing clear criteria against each standard which can be used whenever it is being assessed (Hennah, Citation2018). By making it clear to students what credentials are needed to meet the standards, and which courses award them, digital badges can enhance learner agency (Selvaratnam & Sankey, Citation2021).

While digital badges may address many of the issues associated with traditional forms of assessment and the use of marks/grades (Butler, Citation1988; Gibbs & Simpson, Citation2005; Wotjas, Citation1998; Hamari & Eranti, Citation2011; Hamari, Citation2017; Hennah, Citation2018; Martin, Citation2019), there is a ‘dearth of available academic research on micro-credentials’ (Selvaratnam & Sankey, Citation2021, p. 3), particularly on the tensions generated between old and new modes of assessment as they interact. This research aimed to address that gap through an empirical study focusing on two major stakeholders of undergraduate education – students and the staff who teach them.

This research aimed to: examine student perceptions of being awarded digital badges instead of marks within individual assignments; examine the impact of awarding digital badges instead of marks on students; and examine the impact that developing a badging program had on university provision.

A significant aspect of this project that differentiates it from others is that digital badges were awarded at the level of achievement against specific criteria within an individual assignment, not at the level of the course or degree. The rationale for this was twofold. First, to ensure that each badge was clearly linked to a particular competency – rather than simply an indication that an entire course had been passed – as the latter approach does not provide specific information about which competencies within the course have been achieved. Second, badges mapped to specific competencies within a course presented potential motivational benefits to students as they progressed through the course (as opposed to once it had been completed).

Material and methods

A modified version of Stefaniak and Carey’s (Citation2019) Framework for Successful Badge Program Implementation – framed around ‘Badge Instructional Design’, ‘Badge System Platform’ and ‘Badge Program Implementation’ – was adopted (Figure 1) to structure the project. Our approach was distinct in that we added an iterative ‘research and reflection’ component, which occurred between each phase of the project.

Figure 1. Adapted from Stefaniak & Carey’s (2019) Framework for Successful Badge Program Implementation.

Badge instructional design involved defining relevant assessment frameworks to which the badges could be mapped (establishing criteria). The relevant frameworks included workplace professional standards and university-based academic standards. With respect to ITE, the Australian Professional Standards for Teachers (APST)Footnote3 were adopted. With respect to academic standards, a variation of the CAPRI Framework was employed (Thompson, Citation2016), which details five domains of academic skills (Twining, Citation2022): Communication and Collaboration; Attitudes and Values; Practical and Professional; Research and Critical Thinking; and Innovation and Creativity. The existing course criteria were interpreted through these frameworks. The Structure of the Observed Learning Outcome (SOLO) taxonomy (Biggs & Collis, Citation1982), which details progressions and levels, was used to knit the two frameworks together into a coherent hierarchy of competencies (Appendix 1).

Badge development was concerned with badge design (Figure 2), types, and levels. Three levels of badges were developed (Figure 3), which corresponded with the SOLO taxonomy’s ‘Multistructural’ (Pass), ‘Relational’ (Credit or Distinction), and ‘Extended Abstract’ (High Distinction) descriptors, and each badge was mapped against either the APST or the CAPRI framework. The levels of badges were explained to students at the start of the course, prior to each assignment, and in response to questions raised by students at any point.
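To illustrate the mapping just described, the following is a minimal sketch (in Python, with illustrative names) of how badge levels could be derived from SOLO descriptors and related to the university’s grade bands; it is a simplification of the framework in Appendix 1, not the study’s actual implementation.

# Mapping from SOLO descriptors to badge levels and the grade bands
# they corresponded to in this course (names are illustrative).
SOLO_TO_BADGE = {
    "Multistructural": ("Level 1 badge", ["Pass"]),
    "Relational": ("Level 2 badge", ["Credit", "Distinction"]),
    "Extended Abstract": ("Level 3 badge", ["High Distinction"]),
}

def badge_for(solo_descriptor: str) -> str:
    """Return a description of the badge awarded for a SOLO descriptor."""
    level, grade_bands = SOLO_TO_BADGE[solo_descriptor]
    return f"{level} (grade-band equivalent: {'/'.join(grade_bands)})"

print(badge_for("Relational"))  # Level 2 badge (grade-band equivalent: Credit/Distinction)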

Figure 2. Digital badge design elements.

Figure 3. Badge levels.

The course contained three assessment tasks. Seven badges were available for awarding across assignments 1 and 2, and eight were available for assignment 3. These badges were displayed for students in a ‘badge tree’ (Figure 4).

Figure 4. Badge Tree displaying all badges available in the course.
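Conceptually, the badge tree is a simple grouping of the available badges by assessment task, over which a student’s awarded badges can be overlaid. A minimal sketch follows; the badge names are hypothetical, not the course’s actual badges.

from collections import defaultdict

# Hypothetical badge tree: every badge available in the course,
# grouped by the assessment task it can be awarded for.
badge_tree = defaultdict(list)
badge_tree["Assignment 1"] += ["APST 2.1 – Relational", "Communication – Multistructural"]
badge_tree["Assignment 3"] += ["Research and Critical Thinking – Extended Abstract"]

# Overlaying a student's awarded badges on the full tree shows
# progress through the course at a glance.
awarded = {"APST 2.1 – Relational"}
for task, badges in badge_tree.items():
    for badge in badges:
        status = "awarded" if badge in awarded else "available"
        print(f"{task}: {badge} [{status}]")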

The badge system platform consisted of the Learning Management System (Canvas) and an external provider (My eQuals). Badging data was exported from Canvas and converted into digital badges in the My eQuals system, where students could view their badges.
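The paper does not detail the export-and-convert step; the sketch below illustrates one plausible shape for it, reading a hypothetical CSV export of badge decisions from the LMS and emitting one award record per student-badge pair for bulk upload to the badging platform. The file name and column names are assumptions for illustration, not Canvas or My eQuals specifics.

import csv
from datetime import date

def build_award_records(export_path: str) -> list[dict]:
    """Convert a (hypothetical) LMS export of badge decisions into award records."""
    records = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["badge_awarded"].strip().lower() == "yes":
                records.append({
                    "student_id": row["student_id"],
                    "badge_name": row["badge_name"],
                    "badge_level": row["badge_level"],  # e.g. the SOLO-derived level
                    "issued_on": date.today().isoformat(),
                })
    return records

# e.g. build_award_records("assignment1_badges.csv") would yield records
# ready for whatever bulk-import mechanism the badging platform provides.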

Badge program implementation involved replacing marks with digital badges at the level of individual assignments. Replacing marks/grades at the level of the course was not possible due to university requirements. As such, two actions were being performed: the withholding of a mark, and the introduction of a digital badge. Delineating the effects of these two actions was an important aspect of data analysis.

Project phases

The project proceeded in three phases. Phase 1 involved the development of the assessment frameworks and digital badges, and undertaking a small pilot of six students prior to semester 1. Phase 2 involved implementing the digital badges and revised assessment framework in a large undergraduate ITE course of 860 students in semester 1. Phase 3 was conducted with a smaller course of 66 students in semester 2, and involved withholding marks but not awarding digital badges – this was to delineate the impact of digital badges versus the withholding of marks alone. The course content for each phase was identical and the research developed iteratively across the three phases.

Data collection and analysis

Data collection consisted of online surveys and a semi-structured focus group interview. The surveys captured numeric data (Likert scales) and non-numeric data (free-text responses), and consisted of a baseline survey (capturing relevant background information) and three post-assignment surveys. The focus group interviews (recorded and transcribed) occurred at the conclusion of the course and captured overall impressions of the digital badges. Non-numerical data was analysed using Emergent Theme Analysis, which involved identifying themes inductively from the transcripts, and then iteratively coding from broad down to specific themes. Data was coded using NVivo.
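As a small illustration of the tallying behind the code counts reported below (e.g. NRM:83), the following sketch assumes responses have already been hand-coded with thematic labels; in the study this coding was done in NVivo, and the excerpts here are invented.

from collections import Counter

# (code, excerpt) pairs produced by hand-coding; invented for illustration.
coded_excerpts = [
    ("NRM", "I would prefer to know my marks as well ..."),
    ("MEF", "I don't exactly know where I went wrong ..."),
    ("NRM", "Not knowing my mark is making me anxious."),
]

# Tally codes to produce the 'CODE:count' figures used in the Results.
code_counts = Counter(code for code, _ in coded_excerpts)
for code, n in code_counts.most_common():
    print(f"{code}:{n}")   # e.g. NRM:2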

Participant recruitment

Undergraduate students and teaching staff (tutors and lecturers) were recruited for the research. Recruitment involved advertising participation directly through presentations to individual classes (Tables 1 and 2).

Table 1. Recruitment and attrition: students.

Table 2. Recruitment and attrition: staff.

Results: student perceptions of digital badges

A key focus of data analysis was tracking student perceptions of the digital badges. Factors that impacted student perceptions either positively or negatively were identified inductively, and each ‘code’ referred to a statement indicating either a positive or negative view of the digital badges. Overall, student perceptions of the digital badges were more negative (273 codes) than positive (179 codes). The specific categories of ‘positive’ and ‘negative’ factors, and the number of codes against them, are illustrated in Tables 3 and 4 below.

Table 3. Factors that positively influenced student perceptions of digital badges (ranked).

Table 4. Factors that negatively influenced student perceptions of digital badges (ranked).

Positive student perceptions

The positive factors identified supported the initial hypothesis that digital badges would improve student engagement with feedback. The most numerous positive category, more engagement with feedback (ME:53), is illustrated by the comment below:

[It] was probably useful spending more time on the rubric … It made me kind of reflect on my work a little bit more … I kind of had to go check what the badge was for, and then read what I actually achieved during that badge … So I was checking it a lot more than usual … it helped me improve with my next few assignments in this course. (ME. FG. R3)

The second most numerous positive category related to the quality of the feedback provided (GQF:40); specifically, the badges indicated to students where they did well and where they could improve: ‘I found it fairly helpful as it allowed me to be able to easily see what areas need work and what are up to standard already’ (GQF. A1. R5). The next category, general positive comments (GP:26), evidenced a general positive view of the badges: ‘I think the badges are great’ (GP. A2. R1). There was also evidence to show that badges increased student engagement with the APST (GTS:21): ‘I think some of the strengths is it not just a number … but it gives an overall approach of becoming what your degree actually involves … connecting it to the graduate standards’ (GTS. FG11. R2), and that student perceptions of digital badges became more positive as the course progressed (VCP:18): ‘as it progressed, the badges turned more into a positive experience for me’ (VCP. FG2. R4). The other positive coding categories – including a reduction in grade anxiety (GA:15) and an increase in student self-efficacy (SE:6) – will be considered later.

Negative student perceptions

The most significant negative codes related to uncertainty due to lack of marks (NRM:83), and a desire for more explicit feedback (MEF:82). These two codes were related, and also the most numerous across all surveys.Footnote4 For example:

So far I am not a big fan. I would prefer to know my marks as well, that way I am aware of what I need in the next assignment in order to pass the course. Not knowing this is actually making me very anxious. (NRM. A1. R4)

I didn’t find the feedback very helpful … I don’t exactly know where I went wrong and what I can do to improve for next time. (MEF. A1. R16)

Other negative categories included badges being hard to interpret (BHI:67), which included difficulty in making connections between the digital badge and their performance on the assignment:

I was very confused about my results. I could access the badges but was unsure what they each meant, if [the] colours meant anything different or if I had passed the assignment in general. (BHI. A1. R14).

General negative comments (GN:9) were coded as negative but not tied to any specific aspect of the badges: ‘I did not like the digital badges’ (GN. A1. R3). Feedback also indicated that students required more initial support (MIS:10), reflecting a desire for more orientation before receiving badges: ‘more explanation about how to read it, maybe. Just as an introduction in the course’ (MIS. FG11. R1). Other comments pointed to the lack of consistency between courses, noting that other courses awarded marks and not badges: ‘Quite confusing because every other class I have just has regular marks’ (RGO. A1. R2).

To summarise, results revealed a number of positive and negative student self-identified factors associated with digital badges, with negative codes outnumbering positive codes by approximately 53% (273 vs. 179).

Other variables influencing student perceptions

Phase 2 results suggested that the digital badges were better received, and more motivating, for higher-achieving students. The most positive data collected was from the focus groups, which may have been self-selecting: students who progressed through the surveys and completed the focus group may have done so because they held more positive perceptions of the digital badges. Students who attended focus groups also performed better in the course, with an average mark of 81% compared to 65% for the course as a whole. One potential reason that higher-achieving students were more positive about the badges is that they may have been less worried about failing the course.

Survey data also showed that the quality of the written feedback provided by teaching staff impacted upon student perceptions of badges. Perceptions were more negative when tutor feedback did not align with what was displayed on the badges: ‘It was not helpful at all. It was very vague, saying I did well, and yet when I compared the comment to the digital badges I received, I did not understand why I did not do better in the assignment’ (MEF. A1. R2). Perceptions also tended to be more negative when insufficient tutor feedback was provided: ‘I think it would be beneficial to offer the personalised feedback at the same time as the digital badges … to identify areas that need more development’ (MEF. A1. R10).

These comments speak to the significant function of written feedback alongside digital badges – where badges focus on what has been achieved, written feedback must address what remains to be done, and a lack of such feedback undermines the effectiveness of the assessment feedback as a whole.

Results: the impact of digital badges on students

Impact differs from perceptions in that perceptions were primarily self-reported (‘I liked this’, ‘I didn’t like that’ etc.), whereas impact was deduced based on student responses. This section includes a consideration of the extent to which the positive and negative effects are linked to the awarding of digital badges as opposed to withholding marks. The data for this section was qualitative and drawn from the focus groups (n = 15).

Positive impact on students

In relation to positive impact, the four main factors were: (1) active engagement with assignment feedback, (2) reducing mark anxiety, (3) increasing self-efficacy, and (4) increased motivation.

Active engagement with assignment feedback

The principal survey question used to deduce positive impact was ‘please explain the process you went through after getting this assignment back’. Responses identified as signalling ‘active engagement’ have been collated in Table 5 below, and include use of the rubric (IR), use of the badge tree (IBT), and conferring with others about interpreting badges (ICO).

Table 5. Coding for active engagement with feedback.

IR refers to active use of badges with the rubric to determine performance on an assessment task. For example:

Once I got my assignment back I made sure to have the original marking criteria open as well as my badge backpack. This made it … easier … to check what each badge actually meant. (IR. A2. R3)

IBT referred to students who actively used the badge tree provided as a tool to reflect on their results:

After the assignment was returned, I read all comments … I put my badges into my personal badge tree where I could compare my results to my last assignment, which I felt there was a positive improvement from my last results. (IBT. A2. R6)

The comment suggests that displaying badges in a badge tree (Figure 4) provided a useful tool to assist students in interpreting their results. The final category of active engagement, conferring with others (ICO), refers to students who had conversations with others to help determine their results:

I referred the badges I received to the original marking rubric and also asked my tutor and peers for assistance in deciphering what my badges meant and what mark this meant for me and if it was a pass. (ICO. A1. R1)

It is argued here that comments across each of these three categories (IR, IBT, ICO) are evidence for active student engagement with assignment feedback as a result of receiving digital badges and not receiving marks. The relative weighting of each factor is discussed in a later section.

Reducing mark anxiety

Mark, or grade, anxiety refers to the negative emotion (stress) some students experience when receiving a mark for an assignment, whether high or low. Some students in the focus groups (GA: 15) indicated that they preferred receiving badges instead of marks because they found it less stressful:

It felt almost relieving in a little way … without having to get the black and white grade … you could look at your badges and you could be like, okay, it’s going to be okay. And then you could … move on without getting caught up in a number. So you knew that you were fine and you wouldn’t fixate on it. (GA. FG. R2)

This finding was unexpected, and is currently not well documented in research. However, an important caveat is that the students who took part in the focus groups (from which this data was collected) tended to have higher marks on average than the course as a whole. It may be that students who knew they were going to pass the course were more likely to find the badges less stressful.

Increasing self-efficacy

For a number of students digital badges appeared to enhance self-efficacy (SE:6), as not receiving a mark increased awareness of achievements within the assignment:

I think digital badges focuses more on what you’ve achieved rather than the mark you’ve got … I think it’s really good to have the badges that are saying, ‘ … you’ve achieved this,’ and you can really hone in and focus on what you’ve done and be like, ‘I’ve actually done that. It’s really good.’ … the positive is it’s more achievement-focused rather than numbers-focused. (SE. FG. R3)

This coding category is similar to reducing grade anxiety (GA:15), but also distinct: rather than feeling less anxious because they did not receive a mark, students felt ‘more able’ when they saw what they had achieved.

Student motivation

Consistent with existing research (Hamari, Citation2017), there was also evidence to suggest that digital badges had a positive impact on student enjoyment and motivation (coded as GE):

I found that it was a lot more engaging, especially between peers, but also for myself. I was actually excited for the grades … I find that majority of [first] year students are giant kids. They enjoy the novelty of it while still being very, very accessible and inclusive. (GE. FG1. R2)

The novelty of receiving digital badges seemed to have a positive effect on student motivation, though the majority of the detailed comments showing increased motivation came from the higher achieving students in the focus group interviews.

An interesting feature of this data is that students who perceived aspects of the badging process negatively may have also been positively impacted through more active engagement with assignment feedback: ‘[It] was probably useful spending more time on the rubric, even though I might have found it a bit stressed, trying to figure it out. It made me kind of reflect on my work a little bit more’ (ME. FG. R3).

In summary, four positive impacts of digital badges were identified in the current study. While several are well documented in research (increased engagement with feedback, motivation and self-efficacy), reducing grade anxiety has not (to our knowledge), and may warrant further investigation.

Negative impact on students

The most significant negative factor was the elevated level of stress felt due to not being given numerical marks and finding the badges difficult to interpret. Uncertainty due to lack of a mark (NRM:83) was the most numerous coding category, and students who found the badges difficult to interpret also experienced additional stress:

It was slightly stressful not knowing how to navigate the website. It was difficult to read the badges as the section which tells you what the badge is for is hard to get to … Also, the yellow colour is not a very strong gold colour, which creates stress. I was worried because I thought, ‘What does yellow mean?’ Also the badges are small on the screen and hard to read. (BHI. A1. R12)

The word ‘stress’ appeared 54 times in student responses across surveys and focus groups, and the anxiety felt by students who did not know their mark (NRM:83) and felt unable to infer it using badges (BHI:67) and feedback (MEF:82) was one of the defining features of the data collected.

While it was shown that badges did have some significant (and surprising) positive effects on students, the overall impact of introducing badges and withholding marks was negative.

Awarding badges or withholding marks?

An important question that arose was whether the positive and negative impacts on students were caused by the awarding of digital badges or the withholding of marks. To shed light on this, a follow-up study was conducted (phase 3) where marks were withheld but badges were not awarded. Data from this study suggested that some of the negative effects observed in phase 2 (student anxiety) may be linked to badges and not the withholding of marks, and, conversely, some of the benefits observed in phase 2 (reduced mark/grade anxiety) may be linked to the withholding of marks and not the badges.

For example, when a tutor (who had taught across phases 2 and 3) was asked whether students responded positively or negatively to the withholding of marks in phase 3, they responded: ‘Neutral. They’re not super happy, they’re not super sad or upset’ (WG.T1). This was supported by student interview data. However, when the tutor was asked about how students responded to the digital badges in phase 2 they replied: ‘when we used badges, they [were] … a bit more worried … but this semester … they’re not worried … they’re just thinking … “I wish I could see my marks”’ (WG.T1). The tutor was then asked to elaborate on the differences between the two phases:

I found that [the students are] more … positive in terms of withholding grades, they don’t mind. And they can see the feedback, they’re happy with that. With [digital badges] there is like two types of worry. One is ‘What is the grade I’m getting and what is my marks?’, and another worry [was] ‘I don’t understand the badge’. (WG.T1)

The comments here suggest that at least some of the negative emotion experienced in phase 2 may be due to student difficulty in interpreting the badges rather than having marks withheld. The tutor also noted that the badges may be adding to student uncertainty rather than reducing it: ‘So there is two things going on in their mind. “I can’t see my marks. Did I pass?”, and “What does this badge mean?”’ (WG.T1)

Evidence from the tutor interview in phase 3 also suggested that some of the benefits observed in phase 2 (i.e., the reduction in mark/grade anxiety) may also be attributable to withholding marks, but not awarding badges:

When someone is getting bad [marks] … they feel like very demotivated in-between the semesters when they’re getting the first grades for the first assignments. And in this course [phase 3], as we don’t have any grades during the classes or during the semester, I found them very enthusiastic during the classwork (WG.T1)

This was supported by student data collected in phase 3:

You just engage more with the course in that way, instead of just focusing on a certain mark. You’re just looking at trying to learn what you can in the course … I think it’s a good way to just be more immersed in the course without the grades. (WG.S1)

Significantly, the tutor’s comments here contrast with the findings on student performance considered earlier, where it was suggested that higher achieving students benefitted the most from receiving badges and withholding marks.

An important caveat to phase 3 data is the sample size, which consisted of two student interviews and one tutor interview, making it difficult to fully delineate the effects of badges and withholding marks. Another factor is that much of the data from phase 2 suggests either a combined effect or an effect different from that described in phase 3.

In addition, there appeared to be several positive factors, including motivation and self-efficacy, that are related to specific aspects of the badges, and it is possible that the higher levels of engagement with assignment feedback may be a consequence of students seeking to interpret the badge.

In summary, the phase 3 data raised interesting questions about the relative impacts of awarding digital badges and withholding marks. While it may not be possible to fully explore this in the current study due to the small sample, the available evidence does indicate that for some students withholding marks alone may have had a more positive impact than replacing marks with digital badges.

The impact of digital badges on university provision

While the results of implementing badging in undergraduate assessment were mixed, the impact upon provision was more positive – reworking assessment tasks in light of the badges enhanced their alignment with both university and professional standards frameworks (APST):

By thinking about issuing badges against the graduate teaching standards, it would force [lecturers] … to make sure that they were actually assessing the students against the graduate teaching standards … and … become a vehicle for improving the quality of our provision. (SP.T1)

Evidence of greater understanding and engagement with the APST, as well as greater constructive alignment, was also found in student data (GTS:21).

Discussion

It was argued that moving away from awarding marks and towards competency-based digital badges had the potential to enhance the student experience with respect to both motivation and engagement with feedback. In some important respects our findings support these claims, and the evidence suggests that awarding digital badges did lead to greater motivation and active engagement with feedback (ME:53) for some students. The findings around self-efficacy and motivation are also consistent with existing research on the awarding of badges: Hamari (Citation2017) identifies motivating social comparisons (‘I found that it was a lot more engaging, especially between peers’, GE. FG1. R2), social proof (‘It’s like a medal at a sports carnival’, GE. FG1. R3) and anchoring performance expectations higher (‘you can really hone-in and focus on what you’ve done and be like, “I’ve actually done that. It’s really good”’, SE. FG4. R3). There was also evidence to suggest that digital badges led to students gaining greater insight into their performance, primarily through better contextualising feedback: ‘I kind of had to go check what the badge was for, and then read what I actually achieved during that badge’ (ME. FG. R3). The final, and somewhat surprising, positive finding was that for some students digital badges reduced anxiety around receiving marks.

However, it was also noted that any assessment reform within existing institutions must take place within a context of traditional, high stakes assessment. The tensions between old and new modes of assessment had a significant effect on both student perceptions of digital badges as well as their impact, and uncertainty around marks (NRM:83), and the associated stress, was the most significant recorded influence on students. On balance, the intervention had more perceived negative than positive effects on students.

The role of feedback in either enhancing or detracting from student perceptions of the digital badges was also significant. More engagement with feedback (ME:53) and good quality feedback (GQF:40) were the two most significant positive factors identified. Conversely, the need for more explicit feedback (MEF:82) was the second most significant negative factor identified. Finally, the quality of written feedback was one of two key variables that impacted upon student perceptions of the digital badges. These findings suggest that feedback, as an overlapping category, cuts both ways: when it aligns with the information contained on the digital badge, the overall experience is enhanced compared to traditional marks and feedback; when it does not align, it adds confusion and anxiety and is potentially less helpful than traditional marks and feedback.

Significance

The current intervention had more perceived negative effects on students than positive, however the study provided valuable insights into some of the challenges, and opportunities, associated with assessment reform in higher education.

The majority of research on digital badges has been conducted at the level of the course as a whole (typically short courses), or for elements within traditional courses, existing side by side with traditionally graded elements (Newby & Cheng, Citation2020). We awarded digital badges instead of marks within existing assignments and, to our knowledge, this is the first study to report on such an approach. This study is also the first to report on the use of digital badging in association with ‘reduced grading’ (and reducing grade anxiety). Existing research on reduced grading focuses on removing or delaying an assessment element (Normann et al., Citation2023); this research is distinct in that, after removing one element (marks), another (digital badges) was introduced.

Reflections on practice

The final element of the discussion relates back to the key research question around implementing assessment reform within contexts geared toward traditional assessment modes, and consists of five reflections for future practice.

First, if digital badges are to replace marks on assignments, then remove marks altogether. In our research the tension between these two elements was (necessarily) unresolved, and led to a significant amount of anxiety for students. As such, an important consideration is only using digital badges on assignments where they can practically replace marks – not where marks will ultimately be used as the indicator of student performance.

Second, digital badges are not a substitute for high quality written feedback. If digital badges are going to be used on the level of individual assignments they should complement written feedback, not act as a substitute for it. When combined with written feedback, digital badges made the feedback more meaningful. Conversely, when digital badges are awarded without written feedback, or where the feedback does not align with the badges, the utility of the badge is diminished. Assignment level badges ought to be paired with relevant feedback to maximise their effectiveness.

Third, developing a badge framework is important for both students and teaching staff. As was seen, implementing badging programs can precipitate constructive alignment in assessment design, as well as enhancing the meaning and utility of the task for students.

Fourth, students need to know exactly what to expect. For many students, digital badges are unfamiliar, so it is important to explain why they are being used and how to interpret them before they are implemented. A clear understanding of the badging system, processes and symbols was a predictor of positive student perceptions. Conversely, not knowing what to expect led to a significant number of negative codes. Continuing with the digital badge framework throughout the whole degree is likely to increase student acceptance of, and positivity towards, this approach to providing feedback and marks on assignments.

The final reflection is the importance of developing a ‘Badge Tree’. Having a high-level mapping of all the badges that can be awarded helped students to track their progress, reflect on their performance, and contextualise the various assessments.

Areas for future research

Following from this discussion, a number of areas for future research are evident. The first is the function of digital badges in ‘reduced grading’, which is potentially significant and not well documented in current literature. The second is how digital badges can be meaningfully embedded within traditional modes of assessment without detracting from the student experience – unless marks/grades are to be removed altogether (not a practical proposition for most universities) digital badges will necessarily co-exist with existing forms of assessment. Given the current levels of interest in digital badging, optimising integration is important. The final point relates to inclusivity, which is vital in any education setting. While visual representation is a key feature of digital badging, badges can be made more accessible to visually impaired individuals through the use of assistive technologies such as screen readers, as well as simplified graphics. This was not a focus of the current study, however it represents an important area for future research.

Conclusion

Assessment reform in higher education presents both opportunities and challenges. The findings presented here have shown that awarding digital badges has significant potential to enhance student engagement with feedback, encourage the linking of work in specific assignments to career goals and outcomes, promote constructive alignment, and even reduce student grade anxiety and increase self-efficacy. However, it was also seen that these benefits can be eclipsed by student anxiety and dissatisfaction if the badging system is not optimally conceptualised and executed, especially given the constraints associated with traditional assessment modes.


Acknowledgements

This work was supported by the NCFE Assessment Innovation Fund under grant number G2101122.

Disclosure statement

No potential conflict of interest was reported by the author(s).


Notes

1 In this paper, the term mark is used when referring to the numeric score for an individual assessment task and grade is used to refer to the final result for a course/subject.

2 In the paper, the term course is used to describe a semester-long, single unit or subject that contributes to the awarding of a degree.

4 It is noteworthy that for Assignment 1 these two categories of comment may have been exacerbated by a delay in the release of feedback – without a specific mark to indicate performance, students relied more heavily on assignment feedback to ascertain performance (as per the project rationale). However, when feedback was not received concurrently with the badges (due to a technical issue there was a delay of about one day), students were left with both insufficient feedback and no mark to calibrate their expectations.

References

  • Australian Institute for Teaching and School Leadership. (n.d.). Australian Professional Standards for Teachers. https://www.aitsl.edu.au/standards.
  • Bassett, D. (2015). The future of assessment: 2025 and beyond. AQA. https://filestore.aqa.org.uk/content/about-us/AQA-THE-FUTURE-OF-ASSESSMENT.PDF.
  • Biggs, J., & Tang, C. (2011). Teaching for quality learning at university (4th ed.). McGraw-Hill Education.
  • Biggs, J. B., & Collis, K. F. (1982). Evaluating the quality of learning: The SOLO taxonomy. Academic Press.
  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74. https://doi.org/10.1080/0969595980050102
  • Butler, R. (1988). Enhancing and undermining intrinsic motivation: The effects of task-involving and ego-involving evaluation on interest and performance. British Journal of Educational Psychology, 58(1), 1–14. https://doi.org/10.1111/j.2044-8279.1988.tb00874.x
  • Elliot, R., Clayton, J., & Iwata, J. (2014). Exploring the use of micro-credentialing and digital badges in learning environments to encourage motivation to learn and achieve. In B. Hegarty, J. McDonald, & S. K. Loke (Eds.), Rhetoric and reality: Critical perspectives on educational technology (pp. 703–707). Australasian Society for Computers in Learning in Tertiary Education (ASCILITE). https://www.ascilite.org/conferences/dunedin2014/files/concisepapers/276-Elliott.pdf.
  • Galli, L., & Fraternali, P. (2014). Achievement systems explained. In Trends and applications of serious gaming and social media (pp. 25–50). Springer.
  • Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1, 3–31.
  • Gibson, D., Ostashewski, N., Flintoff, K., Grant, S., & Knight, E. (2015). Digital badges in education. Education and Information Technologies, 20(2), 403–410. https://doi.org/10.1007/s10639-013-9291-7
  • Hamari, J. (2017). Do badges increase user activity? A field experiment on the effects of gamification. Computers in Human Behavior, 71, 469–478. https://doi.org/10.1016/j.chb.2015.03.036
  • Hamari, J., & Eranti, V. (2011, September). Framework for designing and evaluating game achievements. In Proceedings of the DiGRA 2011 Conference: Think Design Play.
  • Hennah, N. (2018). Open badges: What, why, how? School Science Review, 100(371), 76–80.
  • Krathwohl, D., Anderson, L., & Bloom, B. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Longman.
  • Martin, J. E. (2019). Reinventing crediting for competency-based education: The mastery transcript consortium model and beyond. Routledge.
  • Marzano, R. J., & Kendall, J. S. (2008). Designing and assessing educational objectives: Applying the new taxonomy. Corwin Press.
  • Miller, G. E. (1990). The assessment of clinical skills/competence/performance. Academic Medicine, 65(9), S63–S67. https://doi.org/10.1097/00001888-199009000-00045
  • Newby, T. J., & Cheng, Z. (2020). Instructional digital badges: Effective learning tools. Educational Technology Research and Development, 68(3), 1053–1067. https://doi.org/10.1007/s11423-019-09719-7
  • Normann, D.-A., Sandvik, L. V., & Fjørtoft, H. (2023). Reduced grading in assessment: A scoping review. Teaching and Teacher Education, 135, 104336. https://doi.org/10.1016/j.tate.2023.104336
  • Oliver, B. (2019). Making micro-credentials work, for learners, employers and providers. Deakin University. https://dteach.deakin.edu.au/wp-content/uploads/sites/103/2019/08/Making-micro-credentials-work-Oliver-Deakin-2019-full-report.pdf.
  • Oxley, K., & Van Rooyen, T. (2021). Making micro-credentials work: A student perspective. Journal of Teaching and Learning for Graduate Employability, 12(1), 44–47. https://doi.org/10.21153/jtlge2021vol12no1art1321
  • Perkins, J., & Pryor, M. (2021). Digital badges: Pinning down employer challenges. Journal of Teaching and Learning for Graduate Employability, 12(1), 24–38. https://doi.org/10.21153/jtlge2021vol12no1art1027
  • Robinson, K., & Aronica, L. (2016). Creative schools: The grassroots revolution that's transforming education. Penguin.
  • Selvaratnam, R. M., & Sankey, M. (2021). An integrative literature review of the implementation of micro-credentials in higher education: Implications for practice in Australasia. Journal of Teaching and Learning for Graduate Employability, 12(1), 1–17. https://doi.org/10.21153/jtlge2021vol12no1art942
  • Stefaniak, J., & Carey, K. (2019). Instilling purpose and value in the implementation of digital badges in higher education. International Journal of Educational Technology in Higher Education, 16(1), 1–21. https://doi.org/10.1186/s41239-019-0175-9
  • Thompson, D. (2016). Marks should not be the focus of assessment – but how can change be achieved? Journal of Learning Analytics, 3(2), 193–212. https://doi.org/10.18608/jla.2016.32.9
  • Twining, P. (2022, October). SOLO 2.0. The halfbaked education blog. https://halfbaked.education/solo2-0/.
  • Wotjas, O. (1998). Feedback? No, just give us the answers. Times Higher Education Supplement, 25(7).