Research Article

Bad News in the civics classroom: How serious gameplay fosters teenagers’ ability to discern misinformation techniques

Received 26 Sep 2023, Accepted 29 Mar 2024, Published online: 19 Apr 2024

Abstract

Although the serious game Bad News has been used to inoculate citizens against misinformation, it has not been formally evaluated in traditional classrooms. We therefore evaluated its impact on 516 upper-secondary Swedish students playing individually, in pairs, or with the whole class. Results show that students improved their ability to discern manipulation techniques in social media posts. Students with prior positive attitudes toward credible news sources were better discerners, and this attitude became significantly more positive post-intervention. Rationales for identifying manipulative techniques increased among those who improved their credibility ratings from pre- to post-intervention. Lastly, enjoyment of and interest in the intervention were higher in the whole-class setting. This study offers insights for educators on using serious games in formal teaching to foster media and information literacy.

Introduction

Researchers and democratic institutions recommend serious games to improve citizens’ ability to judge misinformation (Ecker et al., Citation2022; European Commission, Citation2018, Citation2022; Kiili et al., Citation2024; Roozenbeek, Van Der Linden, et al., Citation2022). For instance, Schrier (Citation2021) outlines how games in civics education can empower students and teach them critical thinking in a world of digital misinformation. The national Swedish curricula for upper-secondary schooling underscore the importance of teaching students a “critical and responsible approach to digital technologies, to recognize opportunities and understand risks, and to evaluate information” (Swedish National Agency for Education, Citation2011, our translation). The American Psychological Association’s recent report on misinformation (APA, Citation2023) also stresses the need to “prebunk” and “inoculate” against misinformation early in schools. European Commission experts explicitly recommend using the serious game Bad News in classrooms to educate teenagers against disinformation (European Commission, Citation2022). However, how well Bad News and inoculation theory work in formal school education has thus far been unstudied. The focus of prior research has been on self-motivated adult players who opt in to contribute to online research.

We therefore set out to explore the game’s impact across 26 civics classrooms with 17 teachers instructing teenagers through game-play individually, in pairs, or in pairs with a public leaderboard. The participants were 516 teenagers from four upper-secondary schools in Sweden. We used a new version of Bad News, which can be played individually or in pairs and with a public leaderboard that can be projected in a classroom. The new version enabled us to compare individual gameplay with playing in pairs or competing with the whole class to explore if there are effects of collaboration or competition that can influence performance post-gameplay. As this serious game is recommended in formal learning for inoculating children and teenagers against misinformation, it must be tested under representative classroom conditions.

In our mixed design study focusing on the theory-driven serious game Bad News, we aim to:

  • Evaluate the game’s inoculation effect on students’ performance and attitudes through a pre- and post-test design with manipulative and credible social media items (within-subject).

  • Examine student gameplay experience in three distinct classroom settings: individual play, paired collaboration, and paired collaboration with a public leaderboard (between-subject).

  • Investigate the relationship between students’ verbal rationales and their reliability ratings pre- and post-intervention (quantitative content analysis).

Extending existing research on educational strategies to combat misinformation, this study offers valuable insights into using Bad News in formal civics education to engage upper-secondary students in critical media and information literacy practices. We delve into the effects of our intervention on teenagers’ ability to discern and articulate their reasoning against misinformation, setting the stage for a broader discussion on the importance of such interventions in formal education.

Background

The global spread of misinformation threatens both individuals’ well-being and the health of democratic societies (APA, Citation2023; Lewandowsky et al., Citation2017). A primary concern is the manipulation of public opinion through online platforms (e.g. Bradshaw & Howard, Citation2019). Misinformation can also be linked to forms of social misconduct, including online hate-speech and cyberbullying (Castaño-Pulgarín et al., Citation2021; Giumetti & Kowalski, Citation2022). For teenagers, misinformation may lead to disengagement from politics (Gunther et al., Citation2019), foster vaccine hesitancy (Loomba et al., Citation2021), or potentially increase engagement in violent behavior (Jolley & Douglas, Citation2014a, Citation2014b; Jolley & Paterson, Citation2020). Fake AI-generated images and texts are now also easy to create and are becoming more challenging to detect, complicating the picture even further (Altman et al., Citation2023; Future of Life Institute, Citation2023).

But misinformation is not only a technological problem. It has been shaped by national political landscapes (Humprecht, Citation2019) and has become global in scope. For example, the World Health Organization (WHO) has declared that we are facing a global “infodemic” (Roozenbeek, Schneider, et al., Citation2020; Zarocostas, Citation2020), and the most recent World Economic Forum Global Risks report stated that misinformation is the top risk facing society in the next two years (WEF, Citation2024). Although there has been an increase in the establishment of fact-checking organizations worldwide, most misinformation will not be fact-checked. And even when exposed, misinformation can often spread faster and deeper than subsequent corrections (Vosoughi et al., Citation2018; Zollo et al., Citation2017).

Dishonest fact-checking has now become an established practice among misinformers (Silverman & Kao, Citation2022), and Hameleers (Citation2022) found that study participants trusted fact-checkers even when those fact-checkers were confirming false information.

A fruitful approach to combating misinformation in modern democracies is therefore rooted in making citizens better at identifying manipulative, misleading, and false information (Kozyreva et al., Citation2020). To this end, fact-checking efforts and misinformation policies are necessary but insufficient. With early educational efforts, teenagers could be prepared in schools to cope with misinformation.

Media information literacy and teenagers

Young people are known to use overly superficial strategies when trying to judge the credibility of news, which leaves them vulnerable to manipulation (Shtulman, Citation2024). They tend to struggle to determine the credibility of digital news when it is designed to deceive (e.g. Breakstone et al., Citation2021; Nygren et al., Citation2022) and their ability to navigate credible, biased, or false news has been linked to certain knowledge, skills, and attitudes (Breakstone et al., Citation2021; Ku et al., Citation2019; Nygren et al., Citation2022). UNESCO has labeled this set of abilities Media and Information Literacy (MIL). Today, there is a loud call from governments and international institutions for educational interventions to improve students’ MIL (European CommissionFootnote1, UNESCOFootnote2, OECDFootnote3, and Council of EuropeFootnote4). Individuals using social media as their primary source for news consumption are more prone to believing in myths about COVID-19 (Roozenbeek, Schneider, et al., Citation2020). Overconfidence in one’s ability to identify misleading information is also linked to insufficient MIL (Mahmood, Citation2016) and spreading misinformation (Lyons et al., Citation2021). Being confident but lacking the skills to judge the credibility of news has been noted among Swedish teenagers (Nygren & Guath, Citation2019). There are further complications regarding a divide in public MIL stemming from socio-economic differences and language competencies (van Dijk, Citation2020; Nygren et al., Citation2022). Thus, teenagers’ lack of MIL is a well-documented issue, and MIL is described as key to safeguarding democracy from the negative impacts of misinformation (Ecker et al., Citation2022; Kozyreva et al., Citation2020).

Education with serious games against misinformation

Education against misinformation has been a goal for many researchers, and research with engaging educational interventions is necessary (Ecker et al., Citation2022; Roozenbeek et al., Citation2023). Several studies have been conducted in which citizens are taught fact-checking skills and literacy tools (e.g. Axelsson et al., Citation2021; Breakstone et al., Citation2021; McGrew, Citation2020; Werner Axelsson et al., Citation2024; Wineburg et al., Citation2022) or educated about propaganda and misinformation strategies (Traberg et al., Citation2022), with promising results. Many researchers regard pre-bunking interventions that teach people to identify manipulative information as a necessary way to slow the spread of misinformation (Ecker et al., Citation2022; Roozenbeek, Van Der Linden, et al., Citation2022; Traberg et al., Citation2022).

Several MIL interventions have a proven impact on people’s ability to identify misinformation. However, impact studies that include randomized trials for serious games such as BBC iReporter’s fake news gameFootnote5, FactitiousFootnote6, and NewsFeed DefendersFootnote7 are, to the best of our knowledge, lacking. A game focused on teaching fact-checking, Misinformation is Contagious, was recently tested on a relatively small sample of teenagers, with a significant impact on their ability to separate accurate from inaccurate information (Barzilai et al., Citation2023). The effects of playing the serious game FakeyFootnote8 have been noted in one conference paper (Micallef et al., Citation2021), and some case studies exist for Cranky Uncle (e.g. Cook et al., Citation2022) and Trustme! (Yang et al., Citation2021).

The term “serious game” has no strict consensus definition. However, most scholars would agree that serious games are not necessarily meant for entertainment but should primarily be educational (e.g. Bellotti et al., Citation2013; Michael & Chen, Citation2005; Ritterfeld et al., Citation2009) rather than just fun (Landers, Citation2014; Susi et al., Citation2007). The adaptability of serious games across different subjects, especially when developed and evaluated by experts in the field, underscores their utility in educational settings. Serious games can streamline pre-bunking processes, alleviating the burden on educators through the standardization of skill acquisition. They can provide uniform learning experiences in MIL, leveraging the computer as both a learning and application tool. By simulating real-world scenarios, serious games prepare students, allowing educators to focus on facilitating discussions. The classroom shares a common experience, which can allow for discussion at a meta-cognitive level, focusing on one’s MIL and engaging critical thinking skills. This study’s starting point is the challenge of supporting teenagers’ abilities to navigate misleading information in a formal setting. In line with Corti (Citation2006, p. 1), we see serious game-based learning as a way to use “the power of computer games to captivate and engage end-users for a specific purpose, such as to develop new knowledge and skills.” Although a serious game’s primary purpose is education, engagement still plays an important role, and entertainment retains value.

The Bad News game and inoculation theory

Bad NewsFootnote9 is a 15-minute online game about misinformation strategies designed primarily to educate people. In the game, developed by Roozenbeek and Van der Linden (Citation2019) in collaboration with the Dutch media platform DROG, players are exposed to weakened doses of propaganda strategies by allowing them to take on the role of a social media influencer spreading misinformation. Bad News is a choice-based game where the player is guided by a text-based narrator suggesting strategies to amass a social media following using manipulation techniques in a simulated social media setting. There are six techniques used in the game: (1) discrediting opponents (by using denial and sowing doubt), (2) using emotionally manipulative language (e.g. appeal to fear), (3) fueling polarization (“us vs. them” framing), (4) impersonating people (e.g. doctors or politicians), (5) floating conspiracy theories, and (6) manipulating public discourse through trolling (see Roozenbeek & Van der Linden, Citation2019, for more details).

Theoretically, the game’s design is grounded in a framework from social psychology called “inoculation theory” (McGuire, Citation1961), which follows a biomedical analogy. By forewarning people about potential exposure to misleading content, and by exposing and preemptively refuting a weakened dose of misinformation, people can build psychological resistance against future manipulation attempts, much as vaccines help the body build resistance against future infections (Lewandowsky et al., Citation2021; Traberg et al., Citation2022). Importantly, by refuting and deconstructing the techniques used to dupe people online in advance (i.e. the “prebunk”), people can become more resistant to misinformation that makes use of these tactics (Basol et al., Citation2021; Lewandowsky et al., Citation2021; Roozenbeek, van der Linden, et al., Citation2020). Scholars have increasingly differentiated classic “passive” inoculation strategies (where people are provided with the refutations by the experimenter or teacher) from “active” inoculation interventions, where participants generate their own media content and develop resistance to misinformation through active and experiential learning (Basol et al., Citation2021; Trecek-King & Cook, Citation2024).

Active inoculation is often considered more effective because it improves people’s agency and control over the content they create, instilling higher motivation to learn and remember the material (Basol et al., Citation2021; Trecek-King & Cook, Citation2024). Bad News is a form of active inoculation, providing players with a simulated setting to experiment with weakened doses of propaganda techniques—based on fictional content—in a controlled environment. The game’s efficacy in inoculating adult players against these techniques has been investigated across several studies (see Table 1 for a summary).

Table 1. Summary of Results from Studies of the Bad News Game.

Testing Bad News in the classroom

Previous studies with Bad News have primarily recruited volunteers, usually through the game’s website, where players can opt into a research survey, or through recruitment via crowdsourcing services. Yet, its effectiveness needs to be assessed in formal education now that serious MIL games are being considered for integration into national media and information literacy curricula. An initial attempt to use Bad News in an informal peer-education setting with at-risk teenagers provided inconclusive pilot results due to the pandemic and a complex instructional design (Nygren et al., Citation2021). Notably, despite limited improvements in identifying manipulative information, an enhanced appreciation for reliable news access—termed “credibility importance”—emerged, previously found to correlate with better judgments (Nygren & Guath, Citation2019; Nygren et al., Citation2022). This finding hints that the game may nurture positive MIL attitudes and that further formal research is needed. Serious games like Bad News offer a robust learning environment characterized by complexity and opportunities for learning from mistakes. They encourage active and experiential learning centered around problem-solving tasks and may provide valuable feedback to the player (Chiu & Cheng, Citation2017; Eseryel et al., Citation2014; Shute & Rahimi, Citation2017). These combined features can make Bad News a potent tool for effective and engaging MIL learning experiences.

Collaboration and competition

Based upon previous research on learning with games and computers (Chen et al., Citation2020; Chen et al., Citation2018; Clark et al., Citation2016; Lou et al., Citation2001; Sailer & Homner, Citation2020), we decided to test whether educational designs including collaboration and competition with a leaderboard in a classroom setting are more effective than playing Bad News individually. Previous studies (Chen et al., Citation2018; Lou et al., Citation2001) have highlighted that collaborative learning in small groups might be more beneficial than individual learning when using computers. However, collaborative learning in digital environments may be complex and challenging, and interventions must be carefully designed and studied (Chen et al., Citation2018; Kreijns et al., Citation2003). In addition, Bachen et al. (Citation2012) found that serious games can have similar, positive effects on students playing individually and in pairs in social studies classrooms.

Competition has been shown to influence learning significantly (Sailer & Homner, Citation2020). However, its effectiveness varies with different game mechanics (Clark et al., Citation2016). Researchers have considered competition in serious games productive, as it may enhance motivation and learning (Admiraal et al., Citation2011; Burguillo, Citation2010; Cagiltay et al., Citation2015; Chen et al., Citation2018; Julian & Perry, Citation1967; Malone & Lepper, Citation2021). Yet, competition may also make students feel pressure (Nemerow, Citation1996), influence them to make risky choices (Foo et al., Citation2017), and make people contrast themselves with others (Stapel & Koomen, Citation2005). Advocates of serious games underscore how the component of competition, intertwined with the sense of challenge, may further contribute to the engaging nature of these educational tools (Vandercruysse et al., Citation2013). Yet, competition may have limited effects depending on what and how students learn. For instance, Chen et al. (Citation2020) found that competition in digital game-based learning only sometimes works in social science classrooms. Other scholars have highlighted how too much focus on competition in education may have a negative emotional impact on some students and distract them from what they are supposed to learn (Cheng et al., Citation2009; Liu et al., Citation2013). Therefore, we were interested in examining whether collaboration and competition while playing Bad News are successful educational strategies.

Intervention and research gaps

A recent systematic review highlights the potential of using games against misinformation and calls for more formal evaluative research in classroom settings (Kiili et al., Citation2024). This study investigates how serious gameplay impacts teenagers’ ability to evaluate misinformation in the form of misleading social media headlines and their ability to discern credible information from manipulative techniques. To further extend prior research, we also investigate whether the game elicits constructive and engaged attitudes. Constructive attitudes combine a positive attitude toward access to credible news with an absence of overconfidence in one’s MIL (Nygren et al., Citation2022). Such attitudes have been linked to students’ and adults’ abilities to separate credible information from misinformation (Guath & Nygren, Citation2022; Nygren et al., Citation2022). Previous research has also lacked data on how engaging students find Bad News and on its entertainment value. In terms of engagement, we explore whether participants find it interesting and fun to play the game in the classroom and whether they feel that they have learned anything new.

We conducted an intervention to understand how this new technology works in “complex, messy classrooms and schools” (Shavelson et al., Citation2003, p. 25). Using a quasi-experimental design, we set up the study with three lesson designs to investigate how playing the game individually, in pairs, or competitively in class influences the intervention’s effectiveness and teenagers’ attitudes and satisfaction. Finally, we explore students’ verbal rationales for their credibility ratings and how these change post-intervention. Although previous research on Bad News has explored quantitative changes in participants’ reliability ratings, it has not investigated what participants pay attention to when rationalizing their decisions before and after playing the game. These written verbalizations can help us understand what successful students pay attention to when deciding on the reliability of headlines. With no evaluative research in ordinary classrooms, we set out to investigate the impact of Bad News in a formal educational intervention.

Research hypotheses

Based on previous research on Bad News (see Table 1), we test the following hypotheses:

H1: Playing and discussing Bad News helps participants identify misinformation insofar as they rate misleading social media headlines as less reliable post-gameplay (Roozenbeek & Van der Linden, Citation2019).

H2: The impact in a classroom setting will be lower than previously measured effect sizes (Nygren et al., Citation2021).

H3: Participants will not rate credible information as less reliable after playing Bad News (Roozenbeek, van der Linden, et al., Citation2020).

H4: Playing collaboratively and competitively will improve participants’ performance more than playing individually (Chen et al., Citation2018; Lou et al., Citation2001).

H5: Playing Bad News in a classroom will positively impact participants’ attitudes toward access to credible news (Nygren et al., Citation2021).

Exploratory research questions

Previous research has found that people who consider it important to have access to credible news sources are better at fact-checking (Guath & Nygren, Citation2022) and that teenagers tend to overestimate their fact-checking abilities (Nygren & Guath, Citation2019). Furthermore, noting how gamification may be effective but also impact students’ enjoyment (Bai et al., Citation2020), we wish to explore if settings affect students’ enjoyment and motivation. In addition to the above hypotheses, we therefore formulate the following exploratory research questions:

RQ1: Do participants who value access to credible news perform better when rating the reliability of (mis)information than participants who consider it less important?

RQ2: Do participants overrate their abilities to determine the reliability of information, and how do self-reported skills relate to participants’ abilities to rate the reliability of manipulative and credible content?

RQ3: Does the gameplay setting (individual, pair, class) affect participants’ enjoyment and motivation?

RQ4: What cues do participants consider when rating reliability? Previous research has highlighted the impact of playing Bad News as an inoculation but has not explored participants’ rationale for their ratings. This may provide new insights into what misinformation cues the game teaches.

Methods

Our starting point is rooted in pragmatism; by combining different methods, we capture the complexity of effects when introducing the game in ordinary classrooms (Johnson & Onwuegbuzie, Citation2004). We use (a) validated measurements from previous research, (b) a mixed between-subject (three conditions) and within-subject (pre- and post-test) experimental design, and (c) systematic randomization of groups based upon a purposive sampling of students from national upper-secondary programs (with different subject orientations) in all three conditions. We also include a quantitative content analysis of students’ rationales. This allows us to combine data to better understand how and why Bad News works in potentially helping students navigate misinformation. Our research design aligns with calls for more rigorous research about gamification (Landers et al., Citation2018) and effective educational interventions to enhance MIL among young learners (Ecker et al., Citation2022; Kiili et al., Citation2024; Roozenbeek et al., Citation2023).

Sample

In total, 771 upper-secondary students from Sweden aged 16–19 participated in the experiment. Analyses were performed on data collected from the 516 participants who completed both pre- and post-test questionnaires (Nfemale = 247, Nmale = 263, Nother/undisclosed = 6). Based on previous research, we deemed it essential to include students from various programs and with various grade point averages (Nygren et al., Citation2022). Participants included students from upper-secondary vocational programs (Nagriculture = 16, Nelectrician = 14, Ntransport = 18) and theoretical programs with a focus on social sciences (N = 215), economics (N = 109), and natural sciences (N = 144). Our participants come from four different schools and have diverse backgrounds and grades. Nationally, more male students (51.7%) than female students (48.3%) attend upper-secondary schooling. The Social Science Program has the highest number of students (17.6%), followed by the Economics Program (14.6%) and the Natural Science Program (12.6%). Vocational programs have fewer students: 4.4% study to become electricians, 3.8% study transport, and 2.9% study agriculture. Our sample is thus much in line with the national distribution of students, but students in vocational programs, especially females in vocational training, are underrepresented.

Teachers and students were randomly assigned to the three conditions (Individual, Pair, and Class). Using a quasi-experimental method, we ensured that students from every school and program were distributed across all three conditions before randomization. In the Individual and Pair conditions, students played the Bad News game (in its Swedish translation; see Figure 1) either individually (N = 133) or in pairs (N = 170). The final group played the new version of Bad News in pairs, which used a public leaderboard projected in the classroom (N = 213). The number of students without completed pre- and post-tests was higher in the Individual and Pair conditions. This research was approved by the Swedish Ethical Review Authority (Dnr 2021-01340), and participants gave their informed consent to participate. The interventions were conducted between August 2021 and March 2022.
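The stratified assignment described above can be sketched as follows. This is an illustrative reconstruction, not the study’s actual procedure: the class identifiers and the round-robin dealing within each (school, program) stratum are our assumptions about how one might guarantee that every school and program is represented in every condition.

```python
import random

def assign_classes(classes, conditions=("Individual", "Pair", "Class"), seed=0):
    """Stratified random assignment of classes to conditions.

    classes: iterable of (class_id, school, program) tuples.
    Within each (school, program) stratum, classes are shuffled and then
    dealt round-robin across the conditions, so every stratum appears in
    every condition whenever it has enough classes.
    """
    rng = random.Random(seed)  # fixed seed keeps the assignment reproducible
    strata = {}
    for class_id, school, program in classes:
        strata.setdefault((school, program), []).append(class_id)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        for i, class_id in enumerate(members):
            assignment[class_id] = conditions[i % len(conditions)]
    return assignment
```

With six social science classes from one school, for example, each condition receives exactly two of them, whichever way the shuffle falls.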

Figure 1. Screenshot of Bad News’ user interface.


Research context

In Sweden, media and information literacy have been part of compulsory schooling since 2011 (Sundin, Citation2015). The syllabus in civics, a foundational subject in upper-secondary school, underscores how all students should be able to evaluate information from different sources and media formats (Swedish National Agency for Education, Citation2011). The Swedish Schools Inspectorate (Citation2018) has criticized schools for not teaching students to critically scrutinize social media posts. Thus, in theory, the intervention fits well into the curriculum and what is mandated in formal education; what is lacking is empirical evidence.

The intervention was conducted across 26 civics classrooms by 17 teachers, guided through a compendium containing information about Bad News, general background information about misinformation, a description of the six manipulation techniques featured in the game, and lesson instructions. The teachers were also given a set of anonymous codes to distribute to the students so that pre- and post-test questionnaire answers could be paired during analysis. Pre-test questionnaires were distributed to the participants in a lesson before the gameplay session, and post-test questionnaires were provided in the next scheduled lesson. The questionnaires were deliberately not administered in the same lesson as the intervention, to avoid fatiguing participants. Participants in the Individual condition were instructed to play independently, and participants in the Pair and Class conditions were asked to team up with a peer and play together. In the Class condition, the number of amassed followers was public information, projected on the game leaderboard alongside team names chosen by the participants. The gameplay was paused after each level, noting the score and the manipulation technique used in the last level to gain followers.

The intervention was completed within a one-hour lesson. The students were given about half an hour to complete game-play in all three conditions, followed by a discussion. Noting the importance of active teachers to promote learning with digital games (Clark et al., Citation2016), we asked the teachers to lead an open discussion with the students about their experiences (see questions in Appendix C).

Skill measures

Reliability ratings

We used test items in the pre- and post-test questionnaires to measure students’ ability to rate manipulative and credible content. The items were taken from previous research (Roozenbeek & Van der Linden, Citation2019; Roozenbeek et al., Citation2021). Eight Twitter-like posts were presented in random order: six tweets using the manipulation techniques (strategies) taught in the game and two credible tweets (not containing any misinformation). Participants were asked, “How reliable do you believe this tweet to be?” Answers were collected using an interval scale ranging from 1 to 7, with 1 being “very unreliable” and 7 being “very reliable.”

Rating Score: The Reliability Rating of the manipulative and credible tweets was recalculated as a correct ratio between 0 and 1, using the mean of all eight items as a performance measure for each participant.

Improvement Score: Participants’ Rating Score on the pre-test was subtracted from their post-test Rating Score to measure how much their average ratings improved from pre- to post-intervention.
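The two scores above can be sketched in Python. One detail is our assumption rather than the article’s: we read “correct ratio” as meaning that for manipulative tweets a low reliability rating counts as correct, so those items are reverse-coded before the 1–7 scale is mapped onto 0–1.

```python
def rating_score(ratings, is_manipulative):
    """Mean 'correct ratio' in [0, 1] across the eight tweets.

    ratings: the participant's 1-7 reliability ratings.
    is_manipulative: True for each of the six manipulative tweets,
    False for the two credible ones. Reverse-coding manipulative
    items is our assumption about the scoring direction.
    """
    correct = []
    for rating, manipulative in zip(ratings, is_manipulative):
        # Map the 1-7 rating onto 0-1, flipping it for manipulative tweets.
        correct.append((7 - rating) / 6 if manipulative else (rating - 1) / 6)
    return sum(correct) / len(correct)


def improvement_score(pre_score, post_score):
    """Post-test Rating Score minus pre-test Rating Score."""
    return post_score - pre_score
```

A participant who rates every manipulative tweet 1 and every credible tweet 7 thus scores 1.0; the reversed pattern scores 0.0.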

Discernment Score: To investigate participants’ ability to effectively discern between misleading and factual social media headlines, we calculated the Area Under the Curve (AUC). This measure is often used in signal detection theory (SDT). Batailler et al. (Citation2022) have suggested it as a way to account for participants’ inherent judgment bias and to ensure that misinformation interventions do not make participants overly critical of all information (Modirrousta-Galian & Higham, Citation2023) but rather facilitate discriminant trust (Moore & Hancock, Citation2022) and so-called (manipulation) technique discernment (Roozenbeek, Traberg, et al., Citation2022). AUC is the area under the Receiver Operating Characteristic (ROC) curve. The ROC curve is created by plotting the true positive rate (rating credible tweets as reliable) against the false positive rate (rating manipulative tweets as reliable) at various threshold settings (1–7 in our case). The AUC represents how well the participants can discern manipulative tweets from credible ones. An AUC of 1.0 indicates perfect discernment, while an AUC of 0.5 suggests random guessing.
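As a concrete illustration, a participant-level AUC can be computed directly from the eight ratings by sweeping the decision threshold over the rating scale. This is a minimal sketch (the function and variable names are ours, not the article’s):

```python
def discernment_auc(credible, manipulative):
    """Area under the ROC curve (AUC) for one participant's ratings.

    credible / manipulative: the 1-7 reliability ratings the participant
    gave the credible and the manipulative tweets, respectively.
    1.0 means every credible tweet was rated above every manipulative
    one; 0.5 corresponds to chance-level discernment.
    """
    # One ROC point per threshold t: a tweet counts as "judged reliable"
    # when its rating >= t. t = 1 yields (1, 1); t = 8 yields (0, 0).
    points = []
    for t in range(1, 9):
        tpr = sum(r >= t for r in credible) / len(credible)          # hits
        fpr = sum(r >= t for r in manipulative) / len(manipulative)  # false alarms
        points.append((fpr, tpr))
    points.sort()  # ascending false-positive rate for integration
    # Integrate the ROC curve with the trapezoidal rule.
    auc = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        auc += (x1 - x0) * (y0 + y1) / 2
    return auc
```

A rater who gives both credible tweets a 7 and all six manipulative tweets a 1 obtains an AUC of 1.0, while rating everything 4 yields 0.5.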

Verbal rationale

With each Reliability Rating, an open-ended follow-up question was posed: “What makes the tweet more or less reliable? Briefly justify your answer.” Written answers to this question served to capture the participants’ rationale for their rating. This can be considered a verbal off-load of working memory from the decision process (similar to verbalization protocols, see Ericsson and Simon, Citation1980). The verbal rationales were used in a quantitative content analysis, described below.

Attitude measures

In the pre- and post-test questionnaires, we used the following measures from previous research (Guath & Nygren, Citation2022; Nygren & Guath, Citation2019; Nygren et al., Citation2022) to investigate participants’ attitudes toward credible news and potential overconfidence:

Credibility importance: “How important is it to you to have access to reliable news?,” interval scale range: 0–10.

Internet skill: “How good do you think you are at finding and reviewing information online?,” interval scale range: 0–10.

Social media skill: “How well do you understand how social media works?,” interval scale range: 0–10.

Satisfaction measures

The post-test questionnaire included the following additional measures of attitudes toward the learning experience:

Fun to participate: “Was it fun to play and discuss the contents of the Bad News game?,” interval scale range: 1–7.

Interesting to participate: “Was it interesting to play and discuss the contents of the Bad News game?,” interval scale range: 1–7.

Learned anything new: “Do you think that by playing the Bad News game, you learned something new?,” binary choice: Yes or No.

Quantitative content analysis

Do participants’ rating scores agree with their rationales for choosing a particular rating? The participants’ verbal rationales can illustrate which cues (e.g. emotionality, source credibility) participants are attentive to when rating the tweets. More importantly, this method gives insight into possible changes in participants’ attentiveness toward tweet reliability cues after playing the game compared to before.

We conducted a content analysis of the rationales provided by the students with the largest positive change in Improvement Score (Engagers) and those with the largest negative change (Disengagers). This sub-sample comprises the 10% of participants with the highest positive Improvement Scores (N = 50) and the 10% with the most negative Improvement Scores (N = 50), that is, the participants with the largest absolute difference between pre-test and post-test scores. This helps us understand which cues are relevant and whether there are patterns in rationales when participants’ reliability ratings are good versus poor.
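The selection of the two sub-samples can be sketched as follows; the function name is hypothetical, and we assume the 10% cut-off was rounded to 50 participants per group (10% of 516 is 51.6):

```python
def split_engagers(improvements, n=50):
    """Return the n participants with the largest positive Improvement
    Scores (Engagers) and the n with the most negative (Disengagers).

    `improvements` maps participant id -> Improvement Score; n = 50
    corresponds to roughly 10% of the 516 participants.
    """
    ranked = sorted(improvements, key=improvements.get)  # ascending by score
    disengagers = ranked[:n]   # largest negative change
    engagers = ranked[-n:]     # largest positive change
    return engagers, disengagers
```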

We conducted an exploratory content analysis of the verbal rationales for the eight items rated pre- and post-intervention by each of the 100 participants included in this analysis, totaling 1,600 written statements (i.e. rationales for the ratings of each tweet). As a coding scheme, we used established themes from previous research about how upper-secondary students may judge the credibility of misinformation (Nygren et al., Citation2020). The coding and analysis of students’ responses followed a problem- and theory-driven approach (Krippendorff, Citation2018; Namey et al., Citation2008). Our coding was informed by previous research underscoring the importance of scrutinizing multiple facets of information: the authority and context of the information provider (who is behind this?), the content basis of the claims made (what is the evidence?), and the design and presentation of the information, along with the underlying intentions driving its creation (how is this text designed to inform or manipulate me?; Fogg, Citation2003; McGrew et al., Citation2017; Metzger, Citation2007; Nygren et al., Citation2020). Evaluating the reliability of online information is a complex process, and we focus on three dimensions noted as central in previous research on judging credibility online (Choi & Stvilia, Citation2015), aligning with Fogg’s (Citation2003) web credibility framework and its emphasis on evaluating a website’s operator, content, and design. In an abductive process, we constructed themes specifically designed to categorize and analyze students’ rationales: Content (the subject matter of the tweet), Source (the entity behind the tweet, which is relevant to identifying fake accounts), and Strategy (the informational design and intent of the source).

The Content theme captures comments about the claim being made and relates to the question, “What is the content, and how does it relate to things I already know?” The Source theme captures comments about the origin of the information, going below the surface of the tweet, and relates to the question, “Who is behind this information?” Finally, the Strategy theme captures comments about how the message is designed and what the source’s intention is, and relates to the question, “How and why is this message designed like this?” These themes also capture the depth at which participants scrutinize the information anatomy: whereas Content is more superficial, Source and Strategy signal deeper reasoning. We also included the themes Unsure, for comments where participants explicitly wrote that they did not know or were unsure what to write, and Nonsense, for responses that were unrelated to the task, nonsensical or whimsical, or arbitrary sequences of alphanumeric characters. We further coded each of the three main themes as positively (+) or negatively (−) laden to identify whether a theme was used to praise or discredit a tweet. Appendix D provides the coding scheme used. Initially, two researchers coded a random sample of 10% of the student rationales based on the theme coding scheme to ensure consistency in the analysis; inter-rater consensus between the two researchers was 94%. One researcher then coded the remaining rationales.
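The reported 94% consensus corresponds to simple percent agreement, which can be computed as below. Note that chance-corrected statistics such as Cohen’s kappa would be a stricter alternative; the paper does not state whether one was computed.

```python
def percent_agreement(coder_a, coder_b):
    """Share of rationales assigned the same theme by both coders.

    `coder_a` and `coder_b` are parallel lists of theme labels for
    the same rationales. Returns a ratio between 0 and 1.
    """
    assert len(coder_a) == len(coder_b), "coders must rate the same items"
    agree = sum(a == b for a, b in zip(coder_a, coder_b))
    return agree / len(coder_a)
```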

Results

Reliability ratings

The overall means of the Rating Scores from the pre- and post-test are presented in Table 2. A mixed ANOVA was conducted using the Rating Score with time (pre- and post-intervention) as a within-subject repeated-measures variable and condition (Individual, Pair, and Class) as a between-subject variable. The analysis revealed a statistically significant main effect of time (F(1,513) = 53.10, η² = 0.02, p < .001) but not of condition (p = .61), nor any interaction effect between time and condition (p = .71). Figure 2 illustrates the Reliability Ratings of the manipulative and credible items, respectively, across conditions both pre- and post-intervention, underscoring the main effect of time on the compound Rating Score. These results align with H1, as students improved in their ability to identify manipulative social media posts. However, they do not support H4, as there were no differences in this ability between the educational settings. The overall effect size across the three conditions was d = 0.30 (CI95% [0.22, 0.37]), similar to effect sizes reported in previous studies using Bad News (e.g. Roozenbeek & Van der Linden, Citation2019); this does not support H2, as the game yields a comparable effect on performance in the current study as in previous ones.
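For readers re-running such analyses, the pre-to-post Cohen’s d can be sketched as below. The paper does not state which of the several paired-design d conventions was used; this sketch (an assumption on our part) standardizes the mean change by the pooled standard deviation of the two time points.

```python
from statistics import mean, stdev

def cohens_d(pre, post):
    """Standardized pre-to-post effect size for paired scores.

    Divides the mean change by the pooled (averaged-variance)
    standard deviation of the pre and post distributions.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    pooled_sd = ((stdev(pre) ** 2 + stdev(post) ** 2) / 2) ** 0.5
    return mean(diffs) / pooled_sd
```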

Figure 2. Violin plots showing the distribution of Reliability Ratings based on the manipulative (Panel a) and credible (Panel b) items, pre- and post-intervention, across conditions. The superimposed box and point range plots illustrate the interquartile ranges and mean with a 95% confidence interval, respectively.


Table 2. Mean Rating Score and Mean Reliability Rating of Manipulative and Credible Items in Pre- and Post-Test Questionnaires, with Standard Deviations in Parentheses, Averaged Across All Participants.

Overall, participants rated credible tweets as more reliable in the post-test (Table 2). In other words, participants did not consider the credible items less reliable after the intervention, in line with H3 and Lu et al. (Citation2023). This was further corroborated by a paired Student’s t-test, which showed a statistically significant (and not hypothesized) increase in Reliability Ratings of credible tweets from pre- to post-intervention (t(515) = 3.60, p < .001, Mdiff = 0.15, CI95% [0.07, 0.24]), albeit with small practical significance (d = 0.13, CI95% [0.06, 0.21]). The pre-test Discernment Score (Appendix E) was .87, suggesting that participants were already much better than chance at discernment pre-intervention. The post-test Discernment Score was .91, signaling an improvement in discernment. A DeLong test revealed that the pre-to-post difference was statistically significant (p < .001), albeit again with small practical significance (d = 0.06). These results contrast with previous research, which found no overall improvement in AUC discernment for the Bad News game (Modirrousta-Galian & Higham, Citation2023).
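The paired Student’s t-test statistic used here and in later sections reduces to a short computation (df = n − 1; the p-value is then looked up in the t-distribution). A stdlib-only sketch with a hypothetical function name:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """t statistic for a paired Student's t-test on pre/post ratings.

    Computed as the mean of the per-participant differences divided
    by the standard error of those differences (df = n - 1).
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))
```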

Attitudes

Rating averages and standard deviations of the measured attitudes are tallied in Table 3. Credibility Importance ratings were plotted against mean Rating Scores across pre- and post-tests (Figure 3). Visually, there appears to be a linear relationship between the importance placed on credible news and the ability to rate its reliability, affirming RQ1. This finding was corroborated by two linear regression analyses showing a statistically significant relationship between the two variables in both the pre-test (F(1,514) = 11.52, R² = 0.02, p < .001) and the post-test (F(1,514) = 43.31, R² = 0.08, p < .001). The results further showed a difference in Credibility Importance ratings from the pre-test (M = 8.12, SD = 1.93) to the post-test (M = 8.49, SD = 1.93): a statistically significant increase after the intervention, as evidenced by a paired Student’s t-test (t(515) = 4.48, p < .001, Mdiff = 0.36, CI95% [0.20, 0.52]), in line with H5. The effect size of the improvement in Credibility Importance was d = 0.19 (CI95% [0.10, 0.27]).
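The simple linear regressions relating Credibility Importance to Rating Score can be reproduced with ordinary least squares. A self-contained sketch (in practice one would use a statistics package; the function name is our own):

```python
def linreg_r2(x, y):
    """Slope, intercept, and R-squared of a simple least-squares
    regression of y on x, as used to relate Credibility Importance
    to Rating Score.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1 - ss_res / ss_tot
```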

Figure 3. Mean Rating Score for each Credibility Importance point, pre- and post-intervention.


Table 3. Mean Attitude Ratings Pre- and Post-Intervention, with Standard Deviations in Parentheses, Averaged Across All Participants.

Regarding Internet Skill and Social Media Skill, the only statistically significant relationship with participants’ ability to rate the reliability of content was for Internet Skill on the post-test (F(1,514) = 7.40, R² = 0.01, p < .01). These results suggest no evident relationship between participants’ attitudes toward their internet and social media skills and their ability to rate the reliability of tweets (RQ2).

Satisfaction

On average, participants rated Fun to Participate 5.56 (SD = 1.55) and Interesting to Participate 5.43 (SD = 1.53) on a 7-point scale. Figure 4 shows these two measurements in each condition. Judging by the means, participants in the Class condition enjoyed participating more than those in the other two conditions. We conducted significance tests using non-parametric methods because both measures are skewed due to their high ratings (i.e. ceiling effects). A Kruskal-Wallis rank sum test revealed statistically significant differences for both Fun to Participate (H(2) = 25.72, p < .001) and Interesting to Participate (H(2) = 14.29, p < .001). Separate

Figure 4. Violin plots showing the distribution of Fun to Participate (Panel a) and Interesting to Participate (Panel b) across conditions, collected post-intervention. The superimposed box and point range plots illustrate the interquartile ranges and mean with a 95% confidence interval, respectively. (a) Participant responses to “Was it fun to play and discuss the contents of the Bad News game?” and (b) “Was it interesting to play and discuss the contents of the Bad News game?”


Wilcoxon rank sum tests between conditions showed statistically significant differences in these measures between the Class and Individual conditions (Fun to Participate: W = 11,496, p < .01; Interesting to Participate: W = 10,758, p < .001) and between the Class and Pair conditions (Fun to Participate: W = 14,668, p < .001; Interesting to Participate: W = 13,384, p < .001), but no significant differences between the Individual and Pair conditions. These results indicate that participants in the Class condition found playing and discussing the game more enjoyable and interesting than participants in the Individual and Pair conditions (RQ3).
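The W values reported above are rank sums. The Wilcoxon rank-sum statistic for one group can be computed as below (ties receive average ranks); this is a sketch of the test statistic only, not the full significance test:

```python
def rank_sum_w(group_a, group_b):
    """Wilcoxon rank-sum statistic W: the sum of the ranks of
    group_a's observations in the pooled sample, with tied values
    assigned the average of the ranks they span.
    """
    pooled = sorted(group_a + group_b)
    ranks = {}  # value -> average rank
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    return sum(ranks[v] for v in group_a)
```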

When asked whether they learned anything new when playing Bad News, nearly 60% of participants answered yes (Yes = 304, No = 212). This was consistent across the three conditions (Yes: Individual = 60.15%, Pair = 54.71%, Class = 61.50%). The Improvement Score is illustrated alongside the Learned Anything New measure in Figure 5. The bar graph indicates that participants who felt they learned something new also improved the most (about 4%) in their reliability ratings from pre- to post-intervention; this was confirmed by a statistically significant Welch’s two-sample t-test (t(430.83) = 2.08, p = .04).

Figure 5. The Improvement Score divided between participants who felt they learned something and those who did not, with error bars representing the standard error.


Quantitative content analysis

Themes identified in the rationales behind participants’ ratings of each tweet in the pre-test were matched against the themes identified for the same tweet in the post-test. This allowed us to illustrate the thematic flow through Sankey diagrams in our investigation of RQ4 (Figure 6). The figure shows four separate Sankey diagrams with the 1,600 rationales divided across manipulative and credible tweets and across Engagers and Disengagers (the participants with the 50 highest positive and 50 largest negative Improvement Scores, respectively).

Figure 6. Participants’ thematic drift based on the content analysis of verbal rationales of reliability ratings pre- and post-intervention. The analysis was conducted on participants with the top 50 positive (Engagers) and top 50 negative (Disengagers) Improvement Scores. Each group is also split between rationales of manipulative and credible items.


Among both Engagers and Disengagers, pretest rationales pertained mostly to negatively coded themes for manipulative tweets. As engagers improved their ratings post-intervention, positive comments decreased further (i.e. they used less positive language to rate manipulative tweets), with the largest increase in negative rationales about Strategy (i.e. the manipulation strategies covered in the game). Disengagers, who performed worst post-intervention, increased their positive rationales and Nonsense, although most comments were still negative. Regarding credible tweets, an overwhelming majority of rationales were Source-themed and positive across the board. For Engagers, these comments increased post-gameplay while they stayed the same for Disengagers; instead, their Nonsense comments increased again.

Separating Engagers and Disengagers from the rest of the participants gives a clearer picture of the effects of the intervention (Figure 7). The Engagers are illustrated in blue in the top right panel (N = 43) and the bottom right panel (N = 7), split by their post-test performance. The Disengagers are illustrated in pink in the top left panel (N = 9) and the bottom left panel (N = 41), likewise split by post-test performance. The plot reveals that a majority of students (N = 306, 59%) showed a positive development in their ability to rate (mis)information in the form of social media headlines (right-hand panels). A minority (N = 30, 6%) performed the same pre- and post-intervention, and a larger group was disengaged (N = 180, 35%). Interestingly, credible items were generally easier for participants to judge, as they produced better Rating Scores (represented by the circle in each panel). Participants who performed worse after the intervention began with a better Rating Score on average (left-hand panels): they dropped from relatively good scores toward more average scores, and, as mentioned previously, this group gave more Nonsense responses.

Figure 7. Violin and box plots of pre- and post-test performance of Engagers and Disengagers from the content analysis, relative to the rest of the participants. The grey line in each panel represents the mean post-test performance of all participants. The “x” and circle show mean Rating Scores on manipulative and credible tweets, respectively. Each panel indicates the number of participants (n), the change in score from pre to post (Δ), and the proportion of participants who reported that they learned something new through the intervention (L).


Our findings suggest that Disengagers used more superficial arguments and Nonsense post-intervention, hinting that some individuals were not engaged by the game or were bored by the post-test questionnaire. Overall, both Engagers and Disengagers mostly used negative rationales when rating manipulative tweets and positive rationales when rating credible tweets. Still, those who improved their ratings also delved deeper into the information anatomy, as evidenced by their increase in negative Strategy-themed rationales for manipulative tweets.

Discussion

As part of a novel classroom evaluation, this study finds that the Bad News game boosts students’ ability to discern misinformation from credible news on social media across 26 unique civics classrooms. Playing Bad News, together with teacher-guided discussions, significantly impacted students’ ratings, rationales, and discernment of (mis)information techniques while retaining their trust in credible sources. In contrast to our hypothesis (H2), this first field experiment shows that students can be actively inoculated against misinformation techniques in formal education, with effect sizes on par with previous research using Bad News (Roozenbeek & Van der Linden, Citation2019; Roozenbeek, van der Linden, et al., Citation2020; Iyengar et al., Citation2022). Indeed, this result indicates that Bad News can be used in mixed groups of upper-secondary students in ordinary, messy classrooms with an overall positive effect, even outside the original UK/US setting in which it was initially tested. It is important to verify that the impact lines up with studies where participants individually sign up to play the game or participate in paid studies (Traberg et al., Citation2022). We therefore see this as an essential test of the reliability of using this game in educational curricula with computer-mediated learning, and a promising result for future research on serious games against misinformation.

Interestingly, we found that the impact on students’ skills was similar in all three conditions (Individual, Pair, and Class), a comparison that has not been made before in the context of misinformation interventions. In contrast to some previous research (Chen et al., Citation2018; Lou et al., Citation2001), we find that collaboration when playing the game did not significantly affect inoculation. We also find that gamification with an increased focus on competition did not increase the impact on students’ performance. The lack of influence from competition may be explained by factors that distract students from learning when competing with others in the classroom, such as feeling uncomfortable when poor achievements are displayed and compared with others (e.g. Cheng et al., Citation2009; Liu et al., Citation2013; Stapel & Koomen, Citation2005). It could also relate to a previous finding that digital game-based learning may be less effective in social science classrooms, potentially because of social science curricula’s relatively unstructured and complex nature compared to the highly structured learning environments that characterize math and science education (Chen et al., Citation2020). Our findings align with Bachen et al. (Citation2012), who highlight how a well-designed intervention may positively impact the complex world of social studies classrooms, regardless of whether students play individually or in collaboration.

However, conditions with more collaboration and competition via a public leaderboard did impact students’ attitudes. We find that students in the Class condition perceived the intervention as more interesting than those in the other educational settings. Students in that condition also reported having more fun participating through collaboration and competition. We speculate that motivation for future learning may be supported by collaboration and competition, but this should be confirmed by future research. However, in light of previous research noting that not all students may benefit from competition, we also wish to exercise some caution here. For example, some students may not find gamification and competition motivating and enjoyable. Thus, playing individually and without competition may be better for some students, and others may need a different kind of educational intervention altogether to support their ability to identify misleading information.

Results indicated that participants found access to credible news sources more important after the intervention. This is an important finding because previous research has noted how this is a very constructive attitude when navigating online information among teenagers and adults (Guath & Nygren, Citation2022; Nygren & Guath, Citation2019; Nygren et al., Citation2022). Credibility importance may be linked to actively open-minded thinking, which is another constructive attitude implicated in the ability to identify misinformation (Roozenbeek, Maertens, et al., Citation2022). This could partly explain why Bad News impacts people’s skills and attitudes.

Notably, the participants were confident and good at evaluating manipulative and credible content (in the form of social media headlines) pre-intervention. This contrasts with previous research showing overconfidence problems linked to information literacy and teenagers’ abilities to judge digital news (Mahmood, Citation2016; Nygren & Guath, Citation2019). The intervention scenarios point out why students might be vulnerable to misinformation, so the game may help participants evaluate their skills in the complicated environment of navigating misinformation on social media, making them somewhat humbler. The fact that students perceived that they learned from the intervention indicates that Bad News may be especially beneficial to play with students unaware of online misinformation strategies.

Previous research has not directly investigated the rationales behind participants’ quantitative ratings. The content analysis of what drove students’ decisions on a particular rating gives us new insights into the mechanisms behind the game’s effectiveness. One potential risk identified is that a minority of participants may become disengaged, as evidenced by their worse performance and increased Nonsense comments. However, the fact that the vast majority of participants improved their Rating Score post-intervention, and that those who improved the most provided rationales about the negative strategies of manipulative content, attests to the game’s ability to teach recognition of manipulative content through active learning. For example, Engagers wrote more critical rationales about the strategies used in manipulative tweets post-intervention. The fact that both Engagers’ and Disengagers’ verbal rationales strongly focus on positive sources post-intervention indicates that they are also aware of the importance of reliable sources, without becoming overly skeptical. The game itself does not teach participants directly about source credibility. Still, the impersonation level does illustrate the danger of fake sources and fake experts, which may indirectly attune students to the importance of verifying expertise and source credibility. In sum, the game may encourage players to focus more on how social media content might manipulate them and less on the content’s veracity, without losing sight of the importance of judging the source of information. This is an important finding in light of previous research highlighting how teenagers with poor skills in navigating information need to become more media and information literate to become informed and active citizens (Nygren et al., Citation2021; Wineburg et al., Citation2022).

Limitations and future research

In this study, we used a new version of Bad News, which in the class setting had a leaderboard that all players could see. Although the content and narrative are the same, the two versions’ designs differ slightly. From the results, we noticed that this had no impact on the learning experience, but there were differences in participants’ self-reported satisfaction levels. We attribute these differences to the use of a leaderboard with implied competition and interspersed discussions, although other aspects of the design could also contribute. The current study was conducted with schools and upper-secondary programs associated with varied socioeconomic backgrounds. We can determine that the intervention was successful with our sample of students, who were representative of Swedish upper-secondary schools. However, further studies are needed to investigate the specific impact on students in other countries, cultures, and areas of low socioeconomic status. More studies are also necessary to see how learned skills and attitudes transfer into teenagers’ ordinary social media practices. We recognize that repeated testing could affect the observed learning results; however, previous psychometric evaluations of the Bad News game have found some item effects but no testing effects (Roozenbeek et al., Citation2021), which should help alleviate such concerns. Lastly, we acknowledge that rating the credibility of headlines embedded in social media posts is not the same as assessing the credibility of an entire news article, as is done in other relevant educational research (McGrew, Citation2020), but presenting participants with social media headlines is common practice in misinformation studies (e.g. Pennycook & Rand, Citation2021) and has some ecological validity, given estimates that nearly 60% of Twitter users share news media without clicking or reading the full article (Gabielkov et al., Citation2016).
Nonetheless, future research could build on our work by testing a full-article approach in classroom settings.

Conclusions

There is a growing need for educational interventions to support young people’s ability to identify misinformation online. This study is the first to investigate how a serious game (Bad News) can be implemented in different classroom settings to impact students’ ability to rate and rationalize online (mis)information as well as their attitudes toward credible news sources. We have presented data collected from 516 Swedish upper-secondary students. Participants were assigned to three conditions (playing individually, in pairs, or competing with the rest of the class) to study how playing the game in different formal classroom settings influences the effectiveness of the intervention. We also used content analysis to explore why students become better at identifying misinformation after playing the game. Playing Bad News significantly improves teenagers’ ability to identify manipulative content presented as social media headlines (d = 0.30, p < .001). In contrast to previous results, we also demonstrated that students rate credible content as more reliable post-gameplay, leading to increased discernment. The learning effects of the game held regardless of whether students played alone, in pairs, or competitively with the whole class. However, participants in the whole-class setting found it more enjoyable and interesting to partake. An important finding of the present study was that participants with a positive attitude toward the importance of having access to credible news produced better reliability ratings, and, even more importantly, we find that the game also fostered positive attitudes toward credible news sources. Our novel content-analysis approach revealed that success in rating reliability after playing Bad News could be attributed to participants focusing more on strategies and less on the content of the information post-intervention without losing focus on the source of information. 
The game thus seems to successfully inoculate students against misinformation as commonly encountered on social media and to foster skills and attitudes for improved media and information literacy.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Additional information

Funding

This work was supported by the Swedish Institute for Educational Research (Skolforskningsinstitutet).

Notes on contributors

Carl-Anton Werner Axelsson

Carl-Anton Werner Axelsson is an associate lecturer in didactics of digital literacy and lifelong learning in computer science at the School of Innovation, Design and Engineering at Mälardalen University and an associate researcher at the Department of Education at Uppsala University. His doctoral degree is in human-computer interaction and his research is focused on computer-supported learning and decision making.

Thomas Nygren

Thomas Nygren is Professor in History and Civics Education at the Department of Education at Uppsala University. His research interests focus on history and civics education, the digital impact on education, critical thinking and human rights education.

Jon Roozenbeek

Jon Roozenbeek is a British Academy Postdoctoral Fellow at the Department of Psychology at the University of Cambridge. His research focuses on misinformation, intervention design, polarisation, and information warfare.

Sander van der Linden

Sander van der Linden is Professor of Social Psychology in Society and director of the Social Decision-Making Lab at the University of Cambridge. His research focuses on the psychology of human judgment, decision-making, and misinformation.

References

  • Admiraal, W., Huizenga, J., Akkerman, S., & Dam, G. T. (2011). The concept of flow in collaborative game-based learning. Computers in Human Behavior, 27(3), 1185–1194. https://doi.org/10.1016/j.chb.2010.12.013
  • Altman, S., Brockman, G., & Sutskever, I. (2023). Governance of superintelligence. https://openai.com/blog/governance-of-superintelligence.
  • APA. (2023, November). Using psychological science to understand and fight health misinformation (report). American Psychological Association. https://www.apa.org/pubs/reports/health-misinformation
  • Axelsson, C. A. W., Guath, M., & Nygren, T. (2021). Learning how to separate fake from real news: Scalable digital tutorials promoting students’ civic online reasoning. Future Internet, 13(3), 60. https://doi.org/10.3390/fi13030060
  • Bachen, C. M., Hernández-Ramos, P. F., & Raphael, C. (2012). Simulating real lives: Promoting global empathy and interest in learning through simulation games. Simulation & Gaming, 43(4), 437–460. https://doi.org/10.1177/1046878111432108
  • Bai, S., Hew, K. F., & Huang, B. (2020). Does gamification improve student learning outcome? Evidence from a meta-analysis and synthesis of qualitative data in educational contexts. Educational Research Review, 30, 100322. https://doi.org/10.1016/j.edurev.2020.100322
  • Barzilai, S., Mor-Hagani, S., Abed, F., Tal-Savir, D., Goldik, N., Talmon, I., & Davidow, O. (2023). Misinformation is contagious: Middle school students learn how to evaluate and share information responsibly through a digital game. Computers & Education, 202, 104832. https://doi.org/10.1016/j.compedu.2023.104832
  • Basol, M., Roozenbeek, J., Berriche, M., Uenal, F., McClanahan, W. P., & Linden, S. V. D. (2021). Towards psychological herd immunity: Cross-cultural evidence for two prebunking interventions against COVID-19 misinformation. Big Data & Society, 8(1), 205395172110138. https://doi.org/10.1177/20539517211013868
  • Basol, M., Roozenbeek, J., & Van der Linden, S. (2020). Good news about bad news: Gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of Cognition, 3(1), 2. https://doi.org/10.5334/joc.91
  • Batailler, C., Brannon, S., Teas, P., & Gawronski, B. (2022). A signal detection approach to understanding the identification of fake news. Perspectives on Psychological Science, 17(1), 78–98. https://doi.org/10.1177/1745691620986135
  • Bellotti, F., Kapralos, B., Lee, K., Moreno-Ger, P., & Berta, R. (2013). Assessment in and of serious games: An overview. Advances in Human-Computer Interaction, 2013, 1–11. https://doi.org/10.1155/2013/136864
  • Bradshaw, S., & Howard, P. N. (2019). The global disinformation order: 2019 global inventory of organised social media manipulation (Working Paper 2019.2). Project on Computational Propaganda.
  • Breakstone, J., Smith, M., Connors, P., Ortega, T., Kerr, D., & Wineburg, S. (2021). Lateral reading: College students learn to critically evaluate internet sources in an online course. Harvard Kennedy School Misinformation Review, 2(1). https://doi.org/10.37016/mr-2020-56
  • Burguillo, J. C. (2010). Using game theory and competition-based learning to stimulate student motivation and performance. Computers & Education, 55(2), 566–575. https://doi.org/10.1016/j.compedu.2010.02.018
  • Cagiltay, N. E., Ozcelik, E., & Ozcelik, N. S. (2015). The effect of competition on learning in games. Computers & Education, 87, 35–41. https://doi.org/10.1016/j.compedu.2015.04.001
  • Castaño-Pulgarín, S. A., Suárez-Betancur, N., Vega, L. M. T., & López, H. M. H. (2021). Internet, social media and online hate speech. Systematic review. Aggression and Violent Behavior, 58, 101608. https://doi.org/10.1016/j.avb.2021.101608
  • Chen, C.-H., Liu, J.-H., & Shou, W.-C. (2018). How competition in a game-based science learning environment influences students’ learning achievement, flow experience, and learning behavioral patterns. Journal of Educational Technology & Society, 21(2), 164–176.
  • Chen, C.-H., Shih, C.-C., & Law, V. (2020). The effects of competition in digital game-based learning (DGBL): A meta-analysis. Educational Technology Research and Development, 68(4), 1855–1873. https://doi.org/10.1007/s11423-020-09794-1
  • Chen, J., Wang, M., Kirschner, P. A., & Tsai, C.-C. (2018). The role of collaboration, computer use, learning environments, and supporting strategies in CSCL: A meta-analysis. Review of Educational Research, 88(6), 799–843. https://doi.org/10.3102/0034654318791584
  • Cheng, H. N. H., Wu, W. M. C., Liao, C. C. Y., & Chan, T.-W. (2009). Equal opportunity tactic: Redesigning and applying competition games in classrooms. Computers & Education, 53(3), 866–876. https://doi.org/10.1016/j.compedu.2009.05.006
  • Chiu, P. H. P., & Cheng, S. H. (2017). Effects of active learning classrooms on student learning: A two-year empirical investigation on student perceptions and academic performance. Higher Education Research & Development, 36(2), 269–279. https://doi.org/10.1080/07294360.2016.1196475
  • Choi, W., & Stvilia, B. (2015). Web credibility assessment: Conceptualization, operationalization, variability, and models. Journal of the Association for Information Science and Technology, 66(12), 2399–2414. https://doi.org/10.1002/asi.23543
  • Clark, D. B., Tanner-Smith, E. E., & Killingsworth, S. S. (2016). Digital games, design, and learning: A systematic review and meta-analysis. Review of Educational Research, 86(1), 79–122. https://doi.org/10.3102/0034654315582065
  • Cook, J., Ecker, U. K. H., Trecek-King, M., Schade, G., Jeffers-Tracy, K., Fessmann, J., Kim, S. C., Kinkead, D., Orr, M., Vraga, E., Roberts, K., & McDowell, J. (2022). The cranky uncle game—combining humor and gamification to build student resilience against climate misinformation. Environmental Education Research, 29(4), 607–623. https://doi.org/10.1080/13504622.2022.2085671
  • Corti, K. (2006). Games-based learning; a serious business application (report). https://www.cs.auckland.ac.nz/compsci777s2c/lectures/Ian/serious%20games%20business%20applications.pdf
  • Ecker, U. K., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., Kendeou, P., Vraga, E. K., & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13–29. https://doi.org/10.1038/s44159-021-00006-y
  • Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data. Psychological Review, 87(3), 215–251. https://doi.org/10.1037/0033-295X.87.3.215
  • Eseryel, D., Law, V., Ifenthaler, D., Ge, X., & Miller, R. (2014). An investigation of the interrelationships between motivation, engagement, and complex problem solving in game-based learning. Journal of Educational Technology & Society, 17(1), 42–53.
  • European Commission. (2018, December). Action plan against disinformation (tech. rep. No. JOIN(2018) 36 final). High Representative of the Union for Foreign Affairs and Security Policy. https://ec.europa.eu/information_society/newsroom/image/document/2018-49/action_plan_against_disinformation_26A2EA85-DE63-03C0-25A096932DAB1F95_55952.pdf
  • European Commission. (2022). Final report of the commission expert group on tackling disinformation and promoting digital literacy through education and training (Tech. rep. No. NC-03-22-016-EN-N). Directorate-General for Education, Youth, Sport and Culture.
  • Fogg, B. (2003). Persuasive technology: Using computers to change what we think and do. Morgan Kaufmann. https://doi.org/10.1145/764008.763957
  • Foo, J. C., Nagase, K., Naramura-Ohno, S., Yoshiuchi, K., Yamamoto, Y., & Morita, K. (2017). Rank among peers during game competition affects the tendency to make risky choices in adolescent males. Frontiers in Psychology, 8, 16. https://doi.org/10.3389/fpsyg.2017.00016
  • Future of Life Institute. (2023). Pause giant AI experiments: An open letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
  • Gabielkov, M., Ramachandran, A., Chaintreau, A., & Legout, A. (2016). Social clicks: What and who gets read on Twitter? ACM SIGMETRICS / IFIP Performance 2016, June 2016, Antibes Juan-les-Pins, France, hal–01281190. https://doi.org/10.1145/2896377.2901462
  • Giumetti, G. W., & Kowalski, R. M. (2022). Cyberbullying via social media and well-being. Current Opinion in Psychology, 45, 101314. https://doi.org/10.1016/j.copsyc.2022.101314
  • Gunther, R., Beck, P. A., & Nisbet, E. C. (2019). “Fake news” and the defection of 2012 obama voters in the 2016 presidential election. Electoral Studies, 61, 102030. https://doi.org/10.1016/j.electstud.2019.03.006
  • Guath, M., & Nygren, T. (2022). Civic online reasoning among adults: An empirical evaluation of a prescriptive theory and its correlates. Frontiers in Education, 7, 721731. https://doi.org/10.3389/feduc.2022.721731
  • Hameleers, M. (2022). Separating truth from lies: Comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the us and Netherlands. Information, Communication & Society, 25(1), 110–126. https://doi.org/10.1080/1369118X.2020.1764603
  • Humprecht, E. (2019). Where ‘fake news’ flourishes: A comparison across four western democracies. Information, Communication & Society, 22(13), 1973–1988. https://doi.org/10.1080/1369118X.2018.1474241
  • Iyengar, A., Gupta, P., & Priya, N. (2022). Inoculation against conspiracy theories: A consumer side approach to India’s fake news problem. Applied Cognitive Psychology, 37(2), 290–303. https://doi.org/10.1002/acp.3995
  • Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14–26. https://doi.org/10.3102/0013189X033007014
  • Jolley, D., & Douglas, K. M. (2014a). The effects of anti-vaccine conspiracy theories on vaccination intentions. PLOS One, 9(2), e89177. https://doi.org/10.1371/journal.pone.0089177
  • Jolley, D., & Douglas, K. M. (2014b). The social consequences of conspiracism: Exposure to conspiracy theories decreases intentions to engage in politics and to reduce one’s carbon footprint. British Journal of Psychology, 105(1), 35–56. https://doi.org/10.1111/bjop.12018
  • Jolley, D., & Paterson, J. L. (2020). Pylons ablaze: Examining the role of 5g covid-19 conspiracy beliefs and support for violence. The British Journal of Social Psychology, 59(3), 628–640. https://doi.org/10.1111/bjso.12394
  • Julian, J. W., & Perry, F. A. (1967). Cooperation contrasted with intra-group and intergroup competition. Sociometry, 30(1), 79–90. https://doi.org/10.2307/2786440
  • Kiili, K., Siuko, J., & Ninaus, M. (2024). Tackling misinformation with games: A systematic literature review. Interactive Learning Environments, 1–16. https://doi.org/10.1080/10494820.2023.2299999
  • Kozyreva, A., Lewandowsky, S., & Hertwig, R. (2020). Citizens versus the internet: Confronting digital challenges with cognitive tools. Psychological Science in the Public Interest, 21(3), 103–156. https://doi.org/10.1177/1529100620946707
  • Kreijns, K., Kirschner, P. A., & Jochems, W. (2003). Identifying the pitfalls for social interaction in computer-supported collaborative learning environments: A review of the research. Computers in Human Behavior, 19(3), 335–353. https://doi.org/10.1016/S0747-5632(02)00057-2
  • Krippendorff, K. (2018). Content analysis: An introduction to its methodology. Sage Publications.
  • Ku, K. Y., Kong, Q., Song, Y., Deng, L., Kang, Y., & Hu, A. (2019). What predicts adolescents’ critical thinking about real-life news? The roles of social media news consumption and news media literacy. Thinking Skills and Creativity, 33, 100570. https://doi.org/10.1016/j.tsc.2019.05.004
  • Landers, R. N. (2014). Developing a theory of gamified learning: Linking serious games and gamification of learning. Simulation & Gaming, 45(6), 752–768. https://doi.org/10.1177/1046878114563660
  • Landers, R. N., Auer, E. M., Collmus, A. B., & Armstrong, M. B. (2018). Gamification science, its history and future: Definitions and a research agenda. Simulation & Gaming, 49(3), 315–337. https://doi.org/10.1177/1046878118774385
  • Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369. https://doi.org/10.1016/j.jarmac.2017.07.008
  • Lewandowsky, S., & Van Der Linden, S. (2021). Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology, 32(2), 348–384. https://doi.org/10.1080/10463283.2021.1876983
  • Liu, D., Li, X., & Santhanam, R. (2013). Digital games and beyond: What happens when players compete? MIS Quarterly, 37(1), 111–124. https://doi.org/10.25300/MISQ/2013/37.1.05
  • Loomba, S., de Figueiredo, A., Piatek, S. J., de Graaf, K., & Larson, H. J. (2021). Measuring the impact of covid-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour, 5(3), 337–348. https://doi.org/10.1038/s41562-021-01056-1
  • Lou, Y., Abrami, P. C., & d’Apollonia, S. (2001). Small group and individual learning with technology: A meta-analysis. Review of Educational Research, 71(3), 449–521. https://doi.org/10.3102/00346543071003449
  • Lu, C., Hu, B., Li, Q., Bi, C., & Ju, X.-D. (2023). Psychological inoculation for credibility assessment, sharing intention, and discernment of misinformation: Systematic review and meta-analysis. Journal of Medical Internet Research, 25, e49255. https://doi.org/10.2196/49255
  • Lyons, B. A., Montgomery, J. M., Guess, A. M., Nyhan, B., & Reifler, J. (2021). Overconfidence in news judgments is associated with false news susceptibility. Proceedings of the National Academy of Sciences, 118(23), e2019527118. https://doi.org/10.1073/pnas.2019527118
  • Maertens, R., Roozenbeek, J., Basol, M., & van der Linden, S. (2021). Long-term effectiveness of inoculation against misinformation: Three longitudinal experiments. Journal of Experimental Psychology. Applied, 27(1), 1–16. https://doi.org/10.1037/xap0000315
  • Mahmood, K. (2016). Do people overestimate their information literacy skills? A systematic review of empirical evidence on the Dunning-Kruger effect. Communications in Information Literacy, 10(2), 199. https://doi.org/10.15760/comminfolit.2016.10.2.24
  • Malone, T. W., & Lepper, M. R. (2021). Making learning fun: A taxonomy of intrinsic motivations for learning. In R. E. Snow & M. J. Farr (Eds.), Aptitude, learning, and instruction (pp. 223–254). Routledge.
  • McGrew, S. (2020). Learning to evaluate: An intervention in civic online reasoning. Computers & Education, 145, 103711. https://doi.org/10.1016/j.compedu.2019.103711
  • McGrew, S., Ortega, T., Breakstone, J., & Wineburg, S. (2017). The challenge that’s bigger than fake news: Civic reasoning in a social media environment. American Educator, 41(3), 4.
  • McGuire, W. J. (1961). The effectiveness of supportive and refutational defenses in immunizing and restoring beliefs against persuasion. Sociometry, 24(2), 184–197. https://doi.org/10.2307/2786067
  • Metzger, M. J. (2007). Making sense of credibility on the web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58(13), 2078–2091. https://doi.org/10.1002/asi.20672
  • Micallef, N., Avram, M., Menczer, F., & Patil, S. (2021). Fakey: A game intervention to improve news literacy on social media. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1–27. https://doi.org/10.1145/3449080
  • Michael, D. R., & Chen, S. L. (2005). Serious games: Games that educate, train, and inform. Muska & Lipman/Premier-Trade.
  • Modirrousta-Galian, A., & Higham, P. A. (2023). Gamified inoculation interventions do not improve discrimination between true and fake news: Reanalyzing existing research with receiver operating characteristic analysis. Journal of Experimental Psychology. General, 152(9), 2411–2437. https://doi.org/10.1037/xge0001395
  • Moore, R., & Hancock, J. (2022). A digital media literacy intervention for older adults improves resilience to fake news. Scientific Reports, 12(1), 6008. https://doi.org/10.1038/s41598-022-08437-0
  • Namey, E., Guest, G., Thairu, L., & Johnson, L. (2008). Data reduction techniques for large qualitative data sets. Handbook for Team-Based Qualitative Research, 2(1), 137–161.
  • Nemerow, L. G. (1996). Do classroom games improve motivation and learning? Teaching and Change, 3(4), 356–366.
  • Nygren, T., & Guath, M. (2019). Swedish teenagers’ difficulties and abilities to determine digital news credibility. Nordicom Review, 40(1), 23–42. https://doi.org/10.2478/nor-2019-0002
  • Nygren, T., & Guath, M. (2022). Students evaluating and corroborating digital news. Scandinavian Journal of Educational Research, 66(4), 549–565. https://doi.org/10.1080/00313831.2021.1897876
  • Nygren, T., Wiksten Folkeryd, J., Liberg, C., & Guath, M. (2020). Students assessing digital news and misinformation. In Multidisciplinary international symposium on disinformation in open online media (pp. 63–79). Springer International Publishing.
  • Nygren, T., Guath, M., & Axelsson, C.-A. W. (2021). Bad news game in a peer education intervention: Impact on attitudes but not on skills. https://www.getunderpressure.com/wp-content/uploads/2021/04/Nygren-et-al-Pegap-210331.pdf
  • Pennycook, G., & Rand, D. G. (2021). The psychology of fake news. Trends in Cognitive Sciences, 25(5), 388–402. https://doi.org/10.1016/j.tics.2021.02.007
  • Ritterfeld, U., Cody, M., & Vorderer, P. (2009). Serious games: Mechanisms and effects. Routledge.
  • Roozenbeek, J., Culloty, E., & Suiter, J. (2023). Countering misinformation. European Psychologist, 28(3), 189–205. https://doi.org/10.1027/1016-9040/a000492
  • Roozenbeek, J., Maertens, R., Herzog, S. M., Geers, M., Kurvers, R., Sultan, M., & van der Linden, S. (2022). Susceptibility to misinformation is consistent across question framings and response modes and better explained by myside bias and partisanship than analytical thinking. Judgment and Decision Making, 17(3), 547–573. https://doi.org/10.1017/S1930297500003570
  • Roozenbeek, J., Maertens, R., McClanahan, W., & van der Linden, S. (2021). Disentangling item and testing effects in inoculation research on online misinformation: Solomon revisited. Educational and Psychological Measurement, 81(2), 340–362. https://doi.org/10.1177/0013164420940378
  • Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman, A. L. J., Recchia, G., van der Bles, A. M., & van der Linden, S. (2020). Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science, 7(10), 201199. https://doi.org/10.1098/rsos.201199
  • Roozenbeek, J., Traberg, C. S., & van der Linden, S. (2022). Technique-based inoculation against real-world misinformation. Royal Society Open Science, 9(5), 211719. https://doi.org/10.1098/rsos.211719
  • Roozenbeek, J., & Van der Linden, S. (2019). Fake news game confers psychological resistance against online misinformation. Palgrave Communications, 5(1), 1–10. https://doi.org/10.1057/s41599-019-0279-9
  • Roozenbeek, J., Van Der Linden, S., Goldberg, B., Rathje, S., & Lewandowsky, S. (2022). Psychological inoculation improves resilience against misinformation on social media. Science Advances, 8(34), eabo6254. https://doi.org/10.1126/sciadv.abo6254
  • Roozenbeek, J., van der Linden, S., & Nygren, T. (2020). Prebunking interventions based on “inoculation” theory can reduce susceptibility to misinformation across cultures. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016//mr-2020-008
  • Sailer, M., & Homner, L. (2020). The gamification of learning: A meta-analysis. Educational Psychology Review, 32(1), 77–112. https://doi.org/10.1007/s10648-019-09498-w
  • Schrier, K. (2021). We the gamers: How games teach ethics and civics. Oxford University Press.
  • Shavelson, R. J., Phillips, D. C., Towne, L., & Feuer, M. J. (2003). On the science of education design studies. Educational Researcher, 32(1), 25–28. https://doi.org/10.3102/0013189X032001025
  • Shtulman, A. (2024). Children’s susceptibility to online misinformation. Current Opinion in Psychology, 55, 101753. https://doi.org/10.1016/j.copsyc.2023.101753
  • Shute, V. J., & Rahimi, S. (2017). Review of computer-based assessment for learning in elementary and secondary education. Journal of Computer Assisted Learning, 33(1), 1–19. https://doi.org/10.1111/jcal.12172
  • Silverman, C., & Kao, J. (2022). In the Ukraine conflict, fake fact-checks are being used to spread disinformation. https://www.propublica.org/article/in-the-ukraine-conflict-fake-fact-checks-are-being-used-to-spread-disinformation
  • Stapel, D. A., & Koomen, W. (2005). Competition, cooperation, and the effects of others on me. Journal of Personality and Social Psychology, 88(6), 1029–1038. https://doi.org/10.1037/0022-3514.88.6.1029
  • Sundin, O. (2015). Invisible search: Information literacy in the Swedish curriculum for compulsory schools. Nordic Journal of Digital Literacy, 10(4), 193–209. https://doi.org/10.18261/ISSN1891-943X-2015-04-01
  • Susi, T., Johannesson, M., & Backlund, P. (2007, February). Serious games: An overview (Tech. Rep. No. HS-IKI-TR-07-001). https://www.diva-portal.org/smash/get/diva2:2416/FULLTEXT01.pdf
  • Swedish National Agency for Education. (2011). Swedish curriculum for upper secondary schools. https://www.skolverket.se/undervisning/gymnasieskolan/laroplan-program-och-amnen-i-gymnasieskolan/laroplan-gy11-for-gymnasieskolan
  • Swedish National Agency for Education. (2023). Elever i gymnasieskolan – läsåret 2022/23 [Students in upper-secondary school – the 2022/23 school year] (Descriptive Statistics) (Dnr: 2022:1564). Skolverket. https://www.skolverket.se/publikationer?id=11241
  • Swedish Schools Inspectorate. (2018). Undervisning om källkritiskt förhållningssätt i svenska och samhällskunskap, årskurs 7-9 [Teaching a critical approach to sources in Swedish and civics, grades 7-9]. https://www.skolinspektionen.se/beslut-rapporter-statistik/publikationer/kvalitetsgranskning/2018/undervisning-om-kallkritiskt-forhallningssatt-i-svenska-och-samhallskunskap/
  • Traberg, C. S., Roozenbeek, J., & van der Linden, S. (2022). Psychological Inoculation against Misinformation: Current Evidence and Future Directions. The ANNALS of the American Academy of Political and Social Science, 700(1), 136–151. https://doi.org/10.1177/00027162221087936
  • Trecek-King, M., & Cook, J. (2024). Combining different inoculation types to increase student engagement and build resilience against science misinformation. Journal of College Science Teaching, 53(1), 1–6. https://doi.org/10.1080/0047231X.2023.2291968
  • Vandercruysse, S., Vandewaetere, M., Cornillie, F., & Clarebout, G. (2013). Competition and students’ perceptions in a game-based language learning environment. Educational Technology Research and Development, 61(6), 927–950. https://doi.org/10.1007/s11423-013-9314-5
  • van Dijk, J. (2020). The digital divide. Polity Press.
  • Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
  • WEF. (2024, January). The global risks report 2024 (report). World Economic Forum. https://www.weforum.org/publications/global-risks-report-2024/.
  • Werner Axelsson, C.-A., & Nygren, T. (2024). The advantage of videos over text to boost adolescents’ lateral reading in a digital workshop. Behaviour & Information Technology, 1–15. https://doi.org/10.1080/0144929X.2024.2308046
  • Wineburg, S., Breakstone, J., McGrew, S., Smith, M. D., & Ortega, T. (2022). Lateral reading on the open internet: A district-wide field study in high school government classes. Journal of Educational Psychology, 114(5), 893–909. https://doi.org/10.1037/edu0000740
  • Yang, S., Lee, J. W., Kim, H.-J., Kang, M., Chong, E., & Kim, E.-M. (2021). Can an online educational game contribute to developing information literate citizens? Computers & Education, 161, 104057. https://doi.org/10.1016/j.compedu.2020.104057
  • Zarocostas, J. (2020). How to fight an infodemic. The Lancet, 395(10225), 676. https://doi.org/10.1016/S0140-6736(20)30461-X
  • Zollo, F., Bessi, A., Del Vicario, M., Scala, A., Caldarelli, G., Shekhtman, L., Havlin, S., & Quattrociocchi, W. (2017). Debunking in a world of tribes. PLOS One, 12(7), e0181821. https://doi.org/10.1371/journal.pone.0181821

Appendix A.

Distribution of students, teachers and upper-secondary programs

Table A1. Summary of Recruited and Actual Teachers, Lessons, Programs, and Participants.

Appendix B.

Tweets used as stimuli

Table B1. Questionnaire Items, Manipulative and Credible Control Tweets Used Pre- and Post-Intervention, Where Participants Were Asked to Rate Their Reliability.

Appendix C.

Questions during open class discussions after game-play

Note: these discussions were not recorded and therefore not included in the analysis. We made observations in all three conditions but did not have the resources to collect and analyze data from every classroom.

  • What was it like to manipulate other people?

  • What were your strategies for amassing followers?

  • Which were the six strategies that you were using to amass followers?

  • Which of the strategies do you recognize from social media? And who usually makes use of them?

  • Do you believe any strategy is more effective at deceiving people?

Appendix D.

Content analysis coding scheme

Table D1. Coding Scheme Used for Inter-Rater Reliability With Several Examples for the Positive and Negative Mode of Themes.

Appendix E.

Receiver operating characteristics (ROC) plot

Pre- and post-test discernability scores were derived from ROC curves: each score is the area under the curve (AUC). There is one ROC curve for the pretest and one for the post-test. The post-test area is larger than the pretest area, suggesting an improvement in teenagers’ discernment of (mis)information post-intervention.
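For readers who wish to compute such a score themselves, the AUC for rating data can be obtained without plotting the curve at all, via its rank-statistic (Mann-Whitney) equivalence: the AUC equals the probability that a randomly chosen credible item receives a higher reliability rating than a randomly chosen manipulative item, with ties counted as one half. The sketch below is illustrative only (not the authors’ analysis code), and the ratings in it are hypothetical example data.

```python
def discernability_auc(credible_ratings, manipulative_ratings):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen credible item is rated more reliable than a
    randomly chosen manipulative item (ties count as half a win)."""
    wins = 0.0
    for c in credible_ratings:
        for m in manipulative_ratings:
            if c > m:
                wins += 1.0
            elif c == m:
                wins += 0.5
    return wins / (len(credible_ratings) * len(manipulative_ratings))

# Hypothetical 1-7 reliability ratings for a single participant:
pre_credible, pre_manip = [5, 4, 6], [4, 5, 3]
post_credible, post_manip = [6, 6, 7], [2, 3, 3]

print(discernability_auc(pre_credible, pre_manip))    # ~0.78: modest discernment
print(discernability_auc(post_credible, post_manip))  # 1.0: perfect separation
```

A score of 0.5 corresponds to chance-level discernment and 1.0 to perfect separation of credible from manipulative items, which is what makes the AUC a convenient single-number pre/post comparison.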

Figure E1. Receiver operating characteristics (ROC).