
Misinformation, disinformation, and fake news: lessons from an interdisciplinary, systematic literature review

Pages 139-166 | Received 28 Nov 2022, Accepted 30 Nov 2023, Published online: 06 Mar 2024

ABSTRACT

Even though misinformation, disinformation, and fake news are not new phenomena, they have received renewed interest since political events such as Brexit and the 2016 U.S. Presidential election. The resulting sharp increase in scholarly publications carries the risk of a lack of overview, fragmentation across disciplines, and ultimately a lack of research cumulativity. To counteract these risks, we have performed a systematic research review of 1261 journal articles published between 2010 and 2021. Results show the field is mostly data-driven, frequently investigating the prevalence, dissemination, detection, or characteristics of misinformation, disinformation, and fake news. There are, furthermore, clear foci in terms of contributing disciplines, methodologies, and data usage. Building on these results, we identify several research gaps and suggest avenues for future research.

Introduction

While history shows that false and misleading information is not a new phenomenon (Kapantai et al., Citation2021; Ortoleva, Citation2019), most observers seem to agree that misinformation, disinformation, and fake news have become much more prevalent during the last decade (Benkler et al., Citation2018; Kavanagh & Rich, Citation2018; O’Connor & Weatherall, Citation2019). Often cited reasons are the 2016 U.S. Presidential election and the Brexit referendum the same year, both of which were characterized by widespread disinformation and misinformation and – in the U.S. case – accusations of fake news. As a result, several scholars have argued that we currently live in a ‘post-truth’ era (Lewandowsky et al., Citation2017) or a ‘misinformation age’ (O’Connor & Weatherall, Citation2019).

Since then, research pertaining to disinformation, misinformation, and fake news seems virtually to have exploded (Abu Arqoub et al., Citation2022; Ha et al., Citation2021; Madrid-Morales & Wasserman, Citation2022). Considering that false and misleading information in its varying forms may lead to increasing misperceptions and knowledge resistance, which in turn pose significant threats to the health and well-being of individuals as well as organizations, countries, democratic deliberation, and democracy per se (Klintman, Citation2019; Krishna & Thompson, Citation2021; Rosenfeld, Citation2019; Strömbäck, Wikforss, et al., Citation2022), the increasing scholarly interest in understanding the causes, prevalence, and consequences of misinformation, disinformation, and fake news should be welcomed. At the same time, when research in a particular area increases significantly within a short time frame, there is a great risk of a lack of overview, fragmentation, and a loss of research cumulativity. This holds particularly true when research is simultaneously carried out in many otherwise rather disconnected disciplines.

In the case of misinformation, disinformation, and fake news, it also appears as if different strands of literature have developed quite independently of each other. The result is a rather scattered body of work. For example, there are numerous studies investigating the automatic detection of misinformation, disinformation, and fake news within computer science, yet these are rarely referred to outside of their discipline (but see Damstra et al., Citation2021). Similarly, media literacy has been proposed as a potential mitigating factor or solution both within communication and information and library sciences. And yet, these appear to be largely separate strands of literature.

In situations such as these, there is a need for systematic literature reviews that provide an overview as well as build on and add to extant research. Some reviews of research on misinformation, disinformation, and fake news have been conducted. However, most of them are not systematic and are limited in terms of their scope, search terms, and/or time periods. For example, there are non-systematic research reviews pertaining to specific domains such as health (Krishna & Thompson, Citation2021) and related to specific terms such as ‘fake news’ (Tandoc et al., Citation2018). There are also some systematic reviews, but these have mainly focused on more narrow and specific terms such as ‘fake news’ (Abu Arqoub et al., Citation2022) or specific topics such as countering anti-vaccination conspiracy theories and misinformation (Lazić & Žeželj, Citation2021). The most comprehensive systematic reviews thus far were done by Ha et al. (Citation2021) and Madrid-Morales and Wasserman (Citation2022), but the former only includes research on fake news and misinformation – thus leaving research on disinformation aside – while the latter does not go into depth with respect to, for example, the research themes and methods used within these research areas.

Hence, a key research problem is that the current state of research on contemporary misinformation, disinformation and fake news is unclear. To help remedy this situation, the purpose of this article is twofold. First, to systematically review research on misinformation, disinformation and fake news from a cross-disciplinary perspective. Second, based on the systematic literature review, to identify key research gaps and research problems, and thereby give an outlook in terms of avenues for future research. Toward that end, we have performed a systematic review of relevant peer-reviewed journal articles published between 2010 and 2021, based on a quantitative content analysis of 1261 publications.

Challenges of misinformation, disinformation, and fake news

Although there are different normative perspectives on how informed citizens in democracies should be, there is broad scholarly consensus that democratic efficiency and representation benefit greatly the more informed citizens are about politics and society (Delli Carpini & Keeter, Citation1996; Hochschild & Einstein, Citation2015). For instance, well-informed citizens are usually more accepting of democratic norms such as tolerance, exhibit greater political interest and higher rates of political participation, and are more likely to have opinions about pressing issues (Clawson & Oxley, Citation2021; Delli Carpini & Keeter, Citation1996; Milner, Citation2002).

Numerous studies have also investigated how much citizens know about politics and society, as well as the antecedents and effects of political knowledge. One consistent finding in this line of research is that most citizens are not very knowledgeable about politics and society, even though there are differences between countries (Aalberg & Curran, Citation2012; Bartels, Citation1996; Delli Carpini & Keeter, Citation1996; Zaller, Citation1992).

Within this body of work, knowledge has generally been conceptualized as a dichotomy where individuals are either informed or uninformed. Consequently, an incorrect answer in a survey has been interpreted as a lack of knowledge (Bartels, Citation1996; Kuklinski et al., Citation2000; Lindgren et al., Citation2021). However, scholars are increasingly pointing toward the fact that people might just as well firmly believe in the wrong answers they give. Hence, they are not uninformed but misinformed. As opposed to a mere lack of knowledge, misperceptions are conceptualized as the belief in claims that have been proven false and/or are not substantiated by the best available evidence (Flynn et al., Citation2017; Kuklinski et al., Citation2000; Nyhan, Citation2020; Vraga & Bode, Citation2020).

This distinction is essential for several reasons. To begin with, random incorrect answers given by the uninformed are likely to cancel out in aggregate (Page & Shapiro, Citation1992), but the misinformed may consistently share the same wrong beliefs, thereby affecting the distribution of collective opinion and subsequently political outcomes based on faulty information and/or faulty conclusions (Kuklinski et al., Citation2000). In addition, the psychological mechanisms behind misperceptions and lack of knowledge differ (Gaines et al., Citation2007; Kuklinski et al., Citation2000; Taber & Lodge, Citation2006), which has important implications for how to address the problem. A key reason for why some individuals are uninformed is that they have not been exposed to or paid sufficient attention to news and other information sources, which in turn may be explained by a lack of interest in news or issues pertaining to politics and society (Prior, Citation2010; Strömbäck et al., Citation2013). In contrast, a key explanation for why some people are misinformed is that they have been exposed to false and misleading information, and that they have psychological incentives to believe in said information and hold on to their misperceptions. More specifically, research shows that people tend to prefer information that is congruent with existing beliefs and attitudes (confirmation bias, selective exposure), as well as counter-argue or avoid information that is contradictory (disconfirmation bias, selective avoidance), and that motivated reasoning biases their interpretation of information (Flynn et al., Citation2017; Kunda, Citation1990; Sude & Knobloch-Westerwick, Citation2022; Taber & Lodge, Citation2006).

Disinformation, misinformation, and fake news are hence important due to the role they play in terms of forging and sustaining misperceptions. Misperceptions, in turn, have an impact both on people’s opinions and behaviors. For example, research shows that overestimations of the share of households who receive welfare lead people to develop anti-welfare attitudes (Kuklinski et al., Citation2000), overestimations of the share of immigrants within a population lead to anti-immigration attitudes (Sides & Citrin, Citation2007), and overestimations of crime levels increase people’s fear of crime and support for punitive policies (Armborst, Citation2017). Such findings underline that ‘misperceptions threaten to warp mass opinion, undermine democratic debate, and distort public policy on issues ranging from climate change to vaccines’ (Nyhan, Citation2020, p. 220).

As part of a broader societal trend, misperceptions may also lead to knowledge resistance (Glüer & Wikforss, Citation2022), which in a narrow sense refers to ‘an epistemically irrational response to evidence that is available’ (p. 44) and in a wider sense to selective attendance to and/or avoidance of information that may challenge or contradict people’s attitudes, opinions, and perceptions (Glüer & Wikforss, Citation2022; see also Klintman, Citation2019; Strömbäck, Wikforss, et al., Citation2022). Similar to misperceptions, knowledge resistance has been observed in a variety of contexts (Flynn et al., Citation2017; Kavanagh & Rich, Citation2018; McIntyre, Citation2018; O’Connor & Weatherall, Citation2019; Strömbäck, Wikforss, et al., Citation2022). For example, knowledge resistance regarding vaccines has led to a resurgence of preventable diseases such as measles (Kubin, Citation2019; Papachristanthou & Davis, Citation2019).

Apart from the potentially dire consequences for individuals, organizations, and society at large, the development and maintenance of misperceptions and knowledge resistance can corrupt the democratic process. Meaningful democratic deliberation has to be based on a shared set of facts (Delli Carpini & Keeter, Citation1996; Hochschild & Einstein, Citation2015; McIntyre, Citation2018; Rosenfeld, Citation2019). When the facticity of information is disregarded, it becomes virtually impossible to bridge the gaps between varying sides in an argument, solve issues within society, and uphold the legitimacy of the democratic process itself. A case in point is the widespread misperception that Donald Trump won the 2020 U.S. Presidential election.

Challenges of researching disinformation, misinformation, and fake news

While the increasingly interdisciplinary scholarly interest in understanding the causes, prevalence and consequences of misinformation, disinformation, and fake news is welcome from both a theoretical and societal perspective, it has also resulted in a very scattered research field (Ha et al., Citation2021) that suffers from a lack of conceptual clarity and lack of overview.

With respect to the lack of conceptual clarity, at present there are several closely related terms used to denote different types of false and misleading information. Aside from misinformation, disinformation, and fake news, other terms include (computational) propaganda, malinformation, alternative facts, and rumors (Benkler et al., Citation2018; Egelhofer & Lecheler, Citation2019; Tandoc et al., Citation2018; Wardle, Citation2018). Despite conceptual differences, scholars sometimes use the same term to denote different types of false and misleading information, and at other times different terms to denote the same type of false and misleading information. At yet other times, the terms are conceptualized differently in different studies or used interchangeably. Perhaps the best example is the term ‘fake news,’ which refers to two different phenomena. On the one hand, ‘fake news’ has become a weaponized label that critics of the news media use to attack and undermine the legitimacy of the news media (Carlson et al., Citation2021; Egelhofer & Lecheler, Citation2019; Ross & Rivers, Citation2018). On the other hand, ‘fake news’ refers to intentionally false or misleading information that is ‘trying to appear like real news’ while being ‘low in facticity and high in the immediate intention to deceive’ (Tandoc et al., Citation2018, pp. 147, 148; see also Lazer et al., Citation2018). These are very different meanings of the term, but even when ‘fake news’ is used to refer to ‘deliberately created, pseudojournalistic disinformation’ (Egelhofer & Lecheler, Citation2019, p. 98), some use the term ‘misinformation’ (Lazer et al., Citation2018) when discussing it, whereas others use the term ‘disinformation’ (McNair, Citation2017). However, disinformation does not equal misinformation. Instead, a key difference between the two is whether the false and misleading information is intended to mislead (Humprecht et al., Citation2020). In the words of Wardle (Citation2018), disinformation is ‘false information that is deliberately created or disseminated with the express purpose to cause harm’ (p. 4), whereas misinformation is ‘information that is false, but not intended to cause harm’ (p. 5). From that perspective, ‘fake news’ should be conceptualized as a form of disinformation, but not as misinformation.

One reason for the lack of conceptual clarity may be that research has expanded so quickly and is conducted in many different disciplines (Abu Arqoub et al., Citation2022; Ha et al., Citation2021; Madrid-Morales & Wasserman, Citation2022). Another reason may be that scholarly interest in misinformation, disinformation, and fake news has grown exponentially since the 2016 U.S. Presidential election, with its repeated allegations of interference and meddling (Freelon & Wells, Citation2020; Madrid-Morales & Wasserman, Citation2022). This suggests that many research projects were planned, designed, and conducted around the same time, without knowledge of other similar projects. Hence, there was little room for scholars to agree on how key concepts should be conceptualized and operationalized. This situation is exacerbated by the fact that research is being done not only in several different disciplines that are often interested in the same or similar phenomena, such as communication and political science, but also in disciplines more disparate from these, such as computer science and information and library sciences (Ha et al., Citation2021).

Insights from extant reviews

While this is not the first review of research on misinformation and related areas, most reviews thus far have been neither systematic nor interdisciplinary (e.g. Krishna & Thompson, Citation2021; Tandoc et al., Citation2018). Some have also focused on more conceptual issues such as creating a unified taxonomical framework (Kapantai et al., Citation2021), a certain discipline, such as Jerit and Zhao’s (Citation2020) overview of political misinformation, or specific topics such as countering anti-vaccination conspiracy theories and misinformation (Lazić & Žeželj, Citation2021). Jerit and Zhao (Citation2020), for instance, provide a detailed (non-exhaustive) account of sources of and solutions to political misinformation, with a particular focus on the importance of corrections. Lazer et al. (Citation2018) follow a similar strategy, examining the history and prevalence of, as well as interventions against, fake news. They especially highlight means to empower individuals in terms of media literacy and algorithmic detection. On the more empirical side, there are several small-n studies, especially in specific topic areas such as health. Suarez-Lledo and Alvarez-Galvez (Citation2021), for example, assessed the prevalence of health misinformation on social media, conducting a review of 69 studies. Similarly, Swire-Thompson and Lazer (Citation2020), in a non-systematic review, investigate how individuals interact with health misinformation, its consequences for health outcomes, as well as potential countermeasures. Both theoretical and empirical small-scale reviews are of course highly valuable. They provide a window into specific subsets of misinformation, disinformation, and fake news research. Larger systematic research reviews are, however, harder to come by in extant research. Three important exceptions are the reviews by Abu Arqoub et al. (Citation2022), Ha et al. (Citation2021), and Madrid-Morales and Wasserman (Citation2022). The review by Abu Arqoub et al. (Citation2022) is based on 103 studies on fake news that were published between 2000 and 2018. Among other things, their findings show that research on fake news increased sharply in 2017–2018, that research in the area is done in several disciplines but dominated by communication research, that most research is quite atheoretical, and that qualitative methods were more than twice as common as quantitative methods. It also found that by far the most studies focused on North America, with Europe a distant second.

The review by Ha et al. (Citation2021) is based on 142 journal articles on fake news and misinformation published between 2008 and 2017. In terms of scope, number of investigated articles, and investigated aspects, their study is more comprehensive. Among other things, their findings confirm that research in this area is interdisciplinary, but mostly published in communication and psychology journals. In contrast to Abu Arqoub et al. (Citation2022), they find that quantitative methods are the most common, followed by conceptual articles and articles applying qualitative methods. The most common topic was ‘effects of fake news/misinformation,’ followed by ‘solutions/strategies to combat or reduce fake news/misinformation’ and ‘audience’s determination/recognition of fake news/misinformation.’ Similar to the findings by Abu Arqoub et al. (Citation2022), most studies focused on North America.

The third and final exception is a review by Madrid-Morales and Wasserman (Citation2022). It is based on an analysis of about 3,800 articles that mention misinformation, disinformation or fake news in the abstracts, and that were published between 2000 and 2020. It is thus the most comprehensive review in terms of time period and number of articles. However, they focused on the discipline of the journals publishing the articles, how often the key terms were used, and whether the articles refer to countries in the Global North versus the Global South. Among other things, their findings show a large increase in the number of published articles, that most articles are published in communication journals, and that most research focuses on countries in the Global North.

While these systematic and interdisciplinary literature reviews have provided many important insights, from a contemporary vantage point they suffer from three shortcomings. First, only one of them includes research on disinformation, which can be argued to be a very – if not the most – important type of false and misleading information. Second, none of them covers research as recent as that included in our review. Third, they are overall rather descriptive, stopping short of using the results to identify key research gaps and outstanding research problems.

Research questions

To help remedy the current lack of systematized and interdisciplinary knowledge about contemporary research on disinformation, misinformation, and fake news, the purpose of this study is – as mentioned in the introduction – twofold. The first is to conduct a systematic and interdisciplinary review of research on contemporary disinformation, misinformation, and fake news. To address this part, we will largely follow in the footsteps of the review by Ha et al. (Citation2021) and answer the following research questions pertaining to research on misinformation, disinformation, and fake news:

RQ1: How has the number of publications developed over time?

RQ2: What is the geospatial distribution of publications?

RQ3: Which disciplines contribute the most to research?

RQ4: What is the distribution of theoretical, empirical and review articles?

RQ5: What themes does research focus on, and are there any themes that cluster together?

RQ6: What methodologies are used?

RQ7: What media data is used?

RQ8: What is the distribution of empirical themes and methods across disciplines?

The second part of the purpose is to identify key research gaps and outstanding research problems, and thereby give an outlook in terms of avenues for future research. To be clear, by research gaps we refer to areas where there is insufficient research, and by research problems we refer to puzzles of theoretical importance that remain unsolved despite all previous theory and research. To address this, our final research question is:

RQ9: What key research gaps and research problems can be identified?

Methods and data

To answer our research questions, we performed quantitative content analysis of all relevant research articles published between 2010 and 2021. Relevant publications were defined as English-language articles published in peer-reviewed journals within this timeframe. To identify relevant articles, we used Google Scholar. The two main reasons for using Google Scholar were that we wanted to follow the same approach as Ha et al. (Citation2021), and that other academic databases tend to have a publication time lag of up to a year while Google Scholar includes the most recent articles (see also Ha et al., Citation2021).

The articles were compiled using the keywords ‘misinformation,’ ‘disinformation,’ and ‘fake news,’ which we deem the most prominent within this research area.Footnote1 At least one of the keywords had to be present in the title, keywords, or abstract. Initially, we found that running the search string over the full time period returned fewer hits than searching one year at a time. Hence, we decided to search for one year at a time. By doing so, we also avoided potential ceiling effects, i.e. Google Scholar capping the number of hits it returns for a single query.
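As a rough illustration of this year-by-year strategy, consider the following sketch. It is our own rendering, not the authors’ actual tooling: `run_scholar_query` is a hypothetical placeholder, since Google Scholar offers no official API and retrieval in practice involves a scraper or manual export.

```python
# Sketch of the year-by-year search strategy. run_scholar_query is a
# hypothetical placeholder for the actual retrieval step.
KEYWORDS = '"misinformation" OR "disinformation" OR "fake news"'

def run_scholar_query(query: str, year: int) -> list[dict]:
    """Placeholder: replace with a scraper or manual Google Scholar export."""
    raise NotImplementedError

def collect_candidates(start_year: int, end_year: int) -> list[dict]:
    candidates = []
    for year in range(start_year, end_year + 1):
        # One query per year avoids the cap on hits returned per search.
        candidates.extend(run_scholar_query(KEYWORDS, year))
    return candidates

# candidates = collect_candidates(2010, 2021)  # initial pool: 4088 articles
```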

Our initial sampling yielded 4088 articles. These were subsequently screened for relevance by reading the abstract and introduction and applying several selection criteria. First, we filtered in articles that addressed the communication and/or processing of public misinformation, disinformation, and fake news. Second, we employed a range of out-filters to exclude publications that were not relevant given the purpose of our review, such as articles that do not correspond to the peer-reviewed journal article format (e.g. book reviews, editorials, commentaries), articles that do not target contemporary conditions (e.g. propaganda during the Cold War), articles in which ‘fake news’ refers to satire, articles that investigate judicial regulation or minors, articles that do not deal with the public communication of misinformation, disinformation, or fake news, articles that only mention any of the key terms in passing, and articles from predatory journals.
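Schematically, the screening step amounts to one in-filter plus a set of out-filters. The sketch below is our own illustration; the boolean fields are hypothetical stand-ins for judgments made while reading each article.

```python
# Illustrative screening step: an article is kept only if the in-filter
# holds and no out-filter fires. Field names are hypothetical.
def is_relevant(article: dict) -> bool:
    if not article["addresses_public_mis_dis_fake_news"]:  # in-filter
        return False
    out_filters = (
        article["non_journal_format"],         # book review, editorial, ...
        article["non_contemporary"],           # e.g. Cold War propaganda
        article["fake_news_means_satire"],
        article["judicial_regulation_or_minors"],
        article["terms_only_in_passing"],
        article["predatory_journal"],
    )
    return not any(out_filters)

# screened = [a for a in candidates if is_relevant(a)]  # 4088 -> 1261
```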

In total, this sampling process yielded 1261 articles. These were treated as the unit of analysis and assessed utilizing the coding scheme described below. This coding scheme was devised and tested by both authors. More specifically, we read the title, abstract, introduction and methodology section of all articles, and if needed to code a particular variable, other parts of the articles. Whenever uncertainty or ambiguity occurred during the selection of articles or the coding, the coding instructions were discussed and refined, and consensual coding applied.

Formal variables: (a) title of the article, (b) keywords as listed in the article, (c) journal the article was published in, (d) year of publication, (e) discipline of lead author, (f) country of lead author, and (g) gender of lead author.

Article type: This variable focuses on whether the article is based on a conceptual/theoretical, empirical or a review approach. Review articles include meta-analyses as well as systematic and non-systematic reviews. To reduce ambiguity in relation to the conceptual/theoretical category, articles were coded as review articles whenever the respective author referred to them as such (using terms such as review). Otherwise, they were coded as theoretical/conceptual.

Empirical themes: A set of variables focusing on whether the respective article empirically investigates the following categories in a substantial manner. These variables were only coded for empirical articles, on a yes/no basis, and are not mutually exclusive.

• Prevalence/dissemination: whether the article investigates the pervasiveness or spreading of misinformation, disinformation, and/or fake news in any given context. This includes articles that investigate prevalence and/or dissemination on different topics or media outlets, but also among individuals or groups (both online and offline).

• Exposure: whether the article investigates encountering/being confronted with misinformation, disinformation, and/or fake news in a natural setting. This includes articles that examine both interpersonal communication and exposure through varying media or other third parties. It also includes perceived exposure. Importantly, this variable was not coded for ‘artificial’ or forced exposure in experimental settings.

• Impact: whether the article investigates the impact of misinformation, disinformation, and/or fake news. This includes impact in terms of actual effects on, for instance, political participation, but also cross-sectional survey findings in terms of correlations and associations, and broader impact at the group or societal level.

• Correction: whether the article investigates efforts to correct misinformation, disinformation, and/or fake news and/or their effects. This includes a variety of measures such as fact-checking, altering the design and format of a piece of information, and the provision of sources, among others.

• Strategic use: whether the article investigates the strategic use of misinformation, disinformation, and/or fake news. This includes articles that investigate state-sponsored strategic use as well as that of other political elites and/or opinion leaders. It can also refer to strategic use by individuals or within groups, both online and offline.

• Detection/characteristics: whether the article investigates formal, thematic, or theoretical characteristics of misinformation, disinformation, and/or fake news (e.g. linguistic features) as well as means of detecting them. This can include detection/characteristics regarding specific topics (e.g. Covid-19) or media, but also perceived characteristics and means of detection by professionals such as journalists.

• Solutions: whether the article addresses solutions to the problem of misinformation, disinformation, and/or fake news. Solutions can include a variety of countermeasures such as media literacy programs, fact-checking, or legislation. However, this variable was only coded if a solution is addressed empirically in a substantial way and applied under realistic circumstances.

• Processing: whether the article investigates the mechanisms of information processing involved when consuming/being exposed to misinformation, disinformation, and/or fake news. This spans a variety of psychological and identity-related mechanisms and biases such as confirmation bias and motivated reasoning.

Methods: A set of variables focused on what methodology or empirical data the respective article utilizes. The options included (a) surveys, (b) interviews and/or focus groups, (c) experiments, (d) quantitative content analysis, (e) qualitative content analysis, and (f) computational methods. These variables were only coded for empirical articles, on a yes/no basis, and are not mutually exclusive.

Media data: A set of variables that were only coded when articles utilized content analysis. These variables focus on what media data were used. The coding was based on platform/format. Hence, all data taken from social media was coded as social media, even though it might include data from mainstream media accounts on social media. If, on the other hand, data was taken from the official website of a mainstream media outlet, it was coded as mainstream media. To reduce ambiguity regarding the fake news media and alternative media variables, which are difficult to classify objectively, we coded based on the terminology used for the outlet(s) in the respective article. For example, if an article conceptualized a certain outlet as alternative media, we coded it as such. All these variables were coded on a yes/no basis. The categories are not mutually exclusive; multiple options could be selected if an article utilizes multiple types of data sources. The following types of media data were analyzed:

• Social media refers to all data gathered from social media outlets such as Twitter. This includes, for instance, comments and shares, but also data that in terms of content corresponds to mainstream media, alternative media, or fake news media but is published on social media.

• Mainstream media refers to traditional or legacy news media, including tabloid news media, both in their traditional and online formats (except social media).

• Alternative media refers to media characterized by ideological or political alignment (e.g. Breitbart) in traditional or online formats (except social media). This category thus includes alternative media when authors describe them as such.

• Fake news media refers to media outlets that produce and distribute ‘fake news’ in traditional or online formats (except social media).

• Internet refers to media data gathered from online sources whose nature remains undisclosed or cannot be discerned (e.g. datasets for fake news detection).
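Taken together, the coding scheme can be pictured as one record per article. The dataclass below is our own schematic rendering (field names are illustrative, not the authors’ actual codebook):

```python
from dataclasses import dataclass, field

@dataclass
class CodedArticle:
    # Formal variables
    title: str
    keywords: list[str]
    journal: str
    year: int
    discipline: str   # of lead author
    country: str      # of lead author
    gender: str       # of lead author
    # Article type: "empirical", "theoretical/conceptual", or "review"
    article_type: str
    # Empirical themes (yes/no, not mutually exclusive; empirical articles only),
    # e.g. {"prevalence_dissemination", "detection_characteristics"}
    themes: set[str] = field(default_factory=set)
    # Methods (yes/no, not mutually exclusive; empirical articles only),
    # e.g. {"survey", "experiment", "computational"}
    methods: set[str] = field(default_factory=set)
    # Media data (only coded for content analyses; platform/format based),
    # e.g. {"social_media", "mainstream_media", "internet"}
    media_data: set[str] = field(default_factory=set)
```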

We performed a series of descriptive analyses to investigate the data in terms of the research questions, including identifying whether certain variables cluster together. To that end we utilized cross tabulations.
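For cross tabulations of this kind, a minimal sketch with pandas (assuming the codes are stored as one row per article with boolean method columns; the toy data is ours) might look as follows:

```python
import pandas as pd

# Toy illustration of the cross-tabulation step: one row per coded
# article, with discipline and one boolean column per method.
df = pd.DataFrame({
    "discipline": ["Communication", "Computer Science", "Psychology"],
    "computational": [False, True, False],
    "experiment": [False, False, True],
})

# Count how often each method is coded within each discipline.
method_by_discipline = (
    df.melt(id_vars="discipline", var_name="method", value_name="used")
      .query("used")
      .pipe(lambda d: pd.crosstab(d["discipline"], d["method"]))
)
print(method_by_discipline)
```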

Results

Turning to the results, RQ1 asked how the number of publications on misinformation, disinformation, and fake news has developed over time. The results (Figure 1; Table A in the Online Appendix) show that there has been an exponential growth of publications over time, especially in recent years (see also Madrid-Morales & Wasserman, Citation2022). The publication of studies on misinformation, disinformation, and fake news appears to take off after political events such as Brexit and the U.S. Presidential election in 2016. After that, the number of publications doubles or even triples every year, reaching a record in 2021, with more than half of our dataset (647 out of 1261 articles) stemming from this year alone.

Figure 1. Number of publications across time.

Addressing RQ2, asking about the geospatial distribution of publications, the results show that our sample includes articles from 69 countries in total.Footnote2 While this may suggest diversity, it is clear that the U.S. records by far the most publications (436) (Table 1). A distant second is the U.K. (91), followed closely by China (81) and India (68). These results largely align with what previous research has found (Abu Arqoub et al., Citation2022; Ha et al., Citation2021; Madrid-Morales & Wasserman, Citation2022). Apart from this, several European countries also have larger research outputs, with between 30 and 53 publications each, as do Canada, Brazil, and Australia. Beyond the top ten countries, as described in Table 1, we have identified a multitude of countries with a few publications each. There are even several countries for which we have recorded just one publication. The overall picture is thus that there are great geospatial imbalances within research on misinformation, disinformation, and fake news, and that there are many geographic areas – not least in the Global South – where research is quite scarce, at least in terms of research published in journal articles (Madrid-Morales & Wasserman, Citation2022). These distributions are moreover quite stable across time.

Table 1. Geographical distribution of articles.

Turning to RQ3, asking about the distribution of publications across disciplines, the largest number of studies stems from Communication, with 380 (30.1%) articles (see Table 2). In fact, the second most active discipline – Computer Science – is about 150 publications behind. Together, about half of all studies stem from these two disciplines alone. However, Psychology, Political Science, Economy, and Health Science all boast a larger body of work as well, with around 100 publications each. The remaining identified disciplines stay below the 50-publication mark. Finally, 126 publications (9.9%) fall within the ‘Other’ category, which illustrates the great diversity of disciplines investigating this topic. Thus, beyond the more commonly identified disciplines, our sample includes publications from law, history, and social work, to provide some examples. In addition, a small number of publications (21, 1.7%) are not associated with an academic discipline but stem from professionals within different industries (e.g. economics).

Table 2. Number of publications across disciplines.

With two exceptions, the distribution of articles across disciplines is also stable over time (see Online Appendix, Figure A and Table B). Firstly, the number of publications in Computer Science accelerated at a greater rate between 2020 and 2021 in comparison to other disciplines, jumping from 46 to 146 – thus more than tripling. Secondly, the number of publications in Health Sciences increased in a similar manner in both 2020 and 2021. This is most likely due to the Covid-19 pandemic.

RQ4 asked about the distribution of theoretical, empirical, and review articles. The results show that the field is quite data driven. More specifically, empirical articles make up nearly three quarters of our data set (Table 3), with 932 (73.9%) articles being empirical. On the other hand, theoretical and conceptual articles are also well represented, accounting for 239 (19%) articles within our sample (e.g. Ball, Citation2021; Egelhofer & Lecheler, Citation2019; Freelon & Wells, Citation2020; McKay & Tenove, Citation2020; Tandoc et al., Citation2018; Wasserman, Citation2020). Review articles were identified more rarely, but still account for 90 (7.1%) of the articles in our sample (e.g. Abu Arqoub et al., Citation2022; Di Domenico et al., Citation2021; Ha et al., Citation2021; Kapantai et al., Citation2021; Lewandowsky, Citation2020; Tsfati et al., Citation2020). These distributions are largely stable over time (see Online Appendix, Figure B and Table C). The exception is review articles, which increase more rapidly after 2019. A likely reason is that a sizable body of work to review had developed by that time.

Table 3. Distribution of publications across article types.

With respect to empirical themes (RQ5), the results show that by far the most prominent ones were ‘detection and characteristics’ and ‘prevalence and dissemination’ (Figure 2; see also Table D in the Online Appendix), which were present in 429 and 356 articles respectively. Combined, this exceeds 50% of all thematic codes. Most articles that study ‘detection and characteristics’ primarily investigate automated detection, often through computational methods, as well as issue-specific characteristics (e.g. Almaliki, Citation2019; Braşoveanu & Andonie, Citation2021; Pham et al., Citation2019; Pham et al., Citation2020). Pham et al. (Citation2019), for instance, analyze the effectiveness of an algorithm to curtail the diffusion of cross-topic misinformation. The majority of articles studying ‘prevalence and dissemination,’ on the other hand, assess patterns of dissemination and/or prevalence in terms of specific topics or news outlets (e.g. Del Vicario et al., Citation2016; Guo, Citation2020; Humprecht et al., Citation2020; Nsoesie et al., Citation2020). Nsoesie et al. (Citation2020), for example, investigate patterns of dissemination of different pieces of Covid-19 misinformation across different countries, finding that the 5G conspiracy theory spread differently than other pieces of misinformation. Del Vicario et al. (Citation2016) take a more focused approach, demonstrating that misinformation on Facebook spreads within highly homogenous and polarized groups.

Figure 2. Evolution of empirical themes across time.

Many articles further investigate the processing of misinformation, disinformation, and fake news. This theme was present in 220 (14.9%) of the articles. Typically, these studies are experimental and geared toward explaining causal mechanisms behind information processing, sometimes in conjunction with the assessment of corrections and impact (e.g. Bode & Vraga, Citation2015; Hameleers et al., Citation2020; Swire et al., Citation2017). Similarly, there is a sizable body of work investigating ‘impact,’ ‘correction,’ and ‘exposure.’ More seldom analyzed are the strategic use of and solutions to misinformation, disinformation, and fake news.

As Figure 2 demonstrates, the development of empirical themes across time is fairly balanced up until 2017 (see also Table E in the Online Appendix). All themes increase in conjunction with the overall expansion of publications, though some much more rapidly than others. This is especially true for articles examining ‘prevalence and dissemination’ and ‘detection and characteristics,’ with the latter jumping from 96 articles in 2020 to 257 in 2021.

With respect to whether any themes cluster together, we found two such clusters. More specifically, there are 134 articles investigating both prevalence and/or dissemination, and detection and/or characteristics simultaneously (e.g. Gutiérrez-Coba et al., Citation2020; Kwanda & Lin, Citation2020; Moscadelli et al., Citation2020). That corresponds to 37.5% and 31.2% of the total number of publications in either category. For instance, Kwanda and Lin (Citation2020) investigate the life cycle of misinformation during an earthquake in Indonesia, and how journalistic responses differed depending on varying characteristics of the misinformation pieces.

In addition, ‘processing’ quite often clusters together with impact and correction in a ‘causal mechanism’ cluster. For instance, 38.3% of all processing articles also investigate impact, and 39.6% investigate corrections. Viewed from the other direction, this makes up over 50% of all articles dealing with impact, and more than 75% of articles examining corrections. A sizable number of studies analyze all three themes: 19.8% of articles investigating ‘processing’ also assess both impact and corrections. To provide some examples, a study may investigate the effects of misinformation on attitudes but also the predictors that enable this mechanism (e.g. Bastick, Citation2021; Hameleers et al., Citation2020; Thorson, Citation2016). Similarly, corrections are typically investigated both in terms of whether the correction is successful and in terms of which factors lead to a successful outcome and why (e.g. Lewandowsky et al., Citation2012; Rich & Zaragoza, Citation2021; Vraga et al., Citation2020; Walter & Salovich, Citation2021).
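Shares such as these follow from simple co-occurrence counts over the yes/no theme codes; a minimal sketch (with made-up toy data, not the actual dataset) would be:

```python
import pandas as pd

# Toy illustration of the co-occurrence shares, assuming one boolean
# column per theme and one row per empirical article.
codes = pd.DataFrame({
    "processing": [True, True, False, True],
    "impact":     [True, False, True, False],
    "correction": [True, True, False, False],
})

both = (codes["processing"] & codes["impact"]).sum()
share_of_processing = both / codes["processing"].sum()  # cf. the 38.3% figure
share_of_impact = both / codes["impact"].sum()          # cf. "over 50%"
print(f"{share_of_processing:.1%} of processing articles also code impact")
print(f"{share_of_impact:.1%} of impact articles also code processing")
```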

Turning to methods (RQ6), the results show that computational methods are most frequently employed (Table 4), used in 293 (28.2%) of the articles. These studies primarily use computational methods to develop algorithms for the detection of misinformation, disinformation, and/or fake news (e.g. Goldani et al., Citation2021; Sahoo & Gupta, Citation2021). However, experiments, surveys, and quantitative content analysis are also popular methods among scholars, used in between 16.1% and 19.5% of the articles. Such articles span a wide array of topics. For example, experimental studies largely address processing (e.g. Maertens et al., Citation2021; Vraga & Bode, Citation2017), while surveys typically investigate impact or perceived exposure (e.g. Yang & Tian, Citation2021). Qualitative methods are much less frequently utilized, accounting for only 15.7% of the empirical articles. Balod and Hameleers (Citation2021), for instance, convey how Filipino journalists perceive and negotiate their roles in the post-truth era – a task essential to the mitigation of the issue.

Table 4. Distribution of methods.

In terms of development across time, it is noteworthy that the use of computational methods increases sharply in 2021 (see Online Appendix, Figure C and Table F). While computational methods were utilized in 58 articles in 2020, they were employed in 180 articles in 2021.

With regard to methods, we could not identify clusters, with one exception: quantitative and qualitative content analysis are occasionally used in tandem (e.g. Nounkeu, Citation2020). More specifically, about a third of the articles using qualitative content analysis (35.4%) also use its quantitative counterpart. This corresponds to 20.2% of the total number of studies using quantitative content analysis.

In terms of what media data is being used (RQ7), the results show that about a quarter (251, 26.9%) of the empirical articles (n = 932) made use of some kind of media data. As is shown in Table 5, the vast majority of these utilize data from social media. More specifically, 141 (56.2%) of the articles that used media data used social media data. Only a small fraction falls within the categories of mainstream or fake news media data, while only two articles in our sample fall within the alternative media category. In the first of these two, Vargo et al. (Citation2018) investigate, among other things, the extent to which fake news shifts journalistic attention to varying issues, in particular considering what they label partisan media. They utilize a computational methods approach toward network agenda setting, finding that partisan media outlets are especially susceptible to the ‘fake news agenda’ in comparison to other types of media. In the second, Pyrhönen and Bauvois (Citation2020) analyze how media users as (mis)information producers contributed to shaping misinformation content and channeling that content between mainstream media and what they refer to as ‘countermedia.’ Focusing on the ‘Pizzagate,’ ‘voter fraud,’ and ‘Macronleaks’ conspiracies, they argue that media users facilitated the transfer of such conspiracies from countermedia across the threshold of mainstream media gatekeeping (p. 727). We also identified 71 articles using media data that fall within the internet category. This accounts for 28.4% of the articles using media data and demonstrates that quite a few articles use media data that is not easily classifiable, such as campaign websites or online archives. However, it should be noted that many publications are quite vague in describing their media data and its origins, making it virtually impossible to know what their ‘internet data’ actually consists of.

Table 5. Use of different types of media data.

Considering the use of media data over time, it stands out that media data was essentially not used at all before 2015 (see Online Appendix, Figure D and Table G), and that the use of social media data increased sharply since 2019, conveying an ever-growing interest in the dynamics of social media. In fact, the numbers have more than doubled every year since 2019. Beyond this, we find that if any media data is used, it is typically analyzed on its own (e.g. just social media data).

Finally, we also investigated the distribution of empirical themes and the use of methods across different disciplines (RQ8). Most notable in terms of themes (Table 6) is the prominence of studies investigating detection and/or characteristics within Computer Science (168 out of 351 codes), although Communication has also produced many studies in this area of research. In fact, Communication leads in terms of output when it comes to prevalence and dissemination (122 out of 276 codes), making these two categories the research focus of the discipline. Psychology also stands out with respect to themes, mainly focusing on information processing (71 out of 157 codes for Psychology in total). All other disciplines demonstrate a more even distribution across different areas of research.

Table 6. Distributions of themes across disciplines.

As for methods (Table 7), Computer Science has used computational methods by far the most (180 out of 232 codes). This of course chimes well with the topical preference for detection and characteristics, and especially reflects the large body of work on detection via computational means. Psychology, on the other hand, has a predilection for experiments (69 out of 199 codes), which makes sense considering its focus on information processing, for which causal inference is of course needed. Communication has the largest share of content analyses (both qualitative and quantitative, with 111 out of 204 codes), and subsequently a tendency toward content-driven analysis of detection and/or characteristics, and prevalence and/or dissemination. It is noteworthy, though, that the health sciences are also heavily focused on quantitative content analysis. While this is not reflected in absolute numbers due to the small share of health science studies investigated in this article, content analyses account for over 50% of the methods used in these studies. A second focus within the health sciences is experiments.

Table 7. Distribution of methods across disciplines.

In terms of qualitative research, most studies come from Communication. This is likely due to the predilection for investigating media content on the one hand, and interview/focus group studies of newsrooms, on the other. Out of all disciplines investigated, political science and economy appear most balanced, using an array of methods without any clear focus.

Summing up, this systematic review offers six empirical take-aways. Most apparent is the sheer number of articles that have been published in a very short amount of time. This shows just how quickly the field has grown in recent years, which speaks to the importance of gaining a better overview of what is out there. Second, there appears to be great disciplinary diversity. Of course, a large swath of articles does stem from expected disciplines such as Communication. However, many articles ended up within the ‘Other’ category. Hence, the field is not only expanding at a very fast rate: this expansion is also driven by a variety of areas of scientific inquiry. Third, it is noteworthy that the empirical focus by and large appears to be detection and characteristics as well as prevalence and dissemination. Here we have the largest body of work thus far, but also the one where we lack overview the most, especially considering that this is quite a recent development. A fourth key take-away, albeit perhaps an expected one, is the strong regional concentration of research output. Publications first and foremost come from the U.S. and the Global North (see also Madrid-Morales & Wasserman, Citation2022), and so does the data being analyzed in these studies. As such, we thus far have only very limited insight into geographical differences. The fifth empirical take-away relates to methods. More specifically, the increasing usage of computational methods is noteworthy, as is the finding that this is the most widely utilized set of methods. Additionally, there is an imbalance between quantitative and qualitative methods, with quantitative methods being much more prevalent. Finally, our findings convey that the great majority of articles that use media data use social media data. On the one hand, this is indicative of the general research interest in all things digital when it comes to misleading information. On the other hand, other media types have thus far received very limited scholarly attention, even though they may be important in terms of the prevalence, dissemination, exposure, and effects of misinformation, disinformation, and fake news (Benkler et al., Citation2018; Tsfati et al., Citation2020).

Given these findings, a key question is what larger lessons can be learned, and how research on misinformation, disinformation, and fake news should move forward. To address this, we will next turn to our final research question: What key research gaps and research problems can be identified (RQ9)?

Research gaps, outstanding research problems and avenues for future research

While major progress has been made with respect to our knowledge about various aspects of misinformation, disinformation, and fake news, this systematic review, in conjunction with a closer reading of the articles published between 2010 and 2021, points to several key research gaps and research problems that remain (RQ9). These are summarized in Table 8 and then further elaborated upon.Footnote3

Table 8. Overview of key research gaps and research problems.

Beginning with the most frequently coded themes in our sample – prevalence and dissemination, and detection and characteristics – there are clear foci both between and within categories. Firstly, while many publications address detection, they primarily do so in artificial settings. Since the majority of these articles assess the detection of misinformation, disinformation, and fake news through computational methods, using publicly available datasets (such as the Buzzfeed Webis Fake News Corpus 2016), we know very little about the performance of these methods under realistic circumstances. The development of algorithms able to detect misleading information has come a long way (e.g. Bondielli & Marcelloni, Citation2019; Zhang & Ghorbani, Citation2020). And any algorithm that can do so accurately in a specific dataset is highly valuable due to its potential for prevention and mitigation. However, at this point, the ones that exist – and there are many – are not being tested in terms of their capacity to detect information beyond benchmarking tests. Moving forward, an important next step is thus to trial various computational methods concerning their performance outside of preexisting datasets, and also across languages, given that a majority of studies focus on English. On a different note, examining detection outside of computational social science in more depth is highly important, as detection by computational means is not sufficient to address the issue. Individuals, too, should be able to identify faulty claims when exposed to them. While there certainly is work examining precisely this issue (e.g. Aoun Barakat et al., Citation2021), the investigation of the role of interpersonal communication and personal communication networks is thus far underrepresented. People tend to put more trust in their social connections (e.g. Metzger et al., Citation2010); hence, analyzing the social dimension of vulnerability to misinformation, disinformation, and fake news is an important task for the future.
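One way to operationalize this ‘beyond the benchmark’ test: train a detector on one corpus and evaluate it on a different one. The sketch below uses scikit-learn; the two-corpus setup and the toy data are our illustration, not a method taken from the reviewed studies.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# texts_a/labels_a: the benchmark corpus used for training; texts_b/labels_b:
# a *different* corpus standing in for "realistic circumstances". Loading
# real corpora is left to the reader; the strings here are placeholders.
texts_a, labels_a = ["claim one ...", "claim two ..."], [0, 1]
texts_b, labels_b = ["unseen claim ..."], [1]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts_a, labels_a)

# In-corpus scores are what benchmarks typically report; the cross-corpus
# score below is the kind of test the reviewed literature rarely performs.
print("cross-corpus F1:", f1_score(labels_b, detector.predict(texts_b)))
```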

Considering dissemination, it is further noteworthy that individuals also take cues from opinion leaders and political elites more broadly (Zaller, Citation1992). Yet, considering their role as potential key disseminators of disinformation and misinformation (e.g. Ross & Rivers, Citation2018; Van Duyn & Collier, Citation2019), more research in this area is clearly warranted (see also the section on strategic use). Relatedly, the question of the extent to which the dissemination of misinformation, disinformation, and fake news is concentrated among certain actors within the information environment, viewed from a broader perspective (including not just politicians but media users, interest groups, or corporate actors), presents a key research problem. In terms of dissemination, we finally identify a need to further develop our understanding of how misinformation, disinformation, and fake news flow throughout the information environment, adding to the work of, for instance, Allcott and Gentzkow (Citation2017) and Del Vicario et al. (Citation2016).

In terms of prevalence, there is a sizable number of studies investigating this empirical theme, albeit in a rather narrow sense. Typically, they analyze a certain topic or a particular medium (e.g. Bryant et al., Citation2014; Kata, Citation2010; Pickles et al., Citation2021). Of course, insights into the prevalence of misinformation, disinformation, and/or fake news about the Covid vaccine in a particular Facebook group, for instance, are highly valuable, and we do need more of these kinds of studies in the future. However, more generalizable insights into the prevalence of such information are thus far limited and would only add to the value of more narrowly focused studies. Naturally, broader mappings are extremely challenging, and access to social media data is a major hurdle, but we still deem this an important avenue of research for the future. In addition to the cross-media prevalence of misinformation, disinformation, and fake news, we deem it important to devote more attention to issue characteristics such as salience or contestation, and to the question of whether some topics are easier targets than others. Finally, it would be extremely valuable to gauge the prevalence of misinformation, disinformation, and fake news across time, in order to, for instance, determine the importance of external events for increases and decreases.

Similar to prevalence, characteristics are usually investigated in a narrow sense as well – in terms of certain topics or linguistic attributes in various languages (e.g. Charquero-Ballester et al., Citation2021; Haupt et al., Citation2021). We believe it to be important to further build up this line of research and to generate an understanding of potentially universal linguistic or stylistic characteristics of misinformation, disinformation, and/or fake news in general, but especially beyond the English context. Additionally, deepfakes and other kinds of visual misinformation, disinformation, and fake news are becoming more prominent both in public and scientific debate, hence we also deem it important to develop a fuller understanding of underlying attributes.

In terms of exposure, it became evident during coding that the majority of these studies do not measure actual exposure. They are typically survey-based and rely on self-reported measures targeting perceptions of exposure to misinformation, disinformation, or fake news (e.g. Hjorth & Adler-Nissen, Citation2019; Liu & Huang, Citation2020; Wasserman & Madrid-Morales, Citation2019). An exception is Guess, Nyhan, and Reifler’s study (Citation2020) on exposure to untrustworthy websites during the 2016 U.S. elections, which, in addition to survey data, also uses web-traffic data. Self-reported measures generally have their pitfalls, since participants tend to over- or underestimate activities such as media use (Prior, Citation2009). It is thus possible that individuals overestimate their exposure to fake news, for example. In fact, Guess et al. (Citation2020) found that exposure to false information during the 2016 U.S. elections was fairly low (see also Allcott & Gentzkow, Citation2017). This, in turn, could support the notion of potential overestimation – not least because public debate generally suggests high exposure. The point is that we can say very little about the actual frequency of exposure. On a similar note, we also know little about the quality of exposure in terms of how, when, and where people are exposed, and under which circumstances people might be more or less vulnerable. For instance, we do not yet know whether exposure depends on certain characteristics such as political interest, age, or ideology (Weeks & Gil de Zuniga, Citation2021). What is more, we also lack information regarding the social dimension of misinformation, disinformation, and fake news; that is, the role of people’s social networks and connections when it comes to exposure (both online and offline) (Weeks & Gil de Zuniga, Citation2021). Delving deeper into patterns of exposure under realistic circumstances thus poses another research problem for future studies.

Moving from exposure to impact, there is a considerable number of publications investigating associations between exposure to misinformation, disinformation, and fake news and outcomes such as misperceptions, political attitudes, or political participation (e.g. Greene & Murphy, Citation2020; Lee et al., Citation2023; Thorson, Citation2016; Weeks, Citation2015). There are also some studies that simultaneously assess moderators or mediators of these relations, such as affect or partisanship (e.g. Weeks, Citation2015). However, the fact that most survey-based studies rely on cross-sectional data also means that we cannot reliably speak to the effects of misinformation, disinformation, and fake news on societal and political outcomes (Lazer et al., Citation2018). While there is additionally a number of articles that investigate causal mechanisms through experiments (e.g. Berinsky, Citation2017; Swire et al., Citation2017), these studies usually assess effects in the short term (Flynn et al., Citation2017; Lewandowsky et al., Citation2012). Two things are required to consolidate and advance these findings: first, longitudinal studies that corroborate or challenge cross-sectional survey findings; and second, insights that speak to the nature and development of effects over time (see also Weeks & Gil de Zuniga, Citation2021). Both can be achieved through increased usage of longitudinal data such as panel data, which at this point is still rarely utilized compared to cross-sectional data. It is therefore essential that such data be used more frequently to establish solid patterns of effects in the future. Looking beyond the micro level, there are furthermore very few studies that assess the (potential) impact of misinformation, disinformation, and fake news at the meso level (e.g. issue publics, interest groups, social movements, organizations) or the macro level (but see Humprecht et al., Citation2020; Jamieson, Citation2018). In this article we have argued that misinformation, disinformation, and fake news could have potentially dire consequences for democracy at large. Thus, investigating whether this is actually the case, and if so to what extent, is highly relevant from a societal perspective. In addition to strengthening individual-level findings, moving to a broader level of analysis is another challenge for future research.

A final observation regarding empirical themes concerns strategic use and solutions. In our sample, these are the least frequently investigated themes, even though they might be the most influential when it comes to countering the potential societal impact of misinformation, disinformation, and fake news. Both disinformation and fake news imply the intention to deceive, and thus strategic use. However, beyond studies on state-sponsored disinformation (typically in the Russian context) (e.g. Freelon & Lokot, Citation2020), scholars have yet to uncover what this strategic use looks like; when, how, and where it occurs; and what its effects may be. Strategic use is often assumed or alleged both in public debate and research – not least to assert the importance of doing this kind of research in the first place. Thus, it is all the more vital to understand its dynamics in order not to over- or understate the threat it might pose. This is particularly true for political elites (but also other opinion leaders), whose role as potential sources and amplifiers of misinformation, disinformation, and fake news warrants greater scholarly attention (Tucker et al., Citation2018; Weeks & Gil de Zuniga, Citation2021). Today, politicians can simply bypass mainstream media and its journalistic scrutiny to communicate directly with their electorate through platforms such as Twitter. This, in turn, enables them to spread a litany of false and misleading information for presumably strategic ends. Former U.S. President Donald Trump is a case in point, but he is certainly not the only political leader spreading false and misleading information. Yet, we lack systematic empirical evidence speaking to the prevalence and nature of such potential strategic use. Additionally, there might be other actors besides politicians or governments that utilize misinformation, disinformation, and fake news. Revealing who they are and how varying actors use such information strategically thus poses another important research problem. Finally, the motivations behind the strategic use of misinformation, disinformation, and fake news are not always clear. Besides the ‘who’ and the ‘how,’ there is also a need to understand the ‘why’ – the purposes behind strategic use (e.g. persuasion, sowing doubt, or political or economic gain).

Turning to solutions, it is apparent that research efforts to investigate how the problem posed by misinformation, disinformation, and fake news can realistically be mitigated have been comparatively limited (see also Weeks & Gil de Zuniga, Citation2021). To be sure, there is a large body of work on corrections and a smaller one on media literacy as countermeasures, for instance (Bode & Vraga, Citation2015; De Paor & Heravi, Citation2020; Vraga & Bode, Citation2018). However, most of these publications do not investigate them as concrete solutions in real-life circumstances. As is a recurring theme in research on misinformation, disinformation, and fake news, future work should – however challenging – test findings in more realistic settings than the artificial ones created by experiments and surveys, in order to consolidate or challenge previous results. On the other hand, media corporations such as Facebook are already implementing certain fact-checking and content moderation strategies, which could be effective countermeasures. However, there are, to the best of our knowledge, very few studies that assess their effectiveness. In addition to testing potential solutions under realistic circumstances, scholars should also gauge the usefulness of the strategies already in use. The same applies to media or information literacy programs that have been employed in varying contexts. We thus argue that studies assessing the effects of such programs in terms of decreasing vulnerability to misinformation, disinformation, and fake news across socio-economic and geographical circumstances are extremely important moving forward.

A hurdle in terms of solutions, however, is that knowledge resistance (and its subsequent consequences for democracy) is not solely dependent on the information we consume. As we argued in this paper, what we believe is also the result of psychological biases such as motivated reasoning and confirmation bias (Kunda, Citation1990; Strömbäck, Wikforss, et al., Citation2022; Taber & Lodge, Citation2006). Garrett et al. (Citation2016) show that individuals may be aware of the overall evidence or consensus surrounding an issue and still choose to believe otherwise. In a similar vein, directionally motivated reasoning has also been identified as a key mechanism behind resistance to correctional information (e.g. Flynn et al., Citation2017; Nyhan, Citation2020). Establishing countermeasures that have the capacity to override such biases will thus pose a challenge moving forward (for an overview of experimental research on possible solutions, see Ingre et al., Citation2022).

Frameworks to that end are, nonetheless, beginning to be formulated (see for instance Bode & Vraga, Citation2021). Now it is a matter of implementing these strategies. The few studies to date that do assess solutions in a substantial wayFootnote4 are typically survey-based articles analyzing the perceived success of certain measures, such as governmental policy changes (e.g. Tully et al., Citation2021). While these insights are valuable, they still do not speak to the objective impact of such measures. In addition to gauging the influence of strategies such as corrections, media literacy programs, and legislative efforts, it is also vital to theorize and investigate other possible solutions. Moving from the consumption to the production side, we believe it would be very beneficial to look into the possibility of leveraging computational social science on a larger scale for early detection and for mitigating dissemination.
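To make this more concrete, the following is a minimal, hypothetical sketch – not a method drawn from the articles reviewed here – of what computational early detection could look like: a simple supervised text classifier, trained on previously fact-checked claims, that flags new posts for human review. All texts, labels, and the decision threshold below are invented for illustration.

```python
# Illustrative sketch only: a toy classifier for flagging posts that resemble
# previously fact-checked false claims. Training data and threshold are
# hypothetical; a real system would need large, validated corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical fact-checked texts (1 = rated false, 0 = rated accurate).
texts = [
    "miracle cure eliminates the virus overnight, doctors stunned",
    "health agency publishes updated vaccination schedule",
    "leaked memo proves voting machines switched millions of votes",
    "electoral commission releases official turnout figures",
]
labels = [1, 0, 1, 0]

# TF-IDF features feed a logistic regression that outputs, for a new post,
# a probability of resembling known false claims.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

new_posts = ["stunning overnight cure suppressed by doctors"]
for post, p in zip(new_posts, detector.predict_proba(new_posts)[:, 1]):
    if p > 0.5:  # threshold would require careful validation in practice
        print(f"flag for human review (p={p:.2f}): {post}")
```

The point of such a pipeline would not be automated censorship but triage: routing probable false claims to human fact-checkers early in their diffusion, before large-scale spread.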

In addition to empirical themes, the geographical over-representation of the U.S., and to a lesser extent Europe, needs to be addressed. The great majority of research on misinformation, disinformation, and fake news both stems from and investigates Western countries, broadly speaking (see also Madrid-Morales & Wasserman, Citation2022). Exceptions are certainly India and China, which are especially productive in terms of investigating dissemination and detection through computational methods – although the datasets they use to train their algorithms are typically in English. Other regions around the world are not as well represented as of yet. This is particularly true for African countries and other Asian countries, but Latin American countries are also generally under-represented. The geographically and culturally narrow perspective we currently hold is problematic, since the challenges misinformation, disinformation, and fake news pose may look different in varying parts of the world, and thus require distinct solutions and countermeasures. For instance, in democratic contexts we assume that mainstream media submit to certain journalistic standards, and thus transmit false information only by mistake or in order to correct it.Footnote5 This assumption does not hold in non-democratic circumstances, where mainstream media are typically exploited by political elites and willfully disseminate such information. Hence the ways in which individuals are exposed to misinformation, disinformation, and fake news, how they engage with it, how prevalent it is, how it is disseminated, or the impact it may have can vary greatly according to socio-geographic circumstances. Investigating potential geographical differences across the research areas identified in this paper, and the system-level factors that might explain them, therefore poses an important research problem. This particularly pertains to the influence of cultural norms on the various areas of interest identified in this study. Encouraging and enabling continuous research from a broader range of countries, as well as research that analyzes data from these countries more broadly, continues to be a major opportunity and need for future research (see also Abu Arqoub et al., Citation2022).

Beyond empirical themes and geographical distribution, we also want to highlight two research gaps related to methodology and media data. The first pertains to the discrepancy between quantitative and qualitative methods. The great majority of articles in our sample use quantitative methods. And although there is a sizable number of studies utilizing qualitative content analysis, it still does not compare to its quantitative counterpart. Such studies are important, however, because they may provide a more nuanced account of content-driven aspects of misinformation, disinformation, and fake news, such as characteristics or dissemination. If insights from qualitative content analysis are somewhat sparse in comparison, studies employing interview or focus group data are almost absent altogether. Hence, researchers are rarely actually talking to, and engaging with, individuals. These first-hand perspectives may, however, strengthen or challenge findings from quantitative research, or serve to explain discrepancies and inconsistencies within those findings, in addition to providing a unique and perhaps more critical outlook in their own right. For instance, we have yet to develop a more comprehensive understanding of people’s motivations to engage with misinformation, disinformation, and fake news, as well as the strategies they use to detect it. Additionally, while misinformation, disinformation, and fake news pose a challenge from both a scientific and a societal point of view, there is a need to understand how individuals perceive their impact in order to formulate concrete solutions. The point is that without the more detailed insights qualitative research offers, we are bound to miss essential aspects of misinformation, disinformation, and fake news that are required for a holistic perspective on the issue.

The final research gap concerns the use of media data. Among studies that utilize media data, frequent use of social media data is to be expected, given that social media provide advantageous technological affordances (reach, speed of transmission) coupled with a lack of the constraints posed by journalistic practices. Since most studies using social media data investigate a single platform, a larger cross-platform perspective would be an important next step for future research in terms of determining platform differences when it comes to empirical themes such as exposure or dissemination.

What is rather surprising is how little other media types – mainstream and alternative media – are investigated. Granted, due to our platform-centered coding scheme, we may underestimate their share. However, during coding it did become evident that mainstream and alternative media are rarely investigated in their own right.

The idea of mainstream media as a potential driver of misinformation, disinformation, and fake news may at first sight seem counterintuitive, since such outlets – at least in democracies – are supposed to operate according to a set of journalistic practices and standards, including neutrality and objectivity (Boykoff & Boykoff, Citation2007; Brüggemann & Engesser, Citation2017). However, there is growing concern that they may spread misinformation to very broad audiences, even if only for the purpose of debunking false and misleading information (Tsfati et al., Citation2020). By largely excluding mainstream media from the conversation, we may thus be missing an important route of exposure, which leads to the question of whether mainstream media should be viewed as a mitigating or a contributing factor. Second, data from alternative media have essentially not been used at all in our sample, except for the cases discussed previously. Extant research on misinformation, disinformation, and fake news, however, suggests that alternative media are the likely primary transmitters beyond social media (e.g. Benkler et al., Citation2018; Wagnsson, Citation2022). We concur that this is a logical assumption, since many alternative media are ideologically aligned and thus pursue political rather than journalistic goals (e.g. Benkler et al., Citation2018; Holt et al., Citation2019; Nygaard, Citation2019; Strömbäck, Boomgaarden, et al., Citation2022). Hence, the information provided is selected and presented in a way that is conducive to those goals – regardless of facticity. Like social media, alternative media are not governed by journalistic values. Unlike social media, alternative media have a specific purpose and are likely more goal-oriented. Consequently, alternative media are more likely to provide misleading information, which makes them especially relevant in this context. And there is in fact evidence that alternative media use leads to misperceptions (e.g. Garrett et al., Citation2019). What our findings convey, however, is that there is a disconnect between theorizing about alternative media as a driver of misinformation, disinformation, and fake news, and empirically investigating it as such. In addition to the lack of consideration for mainstream media, this is an essential challenge to address in future research.

Final discussion and limitations

Summing up, this systematic literature review shows that research on misinformation, disinformation, and fake news is a highly interdisciplinary field that has developed extremely rapidly over recent years, and it offers several empirical and analytical takeaways. Based on the empirical review and a closer reading of the reviewed articles, we have identified several key research gaps and research problems that need further scholarly attention, both to increase our theoretical understanding of misinformation, disinformation, and fake news and to help address them as societal problems.

These contributions notwithstanding, some limitations should be acknowledged. One limitation is related to how our coding scheme was devised, and in particular to the fact that the coding for media data focused on the platform rather than the actual content. This might overestimate the role of social media and underestimate the role of other media types in this area of research. Similarly, the division into different types of media is necessarily rather broad; hence we cannot account for differences within each type of media. Second, while we included more search terms than most earlier reviews, and we are confident that the terms misinformation, disinformation, and fake news capture the vast majority of relevant research, articles using related terms such as ‘rumors’ or ‘conspiracy theories’ were not included unless they also used at least one of the search terms misinformation, disinformation, and/or fake news. Relatedly, research from certain disciplines may prefer terms not included in this study, which may have skewed our results. What is more, it is possible that there are articles that substantially investigate misinformation, disinformation, and fake news in a more implicit way, without using any of the search terms. There is hence the possibility that we have missed research relevant to the study of this area. Finally, this review only includes articles in peer-reviewed journals published in English, and we cannot rule out that we might have missed relevant research published in other languages.

These limitations notwithstanding, this is the largest and most comprehensive systematic review on research on misinformation, disinformation, and fake news across topics and disciplines. As such, it provides a unique insight into the development of the field over an extended period of time. This has enabled us to identify key research gaps and research problems, and to formulate comprehensive and exciting avenues for future research projects. Considering the threat misinformation, disinformation, and fake news pose, it is vitally important that we tackle those gaps and continue advancing the field. The stakes are undoubtedly high.

Supplemental material

Supplemental Appendix


Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 Initially, we also employed the search term ‘propaganda,’ but this yielded too many false positives.

2 Note again that by this we mean the country of the university with which the first author is affiliated, not their country of origin or the country or countries in which the data were collected.

3 By identifying these research gaps and outstanding research questions, we do not mean to suggest that scholars have not thought about them or tried to address them. In many cases, the challenges and external barriers that exist, for example in terms of access to data and costs, may be a key reason why certain research gaps exist. Furthermore, this is a non-exhaustive list.

4 That is, within realistic circumstances and explicitly as a solution.

5 Which, as we discussed, may have its pitfalls too.

References

  • Aalberg, T., & Curran, J. (Eds.). (2012). How media inform democracy. A comparative approach. Routledge.
  • Abu Arqoub, O., Abdulateef Elega, A., Efe Özad, B., Dwikat, H., & Adedamola Oloyede, F. (2022). Mapping the scholarship of fake news research: A systematic review. Journalism Practice, 16(1), 56–86. https://doi.org/10.1080/17512786.2020.1805791
  • Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211
  • Almaliki, M. (2019). Misinformation-aware social media: A software engineering perspective. IEEE Access, 7, 182451–182458. https://doi.org/10.1109/ACCESS.2019.2960270
  • Aoun Barakat, K., Dabbous, A., & Tarhini, A. (2021). An empirical approach to understanding users’ fake news identification on social media. Online Information Review, 45(6), 1080–1096. https://doi.org/10.1108/OIR-08-2020-0333
  • Armborst, A. (2017). How fear of crime affects punitive attitudes. European Journal on Criminal Policy and Research, 23(3), 461–481. https://doi.org/10.1007/s10610-017-9342-5
  • Ball, B. (2021). Defeating fake news: On journalism, knowledge, and democracy. Moral Philosophy and Politics, 8(1), 5–26. https://doi.org/10.1515/mopp-2019-0033
  • Balod, H. S. S., & Hameleers, M. (2021). Fighting for truth? The role perceptions of Filipino journalists in an era of mis- and disinformation. Journalism, 22(9), 2368–2385. https://doi.org/10.1177/1464884919865109
  • Bartels, L. (1996). Uninformed votes: Information effects in presidential elections. American Journal of Political Science, 40(1), 194–230. https://doi.org/10.2307/2111700
  • Bastick, Z. (2021). Would you notice if fake news changed your behavior? An experiment on the unconscious effects of disinformation. Computers in Human Behavior, 116(2021), Article 106633. https://doi.org/10.1016/j.chb.2020.106633
  • Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda. Manipulation, disinformation, and radicalization in American politics. Oxford University Press.
  • Berinsky, A. J. (2017). Rumors and health care reform: Experiments in political misinformation. British Journal of Political Science, 47(2), 241–262. https://doi.org/10.1017/S0007123415000186
  • Bode, L., & Vraga, E. K. (2015). In related news, that was wrong: The correction of misinformation through related stories functionality in social media: In related news. Journal of Communication, 65(4), 619–638. https://doi.org/10.1111/jcom.12166
  • Bode, L., & Vraga, E. K. (2021). The Swiss cheese model for mitigating online misinformation. Bulletin of the Atomic Scientists, 77(3), 129–133. https://doi.org/10.1080/00963402.2021.1912170
  • Bondielli, A., & Marcelloni, F. (2019). A survey on fake news and rumour detection techniques. Information Sciences, 497(2019), 38–55. https://doi.org/10.1016/j.ins.2019.05.035
  • Boykoff, T., & Boykoff, J. M. (2007). Climate change and journalistic norms: A case-study of US mass-media coverage. Geoforum, 38(6), 1190–1204. https://doi.org/10.1016/j.geoforum.2007.01.008
  • Braşoveanu, A. M. P., & Andonie, R. (2021). Integrating machine learning techniques in semantic fake news detection. Neural Processing Letters, 53(5), 3055–3072. https://doi.org/10.1007/s11063-020-10365-x
  • Brüggemann, M., & Engesser, S. (2017). Beyond false balance: How interpretive journalism shapes media coverage of climate change. Global Environmental Change, 42(2017), 58–67. https://doi.org/10.1016/j.gloenvcha.2016.11.004
  • Bryant, A. G., Narasimhan, S., Bryant-Comstock, K., & Levi, E. E. (2014). Crisis pregnancy center websites: Information, misinformation and disinformation. Contraception, 90(6), 601–605. https://doi.org/10.1016/j.contraception.2014.07.003
  • Carlson, M., Robinson, S., & Lewis, S. C. (2021). News after Trump. Journalism’s crisis of relevance in a changed media culture. Oxford University Press.
  • Charquero-Ballester, M., Walter, J. G., Nissen, I. A., & Bechmann, A. (2021). Different types of COVID-19 misinformation have different emotional valence on Twitter. Big Data & Society, 8(2), Article 20539517211041279. https://doi.org/10.1177/20539517211041279
  • Clawson, R. A., & Oxley, Z. M. (2021). Public opinion. Democratic ideals, democratic practice (4th ed.). CQ Press.
  • Damstra, A., Boomgaarden, H. G., Broda, E., Lindgren, E., Strömbäck, J., Tsfati, Y., & Vliegenthart, R. (2021). What does fake look like? A review of the literature on intentional deception in the news and on social media. Journalism Studies, 22(14), 1947–1963. https://doi.org/10.1080/1461670X.2021.1979423
  • Delli Carpini, M. X., & Keeter, S. (1996). What Americans know about politics and why it matters. Yale University Press.
  • Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554–559. https://doi.org/10.1073/pnas.1517441113
  • De Paor, S., & Heravi, B. (2020). Information literacy and fake news: How the field of librarianship can help combat the epidemic of fake news. The Journal of Academic Librarianship, 46(5), 1–8. https://doi.org/10.1016/j.acalib.2020.102218
  • Di Domenico, G. D., Sit, J., Ishizaka, A., & Nunan, D. (2021). Fake news, social media and marketing: A systematic review. Journal of Business Research, 124(2021), 329–341. https://doi.org/10.1016/j.jbusres.2020.11.037
  • Egelhofer, J. L., & Lecheler, S. (2019). Fake news as a two-dimensional phenomenon: A framework and research agenda. Annals of the International Communication Association, 43(2), 97–116. https://doi.org/10.1080/23808985.2019.1602782
  • Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology, 38(S1), 127–150. https://doi.org/10.1111/pops.12394
  • Freelon, D., & Lokot, T. (2020). Russian disinformation campaigns on Twitter target political communities across the spectrum. Collaboration between opposed political groups might be the most effective way to counter it. Harvard Kennedy School Misinformation Review, 1(1), 1–9. https://doi.org/10.37016/mr-2020-00
  • Freelon, D., & Wells, C. (2020). Disinformation as political communication. Political Communication, 37(2), 145–156. https://doi.org/10.1080/10584609.2020.1723755
  • Gaines, B. J., Kuklinski, J. H., Quirk, P. J., Peyton, B., & Verkuilen, J. (2007). Same facts, different interpretations: Partisan motivation and opinion on Iraq. Journal of Politics, 69(4), 957–974. https://doi.org/10.1111/j.1468-2508.2007.00601.x
  • Garrett, R. K., Long, J. A., & Jeong, M. S. (2019). From partisan media to misperception: Affective polarization as mediator. Journal of Communication, 69(5), 490–512. https://doi.org/10.1093/joc/jqz028
  • Garrett, R. K., Weeks, B. E., & Neo, R. L. (2016). Driving a wedge between evidence and beliefs: How online ideological news exposure promotes political misperceptions. Journal of Computer-Mediated Communication, 21(5), 331–348. https://doi.org/10.1111/jcc4.12164
  • Glüer, K., & Wikforss, Å. (2022). What is knowledge resistance? In J. Strömbäck, Å. Wikforss, K. Glüer, T. Lindholm, & H. Oscarsson (Eds.), Knowledge resistance in high-choice information environments (pp. 29–48). Routledge.
  • Goldani, M. H., Momtazi, S., & Safabakhsh, R. (2021). Detecting fake news with capsule neural networks. Applied Soft Computing, 101(2021), Article 106991. https://doi.org/10.1016/j.asoc.2020.106991
  • Greene, C. M., & Murphy, G. (2020). Individual differences in susceptibility to false memories for COVID-19 fake news. Cognitive Research: Principles and Implications, 5(1), 1–8.
  • Guess, A. M., Nyhan, B., & Reifler, J. (2020). Exposure to untrustworthy websites in the 2016 US election. Nature Human Behaviour, 4(5), 472–480. https://doi.org/10.1038/s41562-020-0833-x
  • Guo, L. (2020). China’s “fake news” problem: Exploring the spread of online rumors in the government-controlled news media. Digital Journalism, 8(8), 992–1010. https://doi.org/10.1080/21670811.2020.1766986
  • Gutiérrez-Coba, L. M., Coba-Gutiérrez, P., & Gómez-Díaz, J. (2020). Fake news about COVID-19: A comparative analysis of six Iberoamerican countries. Revista Latina de Comunicación Social, 78(78), 237–264. https://doi.org/10.4185/RLCS-2020-1476
  • Ha, L., Perez, L. A., & Ray, R. (2021). Mapping recent development in scholarship on fake news and misinformation, 2008 to 2017: Disciplinary contribution, topics, and impact. American Behavioral Scientist, 65(2), 290–315. https://doi.org/10.1177/0002764219869402
  • Hameleers, M., Powell, T. E., Van Der Meer, T. G. L. A., & Bos, L. (2020). A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Political Communication, 37(2), 281–301. https://doi.org/10.1080/10584609.2019.1674979
  • Haupt, M. R., Li, J., & Mackey, T. K. (2021). Identifying and characterizing scientific authority-related misinformation discourse about hydroxychloroquine on Twitter using unsupervised machine learning. Big Data & Society, 8(1). https://doi.org/10.1177/20539517211013843
  • Hjorth, F., & Adler-Nissen, R. (2019). Ideological asymmetry in the reach of pro-Russian digital disinformation to United States audiences. Journal of Communication, 69(2), 168–192. https://doi.org/10.1093/joc/jqz006
  • Hochschild, J. L., & Einstein, K. L. (2015). Do facts matter? Information and misinformation in American politics. University of Oklahoma Press.
  • Holt, K., Ustad Figenshou, T., & Frischlich, L. (2019). Key dimensions of alternative news media. Digital Journalism, 7(7), 860–869. https://doi.org/10.1080/21670811.2019.1625715
  • Humprecht, E., Esser, F., & Van Aelst, P. (2020). Resilience to online disinformation: A framework for cross-national comparative research. International Journal of Press/Politics, 25(3), 493–516. https://doi.org/10.1177/1940161219900126
  • Ingre, M., Lindholm, T., & Strömbäck, J. (2022). Overcoming knowledge resistance: A systematic review of experimental studies. In J. Strömbäck, Å. Wikforss, K. Glüer, T. Lindholm, & H. Oscarsson (Eds.), Knowledge resistance in high-choice information environments (pp. 255–280). Routledge.
  • Jamieson, K. H. (2018). Cyberwar: How Russian hackers and trolls helped elect a president. What we don’t, can’t, and do know. Oxford University Press.
  • Jerit, J., & Zhao, Y. (2020). Political misinformation. Annual Review of Political Science, 23(1), 77–94. https://doi.org/10.1146/annurev-polisci-050718-032814
  • Kapantai, E., Christopoulou, A., Berberidis, C., & Peristeras, V. (2021). A systematic literature review on disinformation: Toward a unified taxonomical framework. New Media & Society, 23(5), 1301–1326. https://doi.org/10.1177/1461444820959296
  • Kata, A. (2010). A postmodern Pandora's box: Anti-vaccination misinformation on the internet. Vaccine, 28(7), 1709–1716. https://doi.org/10.1016/j.vaccine.2009.12.022
  • Kavanagh, J., & Rich, M. D. (2018). Truth decay. An initial exploration of the diminishing role of facts and analysis in American public life. Rand.
  • Klintman, M. (2019). Knowledge resistance. How we avoid insight from others. Manchester University Press.
  • Krishna, A., & Thompson, T. L. (2021). Misinformation about health: A review of health communication and misinformation scholarship. American Behavioral Scientist, 65(2), 316–332. https://doi.org/10.1177/0002764219878223
  • Kubin, L. (2019). Is there a resurgence of vaccine preventable diseases in the U.S.? Journal of Pediatric Nursing, 44(2019), 115–118. https://doi.org/10.1016/j.pedn.2018.11.011
  • Kuklinski, J. H., Quirk, P. J., Jerit, J., Schwieder, D., & Rich, R. F. (2000). Misinformation and the currency of democratic citizenship. The Journal of Politics, 62(3), 790–816. https://doi.org/10.1111/0022-3816.00033
  • Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480
  • Kwanda, F. A., & Lin, T. T. C. (2020). Fake news practices in Indonesian newsrooms during and after the Palu earthquake: A hierarchy-of-influences approach. Information, Communication & Society, 23(6), 849–866. https://doi.org/10.1080/1369118X.2020.1759669
  • Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. https://doi.org/10.1126/science.aao2998
  • Lazić, A., & Žeželj, I. (2021). A systematic review of narrative interventions: Lessons for countering anti-vaccination conspiracy theories and misinformation. Public Understanding of Science, 30(6), 644–670. https://doi.org/10.1177/09636625211011881
  • Lee, J., Choi, J., & Britt, R. K. (2023). Social media as risk-attenuation and misinformation-amplification station: How social media interaction affects misperceptions about COVID-19. Health Communication, 38(6), 1232–1242. https://doi.org/10.1080/10410236.2021.1996920
  • Lewandowsky, S. (2020). Climate change, disinformation, and how to combat it. Annual Review of Public Health, 41(1), 1–21. https://doi.org/10.1146/annurev-publhealth-090419-102409
  • Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the ‘post-truth’ era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369. https://doi.org/10.1016/j.jarmac.2017.07.008
  • Lewandowsky, S., Ecker, U. K. H., Seifert, M., Schwarz, N. C., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. https://doi.org/10.1177/1529100612451018
  • Lindgren, E., Damstra, A., Strömbäck, J., Tsfati, Y., Vliegenthart, R., & Boomgaarden, H. (2021). Uninformed or misinformed in surveys? A review of the conceptual and empirical distinctions between (lack of) knowledge and (mis)perceptions in politics. In J. Strömbäck, Å. Wikforss, K. Glüer, T. Lindholm, & H. Oscarsson (Eds.), Knowledge resistance in high-choice information environments (pp. 187–206). Routledge.
  • Liu, P. L., & Huang, L. V. (2020). Digital disinformation about COVID-19 and the third-person effect: Examining the channel differences and negative emotional outcomes. Cyberpsychology, Behavior, and Social Networking, 23(11), 789–793. https://doi.org/10.1089/cyber.2020.0363
  • Madrid-Morales, D., & Wasserman, H. (2022). Research methods in comparative disinformation studies. In H. Wasserman & D. Madrid-Morales (Eds.), Disinformation in the Global South (pp. 41–57). Wiley Blackwell.
  • Maertens, R., Roozenbeek, J., Basol, M., & van der Linden, S. (2021). Long-term effectiveness of inoculation against misinformation: Three longitudinal experiments. Journal of Experimental Psychology: Applied, 27(1), 1–16. https://doi.org/10.1037/xap0000315
  • McIntyre, L. (2018). Post-truth. MIT Press.
  • McKay, S., & Tenove, C. (2020). Disinformation as a threat to deliberative democracy. Political Research Quarterly, 74(3), 703–717. https://doi.org/10.1177/1065912920938143
  • McNair, B. (2017). Fake news: Falsehood, fabrication and fantasy in journalism. Routledge.
  • Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413–439. https://doi.org/10.1111/j.1460-2466.2010.01488.x
  • Milner, H. (2002). Civic literacy. How informed citizens make democracy work. Tufts University Press.
  • Moscadelli, A., Albora, G., Biamonte, M. A., Giorgetti, D., Innocenzio, M., Paoli, S., Lorini, C., Bonanni, P., & Bonaccorsi, G. (2020). Fake news and COVID-19 in Italy: Results of a quantitative observational study. International Journal of Environmental Research and Public Health, 17(16), 5850. https://doi.org/10.3390/ijerph17165850
  • Nounkeu, C. T. (2020). Facebook and fake news in the “Anglophone crisis” in Cameroon. African Journalism Studies, 41(3), 20–35. https://doi.org/10.1080/23743670.2020.1812102
  • Nsoesie, E. O., Cesare, N., Müller, M., & Ozonoff, A. (2020). COVID-19 misinformation spread in eight countries: Exponential growth modeling study. Journal of Medical Internet Research, 22(12), e24425. https://doi.org/10.2196/24425
  • Nygaard, S. (2019). The appearance of objectivity: How immigration-critical alternative media report the news. Journalism Practice, 13(10), 1147–1163. https://doi.org/10.1080/17512786.2019.1577697
  • Nyhan, B. (2020). Facts and myths about misperceptions. Journal of Economic Perspectives, 34(3), 220–236. https://doi.org/10.1257/jep.34.3.220
  • O’Connor, C., & Weatherall, J. O. (2019). The misinformation age. How false beliefs spread. Yale University Press.
  • Ortoleva, P. (2019). Canards, Fausses Nouvelles, paranoid style. Classic authors for an emerging phenomenon. In J. E. Katz & K. K. Mays (Eds.), Journalism & truth in the age of social media (pp. 119–132). Oxford University Press.
  • Page, B. I., & Shapiro, R. Y. (1992). The rational public: Fifty years of trends in Americans’ policy preferences. Chicago University Press.
  • Papachrisanthou, M. M., & Davis, R. L. (2019). The resurgence of measles, mumps, and pertussis. The Journal for Nurse Practitioners, 16(5), 319–395. https://doi.org/10.1016/j.nurpra.2018.12.028
  • Pham, C. V., Phu, Q. V., Hoang, H. X., Pei, J., & Thai, M. T. (2019). Minimum budget for misinformation blocking in online social networks. Journal of Combinatorial Optimization, 38(4), 1101–1127. https://doi.org/10.1007/s10878-019-00439-5
  • Pham, D. V., Nguyen, G. L., Nguyen, T. N., Pham, C. V., & Nguyen, A. V. (2020). Multi-topic misinformation blocking with budget constraint on online social networks. IEEE Access, 8, 78879–78889. https://doi.org/10.1109/ACCESS.2020.2989140
  • Pickles, K., Cvejic, E., Nickel, B., Copp, T., Bonner, C., Leask, J., Ayre, J., Batcup, C., Cornell, S., Dakin, T., Dodd, R. H., Isautier, J. M. J., & McCaffery, K. J. (2021). COVID-19 misinformation trends in Australia: Prospective longitudinal national survey. Journal of Medical Internet Research, 23(1), e23805. https://doi.org/10.2196/23805
  • Prior, M. (2009). Improving media effects research through better measurement of news exposure. Journal of Politics, 71(3), 893–908. https://doi.org/10.1017/S0022381609090781
  • Prior, M. (2010). You’ve either got it or you don’t? The stability of political interest over the life cycle. Journal of Politics, 72(3), 747–766. https://doi.org/10.1017/S0022381610000149
  • Pyrhönen, N., & Bauvois, G. (2020). Conspiracies beyond fake news. Producing reinformation on presidential elections in the transnational hybrid media system. Sociological Inquiry, 90(4), 705–731. https://doi.org/10.1111/soin.12339
  • Rich, P. R., & Zaragoza, M. A. (2021). Correcting misinformation in news stories: An investigation of correction timing and correction durability. Journal of Applied Research in Memory and Cognition, 9(3), 310–322. https://doi.org/10.1037/h0101850
  • Rosenfeld, S. (2019). Democracy and truth. A short history. University of Pennsylvania Press.
  • Ross, A. S., & Rivers, D. J. (2018). Discursive deflection: Accusation of ‘fake news’ and the spread of mis- and disinformation in the tweets of President Trump. Social Media + Society, 4(2), Article 2056305118776010.
  • Sahoo, S. R., & Gupta, B. B. (2021). Multiple features based approach for automatic fake news detection on social networks using deep learning. Applied Soft Computing, 100(2021), Article 106983. https://doi.org/10.1016/j.asoc.2020.106983
  • Sides, J., & Citrin, J. (2007). European opinion about immigration: The role of identities, interests and information. British Journal of Political Science, 37(3), 477–504. https://doi.org/10.1017/S0007123407000257
  • Strömbäck, J., Boomgaarden, H., Broda, E., Damstra, A., Lindgren, E., Tsfati, Y., & Vliegenthart, R. (2022). From low-choice to high-choice media environments: Implications for knowledge resistance. In J. Strömbäck, Å. Wikforss, K. Glüer, T. Lindholm, & H. Oscarsson (Eds.), Knowledge resistance in high-choice information environments (pp. 49–68). Routledge.
  • Strömbäck, J., Djerf-Pierre, M., & Shehata, A. (2013). The dynamics of political interest and news media consumption: A longitudinal perspective. International Journal of Public Opinion Research, 25(4), 414–435. https://doi.org/10.1093/ijpor/eds018
  • Strömbäck, J., Wikforss, Å., Glüer, K., Lindholm, T., & Oscarsson, H. (2022). Knowledge resistance in high-choice information environments. Routledge.
  • Suarez-Lledo, V., & Alvarez-Galvez, J. (2021). Prevalence of health misinformation on social media: Systematic review. Journal of Medical Internet Research, 23(1), e17187. https://doi.org/10.2196/17187
  • Sude, D., & Knobloch-Westerwick, S. (2022). Selective exposure and attention to attitude-consistent and attitude-discrepant information: Reviewing the evidence. In J. Strömbäck, Å. Wikforss, K. Glüer, T. Lindholm, & H. Oscarsson (Eds.), Knowledge resistance in high-choice information environments (pp. 88–105). Routledge.
  • Swire, B., Berinsky, A. J., Lewandowsky, S., & Ecker, U. K. H. (2017). Processing political misinformation: Comprehending the Trump phenomenon. Royal Society Open Science, 4(3), Article 160802. https://doi.org/10.1098/rsos.160802
  • Swire-Thompson, B., & Lazer, D. (2020). Public health and online misinformation: Challenges and recommendations. Annual Review of Public Health, 41(1), 433–451. https://doi.org/10.1146/annurev-publhealth-040119-094127
  • Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769. https://doi.org/10.1111/j.1540-5907.2006.00214.x
  • Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Defining ‘fake news’: A typology of scholarly definitions. Digital Journalism, 6(2), 137–153. https://doi.org/10.1080/21670811.2017.1360143
  • Thorson, E. (2016). Belief echoes: The persistent effects of corrected misinformation. Political Communication, 33(3), 460–480. https://doi.org/10.1080/10584609.2015.1102187
  • Tsfati, Y., Boomgaarden, H. G., Strömbäck, J., Vliegenthart, R., Damstra, A., & Lindgren, E. (2020). Causes and consequences of mainstream media dissemination of fake news: Literature review and synthesis. Annals of the International Communication Association, 44(2), 157–173. https://doi.org/10.1080/23808985.2020.1759443
  • Tucker, J. A., Guess, A., Barberá, P., Vaccari, C., Siegel, A., Sanovich, S., Stukal, D., & Nyhan, B. (2018). Social media, political polarization and political disinformation: A review of the scientific literature. https://www.hewlett.org/wp-content/uploads/2018/03/Social-Media-Political-Polarization-and-Political-Disinformation-Literature-Review.pdf.
  • Tully, M., Madrid-Morales, D., Wasserman, H., Gondwe, G., & Ireri, K. (2021). Who is responsible for stopping the spread of misinformation? Examining audience perceptions of responsibilities and responses in six sub-Saharan African countries. Digital Journalism, 10(5), 679–697. https://doi.org/10.1080/21670811.2021.1965491
  • Van Duyn, E., & Collier, J. (2019). Priming and fake news: The effects of elite discourse on evaluations of news media. Mass Communication and Society, 22(1), 29–48. https://doi.org/10.1080/15205436.2018.1511807
  • Vargo, C. J., Guo, L., & Amazeen, M. A. (2018). The agenda-setting power of fake news: A big data analysis of the online media landscape from 2014 to 2016. New Media & Society, 20(5), 2028–2049. https://doi.org/10.1177/1461444817712086
  • Vraga, E., Kim, S. C., Cook, J., & Bode, L. (2020). Testing the effectiveness of correction placement and type on Instagram. International Journal of Press/Politics, 25(4), 632–652. https://doi.org/10.1177/1940161220919082
  • Vraga, E. K., & Bode, L. (2017). Using expert sources to correct health misinformation in social media. Science Communication, 39(5), 621–645. https://doi.org/10.1177/1075547017731776
  • Vraga, E. K., & Bode, L. (2018). I do not believe you: How providing a source corrects health misperceptions across social media platforms. Information, Communication & Society, 21(10), 1337–1353. https://doi.org/10.1080/1369118X.2017.1313883
  • Vraga, E. K., & Bode, L. (2020). Defining misinformation and understanding its bounded nature: Using expertise and evidence for describing misinformation. Political Communication, 37(1), 136–144. https://doi.org/10.1080/10584609.2020.1716500
  • Wagnsson, C. (2022). The paperboys of Russian messaging: RT/Sputnik audiences as vehicles for malign information influence. Information, Communication & Society, 26(9), 1849–1867. https://doi.org/10.1080/1369118X.2022.2041700
  • Walter, N., & Salovich, N. A. (2021). Unchecked vs. uncheckable: How opinion-based claims can impede corrections of misinformation. Mass Communication and Society, 24(4), 500–526. https://doi.org/10.1080/15205436.2020.1864406
  • Wardle, C. (2018). Information disorder: The essential glossary. https://firstdraftnews.org/wp-content/uploads/2018/07/infoDisorder_glossary.pdf
  • Wasserman, H. (2020). Fake news from Africa: Panics, politics and paradigms. Journalism, 21(1), 3–16. https://doi.org/10.1177/1464884917746861
  • Wasserman, H., & Madrid-Morales, D. (2019). An exploratory study of ‘fake news’ and media trust in Kenya, Nigeria and South Africa. African Journalism Studies, 40(1), 107–123. https://doi.org/10.1080/23743670.2019.1627230
  • Wasserman, H., & Madrid-Morales, D. (Eds.). (2022). Disinformation in the Global South. Wiley Blackwell.
  • Weeks, B. E. (2015). Emotions, partisanship, and misperceptions: How anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation. Journal of Communication, 65(4), 699–719. https://doi.org/10.1111/jcom.12164
  • Weeks, B. E., & Gil de Zuniga, H. (2021). What’s next? Six observations for the future of political misinformation research. American Behavioral Scientist, 65(2), 277–289. https://doi.org/10.1177/0002764219878236
  • Yang, J., & Tian, Y. (2021). ‘Others are more vulnerable to fake news than I am’: Third-person effect of COVID-19 fake news on social media users. Computers in Human Behavior, 125(2021), Article 106950. https://doi.org/10.1016/j.chb.2021.106950
  • Zaller, J. (1992). The nature and origins of mass opinion. Cambridge University Press.
  • Zhang, X., & Ghorbani, A. A. (2020). An overview of online fake news: Characterization, detection, and discussion. Information Processing & Management, 57(2), 601–605.