
Navigating the infodemic minefield: theorizing conversations in the digital sphere

Ashwani Kumar Upadhyay, Komal Khandelwal, Sukrit Dhingra & Geetanjali Panda
Article: 2303189 | Received 06 Apr 2023, Accepted 04 Jan 2024, Published online: 25 Jan 2024

Abstract

The study uses a relevant theoretical lens and an exploratory qualitative research approach to examine how an overabundance of information in the digital public sphere, combined with fake news during the Covid-19 pandemic, led to an infodemic. Through in-depth interviews, the study explores the viewpoints of 30 trained fact-checkers on the causes of the infodemic, tools for validating and spotting fake news, the role of the World Health Organization (WHO) in managing the infodemic, and the benefits and drawbacks of employing artificial intelligence (AI) for fact-checking. Additionally, the paper critically examines how the spread of false information contributed to the infodemic that worsened the Covid-19 outbreak. The study draws on theories that explain the core causes and motivating factors behind people’s excessive involvement in spreading false information online. The findings show how the WHO worked with national governments and social media platforms to combat false information and provide accurate information. By applying these theoretical perspectives, the research contributes to a deeper understanding of how AI, false information, and global health crises interact. The work paves the way for efficient methods of controlling the flow of information during a crisis.

Introduction

In the internet era, digital communication has made it easier to stay in touch with family, friends, and society. While digital communication helped keep communities in touch during the COVID-19 pandemic, a great deal of information sharing took place, including fake news. Understanding content sharing and news consumption on digital and social media platforms has been a well-researched area. Compared with similar events, the COVID-19 pandemic differed markedly from its predecessors because of the dominance of conversations in the digital public sphere (Naughton, Citation2020). During the pandemic-induced lockdown, digital media was essential for accessing COVID-19-related news and information. Due to its openness and interactivity, social media acts as a virtual platform for debates on serious health issues in the form of comments and chats. However, many people were confused during the pandemic and could not distinguish between genuine and false news and information due to a lack of media literacy. The World Health Organization, along with other governmental and non-governmental organizations, was expected to verify and validate global pandemic rumours and misinformation (Taylor, Citation2019). The World Health Organization (WHO), government agencies, and other control units made efforts to verify and combat rumours and misinformation (Faye, Citation2020). However, despite the best efforts of the WHO, governments, and other regulatory bodies, the spread of misinformation, disinformation, and various types of fake news could not be controlled (BBC, Citation2021).

As a primary global stakeholder, WHO played an instrumental role in fighting the COVID-19 pandemic and moderating information during the crisis; despite its best efforts, the spread of fake news on digital media turned COVID-19 into the world’s first social media epidemic (Hao & Basu, Citation2020). Fact-checking as a tool assists in double-checking a claim and tracing its origin. Because it is not possible to manually track and verify every piece of information on the web, there is a dire need for a technologically advanced tool like artificial intelligence (AI) to manage this vast pool of information.

Experts also believe that machines equipped with artificial intelligence (AI) can check and verify massive volumes of digital data. WHO launched an ‘Early AI-supported Response with Social Listening’ initiative to monitor social media conversations and show how people talk about COVID-19 online across 30 countries (WHO, Citation2020a). Tracking questions and concerns about the COVID-19 pandemic is essential for health authorities to manage the crisis.

From the literature review, it emerged that current studies either lack a theoretical framework or offer a limited understanding of the theoretical reasons why people are motivated to produce, distribute, and consume fake news during a health pandemic.

The unique combination of a worldwide pandemic and widespread misinformation makes research a vital response. Few studies have examined the complex connections between false news and the COVID-19 crisis from the viewpoint of an emerging economy or studied the causes of the fear and anxiety brought on by an abundance of information. It is important to investigate the underlying causes of disinformation dissemination, since doing so can offer valuable insights into individual weaknesses, societal vulnerabilities, and technological implications. For a well-informed view of the institutional response, it is also necessary to evaluate the WHO’s containment tactics during the infodemic. Exploring AI’s potential for fact-checking provides a technological capability perspective on addressing and managing infodemics.

As a response to the above gaps, this study is therefore designed to review the relevant theories that can be used to comprehend the primary motivations and reasons for producing, distributing, and consuming fake news in the digital sphere. Using in-depth interviews with fact-checkers, the study explores the role of fake news in exacerbating the COVID-19 crisis, the reasons for the spread of fake news, the World Health Organization’s (WHO) efforts to manage fake news, and the possibility of using AI for fact-checking and regulating information flow over social media.

Literature review

Fake news has grown exponentially with the rise of digital media. ‘Fake news’ is defined as ‘false, often sensational, information disseminated under the guise of news reporting’ by Collins English Dictionary (Citation2019). Fake news encompasses both misinformation and disinformation, implying that not all false news is spread on purpose; it can also circulate unintentionally. Because online platforms are bidirectional, people may quickly shift from being consumers of false news to providers, or vice versa.

Misinformation has recently become a source of concern on social media platforms because it spreads quickly and causes havoc, particularly during a global health crisis. Fact-checking is one method of dealing with the flood of false or misleading information.

AI is the training of machines to think and process information in the same way that humans do. Machines or algorithms that exhibit human-like problem-solving traits with greater precision and speed are also referred to by the term ‘AI’ (Frankenfield, Citation2021).

WHO coined the term ‘Infodemic’ during the COVID-19 pandemic to make people understand that not everything they read on digital media platforms is true. As rumours spread faster with increased digitization, an Infodemic can amplify a pandemic (WHO, Citation2021).

Theoretical framework

There are multiple theoretical lenses through which fake news production, distribution, and consumption can be explored and analyzed. In this section, the relevant theories and concepts have been reviewed and discussed.

Reality monitoring theory

During the COVID-19 pandemic, events unfolded very quickly, and so much information was being consumed on multiple platforms that many people faced issues with reality monitoring. Humans use a mechanism whereby they monitor reality to distinguish between events that are actually happening and those that are imagined; this set of processes that helps discriminate between internal and external sources of events is known as reality monitoring (Johnson & Raye, Citation1981). People were perplexed to a new level as fake news further complicated the already complex process of reality monitoring. COVID-19-related fake news was often both negative and arousing, and research indicates that in cases of negative arousal, subjective vividness and memory accuracy are enhanced (Kensinger & Schacter, Citation2006).

Information manipulation theory and the Semmelweis reflex

Information Manipulation Theory (McCornack, Citation1992) describes the different ways in which information manipulation can happen during the creation of deceptive messages and suggests that deceptive messages violate the principles for conversational exchanges in terms of quantity, quality, manner, and relevance of information. Verbal deception is defined by McCornack (Citation1992) as a sub-class of acts in which the principles guiding cooperative exchanges are covertly violated.

When it comes to the acceptance of prescribed behaviours like handwashing and wearing masks, one important theoretical lens is the Semmelweis reflex (Gupta et al., Citation2020), the reflex-like tendency to reject new evidence or knowledge because it contradicts established norms or beliefs.

Group polarisation law and echo chambers

One can use the law of group polarisation (Sunstein, Citation1999) as a lens to look at the selective news consumption and echo chambers on social media. As per the Group Polarization Theory, echo chambers act as a filter bubble where the existing opinions of members within a group are reinforced due to selective and similar news feeds, resulting in the formation of extreme positions within a group.

Bandwagon effect

The bandwagon effect (Leibenstein, Citation1950) takes into account people’s desire to be like others, thereby influencing what they wear, shop for, consume, and do. The bandwagon effect describes mob motivations and mass psychology operating at the level of both the crowd and the individual. News media consumption at times depends on bandwagon cues, as popularity metrics like likes, comments, and shares capture what others think about a news article or piece.

The effect of illusory truth

Previous research on the effect of prior exposure to a statement on the likelihood of users judging it to be true or accurate has led to something known as the ‘illusory truth effect’ (Fazio et al., Citation2015; Polage, Citation2012). Due to the illusory truth effect, repetition leads to processing fluency, resulting in an inference of accuracy as it becomes easier for participants to process the information (Unkelbach, Citation2007; Wang et al., Citation2016).

Availability cascade and attentional bias

The availability cascade is heavily influenced by social and individual cognitive processes (Kuran & Sunstein, Citation1999). Individuals’ perceptions of fake news and its acceptability could have been influenced by the availability cascade. In the case of the availability cascade, a self-reinforcing cycle results in a piece of information being accepted more as it is publicly discussed and shared.

Attentional bias, the tendency to pay disproportionate attention to potentially threatening material (MacLeod et al., Citation1986), leads to selective exposure and shapes people’s perceptions of fake news.

Confirmation bias and desirability bias

The term ‘confirmation bias’ refers to seeking or interpreting evidence in ways that are biased towards existing beliefs, expectations, or a hypothesis in hand (Nickerson, Citation1998). During COVID-19, many people accepted fake news due to confirmation bias because it was in line with their pre-existing beliefs, hypotheses, and expectations. The desire to avoid embarrassment and project a favourable image to others is referred to as ‘social desirability bias’ (Fisher, Citation1993). Many people acted in ways that were consistent with what others were doing or expected them to do; many wore masks, or did not wear them, because of social desirability bias. They did not want to be the odd one out.

Theory of selective exposure

According to the theory of selective exposure (Freedman & Sears, Citation1965), people actively avoid being confronted with arguments that contradict their own beliefs. Because of selective exposure, government and WHO propaganda and information campaigns were actively avoided by a segment of the public who found the information in these campaigns to be contrary to their beliefs.

Another intriguing theory related to the spread of fake news is the illusion of asymmetric insight (Pronin et al., Citation2001), whereby individuals believe they know other people better than those people know them. These perceptions of interpersonal insight reveal an asymmetry in how people relate to one another.

Naive realism and the overconfidence effect

A few individuals also had issues due to naive realism (Calvert, Citation2017) and some due to the overconfidence effect, the tendency for people’s subjective confidence in their judgements to exceed their actual accuracy (Dunning et al., Citation1990). Naive realism is the belief that one’s own perception of the world is the most accurate view because it is unbiased and unfiltered (Calvert, Citation2017). Due to naive realism, some individuals feel that those who disagree with them are uninformed, irrational, or biased.

The public and digital public spheres

According to Habermas (1962), the ‘public sphere’ is ‘the realm of social life in which something approaching public opinion can be formed’ (Habermas, 1962/2010). Newspapers, radio, and television were all considered public-sphere media (Habermas et al., Citation1974). However, with the advent of social media as a communication platform, the boundary of the public sphere was extended to digital mediums, and new concepts such as the ‘digital public sphere’ have emerged as an unavoidable reality.

The rapid rise of individual websites in the early 1990s, as well as the constant increase in information and conversation in the online domain, shifted physical interactions onto digital platforms and moved the flow of information from word of mouth to the online sharing of limited or biased information (Schäfer, Citation2016). These troubling digital realities have caused concern because there is no proper governance over, or authentication of, what is produced or consumed on these platforms. Society must progress beyond the simple regulation of these spaces (Franks, Citation2022).

These digital spaces have the potential to cause havoc overnight, as seen during the COVID-19 pandemic, thus emphasising the importance of authentication and accurate information dissemination.

AI and fact-checking

AI-based automated systems are used to automate the process of spotting false and misleading claims because they can scan and analyze written and visual content from a range of sources, including articles, social media postings (Chen et al., Citation2019), and videos (Thang & Ta, Citation2023). These automated systems use pattern recognition and contextual awareness to select and flag communications that require additional in-depth inspection and analysis (Saquete et al., Citation2020). AI-based technologies help human fact-checkers by assisting in the initial screening and, following further in-depth study, the debunking of erroneous content. Interestingly, while media organizations concentrate on training journalists to spot fake news, online platforms like Google, Facebook, and Twitter frequently support studies whose goal is to create or enhance media forensics tools (Vizoso et al., Citation2021).

AI-based fact-checking is currently employed to combat the fake news threat (Full Fact, Citation2023). Natural language processing (NLP) and AI-based fact-checking techniques are being utilized to validate the veracity of news with unmatched speed, scale, and precision (Nakov et al., Citation2021). ClaimBuster, for instance, is a tool that uses AI to aid journalists and fact-checkers in newsrooms by analyzing transcripts to find likely rumours and fact-checking them in real time (IDIR Lab, Citation2023). To detect, verify, and refute false claims, journalists and fact-checkers all over the world employ AI-based automated systems from organizations like Full Fact, a registered charity headquartered in England and Wales (Full Fact, Citation2023).

AI systems may analyze language and identify subtle signs of misrepresentation using NLP (Horák et al., Citation2021). When confirming facts, AI may identify linguistic inconsistencies, contextual inconsistencies, and sentiment. The lack of resources for less widely spoken languages and the high costs associated with model training continue to be important obstacles in the development of models providing multilingual check-worthiness detection (Schlicht et al., Citation2023). AI-based systems can also cross-reference claims with existing databases of verified facts and credible sources to aid the automated fact-checking procedure, both for world languages and for multilingual check-worthiness detection.
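To illustrate the cross-referencing step described above, the following minimal Python sketch matches an incoming claim against a small database of already fact-checked claims using TF-IDF cosine similarity. It is not the pipeline of any specific fact-checker; the example claims, verdicts, and similarity threshold are hypothetical placeholders chosen for demonstration only.

# Minimal illustrative sketch: match an incoming claim against a small
# database of already fact-checked claims using TF-IDF cosine similarity.
# The claims, verdicts, and threshold below are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

VERIFIED_CLAIMS = [
    ("Drinking hot water cures COVID-19", "False"),
    ("COVID-19 vaccines cause infertility", "False"),
    ("Handwashing reduces the spread of COVID-19", "True"),
]

def match_claim(new_claim, threshold=0.3):
    """Return (matched claim, verdict, score) or None if nothing is similar enough."""
    texts = [claim for claim, _ in VERIFIED_CLAIMS]
    vectorizer = TfidfVectorizer().fit(texts)
    db_vectors = vectorizer.transform(texts)
    query_vector = vectorizer.transform([new_claim])
    scores = cosine_similarity(query_vector, db_vectors).ravel()
    best = scores.argmax()
    if scores[best] >= threshold:
        claim, verdict = VERIFIED_CLAIMS[best]
        return claim, verdict, float(scores[best])
    return None  # no close match: route the claim to a human fact-checker

print(match_claim("Vaccines for COVID-19 lead to infertility"))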

Similar to text analysis, image and video analysis are essential for fact-checking. Automated systems powered by AI are capable of spotting digital manipulations, deepfakes (Vizoso et al., Citation2021), and visual deception. Algorithms can compare photos and videos against databases to look for phony or synthetic media. When verifying scientific claims, AI is essential for confirming the authority of the source, a key component of information verification (Inshakova & Pankeev, Citation2022).
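As one hedged illustration of the database-comparison idea, the sketch below flags an incoming image when it is a near-duplicate of an image already known to be recycled or manipulated, using the perceptual-hash approach of the Python imagehash library. It is not a deepfake detector, and the file paths, debunk notes, and Hamming-distance threshold are hypothetical placeholders.

# Illustrative sketch: flag an image that is perceptually close to an image
# already debunked as recycled or manipulated. Paths and notes are placeholders.
from PIL import Image
import imagehash

KNOWN_FAKES = [  # (perceptual hash of a debunked image, note from the fact-check)
    (imagehash.phash(Image.open("debunked/old_flood_photo.jpg")), "Recycled 2015 flood photo"),
    (imagehash.phash(Image.open("debunked/edited_vaccine_label.png")), "Digitally altered vaccine label"),
]

def check_image(path, max_distance=8):
    """Return the debunk note if the image is close to a known fake, else None."""
    candidate = imagehash.phash(Image.open(path))
    for known_hash, note in KNOWN_FAKES:
        if candidate - known_hash <= max_distance:  # Hamming distance between hashes
            return note
    return None  # unknown image: escalate to manual verification

print(check_image("incoming/suspicious_post_image.jpg"))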

The use of AI-based automated systems in the context of social media platforms is crucial because they operate in real time. AI-powered fact-checking bots can monitor topics of interest and promptly and accurately highlight dubious claims (Lim & Perrault, Citation2023). Social media platforms must act quickly to remove fake content before it is widely shared.

AI systems can evaluate the accuracy of news and identify discrepancies, sensational framing, and biased viewpoints that are often associated with fake news (Simpson & Yang, Citation2022). To avoid bias, it is crucial to ensure that the algorithms used for AI-based fact-checking are trained on a variety of relevant datasets. AI-based automated systems can examine a person’s content consumption habits and help consumers by recommending a variety of sources, breaking up echo chambers, and developing a more sophisticated understanding of the news (Bonneau et al., Citation2022). It is vital to strike a balance between automation and human intervention because early-stage automated systems have trouble understanding the intricacies of fake news (Svahn & Perfumi, Citation2021). Human actors remain essential to the fact-checking process for accuracy and attribution (Johnson, Citation2023).
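The human-in-the-loop balance discussed above can be sketched as a simple triage loop: an automated scorer flags posts that look check-worthy and routes them to a queue for human fact-checkers rather than deciding on its own. The sketch below assumes a naive keyword heuristic in place of a trained model, and the keywords, threshold, and sample posts are invented for illustration.

# Illustrative human-in-the-loop triage: a crude automated score decides only
# whether a post is escalated to human reviewers, never whether it is false.
from collections import deque

SUSPICIOUS_TERMS = {"miracle cure", "secret", "guaranteed", "they don't want you to know"}

def check_worthiness(post):
    """Crude score: fraction of suspicious phrases present in the post."""
    text = post.lower()
    return sum(term in text for term in SUSPICIOUS_TERMS) / len(SUSPICIOUS_TERMS)

human_review_queue = deque()

def triage(post, threshold=0.2):
    if check_worthiness(post) >= threshold:
        human_review_queue.append(post)  # escalate: a human fact-checker decides
        return "flagged for human review"
    return "not flagged"

for post in [
    "Local clinic extends vaccination hours this weekend.",
    "Miracle cure they don't want you to know about, guaranteed!",
]:
    print(triage(post))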

Methodology

This exploratory investigation employs a qualitative research methodology in accordance with Creswell’s (Citation1998) suggestions. Semi-structured interviews with fact-checking experts have been used in this study to collect qualitative data in accordance with Thyer’s (Citation2001) recommendations. A semi-structured interview involves the interviewer asking a number of predetermined yet open-ended questions. This interviewing technique involves asking predefined questions in a methodical manner and using the appropriate probes to gain insightful information about the topics the interviewer is researching or investigating.

A pilot study with three participants was conducted to evaluate the validity and relevance of the questions. A discussion guide (Annexure 1) with open-ended questions was created and finalised based on the findings of the pilot study. Open-ended questions and essential probes helped elicit relevant responses from the participants, who were trained fact-checkers. With prior approval and the appropriate annotations, the interviews were recorded using recording equipment. Each interview lasted an average of 30 min, after which the online interviews were transcribed and coded.

Respondents

The respondents in the study were fact-checking professionals working in fact-checking agencies or organizations, or as freelance professionals. The specifics of these respondents, who were chosen using snowball sampling, are listed in Table 1. All the fact-checkers were trained in the profession by agencies like Google or by fact-checking agencies running virtual internship programmes for fact-checking training. Journalists, editors, content writers, academics, and freelance professionals from India made up the majority of the participants. The sample size in qualitative studies is based on the idea of saturation as well as on previously published research. A sample size between twenty and thirty is adequate for qualitative investigations utilising grounded theory, according to Marshall et al.’s (Citation2013) recommendation. After reaching saturation, the qualitative researcher stops interviewing new participants in the absence of fresh insights. The sample size for this study was set at 30, as no new insights emerged after the 25th interview, which indicated data saturation.

Table 1. Details of respondents.

Procedure

Researchers used a discussion guide with eight questions to interview the fact-checking professionals. The questions ranged from their general perception of the Infodemic to tools for fact-checking and AI applications for fact-checking and managing the Infodemic.

The categorization and identification of themes followed an inductive approach. Two coders transcribed and analysed the recordings of the interviews. Using Creswell and Creswell’s (Citation2018) suggestions for data coding, the responses were coded and thematically arranged to understand the underlying patterns. Coders used an Excel sheet to summarise the transcripts. Two coders were used to maintain the inter-coder reliability of the analysis.

Research questions

The following research questions are investigated through in-depth interviews with fact-checking experts and academics who are actively engaged in fact-checking.

  1. What role has fake news played in worsening the crisis during the COVID-19 pandemic?

  2. What are the prominent reasons for the spread of fake news and the overload of information leading to the Infodemic?

  3. How has the World Health Organization (WHO) managed the spread of fake news during the COVID-19 pandemic?

  4. Is there a possibility of using AI for fact-checking and regulating the information flow over social media?

Findings and discussion

In the following section, there is a discussion on the role of fake news in exacerbating the pandemic, the reasons for the spread of fake news, WHO efforts, and the possibility of using AI to combat misinformation.

The role of the infodemic in exacerbating the pandemic

The occurrence of events and information related to the COVID-19 virus, its symptoms, cures, regulations, vaccines, and mortality was at a heightened level. People were confined to their homes due to lockdowns in many countries and cities. Many people were confused and found it challenging to keep track of what was real, which led to issues with reality monitoring (Johnson & Raye, Citation1981). The mechanisms for monitoring reality that aid humans in distinguishing between actual and imagined events became blurred. Furthermore, fake news increased the complexity of the already complex process of reality monitoring.

The respondents in the study were asked about the role that fake news played in worsening the crisis during the COVID-19 pandemic. All 30 respondents repeatedly reiterated and sought to validate the assumption that excess information had worsened the pandemic situation.

‘Infodemic is worsening the pandemic.’ (Journalist, female, three years of experience)

Fake news plays a major role because false information can lead to confusion, doubt, and a heightened level of curiosity among the public. This phenomenon of heightened and compulsive curiosity and doubt is captured in the quote by one of the respondents.

‘Fake news creates doubt, uncertainty, and compulsive curiosity among the people.’ (Media literacy trainer, male, five years of experience)

Along the same lines, Katella (Citation2020) highlighted that the daily flood of pandemic information, including COVID-19 numbers, can be perplexing and misleading due to the different protocols used in testing and reporting confirmed cases.

‘Talks on the anti-mask campaign, infertility after vaccination, and some COVID-19 vaccines being haram (illegitimate) in many religions are some major examples of how fake news worsens the pandemic.’ (Financial correspondent, female, six years of experience)

Responses to handwashing and the wearing of masks can be viewed through the lens of the Semmelweis reflex (Gupta et al., Citation2020), because for some people wearing masks and handwashing went against their pre-existing beliefs. When fake news was in line with their pre-existing beliefs, more and more people believed it to be true despite scientific evidence against it.

Consumers need to be aware of the agenda behind the news shared or forwarded for consumption. Some of the respondents highlighted the need for clarity in communication coming from official or government channels.

‘The government should have clearly communicated with the people about health hazards—lack of planning has little to do with the epidemic’s worsening.’ (Fact-checking editor, male, eight years of experience)

Even traditional mass media contributes to misinformation and chaos and is not fulfilling its crucial role in providing evidence-based facts to the general public (Zarocostas, Citation2020). During a declared pandemic, people are in a state of panic, and such examples show the critical impact any such information has on society and its people. Some of the experts even highlighted the role of the mainstream media in spreading misinformation.

‘Our mainstream media has been spreading misinformation at such an extreme level.’ (Senior editor, digital journalism, male, 15 years of experience)

As is evident from the above responses, there are mixed views on the role of the Infodemic in worsening the pandemic. The majority of the participants felt that the Infodemic, combined with public insecurity, worsened the pandemic. Participants talked about how the Infodemic is making people ill and the role of ego and prejudice in consuming information from credible sources.

One consistent theme throughout the interviews was that every interviewee agreed that massive amounts of information are being created and circulated every minute. A journalist and trained fact-checker said,

‘The information overload has reached a different level these days. Broadcast journalism and social media are also constantly disseminating information on the same topic.’ (Journalist and fact-checker, male, three years of experience)

Participants also talked about the challenges involved in debunking fake news when the load of information is excessive, as seen during the pandemic.

‘Since the amount of fake news online is enormous, debunking is not easy.’ (Assistant professor, female, six years of experience)

Participants said that, owing to the sudden increase in digital platforms and digital information consumption, everyone wants to jump on the bandwagon of remaining relevant while the trend lasts, which makes the spread of fake news more straightforward and accessible.

One of the respondents said, ‘There are two aspects to spreading fake information; the first one is at the source level. Second at the level of the receiver’ (Trained fact-checker, male, one year of experience)

Even consumers want to consume whatever is constantly being produced.

Another respondent suggested, ‘People do not check whether it is right or wrong and share it further and form their opinions no matter where they are consuming.’ (Account executive at a fact-checking agency, male, five years of experience)

Another consequence of this digital information dump is that consumers often remember and seek out bits of information that best confirm what they already believe. Here ‘confirmation bias’ came into play, as individuals were biased because of their existing beliefs, expectations, or a hypothesis in hand (Nickerson, Citation1998). People believed fake news if it was in line with their pre-existing beliefs, hypotheses, and expectations. At times, due to social desirability bias (Fisher, Citation1993), many individuals did not counter fake news and instead liked, shared, or commented on the fake content. Due to social desirability bias, people tried to do what others expected of them: when everyone around them was wearing masks, individuals wore masks too, and when people in their groups stopped wearing them, individuals stopped as well because they wanted to go along with the group.

The above responses and analysis show that information overload is a common theme. In the literature review as well, the overload of information is evident. Respondents have also attributed the Infodemic to low media literacy and the sharing of information on messaging platforms like WhatsApp. Respondents raised concerns over the health tips circulated and the propensity of people to share their views on everything, including pandemics.

Fifteen of the 30 respondents talked about the use of fake pages to spread fake information. Twenty respondents highlighted the role of both the media and the viewers as contributors to the current Infodemic. The availability of multiple sources of information, as well as users’ proclivity to accept any information without verifying facts, are significant causes of the Infodemic. Twenty-five respondents discussed the volume of information and fake news on social media platforms, as well as the inadequacy of the infrastructure and of fact-checkers in dealing with fake news.

Prominent reasons for the spread of fake news

The illusory truth effect is one of the primary reasons people believe fake news. When people are repeatedly exposed to the same news, processing fluency increases and they infer the news to be accurate because the information is easier to process (Pennycook & Rand, Citation2021; Unkelbach, Citation2007; Wang et al., Citation2016).

‘The reasons for the overflow of information include increased mobile penetration, a cheap data plan, and an abundance of platforms to voice opinion and share news, which are contributing to the circulation of more information.’ (Senior journalist, male, seven years of experience.)

As evident from the above quote by an expert, the availability of internet access via smartphones with cheap mobile data and the abundance of social media platforms (Efe Stanley, Citation2021) for voicing opinions have contributed to the ever-growing amount of information on a trending topic or phenomenon.

Content sharing on social media leads to amplification, contributing to the Infodemic (Zarocostas, Citation2020). When users feel that they know a lot about a particular topic, they feel a compelling urge to like, share, and comment on that topic or post. One of the respondents captured this in the quote below, in which digital space is depicted as a democracy in which everyone has the right to express themselves freely.

Another respondent said, ‘Digital space is like democracy; everyone gets to voice their opinion.’ (Journalist and social media account manager, female, five years of experience)

Marcella et al. (Citation2019) found that study participants judged facts quickly and intuitively; the study also found that participants did not verify facts any further. This tendency not to verify facts contributes to the acceptance of fake news.

At times, social media facilitates an ‘echo chamber’ or ‘filter bubble’, allowing users to avoid opposing viewpoints by interacting only with stories that support their personal beliefs and opinions (Hanz & Kingsland, Citation2020). One can look at the formation of echo chambers through the lens of the law of group polarisation (Sunstein, Citation1999). Echo chambers on social media act as a bubble, and the existing beliefs are reinforced due to similar news feeds, resulting in extreme positions in a group. These echo chambers result in selective exposure and polarisation (Bakshy et al., Citation2015). Participants in the study highlighted the lack of media literacy as a key reason for the spread of fake news.

‘Lack of media literacy leads to the spread of such fake information.’ (Financial correspondent, female, six years of experience, and media literacy trainee)

Another lens that is important in understanding how selective exposure affects people’s perceptions of fake news is attentional bias. During the pandemic, people were dealing with a life-threatening situation and therefore paid heightened attention to negative, threatening news about the COVID-19 virus (MacLeod et al., Citation1986). This attentional bias led to impaired judgement when assessing news about the life-threatening COVID-19 virus.

Ten respondents highlighted the role of human psychology, similar to the fear of missing out, where they do not want their close ones to miss out on any news or information.

‘It is also because of human psychology, where people continuously feel the need to share information with their close ones.’ (Tech writer, female, three years of experience)

Even the bandwagon effect (Leibenstein, Citation1950) suggests that people have a desire to be like others, therefore influencing their content and news consumption. Due to the bandwagon effect, a phenomenon of mob motivations can be seen in the liking, sharing, and following of news on social media platforms.

‘People want to know everything and want to be the first informers so that their friends and followers also know.’ (Media consultant, female, six years of experience)

Some individuals felt that their world view was correct because it was unbiased, in line with naive realism (Calvert, Citation2017). These individuals at times also felt that those who disagreed with them were uninformed, irrational, or biased. A few individuals were overconfident about their judgements due to the overconfidence effect.

At times it is easier for people to reject news that contradicts their own beliefs, in line with the theory of selective exposure (Freedman & Sears, Citation1965). As a result, when the government and WHO pushed people to adopt lifestyles or habits that contradicted their personal beliefs, they simply avoided them and favoured arguments that confirmed their beliefs, even false ones, due to confirmation bias (Nickerson, Citation1998). Individuals sometimes forwarded news or content to others simply because they believed they knew the other person better and thus that the piece of advice or cure would be more appropriate for that person, reflecting the illusion of asymmetric insight (Pronin et al., Citation2001).

Role of WHO in combating fake news during the COVID-19 pandemic

WHO has been instrumental in leading the fight against the pandemic and the accompanying global epidemic of misinformation, with the help of the WHO Information Network for Epidemics (EPI-WIN) platform, which was set up to curb and fight misinformation spreading through social media and other mediums (Zarocostas, Citation2020).

EPI-WIN strategies fall into four strategic areas: identification, collection, and assessment of real-time evidence for recommendations and policies; conversion into actionable behavioural change messages; amplification of messages through crucial stakeholders; and quantification, monitoring, and tracking through social media technology platforms.

Communications teams at the six regional WHO offices, as well as risk communication consultants and officers, attempt to counter misinformation and rumour by staying in touch with social media platforms such as Facebook, Twitter, Tencent, Pinterest, and TikTok for real-time updates (Zarocostas, Citation2020).

WHO’s TikTok account tried to cut through COVID-19 misinformation. The video also directed the users to the WHO website for additional information (Kelly, Citation2020). The WHO has advised governments to be prompt in sharing accurate information for transparency (Euronews, Citation2020). WHO has collaborated on open innovation to proactively combat the misinformation epidemic (WHO, Citation2020b).

The majority of participants felt that WHO had initially done an excellent job of spreading awareness about precautions and safety measures. Countries later played a critical role in disseminating pandemic and Infodemic control guidelines and measures. A few participants felt that WHO had limited jurisdiction and could only organise press conferences to disseminate information. One of the participants even felt that WHO had done close to nothing to control fake news.

‘WHO did an excellent job in making people aware through different means of communication.’ (Correspondent and media literacy trainee, female, four years of experience)

Fifteen respondents also added that WHO did whatever it could within its abilities. Another journalist agreed.

‘WHO has tried, but it cannot control a pandemic since it is tough for any international or government body to manage on such a large scale.’ (Journalist, female, five years of experience)

WHO, as an organization, was not independently responsible for controlling the Infodemic, as it is always harder to penetrate the foreign news economy single-handedly.

‘There is no doubt that the WHO has not succeeded, but the WHO is a health organisation and not a world media or communication organization.’ (Media literacy expert, Google, male, seven years of experience)

Twenty-two respondents agreed that the WHO tried and did its best to manage disinformation.

‘WHO is not responsible; they were not aware that this would happen. WHO has an entire world to manage, and the officials could pass on and share the authentic scientific information they were getting.’ (Senior online editor, male, 6.5 years of experience)

Based on the responses, the WHO managed the Infodemic with little success. Participants felt that since WHO is not a specialist in health communications or in managing the Infodemic, it should not be blamed.

Tools and mechanisms for fact-checking

Next, while exploring the current tools and mechanisms for fact-checking, it was revealed that almost all agencies and organisations rely almost entirely on the manual tracking of information (Pavleska et al., Citation2018).

‘The process of fact-checking includes using Google to find the news, reverse image search to analyse images, keyword searches, trending events, etc.’ (Senior journalist, male, six years of experience)

A Google search can give the necessary information, and for the verification of images, the use of Google reverse image search is widespread; one uploads the image and adds a description of it. To verify videos, there is the InVID verification tool, a plugin used to check the metadata of YouTube videos, check the origin of an image, and check text and photos. If a claim mentions any location, then Google Earth is an essential tool, as is the Google Fact Check Explorer. Until the emergence of such authenticating platforms, most fact-checkers relied on basic search and reverse image search for verifying images.
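For illustration, a fact-checker could also query a claim-search service such as the Google Fact Check Tools API programmatically rather than through the web explorer. The sketch below is a rough Python example; the endpoint, parameters, and response fields follow the public API as we understand it and should be checked against current documentation, and API_KEY is a placeholder.

# Hedged sketch: look up existing fact-checks for a claim via the Google
# Fact Check Tools claim-search API (endpoint and fields to be verified
# against current documentation). API_KEY is a placeholder.
import requests

API_KEY = "YOUR_API_KEY"  # obtain from the Google Cloud console
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_fact_checks(query, language="en"):
    params = {"query": query, "languageCode": language, "key": API_KEY}
    response = requests.get(ENDPOINT, params=params, timeout=10)
    response.raise_for_status()
    for claim in response.json().get("claims", []):
        for review in claim.get("claimReview", []):
            print(claim.get("text"), "->",
                  review.get("publisher", {}).get("name"),
                  review.get("textualRating"), review.get("url"))

search_fact_checks("COVID-19 vaccine infertility")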

The role of AI in managing fake news and misinformation on social media

Fake news has already created a giant wall of distrust between media houses and consumers. Moreover, while many assume that AI can make things worse, it can also help with this ever-growing problem. While prevailing technologies like deepfakes have already created a ruckus on online platforms, it is only logical that more capable technologies and algorithms will be able to combat them (Cassauwers, Citation2019). When asked about the possibility of using AI for fact-checking and verifying information, the respondents talked about the use of AI by big technology companies and social media platforms. A few respondents with exposure to and a background in new technology tools strongly agreed that AI has massive potential to understand and make conclusive decisions in the new digital space of information consumption.

‘The volume of misinformation is so huge that AI is the only way to control it, more than humans can.’ (Assistant professor and fact-checking expert, female, six years of experience)

‘Some people in agencies use such technology-driven tools that tell the source of information, where it is coming from, form some patterns, and identify the platforms for sharing’ (Senior Account Manager at the fact-checking agency, female, three years of experience).

Unchecked information is not only hazardous to the firms or people it is being floated about but also to the consumer’s health, as we witnessed during the coronavirus pandemic. Fact-checkers sincerely believe that most technology firms and social media firms will use AI to screen fake users and content on social media in the future.

Machines and algorithms are still in their infancy; they can only analyse and give results based on the data that is fed to them. Beyond that, there needs to be some level of human intervention to understand and decide whether the content needs to be flagged (Marr, Citation2021).

When asked about AI’s role in combating the spread of fake news, especially on social media, the respondents showed mixed reactions. Fourteen respondents believed that AI could autonomously manage and screen the fake news menace, while 16 respondents felt that human intervention would be needed to manage the misinformation and fake news menace.

‘An organisation called FullFact is a full company dedicated to building algorithms for automated fact-checking.’ (Senior fact-checker and social media manager, male, 18 years of experience.)

‘Algorithms can keep a watch on fake videos and images and help prepare a database, which can be handy for punishment and controlling the spread of fake news and can help in maintaining the sanctity of information on online platforms.’ (Journalist and owner of a fact-checking agency, male, 15 years of experience)

‘News organisations need to implement AI to check the spread of fake news; journalists need to verify the news source for authenticity.’ (Senior print news consultant, male, ten years of experience)

Nearly all 30 respondents believe that AI is central to managing fake news. Managing such a large amount of information requires superior intelligence, but machines alone will not be able to rid these platforms of misinformation entirely; human intervention and a substantial amount of emotional intelligence are required to moderate this chaos. Respondents were only unsure about the future: whether fact-checking will be fully autonomous, without any human intervention, or will retain some form of human involvement.

Conclusion

In line with the arguments of Habermas et al. (Citation1974) with regard to the public sphere, in current times the role of digital media is undeniable in forming public opinion about a new phenomenon, including fake news in the digital public sphere. In line with Chen et al. (Citation2019), this study acknowledges the role of social and digital media in the modern digital sphere. Similar to Schäfer’s results from 2016, there was fragmentation in the public’s opinion in the digital public sphere during the COVID-19 epidemic and lockdown, when people were confined to their private online digital spaces. The reasons why individuals were engaging in the production, distribution, and consumption of fake news can be looked at from multiple theoretical lenses. This paper discusses these theories to understand user motivation and behaviour related to fake news consumption, production, and dissemination. From the theoretical perspective of reality monitoring (Johnson & Raye, Citation1981), the rise of fake news has caused people’s ideas to become fragmented and skewed since they are having trouble keeping up with reality. Human fact-checkers agreed that they could not monitor the spread of fake news. Similar to the findings of Lim and Perrault (Citation2023), fact-checkers in this study also acknowledged the necessity for some type of technical intervention, such as AI-based systems like fact-checking chatbots, to identify and stop the spread of false information. This study, like earlier studies like those by Hao and Basu (Citation2020), found that while state agencies, other organizations, and the WHO (Faye, Citation2020) all played significant roles in managing information by using a variety of strategies, they failed on numerous fronts due to a lack of coordination with respective government, public, and private bodies and could have done a better job by employing advanced technological interventions like AI-based fact-checking programs.

This study, like Inshakova and Pankeev (Citation2022), found that social and digital media platforms could have acted more responsibly by taking measures to screen the news and content for accuracy, especially related to scientific claims by focusing on the authority and credibility of authors and by using algorithms for detecting deep fakes (Vizoso et al., Citation2021). To combat fake online content on social media platforms, the digital public sphere requires proper governance measures.

Adding to the previous research, the additional factors that aggravate the Infodemic include mobile penetration, cheap data plans, access to social media platforms, and the fear of missing out. Several factors arise from the relevant psychological theories and associated reasons, including the urge to share information with friends, a lack of media literacy, issues related to theories like reality monitoring (Johnson & Raye, Citation1981); information manipulation theory (McCornack, Citation1992); the Semmelweis reflex (Gupta et al., Citation2020); the law of group polarisation (Sunstein, Citation1999); the bandwagon effect (Leibenstein, Citation1950); the illusory truth effect (Fazio et al., Citation2015; Polage, Citation2012); and availability cascades (Kuran & Sunstein, Citation1999). A few more factors include attentional bias (MacLeod et al., Citation1986); confirmation bias (Nickerson, Citation1998); social desirability bias (Fisher, Citation1993); the theory of selective exposure (Freedman & Sears, Citation1965); the illusion of asymmetric insight (Pronin et al., Citation2001); naive realism (Calvert, Citation2017); and the overconfidence effect (Dunning et al., Citation1990).

Results show that despite WHO’s engagement with major social media companies to combat the infodemic, it was difficult to contain the spread of fake news, and WHO’s efforts to do so fell short (BBC, Citation2021). To check the spread of misinformation, rumours, and disinformation, AI-based technologies from organisations like ClaimBuster (IDIR Lab, Citation2023) and Full Fact (Full Fact, Citation2023) could have screened pandemic-related information for authenticity and correctness. According to the findings of a study by Horák et al. (Citation2021), several news organizations use AI for fact-checking. Similarly, the Reporters’ Lab’s Tech & Check Alerts applies AI to fact-checking and helps journalists save time. Another exciting application of AI is verifying claims about scientific concepts related to biomedicine (Inshakova & Pankeev, Citation2022). Scientific fact-checking is a vital application and line of fact-checking research. Experts believe that fact-checking is complex and that, as AI models and technology develop over time, the fact-checking of scientific facts and claims for authenticity and correctness will improve. The current AI capabilities for detecting fake news and fact-checking are still at a nascent stage and will improve with time.

Contribution to theory and practice

The concept of the Infodemic is a recent phenomenon of great importance as it affects the economy, life, and society to a great extent. This paper opens up new directions for research and theory development related to fake news in the digital public sphere. This study looks at the motivational factors related to the production, dissemination, and consumption of fake news from multiple theoretical lenses in the digital sphere. These theories include reality monitoring (Johnson & Raye, Citation1981); information manipulation theory (McCornack, Citation1992); the Semmelweis reflex (Gupta et al., Citation2020); the law of group polarisation (Sunstein, Citation1999); the bandwagon effect (Leibenstein, Citation1950); the illusory truth effect (Fazio et al., Citation2015; Polage, Citation2012); and availability cascades (Kuran & Sunstein, Citation1999). In the paper, there are other theories and constructs through which human motivation and complex behaviour have been looked at; these theories include attentional bias (MacLeod et al., Citation1986); confirmation bias (Nickerson, Citation1998); social desirability bias (Fisher, Citation1993); the theory of selective exposure (Freedman & Sears, Citation1965); the illusion of asymmetric insight (Pronin et al., Citation2001); naive realism (Calvert, Citation2017); and the overconfidence effect (Dunning et al., Citation1990).

Adding to the prior research that has been done in the fields of fake news and fact-checking, this study raises new questions, such as the role of technology in detecting misinformation during a pandemic or any other crisis. Future work can develop and test theoretical and conceptual models using quantitative, qualitative, and mixed methods based on inductive or deductive reasoning, as applicable. Based on the findings, practitioners in health communications and health management can employ strategies to manage information during an Infodemic.

For practitioners, this study sensitises and argues for the need for content regulation on social media platforms in the digital public sphere. The study critiques the role of new technologies like AI in managing the flow of communications amongst the different stakeholders during the pandemic.

Future scope and limitation

This exploratory research opens a critical and novel area related to the spread of fake news, the overabundance of information, and the governments’ and private agencies’ inability to manage the flow of communications during times of pandemic. New studies can explore the applications of AI for fact-checking and managing the spread of fake news on digital platforms.

Further research can investigate fake news, fact-checking, Infodemic, and crisis communication by applying various theoretical dimensions from psychology literature to the digital public sphere. Future research must incorporate existing relevant theories as well as develop new ones to comprehend the phenomenon of fake news. Future studies can also discuss and deliberate on the impact of using AI for fact-checking and its role in managing the spread of misinformation and fake news in the digital public sphere.

The major limitation arises from expert interviews. The responses for the study have been collected from participants who are trained in fact-checking and are located in India. It is possible that if the responses were taken from the general public in India or other countries, the insights would have been richer.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Ashwani Kumar Upadhyay

Ashwani Kumar Upadhyay, PhD, Professor, Symbiosis Institute of Media & Communication, Symbiosis International (Deemed University) (SIU), Pune, India. With over 22 years of expertise in both teaching and research, Dr. Upadhyay is the co-author of “AI Revolution in HRM: The New Scorecard,” a book published by Sage Publications. Applications of artificial intelligence, virtual reality, and augmented reality in marketing, public relations, human resources, journalism and advertising are some of his areas of research interest. 

Komal Khandelwal

Komal Khandelwal, PhD, Associate Professor, Symbiosis Law School, Symbiosis International (Deemed University) (SIU), Pune, India. With seventeen years of academic and research experience, Dr. Khandelwal has a PhD in management. Her areas of research interest are artificial intelligence, training, HR technology, virtual and augmented reality, structural equation modelling, and mediation analysis. “AI Revolution in HRM: The New Scorecard,” published by Sage Publications, is co-authored by her.

Sukrit Dhingra

Sukrit Dhingra, MBA, Insights Manager, Engage Digital Partners, Gurugram, India. Mr. Dhingra is an insights manager at Engage Digital Partners. He is a driven and inquisitive learner who has worked primarily at companies like Adobe, Sprinklr, and IBM for the past seven years. His main areas of interest for research include communications and technology.

Geetanjali Panda

Geetanjali Panda, PhD Scholar, Symbiosis Institute of Media and Communication, Symbiosis International (Deemed University) (SIU), Pune, India. Ms Panda is a PhD scholar at Symbiosis International (Deemed University) in Pune, India. She earned her postgraduate degree in public relations and advertising from Xavier University in Bhubaneswar. Artificial intelligence, public relations, and communication management are among her research interests.

References

Annexure 1

Discussion guide

Introduction

Hello, sir/ma’am/name, thank you for taking the time to talk to us today. We will talk for about 30 min. I am researching the role of AI in controlling fake news. Our conversation will revolve around the infodemic (information overload in the present times), its impact on people and controlling bodies, and the importance of technology in tracking and identifying credible sources of information. The discussion is confidential and will be recorded for research. None of your answers will be made public. There are no right or wrong answers here. We will be running through 10 questions and request that you answer them honestly.

Warm-up questions

  1. Could you please tell me your place of origin and what you do?

  2. Please name a few platforms (digital or non-digital) that you access for consuming information and news.

Interview questions

  1. What are your views on the present infodemic?

  2. Why do you think there is a sudden overflow of information, especially in the digital space?

  3. Who do you think is responsible or accountable for the spread of these enormous amounts of information?

  4. Do you believe that the infodemic is further worsening the pandemic? If yes, then we would request that you elaborate on this or share a few examples with us.

  5. What are your thoughts on WHO’s efforts to monitor and control the infodemic?

  6. Currently, what are the different ways to fact-check a piece of information and control its further spread? Please name a few tools, if possible, to fact-check information and explain how they function.

  7. Is it possible to use AI to keep a check on and regulate the information flow during the pandemic? Please share your views on it.

  8. Do you believe that AI can play an important role in combating the spread of fake news, with a special focus on social media? If yes, then how; if no, then why not?

Debriefing

Thank you for sharing some really valuable insights with us. Please let me know if you have any questions for me; I will be happy to address them.