
Digital hate speech and othering: The construction of hate speech from Malaysian perspectives

Article: 2229089 | Received 07 Feb 2023, Accepted 20 Jun 2023, Published online: 25 Jul 2023

Abstract

Hate speech is a phenomenon that affects communication and undermines multicultural society by disrupting intercultural engagement. The rapid expansion of the internet has allowed media around the world to witness a troubling rise in vitriolic language. Moving beyond the Euro-American framing of the hate speech phenomenon, this study offers a cross-cultural, ethnographically informed perspective from Malaysia. The challenge of protecting against discrimination, dehumanisation, and incitement to violence while preserving individual freedom of expression therefore appears to be contextual when defining hate speech. A series of three online focus groups, with a total of 22 participants, was conducted to gather insights from civil society and experts, in order to build concrete definitions and understand the construction of hate speech from a Malaysian perspective. The study also explores the extent to which social media act as a platform that exacerbates the othering discourse in the online sphere. It contributes to academic discussions about the public sphere and the role of social media in the creation of meaning, discourses, and ensuing cultural and social change, as well as to understanding trajectories across digital spheres through the use of online focus groups.

1. Introduction

Because of the proliferation of digital technology, the significance of the media as a source of information, a medium for communication, and a companion in our everyday lives is greater and more pervasive than ever before. This highly mediatized society has given rise to a new and serious problem: the dissemination of hate speech across various online social media platforms by unscrupulous individuals (Schindler & Domahidi, Citation2021). Such beliefs and statements, which are intended to exclude people or groups, constitute one of the most common manifestations of hate speech as a social phenomenon. Hate speech communicated via the internet is therefore at the centre of many different conflicts between groups spanning the cultures of many nations (Gianan, Citation2020; Matamoros-Fernández, Citation2017; Vasilenko, Citation2021).

During the COVID-19 pandemic, there was a vast increase in the use of electronic communication platforms to convey hateful, racist, and xenophobic content (Almuzafer, Citation2021; Ittefaq et al., Citation2022; Zamri et al., Citation2021a; Zamri et al., Citation2021b). Several non-governmental organizations (NGOs) and scientific studies have shown that this phenomenon is not a fantasy and that intervention is therefore required (Hawdon et al., Citation2015). The internet has also been identified as a facilitator of the dissemination of extremist and violent ideas, as well as of radical recruitment (Halilovic-Pastuovic, Citation2017, Citation2020).

As the global influence of populism and identity politics grows, the battle for hearts and minds has spilled over onto online platforms, where various ideas and ideologues can quickly connect with their followers (Fukuyama, Citation2018). Consequently, social media has emerged as a significant driving force behind the ongoing surge of populist movements worldwide (Cervi & Marín‐Lladó, Citation2021). These platforms have become primary channels for disseminating and propagating such discourses (Matamoros‐Fernández & Farkas, Citation2021), leading to considerable effects on public opinion (Cáceres‐Zapatero et al., Citation2022) and contributing to pervasive social polarization (Makhortykh et al., Citation2021). While several studies have focused on the use of social media platforms by populist right-wing parties and leaders, such investigations have predominantly concentrated on a limited range of platforms like Twitter and YouTube (Finlayson, Citation2022; Peck, Citation2022).

The pandemic has exposed deep social and political divisions within societies, as evidenced by racially charged and discriminatory responses that disproportionately affect marginalized groups (Haynes et al., Citation2020; Zheng, Citation2020). In Malaysia, for instance, anti-Islam/Malay sentiments were the most prevalent racially charged comments on social media during the pandemic, followed by remarks targeting Chinese, Indians, and migrants (Auethavornpipat, Citation2021; Chin, Citation2020). The Malaysian Communications and Multimedia Commission (MCMC) reported receiving 21,296 reports regarding social media messages related to the “3 R” (race, religion, and royalty). Upon careful examination of these reports, MCMC discovered that 80% of them contained racial undertones, while the remaining 20% revolved around religious issues (Malaysian Communications and Multimedia Commission MCMC, Citation2019).

On the other hand, hate speech thrives on social media because of its anonymity and pervasiveness, making it seem like a losing battle to stop it (Brown, Citation2018). In a multi-cultural and multi-religious community like Malaysia, what could be regarded as hate speech in a culturally or religiously neutral environment might not be perceived as such. Therefore, the definition of hate speech may be contextual. Moreover, there are still definitional issues that must be carefully addressed, as the process of labelling cyberhate poses the challenge of protecting individuals’ freedom of expression while also shielding them from discrimination, dehumanisation, and incitement to violence (Blaya, Citation2018).

Moreover, it is challenging to define hate speech because the concept of hate speech is unstable and adaptable to the character of racist discourse (Titley, Citation2020). What constitutes hate speech is thus extremely contextualised and dependent on degrees of tolerance (Celik, Citation2019; Pálmadóttir & Kalenikova, Citation2018), and it also raises significant controversies in public debates (Cohen‐Almagor, Citation2011; Gelber, Citation2016; Kalsnes & Ihlebæk, Citation2021). At the same time, the definition of hate speech has become more complicated as a result of three different discourses that overlap with one another: legal and philosophical debates, debates on violence and extremism, and comparative debates on political conflict and violence (Brubaker & Cooper, Citation2000).

Consequently, hate speech and memes often intersect in the digital landscape, playing a significant role in shaping online discourse. Spanish memes, in particular, contribute no new ideas; as some authors point out for other cases (Mielczarek, Citation2018), they replicate and consolidate existing stereotypes, expressions, and attacks. Memes thus seem to represent an emotional change of direction in political communication, similar to the Ukrainian and Venezuelan cases (Makhortykh & González Aguilar, Citation2020).

While numerous studies have examined the negative effects of hate speech at the individual, community, and societal levels (Auethavornpipat, Citation2021; Fernandez, Citation2020; Sharma, Citation2019), little is known about the construction of hate speech from a Malaysian perspective. Just what is hate speech? How do we define specific words or phrases that can be considered hate speech? In addition, little is known about how social media can facilitate the othering discourse, particularly in the Malaysian context.

As a result, the purpose of this study was to fill a research gap by gathering perspectives from civil society and from experts across multiple disciplines in order to develop clear definitions and comprehend the construction of hate speech from a Malaysian viewpoint. This study contributes to academic discussions about the public sphere, the role of social media in the creation of meaning, discourses, and ensuing cultural and social change, and the understanding of trajectories across digital spheres, in addition to providing valuable insights for anti-hate speech prevention programmes.

1.1. Cyber troopers

According to Prakash (Citation2021), cyber troopers can be defined as individuals who receive payment to spread political propaganda online, especially on social media sites. They may be bloggers, tweeters, netizens, or anyone who comments on and responds to online postings (Hopkins, Citation2014). They manipulate exaggerated and distorted information under the guise of propaganda (Leong, Citation2019). The activities of cyber troopers in Malaysia are focused on creating fake bot accounts, which are accounts automatically developed to imitate human behaviour online. These accounts include those used to spread pro-government or pro-party propaganda, attack the opposition in campaigns, and discourage participation through personal attacks or harassment on Facebook, WhatsApp, YouTube, and Twitter. While cyber troopers’ salaries can reach RM10,000 depending on experience and skills (Zain & Ramlan, Citation2019), some receive up to RM3 million per project during elections (Leong, Citation2019).

Incidents involving cyber troopers have also occurred in Indonesia as early as 2012, especially when social media was used in the election of the governor of Jakarta (Alizen & Fajar, Citation2022). The former mayor Joko Widodo, who has remained a relative outsider in the political system, defeated his opponent Fauzi Bowo. He was able to strengthen his position in part because a digitally savvy team of volunteers ran a vigorous social media campaign (Hamid, Citation2014). The same group of volunteers was also able to unite support for Jokowi through social media in the 2014 presidential election, in which he contested against his opponent Prabowo Subianto. Both campaigns used influencers and unregistered buzzer squads (Alizen & Fajar, Citation2022). As a result, organised cyber troopers have become an important part of subsequent election campaigns in Indonesia.

Meanwhile, cyber troopers emerged in Malaysia in 2008, when the ruling party suffered a defeat for the first time in the 12th Malaysian General Election (GE12) by failing to achieve a two-thirds majority (Tiung et al., Citation2018). A “political tsunami” has been used to describe the incumbent’s spectacular losses and the opposition’s gains. Likewise, in the 14th Malaysian General Election (GE14) on 9 May 2018, the Pakatan Harapan (PH) coalition ousted the former ruling party Barisan Nasional (BN) after more than six decades of authoritarian rule (Nadzri, Citation2019). In that election, cyber troopers were believed to have played an important role in determining the results of GE14, particularly in shaping current issues on social media (Saidin & Othman, Citation2021).

In January 2017, former Deputy Prime Minister Dato’ Seri Dr. Ahmad Zahid Hamidi stated that 93.4% of cyber warriors in the country supported the opposition (PH), while only 6.6% supported the ruling party (BN) (Zain & Ramlan, Citation2019). This is cited as one reason why BN, led by the former Prime Minister Dato’ Seri Mohammad Najib Tun Abdul Razak, suffered a severe defeat. According to Leong (Citation2019), the sophistication and proliferation of such fake news were further exacerbated in the cyberwar between BN and the opposition. The widespread use of WhatsApp as a form of public communication, and the ease with which users share news and information they receive, also contributed to the spread of fake news in cyberspace during GE14. For this reason, UMNO and BN admitted that cyberwar activities played an important role (Zain & Ramlan, Citation2019).

Political parties in Malaysia therefore appoint cyber troopers, selected for their experience and skills, to attack other parties and individual politicians. The term “cyber trooper” itself carries a negative meaning, and if the practice is not curbed, it is feared that it will affect racial harmony among the people of Malaysia. Cyber troopers are responsible for creating false rumours and hate campaigns against certain individuals. They are also accountable for raising sensational issues to fuel hatred among political leaders and the public. Furthermore, cyber troopers are believed to intentionally impersonate members of target groups, posting comments to bait and stir up hatred. By playing on popular sentiment, their comments succeed in turning segments of public opinion against one another (Rahman, Citation2020).

1.2. Racial hate speech

Racism is a race-based process of systems, policies, actions, and attitudes that leads to inequalities of opportunity and outcome for people, and it is more than just prejudice in thought or action (Australian Human Rights Commission, Citationn.d.). The intention behind such an attitude is to oppress, discriminate against, or limit the rights of others. The Cambridge Dictionary (Citation2022) defines racism as “harmful or unfair things that people say, do, or think based on the belief that their own race makes them more intelligent, good, moral, etc. than people of other races.”

Over time, the concept of racism and its phenomena have become more complex. Racial discrimination is defined in Article 1 (Part 1) of the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) of 1969 as any restriction, distinction, preference, or exclusion based on colour, race, ethnicity, descent, or nationality that has the effect or purpose of impairing or nullifying the recognition, enjoyment, or exercise of human rights and of political, economic, social, cultural, or other freedoms in any aspect of life.

The European Parliament (Citation2020) defines hate speech as the expression and manifestation of racism used specifically to incite hatred. The Council of Europe (Citation2023) describes hate speech as any form of expression that may incite, encourage, instigate, or justify discrimination, violence, or hatred towards an individual or a group of people on any of a range of grounds. Larger-scale conflict and violent acts may occur if the situation is left unattended.

In 2021, a total of 53 racism and/or racial discrimination incidents were documented in Malaysia, based on media reports and exploratory observation by Pusat KOMAS (Citation2022). Pusat KOMAS categorises racial discrimination trends in Malaysia into seven categories, namely racial and religious politics; religious provocation; racial discrimination in the education sector; racism online and in the media; racial discrimination in the business sector; racism and racial discrimination in other sectors; and xenophobia.

Racial politics has been identified as a concerning issue in Malaysia according to the Malaysia Racial Discrimination Report (2021). Aminnuddin and Wakefield (Citation2020) found that Malays exhibited a higher level of discrimination, in that they did not want to live in the same neighbourhood as individuals of other races and religions. Malaysia’s primary lines of polarisation are ethnicity, the Islamist-secularist divide, and political reform (Welsh, Citation2020). The drivers of such polarisation lie in differences within Malaysian society in income class, economic inequality, religious beliefs, media fragmentation, and political parties. According to Kok Seong (Citation2019), the rise in racial tensions is partly due to the rise of social media and the anonymity it allows, and it has worsened in recent times because many people feel free to say whatever they want on social media. Social media has its good points, but it also gives a voice to many who can abuse it.

The concern with hate speech lies in its effect of inspiring an audience to harm, or not to harm, the intended victim(s) once such speech has been aimed at them (Ghanea, Citation2012). Ghanea (Citation2012) also outlines five kinds of speech, ranging from “lower grade” discriminatory expression to incitement to genocide or terrorism: discriminatory language, hate speech, incitement to hatred, incitement to terrorism, and incitement to genocide.

MCMC reported in 2022 that it had received over 1,700 complaints related to hate speech regarding race, religion, and royalty (3 R). Such trends in hate speech are extremely alarming for Malaysians, as they may lead to extremism and spark radicalisation grounded in violence (Ahmad Tajuddin, Citation2020). The sharp rise in the statistics is assumed to result from the introduction of a quick complaint procedure via MCMC’s WhatsApp channel, as well as from the stay-at-home requirements for Malaysian citizens during the COVID-19 pandemic.

As the world progresses, the emergence of digital technologies has altered both our ways of living and the nature of racism. The continuation of social transformation in the digital landscape depends on the interrelationship between humans and technology (Noble, Citation2018). The rise of new communication technologies has amplified the scale and impact of hatred, making hate speech one of the most common methods of threatening global peace rhetorically and ideologically (United Nations, 2022).

While Article 10 (1) of the Federal Constitution of Malaysia guarantees its people freedom of speech and expression, that freedom is still subject to certain limitations (Shukor et al., Citation2015). With the internet serving as a space for the Malaysian global village to connect, content can easily be manipulated and speculated upon by irresponsible users. Propagating discriminatory ideology in speech can also influence others to adopt beliefs in racial superiority (Murni & Ratnawati, Citation2015).

Daniels (Citation2013) argues that social media platforms, like social networking sites (SNS), are the space “where race and racism play out in interesting, sometimes disturbing ways”. As social media is used to influence the socio-political landscapes globally, the practice of old and new racism accelerated within these platforms. In addition, racial hate speech seems to be thriving on social media especially through means like memes (Lamerichs et al., Citation2018), toxic subculture cyberspace on Reddit (Chandrasekharan et al., Citation2017; Massanari, Citation2015), the rise of reaction network of racist influencers on YouTube (Johns, Citation2017; Murthy & Sharma, Citation2019), the incitement of racial hate speech using fake profiles (Farkas et al., Citation2018), as well as pervasive coordinated harassment via Twitter (Shepherd & Paluck, Citation2015).

1.3. Self and other in the cybersphere

Racist discourses and practices on social media are important but complex research topics. This present study offered a critical mapping of Stuart Hall’s (Citation1997) “otherness,” in which he makes the case that representing “difference”—not just in terms of race and ethnicity, but also in terms of gender, sexuality, and class—has emerged as a potent theme in contemporary culture.

In his writings, Hall asks where the fascination with “otherness” and with stereotypical images comes from. He acknowledges the problem of the politics of representation and emphasises the complexity of representing difference, because it elicits strong emotions such as fear, anxiety, and excitement, which typically lead to stereotyping, a binary form of representation.

Conceptualizations of the “self” and the “other”, meanwhile, have featured in intellectual inquiry dating back to Plato (Meddaugh & Kay, Citation2009). The “doubleness” of discourse, according to Hall, inextricably links the self’s identity with that of the other: “It is always told from the perspective of the other as a process, as a narrative, as a discourse” (2000, p. 147).

Contemporary postmodernism and cultural studies scholars emphasise the critical dimension of the “other” in examining oppressive hierarchies resulting from the struggle for political and social power (Bhabha, Citation1995, Citation1996, Citation1998; Said, Citation1994). When actual or perceived resources, including power, are threatened, naming cultural differences becomes especially important (Bhabha, Citation1996, p. 16). In a multicultural era, historical physical racial boundaries, such as segregation and miscegenation statutes, have given way to linguistic spaces in which “we” and the “other” coexist. Scholars are beginning to investigate the abundance of racist rhetoric on the internet (Azman & Zamri, Citation2022; Bliuc et al., Citation2018; Hughey & Daniels, Citation2013; Siapera et al., Citation2018; Zamri et al., Citation2021b, Citation2022).

In an ironic twist, dominant discourses generate and spread power (and power relations) among people by influencing their attitudes and behaviours. Power is always encoded in media representations, which frequently produces particular power dynamics; power relations are thus divided and mediated through the production of knowledge, values, and beliefs. Foucault’s work is heavily emphasised here, and the framework developed for this study links Stuart Hall’s “Representation” and Edward Said’s “Otherness” because both theories involve the production of power (Zamri et al., Citation2017).

Malaysia is a multiracial country, which makes the recent and expanding research on non-white and Asian users of social media particularly relevant (Cisneros & Nakayama, Citation2015). Critical indigenous studies are rarely employed as lenses through which to examine racism and hate speech on social media. Taking this into consideration, and echoing Daniels (Citation2013) and Kopytowska and Baider (Citation2017), as well as the call for additional research by Kyslova et al. (Citation2020), the present study investigated the extent to which social media serve as a platform that exacerbates the othering discourse in the online sphere.

2. Methodology

2.1. Focus groups

The data were gathered through focus groups as a stand-alone method. Focus group discussions encourage interaction among participants, allowing them to share ideas and experiences and revealing attitudes and behaviours in a natural way (Barbour, Citation2018; Krueger, Citation2014). The focus group discussions were conducted online because the participants came from diverse geographical origins, were difficult to reach, and lived throughout Sabah, Sarawak, and peninsular Malaysia. Online focus groups are a popular means of conducting research in most disciplines (De Groot et al., Citation2018; Halliday et al., Citation2021). Choosing online focus group discussion, particularly a synchronous approach, allowed real-time feedback and active discussion among participants. Active engagement was achieved by promoting inclusion and allowing active members to clarify their views in order to avoid misinterpretation (Teti et al., Citation2020).

2.2. Participants and moderator

Recommendations for the number of focus group participants vary: commonly 10–12 people (Baumgartner, Citation2002), six to ten (Powell & Single, Citation1996), or six to eight (Krueger & Casey, Citation2000). The study therefore conducted a series of three online focus groups, each with six to eight participants. The participants consisted of multidisciplinary experts (including academics, lawyers, non-governmental organisation (NGO) representatives, and policy makers) and members of civil society who had experienced hate speech, with mixed ethnic and geographic origins, academic qualifications, ages, and socio-economic statuses.

To meet transferability and applicability, a synchronous approach was applied whereby all participants were included and their perspectives were acknowledged and understood as expressions of their own context, and several series of focus group discussions were conducted (Lobe, Citation2017, 2022).

According to Krueger (Citation1994), it is ideal for a focus group to have a moderator team. Each series was therefore moderated by an experienced moderator and co-moderator, with a third researcher taking notes to inform potential emergent questions. The moderators chosen for this focus group study possess the qualifications, expertise, and extensive experience that make them highly suitable for leading the discussions. They have also demonstrated their expertise through previous research projects and publications on hate speech, racism, and xenophobia.

Furthermore, these moderators have extensive experience in conducting focus groups and facilitating discussions on sensitive and complex topics. They have successfully led numerous focus groups on similar subject matters, showcasing their ability to create a safe and inclusive environment where participants feel comfortable sharing their perspectives. Their experience in managing group dynamics and fostering meaningful dialogue ensures that all participants have an opportunity to contribute and that diverse viewpoints are respected and valued.

The moderators’ interpersonal skills further enhance their effectiveness in this role. With their active listening abilities, empathy, and cultural sensitivity, they can establish rapport with participants and create an atmosphere of trust and openness. Their adeptness in navigating sensitive conversations and handling diverse viewpoints ensures that the discussions remain constructive and productive (Krueger, Citation2014).

2.3. Sampling and ethics

Purposive sampling was used to select the participants. A purposive sample includes participants with specific characteristics or features (Kelly, Citation2010). Participants were selected on the basis of civil society members’ experiences and of academics’ and interdisciplinary experts’ research and writing on issues related to this study. The sample is not exhaustive, nor is it considered representative of the range of civil society groups active in this area.

To recruit the sample, email invitations were sent to potential participants. All participants were required to submit written consent via email before participating. To address ethical risks in the virtual environment, participants were required to print out the consent form, sign it, and return it via email.

Follow-ups and announcements regarding the actual date of the online focus group discussions were posted via WhatsApp. Prior to this study, no relationship existed between the researchers and the participants, and participants neither received prior information about the researchers nor developed any further relationship with them.

Typically, the identity of each participant remains anonymous online; thus, there is a greater likelihood of disclosure of personal or sensitive information than in a face-to-face focus group (Walston & Lissitz, Citation2000). During the discussion, participants used their first names or nicknames only. If a participant wished to present a view that opposed the tenor of the group discussion, or to reveal something personal, they could send a private message to the moderator, in which case only the moderator would view and respond to the message.

The transcribed and raw data from the focus group discussions were captured and stored in a drive accessible only to the principal investigator and research team members. Full ethical approval for this study was granted by the UiTM Research Ethics Committee, which operates in accordance with the ICH Good Clinical Practice Guidelines, the Malaysian Good Clinical Practice Guidelines, and the Declaration of Helsinki.

2.4. Data collection

The focus group discussions were conducted from May to September 2022 using Zoom, an online conference platform. All participants were given a unique password to join the Zoom meeting. Three different online focus groups were conducted, with a total of 22 participants. The discussions followed a synchronous approach, in which all participants could actively engage with each other. During the discussions, it was interesting to note how age and gender did not seem to affect the pace of communication, as online focus group discussions have been found to produce more ideas and better-quality participation than face-to-face sessions (Murgado-Armenteros et al., Citation2012; Reid & Reid, Citation2005).

All online focus group discussions were moderated by experts, and questions were based on the interview guide shown in Table 1. The structure of the focus group discussion guide was based on related academic literature and published reports in the field. Two observers (from the research team) were present to take notes and to ensure that all items in the guide were addressed.

Table 1. Interview guide for online focus groups

Each focus group discussion lasted approximately 1 to 1.5 hours. The interview data were recorded and transcribed. Data saturation was not discussed, and the transcripts were not returned to the participants for comments or corrections.

Before the actual focus groups, a pilot focus group was carried out involving all lecturers in the Faculty of Mass Communication and Media Studies in Melaka, in order to obtain input on the methodology and to tease out ethical issues around discussing sensitive topics. Reliability and validity were addressed through the consistency between the accounts given by the research participants and scholarly sources on hate speech identified through online searches.

3. Data analysis

To date, no framework has been provided that delineates the types of qualitative analysis techniques that focus group researchers have at their disposal (Onwuegbuzie et al., Citation2009); hence, a thematic approach was employed to analyse the data. According to Kitzinger and Barbour (Citation1999), a significant majority of focus group research (e.g., Kumari et al., Citation2021; Scaffidi et al., Citation2022; Van Gaalen et al., Citation2021) utilises thematic analysis, as it is considered suitable for this method.

Emergent themes from earlier focus groups were used to enhance or add to the questions and prompts in later groups, in an iterative and abductive analytical process. In general, thematic analysis (TA) is neither constrained nor confined by any pre-existing theoretical framework, because it allows a degree of epistemological and theoretical flexibility. As a result, it allows pertinent hypotheses to be evaluated in the analysis (Braun & Clarke, Citation2006, Citation2022).

Thematic analysis is a technique for identifying, analysing, and reporting patterns in data that allude to “themes”; it also organises and describes data sets in detail. This study used thematic analysis following the six phases suggested by Braun and Clarke (Citation2006): (1) familiarising oneself with the data, (2) generating initial codes, (3) searching for themes, (4) reviewing themes, (5) defining and naming themes, and (6) producing the report.

This adaptability enables the identification of themes at both the semantic (i.e. “micro”) and latent (i.e. “macro”) levels of analysis (Allen et al., Citation2009; Braun & Clarke, Citation2006). Both levels of analysis are utilised due to the broad inductive scope of the research question. To protect confidentiality, all participant information was anonymized before being transcribed for coding. All researchers discussed and devised themes. Data collection and analysis were carried out until theoretical saturation was reached. The authors are unable to make the data set public due to ethical concerns about participant confidentiality. The participants were explicitly assured during the consent process that the data would only be seen by members of the study team.
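Purely as an illustrative sketch, and not a description of the study’s actual workflow, the anonymisation step and the grouping of initial codes into candidate themes described above can be pictured in a few lines of Python; all participant names, excerpts, codes, and theme labels below are hypothetical.

```python
# Illustrative sketch only: toy pseudonymisation and code-to-theme grouping,
# loosely mirroring phases 2-4 of Braun and Clarke's (2006) thematic analysis.
# All names, excerpts, codes, and themes are hypothetical.
import re
from collections import defaultdict

# Hypothetical mapping of participant names to pseudonyms (anonymisation before coding)
PSEUDONYMS = {"Aina": "P01", "Wei Ming": "P02", "Kumar": "P03"}

def anonymise(excerpt: str) -> str:
    """Replace participant names in a transcript excerpt with pseudonyms."""
    for name, alias in PSEUDONYMS.items():
        excerpt = re.sub(rf"\b{re.escape(name)}\b", alias, excerpt)
    return excerpt

# Hypothetical excerpts with the initial codes assigned to them (phase 2)
coded_excerpts = [
    ("Aina described slurs on Twitter targeting her ethnicity", "racial slur"),
    ("Wei Ming recalled comments mixing religion and politics", "religious provocation"),
    ("Kumar noted an 'us versus them' framing of migrants", "othering"),
]

# Hypothetical grouping of initial codes into candidate themes (phases 3-4)
CODE_TO_THEME = {
    "racial slur": "Definition of hate speech",
    "religious provocation": "Definition of hate speech",
    "othering": "Othering in the online sphere",
}

themes = defaultdict(list)
for excerpt, code in coded_excerpts:
    themes[CODE_TO_THEME[code]].append(anonymise(excerpt))

for theme, excerpts in sorted(themes.items()):
    print(f"{theme}: {excerpts}")
```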

4. Findings and discussion

Although there is a large body of literature on hate speech, it focuses almost universally on very large nations. Since Malaysia is a multicultural society, defining hate speech is crucial because it involves different languages and dialects. Who decides what constitutes hate speech? Who is qualified to distinguish right from wrong? The definitional lines blur across the literature in this field; indeed, the concept of hate speech is a social construct. The purpose of this study was therefore to work towards an empirical definition by conducting focus groups with three groups. Group 1 consisted of six participants, chosen by purposive sampling, who primarily represented academics, attorneys, and practitioners. Group 2 consisted of seven participants, mainly social media influencers, victims of hate speech, media practitioners, and representatives of non-governmental bodies. Group 3 consisted of nine participants, hate speech victims from various geographical backgrounds, age groups, and ethnic groups.

The participants discussed and commented on the concept of hate speech based on their personal experiences; they varied in gender, age, and educational background. Throughout the study, research ethics centred on respect for the research participants. The online focus group discussions took the form of structured informal discussions, as described below. Three focus group sessions, including a pilot focus group, were held by the research team in regular discussion sessions of 1.0–1.5 hours from May to September 2022.

During the discussion, participants were asked to describe and provide examples of racially charged concepts/ideas/images they had encountered online. Which groups/communities are most frequently targeted, and what types of hate speech are most common? What issues tend to elicit racist responses, and how did they fare when reporting racist hate speech to regulatory bodies? Participants eventually expanded on their personal experiences with online harassment and shared some names or groups on Twitter that they had come across or blocked due to abusive or racist posts. Several themes emerged from the discussions.

4.1. Definition of hate speech

The findings of this study’s focus group discussions show that four modes emerged with intriguing similarities to the “four bases of definition” of hate speech discussed in the “What is Hate Speech?” section of Anderson and Barnes (Citation2022): (1) harm, (2) content, (3) intrinsic properties, and (4) dignity. We find that these bases, however, share the same coherence issues as numerous other definitions. The most significant result of the focus group interviews and the subsequent non-systematic assessment of the hate speech literature is a list of topics for hate speech definitions from Malaysian viewpoints (see Figure 1).

Figure 1. Comparisons from a review of the literature.

The debate over hate speech was mirrored in the effort to define “free speech,” given the inextricable link between the two concepts: the unconditional defence of one can lead to the manifestation of the other. Freedom of speech was defined as the right to express ideas, thoughts, emotions, desires, preferences, opinions, and beliefs freely, without limitation or fear of criticism. In the focus group discussions, most participants indicated that hate speech is essentially negative behaviour and that it reflects a terminally ill society in which unusual behaviour has become the norm and is accepted.

For me hate speech, it’s so negative. Very simple. There are many words such as body shaming, bashing, we can summarize them, group them in hate speech, but if this is hate speech, we refer to criticism. (FGD 1)

Other participants viewed hate speech as essentially sedition, a Western concept transplanted into the Malaysian context:

Hate speech is sedition. So, my impression on the word hate speech is a term or a phenomenon that occurs in the West and that thing, the concept was brought here and we try to match what is happening in our place, so the term my observation is the use of the term in context of Malaysia. (FGD1)

Even so, it was evident that many participants did not fully understand the concept of hate speech. In fact, it was often conflated with other troubling behaviours in both the online and offline worlds.

My opinion on this hate speech. He will change over time; For me, this hate speech can be considered as hate speech if the intention and the words used are to humiliate something targeted by his audience. (FGD 1)

[…]like me, like I understand when we use slurs, it can be said as hate speech. But if he is like just attacking words, what he just commented like that alone is not considered enough to be said as hate speech. (FGD 2)

Moreover, Table 2 shows examples of terms considered provocative messages intended to demean or accuse another person. The term “Jakun,” for instance, is used colloquially to mean “attention-seeking/unruly” and appears to be a common phrase in everyday conversation; however, it is considered an insult because “Jakun” refers to an ethnic group of indigenous people (Orang Asli) in Malaysia. Other terms carrying racist sentiment, such as “Cina,” “Meleis,” and “Keling,” exemplify the racist gaze to which groups are subjected in the name of a multi-ethnic country.

Table 2. Terms of hate speech from the focus group discussions

With regard to the above, the four modes from Anderson and Barnes (Citation2022) and past studies by Cohen‐Almagor (Citation2011) and Brown (Citation2017) serve the objective of defining hate speech, while the results in Table 2 add a consensus perspective from the Malaysian lens. Additionally, they provide a simple and repeatable operationalisation based on a thorough conceptualisation of the forms of hate speech encountered. These descriptions and definitions are useful for characterising hate speech, but they miss the point by concentrating on what hate speech is rather than what it does.

This connects to Weinstein’s (Citation2017) argument about the lack of a clear conception of hate speech, whereby the definition is constructed around virulent dislike of a person for any reason. Table 2 shows examples of words that participants classified as hate speech. These words, widely used on social media, were usually collections of adjectives indicating traits attached to the other party. In particular, they resemble those identified by Salminen et al. (Citation2018), who found that the dominant form of provocative message in hate speech usually contains adjectives and pronouns directed at opponents.

4.2. Othering in online sphere

This prevalent theme characterises online media as a platform that exacerbates online othering discourse. It conjures up an “us” versus “them,” or “othering,” sentiment comparable to earlier work on social identity (e.g. Blackwood et al., Citation2012). The epistemic validity of the participants’ definitions is strongly supported by a substantial corpus of literature on race and racism. Racist discourses, according to scholars of race and ethnicity (Michael, Citation2017; Van Dijk, Citation1993), tend to mobilise notions of culture, ethnicity, and religion. This populist rhetoric of othering in the online sphere appears to facilitate even more aggressive and hateful forms of speech.

[…]He really dislikes some of those people, and when they release content or do something that he believes is not appropriate from his point of view, he will vent his rage.(FGD2)

I feel like it can be very discriminating for example when it comes to now there are a lot of girls on TikTok, especially about appearance, private parts and everything. (FGD1)

Populist rhetoric is also prevalent among celebrities and influencers. Participants said that social media users, especially trolls, are getting better at using slang and hashtags to avoid being accused of racism and to get around community rules about hate speech. For example, public figures such as Dato’ Seri Alif Syukri and Baby Shima appear to use these platforms to gain attention without regard for netizens’ remarks. Meanwhile, many participants and social media users who were disgusted by such acts and behaviours tended to express their anger and microaggressions through slurs and trolling. This finding is consistent with Rieger et al. (Citation2021), who showed that trolling can be mixed with humour and spreads easily.

I have a little addition, recently on TikTok there was a viral story about Dato’ Aliff Syukri with Baby Shima dancing, as if pole dancing. Er. But we can see right there that he was severely criticized for what was shown. But a friend of Dato’ Aliff Syukri who is Sajad came and he thought he knew what his friend was doing. And as if that thing is not a mistake at all. As Sajad said, he said eh I know what kind of friend he is, he doesn’t disturb, he doesn’t mess up, does bad work and so on. So I think, a word that is felt, which is liberalism which is nothing, becomes an extremist, where they are closely related to our beliefs and beliefs. It means they say yes, there are a few who say yes to things that can’t be done. So it is closely related to our beliefs as Muslims and this involves words that are issued in extreme ways such as challenging our beliefs and beliefs. (FGD2)

[…]Even recently Meta, Facebook has released, there are more than 500 troll accounts (incite hate speech) that people can identify and even people think that this matter is closely related to the PDRM itself. It’s not that this trolling comes from the existing community, but it also comes from institutions that we don’t know, institutions that we don’t even know about, because we don’t have the data right, we don’t have freedom of information in Malaysia, so when this thing comes out of Meta Facebook, How, how do we say about this? (FGD 3)

Despite this, a few participants discussed how hate speech almost always directs its attention toward people of colour, for the most part targeting people based on their ethnicity, skin colour, and place of origin. This became significantly worse during the COVID-19 pandemic. Throughout the pandemic, a number of populist narratives emerged, demonstrating that the pandemic not only encourages hate speech but also makes it more difficult for migrants and refugees to find safety.

For example, narratives about the Tabligh Cluster and the Rohingya spread impressions that targeted others as inferior, as less than human, and as a threat to society. The vein of hate speech projected onto the Tabligh Cluster and the Rohingya showed that immoralising frames are common within society. This is what framing is: understanding and shared meaning constructed within the wider social context of the group.

Although Malaysia is more commonly described as a multi-ethnic society than as a society of multiple civilisations, a culture of conflict and confrontation persists among the various ethnic groups. Where mechanisms for resolving conflict and fostering tolerance are absent, and where there is a history of social tension and impunity for historical injustices, hate speech can exacerbate discriminatory attitudes and polarisation in societies and lead to an escalation in violence.

During Covid, many people have free time to comment, especially on viral issues such as tabligh and Rohingya. Many netizen comment and wanted Rohingya to get out from Malaysia (FGD 3)

Kluster Tabligh for example, become viral. People blamed these people caused the spread of COVID. How do you know? You cannot simply point fingers to them. Must check first! (FGD 3)

These discussions highlight the diversity and ever-changing nature of the categories of hate speech, which vary by country. Simply put, the definition of “hate speech” is becoming more contested; some argue that discriminatory hatred alone is not enough and that more must be demonstrated. Opinions on what constitutes “hate speech” and when it can be prohibited differ greatly, as evidenced by the findings of this study. The results of the present study echo Auxier and Vitak (Citation2019), who found that comments within online spaces can be homogeneous and act as echo chambers, or heterogeneous and antagonistic.

4.3. Power and knowledge in hate speech

It is not a new concept to think that hate speech plays a role in how society constructs power. Academics such as Mari Matsuda have argued for years that hate speech is a venue for the (often predictable) perpetuation of power relations (Matsuda, Citation1993). Hate speech maintains and re-establishes the target population’s inferior status. In addition, hate speech occasionally encourages a negative view of others or a comparison between oneself and the other (Zamri et al., Citation2020).

[…] I feel like it can be very discriminating for example when it comes to now a lot of women on TikTok, especially about appearance, about private parts and everything, they would only, they would use religion to judge and to, what kind of display of hatred towards the individual involved, while it all depends on the individual himself. (FGD 1)

[…] But if it’s a woman, it can be a worse hate speech than a man. I don’t know if this is hate speech or something like that. For me, whatever sexist or racial discrimination can be bad and create hate speech. (targeted group: gender) For me, right. And I think when it comes to being an activist, it’s much worse for women to not only fight against the facts but also to start doing it … (FGD 2)

These examples show not only what hate speech is, but also what it accomplishes. These and thousands of other examples demonstrate how hate speech works to maintain oppressive systems.

While historically hate speech has been associated with a derogatory and violent speech directed at racial, sexual, linguistic, and religious minorities, this association has loosened, and hate speech as a concept has effectively been “emptied” of its past and its association with racism. It may even have become, in the words of Laclau (2005), a floating signifier that avoids strict definition and instead functions as a line where the “border work” is done. As a result, it also develops into a rhetorical device and a tool (particularly in political and politicised contexts), which further obscures the subject at hand and blurs the definition of what constitutes racism. When these issues are moved under the umbrella of hate speech, they can be weighed against the idea of free speech and it can be questioned what kinds of restrictions we should impose on spoken or written words.

Most politicians are immune; it always happens in parliament. but when they make hate speech, no action is taken … (FGD2)

[…] in parliament there are members who curse someone’s beliefs, there are members who curse someone’s physical appearance. (FGD1)

Hate speech is more than just a phrase used to disparage individuals based on fixed identity traits. It is a strategy used by those in positions of power to uphold their control over society, politics, or the economy. Because it places its victims in a position of inferiority, equality is all but impossible.

5. Conclusion

Online hate speech has been shown to be more pernicious than traditional white supremacist discourses. The proliferation of hate speech online has been made possible by the disintegration of discourse genres and discursive integration occurring in the multimedia environment. Besides, hate has been a significant defense against dangers that disrupt the system of identity and cross its boundaries, helping to keep the community alive.

The study also revealed that Malaysians understand and define hate speech in ways similar to other parts of the world (Anderson & Barnes, Citation2022). However, certain elements and operationalisations of these understandings lean towards racially loaded toxic content. All such content was considered discourse that could cause harm by racialising and othering groups of people. It also results in the other being alienated, humanity being destroyed, and society becoming demoralised (Nussbaum, Citation2004).

The long roots of Malaysian history should not be forgotten: the country became multicultural in a process shaped by the British and Japanese occupations, the impact of which can still be seen today. The bold, ghetto-like forms of expression in the online sphere reveal the othering and homogeneous thinking of Malaysians. The present study thus confirms earlier research by Lumsden and Harmer (Citation2019) and Zamri et al. (Citation2021a), which found that the discourse of othering is a continuation of traditional patriarchal power relations.

As can be seen, significant trends in hate speech in both European (Bayer & Petra, Citation2020; Bleich, Citation2017; Howard, Citation2017) and Asian contexts (Husni, Citation2019; Kang et al., Citation2020; Kim-Wachutka, Citation2020; Morada, Citation2021; Wan Mohd Nor & Gale, Citation2021) show similarities concerning race, religion, gender, ethnicity, and sometimes sexuality. This demonstrates that scholars share a common understanding of the negative effects of hate speech, which has sparked widespread concern. It is undeniable that the online sphere has become a breeding ground, providing fertile soil for those with marginalising views to instigate hatred between different groups.

Undoubtedly, the anonymity of Internet contributors—whether they are hiding behind screen names or are simply non-existent—offers authorship options that are absent from the traditional discourse. Internet users as a result actively participate in the creation of meaning, whether consciously or unconsciously, and can be thought of as co-authors. Hence, this small movement looking into different angles of defining hate speech might be one of the restorations of the human dimension against polarisation and an antidote to hate speech.

In fact, the fine line between hate speech and free speech needs to be resolved because, on all accounts, the line between the just punishment of genuinely hateful expressions and the application of censorship is very subtle. Hence, this study also shares the concern that the granularity of identifying hate speech poses further complications in the discourse (Brubaker & Cooper, Citation2000). In addition, future research may need to broaden the sample of participants to include people of various ages and backgrounds, and it may also need to examine how different reporting justifications are disseminated among different demographic groups.

Acknowledgments

We would like to thank all individuals who helped us with this project; without their support and guidance it would not have been possible. Special thanks to our Research Assistant Ms. Noramira Fatehah Azman for providing a lot of resources needed in completing our project. This research is funded under Fundamental Research Grant Scheme FRGS/1/2021/SS0/UITM/02/15 and Research Ethics Approval from UiTM REC (REC/12/2022 (MR/1038)).

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The work was supported by the Fundamental Research Grant Scheme FRGS/1/2021/SS0/UITM/02/15.

Notes on contributors

Norena Abdul Karim Zamri

Dr. Norena Abdul Karim Zamri is a research fellow at the Institute of Malay and Civilizations (ATMA), Universiti Kebangsaan Malaysia (UKM). Before joining Universiti Kebangsaan Malaysia, she served as a senior lecturer at the Faculty of Communications and Media Studies, Universiti Teknologi MARA (UiTM) Melaka, of which she has been an alumna since her Diploma. Before joining UiTM, she worked as a lecturer and Programme Coordinator at Quest International University, Perak. She holds a Ph.D. in Social Science and Humanities from Universiti Teknologi PETRONAS (UTP), Perak. She was appointed as a graduate research assistant during her studies and as a research officer before enrolling as a postgraduate student at UTP. As an active researcher, Norena has published in several international refereed journals and presented at several conferences locally and abroad. Her research interests are new media and history, Malay studies, cultural studies and media, mass media, and communication.

NurNasliza Arina Mohamad Nasir

Dr Nur Nasliza Arina Mohamad Nasir is a senior lecturer at Faculty of Communication and Media Studies, Universiti Teknologi MARA (UiTM) and is currently researching on Public Relations, Islamic Communication, Islamic Work Ethics and Communication and Media Studies.

Mohammad Nurhafiz Hassim

Dr. Mohammad Nurhafiz Hassim is a senior lecturer at Faculty of Communication and Media Studies, Universiti Teknologi MARA (UiTM) and is currently researching on New Media and Communication and Media Studies.

Syaza Marina Ramli

Syaza Marina Ramli is a lecturer at the Faculty of Communication and Media Studies, Universiti Teknologi MARA (UiTM), while currently serving as a Coordinator for Information and Publication Unit at UiTM Communication Department. Her research focuses mainly on Communication Management, Corporate Communication, Liberal Communication, and Media Studies.

References

  • Ahmad Tajuddin, M. (2020). Online Hate Speech in Malaysia. Retrieved from Malaysian Institute of defence and security: https://midas.mod.gov.my/gallery/publication/midas-commentaries/213-haze-managing-another-disaster-during-covid-19-by-lt-kol-dr-maimunah-omar-2
  • Alizen, A. N., & Fajar, M. S. (2022, September 17). Election campaigns and cyber troops. Retrieved from inside Indonesia: https://www.insideindonesia.org/election-campaigns-and-cyber-troops
  • Allen, M., Bromley, A., Kuyken, W., & Sonnenberg, S. (2009). Participants’ experiences of mindfulness-based cognitive therapy: “It changed me in just about every way possible”. Behavioural and Cognitive Psychotherapy, 37(4), 413–20. https://doi.org/10.1017/S135246580999004X
  • Almuzafer, A. A. A. (2021). The development of racist sentiment against Asians on Twitter during Covid-19 [ Doctoral dissertation], Hamad Bin Khalifa University (Qatar).
  • Aminnuddin, N. A., & Wakefield, J. (2020). Ethnic differences and predictors of racial and religious discriminations among Malaysian Malays and Chinese. Cogent Psychology, 7(1), 1766737. https://doi.org/10.1080/23311908.2020.1766737
  • Anderson, L., & Barnes, M. (2022). Hate speech. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (spring 2022 edition). https://plato.stanford.edu/archives/spr2022/entries/hate-speech/
  • Auethavornpipat, R. (2021). Hate speech and incitement in Malaysia. Preventing Hate Speech, Incitement, and Discrimination: Lessons on Promoting Tolerance and Respect for Diversity in the Asia Pacific, 119–158. https://doi.org/10.2139/ssrn.4063073
  • Australian Human Rights Commission. (n.d.). Race discrimination. Retrieved from What is racism?: https://humanrights.gov.au/our-work/race-discrimination/what-racism
  • Auxier, B. E., & Vitak, J. (2019). Factors motivating customization and echo chamber creation within digital news environments. Social Media+ Society, 5(2), 2056305119847506. https://doi.org/10.1177/2056305119847506
  • Azman, N. F., & Zamri, N. A. K. (2022, September). Conscious or unconscious: The intention of hate speech in cyberworld—a conceptual paper. Proceedings, 82(1), 29. MDPI.
  • Barbour, R. S. (2018). Doing Focus Groups (2nd Ed). Qualitative Research Kit (pp. 1–224). SAGE Publications Ltd. https://doi.org/10.4135/9781526441836
  • Baumgartner, H. (2002). Toward a personology of the consumer. Journal of Consumer Research, 29(2), 286–292. https://doi.org/10.1086/341578
  • Bayer, J., & Petra, B. Á. R. D. (2020). Hate speech and hate crime in the EU and the evaluation of online content regulation approaches.
  • Bhabha, H. (1995). Freedom’s basis in the indeterminate. In J. Rajchman (Ed.), The identity in question (pp. 47–62). Routledge.
  • Bhabha, H. (1996). The other question. In P. Mongia (Ed.), Contemporary postcolonial theory (pp. 37–54). St. Martin’s Press.
  • Bhabha, H. (1998). Staging the politics of difference: Homi Bhabha’s critical literacy. In G. A. Olson & L. Worsham (Eds.), Race, rhetoric, and the postcolonial (pp. 3–39). State University of New York.
  • Blackwood, L., Hopkins, N., & Reicher, S. D. (2012). Divided by a common language? Conceptualizing identity, discrimination, and alienation. In K. J. Jonas & T. A. Morton (Eds.), Restoring civil societies: The psychology of intervention and engagement following crisis (pp. 222–236). Wiley.
  • Blaya, C. (2018). Cyberhate: A review and content analysis of intervention strategies. Aggression and Violent Behavior, 45, 163–172. https://doi.org/10.1016/j.avb.2018.05.006
  • Bleich, E. (2017). Freedom of expression versus racist hate speech: Explaining differences between high court regulations in the USA and Europe. In Marcel, M. & Ralph, G. (Eds.), Regulation of Speech in Multicultural Societies (pp. 110–127). Routledge.
  • Bliuc, A. M., Faulkner, N., Jakubowicz, A., & McGarty, C. (2018). Online networks of racial hate: A systematic review of 10 years of research on cyber-racism. Computers in Human Behavior, 87, 75–86. https://doi.org/10.1016/j.chb.2018.05.026
  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
  • Braun, V., & Clarke, V. (2022). Conceptual and design thinking for thematic analysis. Qualitative Psychology, 9(1), 3. https://doi.org/10.1037/qup0000196
  • Brown, A. (2017). What is hate speech? Part 1: The myth of hate. Law and Philosophy, 36, 419–468.
  • Brown, A. (2018). What is so special about online (as compared to offline) hate speech? Ethnicities, 18(3), 297–326. https://doi.org/10.1177/1468796817709846
  • Brubaker, R., & Cooper, F. (2000). Beyond "identity". Theory and Society, 29(1), 1–47. https://doi.org/10.1023/A:1007068714468
  • Cáceres‐Zapatero, M. D., Makhortykh, M., & Segado‐Boj, F. (2022). Discursos de odio en comunicación: Investigaciones y propuestas [Hate speech in communication: Research and proposals]. Comunicar, 71, 1–4. https://doi.org/10.48350/171975
  • Cambridge Dictionary. (2022). Racism. Retrieved from Cambridge Dictionary: https://dictionary.cambridge.org/dictionary/english/racism
  • Celik, S. (2019). Experiences of internet users regarding cyberhate. Information Technology & People.
  • Cervi, L., & Marín‐Lladó, C. (2021). What are political parties doing on TikTok? The Spanish case. Profesional de la información, 30(4), 1–17. https://doi.org/10.3145/epi.2021.jul.03
  • Chandrasekharan, E., Pavalanathan, U., Srinivasan, A., Glynn, A., Eisenstein, J., & Gilbert, E. (2017). You can’t stay here: The efficacy of Reddit’s 2015 ban examined through hate speech. Proceedings of the ACM on Human-Computer Interaction.
  • Chin, E. S. M. (2020, April 4). Think tank: Xenophobia, racism rampant on social media amid global Covid-19 lockdowns, Malaysia included. The Malay Mail. Retrieved from https://www.malaymail.com/news/malaysia/2020/04/04/think-tank-xenophobia-racism-rampant-on-social-media-amid-global-covid-19-l/1853502
  • Cisneros, J. D., & Nakayama, T. K. (2015). New media, old racisms: Twitter, Miss America, and cultural logics of race. Journal of International & Intercultural Communication, 8(2), 108–127. https://doi.org/10.1080/17513057.2015.1025328
  • Cohen‐Almagor, R. (2011). Fighting hate and bigotry on the Internet. Policy & Internet, 3(3), 1–26. https://doi.org/10.2202/1944-2866.1059
  • The Council of Europe (2023). Hate Speech and Violence. Retrieved from https://www.coe.int/en/web/european-commission-against-racism-and-intolerance/hate-speech-and-violence.
  • Daniels, J. (2013). Race and racism in internet studies: A review and critique. New Media and Society, 15(5), 695–719. https://doi.org/10.1177/1461444812462849
  • De Groot, K., Maurits, E. E., & Francke, A. L. (2018). Attractiveness of working in home care: An online focus group study among nurses. Health & Social Care in the Community, 26(1), e94–e101. https://doi.org/10.1111/hsc.12481
  • European Parliament. (2020). Hate speech and hate crime in the EU and the evaluation of online content regulation approaches.
  • Farkas, J., Schou, J., & Neumayer, C. (2018). Cloaked Facebook pages: Exploring fake Islamist propaganda in social media. New Media and Society, 20(5), 1850–1867. https://doi.org/10.1177/1461444817707759
  • Fernandez, K. (2020). Three waves of hate speech spreading faster than the pandemic in Malaysia: An analyses of outgroup populist narratives and hate speech during the COVID-19. Geografia, 16(4). https://doi.org/10.17576/geo-2020-1604-21
  • Finlayson, A. (2022). Brexit, YouTube, and the populist rhetorical ethos. In C. Kock & L. Villadsen (Eds.), Populist rhetorics (pp. 81–106). Palgrave Macmillan.
  • Fukuyama, M. (2018). Society 5.0: Aiming for a new human-centered society. Japan Spotlight, 27(5), 47–50.
  • Gelber, K. (2016). Free speech after 9/11. Oxford University Press.
  • Ghanea, N. (2012). The concept of racist hate speech and its evolution over time. United nations committee on the elimination of racial discrimination’s day of thematic discussion on racist hate speech (81st ed.). United Nations.
  • Gianan, E. R. D. Q. (2020). Disinformation trends in Southeast Asia: Comparative case Studies on Indonesia, Myanmar, and the Philippines. Jati-Journal of Southeast Asian Studies, 25(1), 1–27. https://doi.org/10.22452/jati.vol25no1.2
  • Halilovic-Pastuovic, M. (2017). Bosnia and Herzegovina: Two Decades After Dayton. Political Violence at a Glance.
  • Halilovic-Pastuovic, M. (2020). Bosnian post-refugee transnationalism: The theoretical context. In Bosnian Post-Refugee Transnationalism (pp. 19–43). Palgrave Pivot. https://doi.org/10.1007/978-3-030-39564-3_2
  • Hall, S. (1997). The spectacle of the other. Representation: Cultural Representations and Signifying Practices, 1997, 223.
  • Halliday, M., Mill, D., Johnson, J., & Lee, K. (2021). Let’s talk virtual! Online focus group facilitation for the modern researcher. Research in Social and Administrative Pharmacy, 17(12), 2145–2150. https://doi.org/10.1016/j.sapharm.2021.02.003
  • Hamid, A. (2014). Jokowi’s populism in the 2012 Jakarta gubernatorial election. Journal of Current Southeast Asian Affairs, 33(1), 1–15. https://doi.org/10.1177/186810341403300105
  • Hawdon, J., Oksanen, A., & Räsänen, P. (2015). Online extremism and online hate. Nordicom Information, 37(3–4), 29–37.
  • Haynes, C., Joseph, N. M., Patton, L. D., Stewart, S., & Allen, E. L. (2020). Toward an understanding of intersectionality methodology: A 30-year literature synthesis of Black women’s experiences in higher education. Review of Educational Research, 90(6), 751–787. https://doi.org/10.3102/0034654320946822
  • Hopkins, J. (2014). Cybertroopers & tea parties: Government use of the Internet in Malaysia. Asian Journal of Communication, 24(1), 1–23. https://doi.org/10.1080/01292986.2013.851721
  • Howard, E. (2017). Freedom of expression and religious hate speech in Europe. Routledge.
  • Hughey, M. W., & Daniels, J. (2013). Racist comments at online news sites: A methodological dilemma for discourse analysis. Media, Culture & Society, 35(3), 332–347. https://doi.org/10.1177/0163443712472089
  • Husni, H. (2019). Moderate Muslims’ views on multicultural education, freedom of expression, and social media hate speech: An empirical study in West Java, Indonesia. Jurnal Penelitian Pendidikan Islam, 7(2), 199–224. https://doi.org/10.36667/jppi.v7i2.370
  • Ittefaq, M., Abwao, M., Baines, A., Belmas, G., Kamboh, S. A., & Figueroa, E. J. (2022). A pandemic of hate: Social representations of COVID‐19 in the media. Analyses of Social Issues and Public Policy, 22(1), 225–252. https://doi.org/10.1111/asap.12300
  • Johns, A. (2017). Flagging white nationalism ‘after cronulla’: From the beach to the net. Journal of Intercultural Studies, 38(3), 349–364. https://doi.org/10.1080/07256868.2017.1314259
  • Kalsnes, B., & Ihlebæk, K. A. (2021). Hiding hate speech: Political moderation on Facebook. Media, Culture & Society, 43(2), 326–342. https://doi.org/10.1177/0163443720957562
  • Kang, M., Lee, J., & Park, S. (2020). Meta-analysis on hate speech studies in South Korea. In Hate Speech in Asia and Europe (pp. 7–22). Routledge.
  • Kelly, S. (2010). Qualitative interviewing techniques and styles. In I. Bourgeault, R. Dingwall, & R. de Vries (Eds.), The sage handbook of qualitative methods in health research (pp. 307–326). Sage Publications. https://doi.org/10.4135/9781446268247.n17
  • Kim-Wachutka, J. J. (2020). Hate speech in Japan: Patriotic women, nation and love of country. In Hate Speech in Asia and Europe (pp. 23–42). Routledge.
  • Kitzinger, J., & Barbour, R. (Eds.). (1999). Developing focus group research: Politics, theory and practice. Sage.
  • Kok Seong, T. (2019, September 11). Asas integrasi, bukan asimilasi punca perpaduan tidak kukuh [The foundation is integration, not assimilation: Why unity is not strong]. Berita Harian Online. https://www.bharian.com.my/berita/nasional/2019/09/605903/asas-integrasi-bukan-asimilasi-punca-perpaduan-tidak-kukuh-kok-seong
  • Komas, P. (2022). The Malaysia Racism Report 2022. Retrieved from https://komas.org/launch-of-the-malaysia-racism-report-2022/
  • Kopytowska, M., & Baider, F. (2017). From stereotypes and prejudice to verbal and physical violence: Hate speech in context. Lodz Papers in Pragmatics, 13(2), 133–152. https://doi.org/10.1515/lpp-2017-0008
  • Krueger, R. A. (1994). Focus Groups: A Practical Guide for Applied Research. Sage Publications.
  • Krueger, R. A. (2014). Focus groups: A practical guide for applied research. Sage publications.
  • Krueger, R. A., & Casey, M. A. (2000). Focus Groups: A Practical Guide for Applied Research (3rd ed.). Sage Publications.
  • Kumari, A., Ranjan, P., Chopra, S., Kaur, D., Kaur, T., Kalanidhi, K. B., Goel, A., Singh, A., Baitha, U., Prakash, B., & Vikram, N. K. (2021). What Indians think of the COVID-19 vaccine: A qualitative study comprising focus group discussions and thematic analysis. Diabetes & Metabolic Syndrome: Clinical Research & Reviews, 15(3), 679–682. https://doi.org/10.1016/j.dsx.2021.03.021
  • Kyslova, O., Kuzina, I., & Dyrda, I. (2020). Hate speech against the Roma minority on Ukrainian web space.
  • Lamerichs, N., Nguyen, D., Carmen, M., Melguizo, P., Radojevic, R., & Lange-Böhmer, A. (2018). Elite male bodies: The circulation of alt-right memes and the framing of politicians on social media. Participations, 15(1), 180–206.
  • Langlois, G., & Elmer, G. (2013). The research politics of social media platforms. Culture Machine, 14. https://culturemachine.net/wp-content/uploads/2019/05/505-1170-1-PB.pdf
  • Leong, P. P. (2019). Malaysian politics in the new media age. Springer.
  • Lobe, B. (2017). Best practices for synchronous online focus groups. In A new era in focus group research: Challenges, innovation and practice (pp. 227–250). Palgrave Macmillan. https://doi.org/10.1057/978-1-137-58614-8
  • Lumsden, K., & Harmer, E. (2019). Online othering. Palgrave Studies in Cybercrime and Cybersecurity, 1. https://doi.org/10.1007/978-3-030-12633-9
  • Makhortykh, M., & González Aguilar, J. M. (2020). Memory, politics and emotions: Internet memes and protests in Venezuela and Ukraine. Continuum, 34(3), 342–362.
  • Makhortykh, M., Urman, A., & Ulloa, R. (2021, June). Detecting race and gender bias in visual representation of AI on web search engines. In Advances in Bias and Fairness in Information Retrieval: Second International Workshop on Algorithmic Bias in Search and Recommendation, BIAS 2021, Lucca, Italy, April 1, 2021, Proceedings (pp. 36–50). Cham: Springer International Publishing.
  • Malaysian Communications and Multimedia Commission (MCMC). (2019, November 11). MCMC: Almost 22,000 reports received about insensitive ‘3R’ social media posts in six weeks. MCMC Official Websites. https://www.mcmc.gov.my/en/media/press-clippings/mcmc-almost-22-000-reports-received-about-insensit
  • Massanari, A. (2015). Gamergate and the fappening: How reddit’s algorithm, governance, and culture support toxic technocultures. New Media and Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
  • Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130
  • Matamoros-Fernández, A., & Farkas, J. (2021). Racism, hate speech, and social media: A systematic review and critique. Television & New Media, 22(2), 205–224.
  • Matsuda, M. (1993). Public response to racist hate speech: Considering the victim’s story. In M. Matsuda, C. Lawrence, R. Delgado, & K. Crenshaw (Eds.), Words that wound: Critical race theory, assaultive speech, and the First Amendment (pp. 2320–2381). Westview Press.
  • Meddaugh, P. M., & Kay, J. (2009). Hate speech or “reasonable racism?” the other in Stormfront. Journal of Mass Media Ethics, 24(4), 251–268. https://doi.org/10.1080/08900520903320936
  • Michael, L. (2017). Anti-black racism: Afrophobia, exclusion and global racisms. In A. Haynes, J. Schweppe, & S. Taylor (Eds.), Critical perspectives on hate crime: Contributions from the island of Ireland (pp. 275–299). Palgrave Macmillan. https://doi.org/10.1057/978-1-137-52667-0_15
  • Mielczarek, N. (2018). The “pepper-spraying cop” icon and its internet memes: Social justice and public shaming through rhetorical transformation in digital culture. Visual Communication Quarterly, 25(2), 67–81. https://doi.org/10.1080/15551393.2018.1456929
  • Morada, N. M. (2021). Myanmar. Journal of International Peacekeeping, 24(3–4), 428–466. https://doi.org/10.1163/18754112-24030007
  • Murgado-Armenteros, E. M., Torres-Ruiz, F. J., & Vega-Zamora, M. (2012). Differences between online and face to face focus groups, viewed through two approaches. Journal of Theoretical & Applied Electronic Commerce Research, 7(2), 73–86. https://doi.org/10.4067/S0718-18762012000200008
  • Murni, W., & Ratnawati, M. (2015). Freedom without restraint and responsibility: The problem of hate speech in Malaysia. Malaysian Law Journal, 5(xliv–lxvii), 44–67.
  • Murthy, D., & Sharma, S. (2019). Visualising YouTube’s comment space: online hostility as a networked Phenomena. New Media and Society, 21(1), 191–213. https://doi.org/10.1177/1461444818792393
  • Nadzri, M. M. (2019). The 14th general election, the fall of Barisan Nasional, and political development in Malaysia. Journal of Current Southeast Asian Affairs, 37(3), 139–171. https://doi.org/10.1177/186810341803700307
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://doi.org/10.2307/j.ctt1pwt9w5
  • Nussbaum, M. C. (2004). ‘Beyond compassion and humanity’: Justice for nonhuman animals.
  • Onwuegbuzie, A. J., Dickinson, W. B., Leech, N. L., & Zoran, A. G. (2009). A qualitative framework for collecting and analyzing data in focus group research. International Journal of Qualitative Methods, 8(3), 1–21. https://doi.org/10.1177/160940690900800301
  • Pálmadóttir, J., & Kalenikova, I. (2018). Hate speech: An overview and recommendations for combating it. Icelandic Human Rights Centre, 1–27. https://www.humanrights.is/static/files/Skyrslur/Hatursraeda/hatursraeda-utdrattur.pdf
  • Peck, R. (2022). Comparing populist media: From Fox News to the Young Turks, from cable to YouTube, from right to left. Television & New Media. Advance online publication. https://doi.org/10.1177/15274764221114349
  • Powell, R. A., & Single, H. M. (1996). Focus groups. International Journal for Quality in Health Care, 8(5), 499–504. https://doi.org/10.1093/intqhc/8.5.499
  • Prakash, P. (2021, September 6). Report: Malaysian cybertrooper teams employ full-time staff, used by politicians and businesses alike. Retrieved from Malaymail: https://www.malaymail.com/news/malaysia/2021/01/14/report-malaysian-cybertrooper-teams-employ-full-time-staff-used-by-politici/1940305
  • Rahman, H. (2020, May 14). Jangan sebar virus perpecahan - Sultan Johor [Do not spread the virus of division - Sultan of Johor]. Astro Awani Online. https://www.astroawani.com/berita-malaysia/jangan-sebar-virus-perpecahan-sultan-johor-242930
  • Reid, D. J., & Reid, F. J. (2005). Online focus groups: An in-depth comparison of computer-mediated and conventional focus group discussions. International Journal of Market Research, 47(2), 131–162. https://doi.org/10.1177/147078530504700204
  • Rieger, D., Kümpel, A. S., Wich, M., Kiening, T., & Groh, G. (2021). Assessing the extent and types of hate speech in fringe communities: A case study of alt-right communities on 8chan, 4chan, and Reddit. Social Media+ Society, 7(4), 20563051211052906. https://doi.org/10.1177/20563051211033823
  • Said, E. (1994). Chapter 1. Secular interpretation, the geographical element, and the methodology of imperialism. In After colonialism (pp. 21–39). Princeton University Press. https://doi.org/10.1515/9781400821440.21
  • Saidin, M., & Othman, B. B. (2021). Perkembangan Politik Malaysia Pasca PRU-12: Satu tinjauan menerusi perspektif Etnosentrisme [Malaysian political development after GE-12: A review through the perspective of ethnocentrism]. Akademika, 91(3), 53–62. https://doi.org/10.17576/akad-2021-9103-05
  • Salminen, J., Almerekhi, H., Milenković, M., Jung, S. G., An, J., Kwak, H., & Jansen, B. (2018, June). Anatomy of online hate: Developing a taxonomy and machine learning models for identifying and classifying hate in online news media. Proceedings of the International AAAI Conference on Web & Social Media, 12(1), https://doi.org/10.1609/icwsm.v12i1.15028
  • Scaffidi, M. A., Gimpaya, N., Pattni, C., Genis, S., Khan, R., Li, J., Bansal, R., & Grover, S. C. (2022). Perceptions of non-technical skills in gastrointestinal endoscopy: A thematic analysis of four focus groups. Gastrointestinal Endoscopy, 95(6), Ab80. https://doi.org/10.1016/j.gie.2022.04.219
  • Schindler, M., & Domahidi, E. (2021). The growing field of interdisciplinary research on user comments: A computational scoping review. New Media & Society, 23(8), 2474–2492. https://doi.org/10.1177/1461444821994491
  • Sharma, I. (2019). Contextualising hate speech: A study of India and Malaysia. Journal of International Studies, 15, 133–144. https://doi.org/10.32890/jis.15.2019.9264
  • Shepherd, H., & Paluck, E. (2015). Stopping the Drama. Social Psychology Quarterly, 78(2), 173–193. https://doi.org/10.1177/0190272515581212
  • Shukor, S. A., Manap, N. A., & Rafiei, M. (2015). Spinning the web of hate online: A critical review from the Malaysian laws. Asian Social Science, 11(28). https://doi.org/10.5539/ass.v11n28p183
  • Siapera, E., Moreo, E., & Zhou, J. (2018). Hate track: Tracking and monitoring online racist speech. Irish Human Rights and Equality Commission.
  • Tates, K., Zwaanswijk, M., Otten, R., van Dulmen, S., Hoogerbrugge, P. M., Kamps, W. A., & Bensing, J. M. (2009). Online focus groups as a tool to collect data in hard-to-include populations: Examples from paediatric oncology. BMC Medical Research Methodology, 9(1), 15. https://doi.org/10.1186/1471-2288-9-15
  • Teti, M., Pichon, L., & Myroniuk, T. W. (2021). Community-engaged qualitative scholarship during a pandemic: Problems, perils and lessons learned. International Journal of Qualitative Methods, 20, 16094069211025455.
  • Titley, G. (2020). Is free speech racist? John Wiley & Sons.
  • Tiung, L. K., Idris, R. Z., & Idris, R. (2018). Propaganda dan Disinformasi Politik Persepsi dalam Pilihan Raya Umum ke-14 (PRU-14) [Propaganda and disinformation: The politics of perception in the 14th general election (GE-14)]. Jurnal Kinabalu, 171–198. https://doi.org/10.51200/ejk.vi.1648
  • Van Dijk, T. A. (1993). Principles of critical discourse analysis. Discourse & Society, 4(2), 249–283. https://doi.org/10.1177/0957926593004002006
  • Van Gaalen, A. E. J., Jaarsma, A. D. C., & Georgiadis, J. R. (2021). Medical students’ perceptions of play and learning: Qualitative study with focus groups and thematic analysis. JMIR Serious Games, 9(3), e25637. https://doi.org/10.2196/25637
  • Vasilenko, E. (2021). Online hate speech in Belarus: Highlighting the topical issues. Zeitschrift für Slawistik, 66(4), 558–577. https://doi.org/10.1515/slaw-2021-0026
  • Walston, J. T., & Lissitz, R. W. (2000). Computer-mediated focus groups. Evaluation Review, 24(5), 457–483. https://doi.org/10.1177/0193841X0002400502
  • Wan Mohd nor, M., & Gale, P. (2021). Growing fear of islamisation: Representation of online media in Malaysia. Journal of Muslim Minority Affairs, 41(1), 17–33. https://doi.org/10.1080/13602004.2021.1903161
  • Weinstein, J. (2017). Hate speech bans, democracy, and political legitimacy. Constitutional Commentary, 32(3), 527. https://scholarship.law.umn.edu/concomm/465
  • Welsh, B. (2020, August 18). Malaysia’s political polarisation: race, religion, and reform - political polarisation in South and Southeast Asia: Old divisions, new dangers. Carnegie Endowment for International Peace. https://carnegieendowment.org/2020/08/18/malaysia-s-political-polarization-race-religion-and-reform-pub-82436
  • Zain, H., & Ramlan, Y. (2019, March 24). Laskar siber kerjaya lumayan, tapi mencabar [Cyber troopers: A lucrative but challenging career]. Retrieved from Malaysiakini: https://www.malaysiakini.com/news/469316
  • Zamri, N. A. K., Amin, F. M., Ab Aziz, A. A., Mohammad, N., Noh, L. M. M., Allam, S. N. S., Mohideen, R. S. (2021a). Coronavirus exacerbates xenophobia: Consciousness of Twitter posting during the pandemic.
  • Zamri, N. A. K., Amin, F. M., Ab Aziz, A. A., Mohammad, N., Noh, L. M. M., Allam, S. N. S., Mohideen, R. S. (2021b). Coronavirus exacerbates xenophobia: Consciousness of Twitter posting during the pandemic. SEARCH Journal of Media and Communication Research, GRaCe Special Issue (2021), 67. https://fslmjournals.taylors.edu.my/wp-content/uploads/SEARCH/SEARCH-2021-Special-Issue-GRACE2020/SEARCH-2021-Special-Issue-GRACE2020.pdf
  • Zamri, N. A. K., Nasir, N. N. A. M., Hassim, M. N., Ramli, S. M., & Amin, F. M. (2022, January). Malaysian onion army and othering: Radicalized trolling hunters on Twitter during pandemic. In 2nd International Conference on Social Science, Humanities, Education and Society Development (ICONS 2021), Kuala Lumpur, Malaysia (pp. 153–164). Atlantis Press.
  • Zamri, N. A. K., Sulam, M., & Merican, A. M. N. (2017). Tracing the trajectories of history of Malaysia: Exploring historical consciousness in Sarawak. Pertanika Journal of Social Science and Humanities, 25(S), 219–227.
  • Zheng, S. (2020). Coronavirus: Nature magazine apologises for reports linking Covid-19 with China. Retrieved from https://www.scmp.com/news/china/society/article/3079293/coronavirus-nature-magazine-apologises-reports-linking-covid-19