
Navigating legal challenges of deepfakes in the American context: a call to action

Article: 2320971 | Received 16 Mar 2023, Accepted 14 Feb 2024, Published online: 22 Feb 2024

Abstract

Deepfakes are a rapidly growing technological trend that is reshaping many digital processes, from media production to digital access. They offer advantages for users and creators alike. For example, they can provide entertainment value by allowing users to create humorous or satirical versions of existing media content without filming new scenes, and they can automate production workflows by generating background characters or extras for movies. Deepfakes could also serve healthcare applications, such as virtual avatars for doctors who require assistance during consultations with patients located far from hospitals or clinics. However, deepfakes also pose a threat to society, as they can be used for malicious purposes, including spreading false information or exploiting the likenesses of others for financial gain. To address these concerns, various types of regulation have been introduced across jurisdictions worldwide. Some laws prohibit doctored audio recordings and visual materials aimed at influencing election outcomes; some countries go further and require producers of deepfake materials to label them as such, so that audiences know that what they are seeing may not be an authentic representation of events. In this paper, the author discusses what deepfakes are, their potential benefits and risks, the various types of deepfakes that exist, and how the USA has attempted to regulate them. The author also suggests policy recommendations aimed at mitigating the risks associated with deepfakes while preserving their creative potential.

This is developing more rapidly than I thought. Soon, it’s going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions

Hao Li

Deepfake Pioneer & Associate Professor

1. Introduction

False news increasingly threatens human society, public communication, and democratic government. Fake news is false information invented to mislead people, and untrue stories spread rapidly via social media, where they can reach enormous audiences. Currently, one out of five social media users obtains news through YouTube and Facebook (Westerlund, Citation2019). As video has grown more popular, verifying whether news and media messages are authentic has become necessary, because new technological tools make it possible to manipulate videos convincingly. Since wrong information is easy to obtain and spread on social network platforms, knowing what is trustworthy becomes harder, and dangerous decisions follow. Indeed, some describe the present as a 'post-truth' era, defined by online misinformation and information warfare waged by malicious actors who run disinformation campaigns to manipulate public opinion (ibid).

With the latest technological progress, it is now easy to create what are called 'deepfakes': hyper-realistic videos in which faces are swapped while leaving little or no trace of manipulation. Driven by the newest advances in A.I. and machine learning, deepfakes allow automated systems to manufacture counterfeit material that is ever more convincing and harder to identify (Kietzmann et al., Citation2020). Deepfake technology can create, for instance, humorous, pornographic, or political videos of individuals saying anything, without the consent of the people whose images and voices are used. The striking feature of deepfakes is the range, scale, and sophistication of the method, as nearly anybody with a computer can fabricate videos that are practically indistinguishable from authentic ones. While early deepfakes concentrated on weaving the faces of political leaders, actresses, comedians, and entertainers into pornographic films, deepfakes are likely to be used increasingly for revenge pornography, bullying, fake video evidence in courts, political sabotage, terrorist propaganda, blackmail, market manipulation, and fake news (ibid).

While fake news is easy to disseminate, deepfakes are very difficult to combat. Doing so requires knowing what they are, why they exist, and what programs are used to create them. Yet academic study of online misinformation in social networks has only just begun.

Deepfake technology builds on generative adversarial training, which trains an artificial intelligence (AI) system by simultaneously optimizing two neural networks: one that generates data and another that evaluates the generated data to judge whether it is real. In deepfake technology, generative adversarial networks (GANs) are used to manipulate images or videos to produce convincing fakes that are difficult to distinguish from reality. The generator network creates the fake content, while the discriminator network learns to distinguish between real and fake content. This process is repeated until the generator produces content that is almost indistinguishable from real content, resulting in a convincing deepfake. The use of GANs in deepfake technology enables bad actors to create sophisticated fakes on a large scale, presenting significant challenges to the integrity of information and democratic processes.
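
To make the generator-discriminator dynamic concrete, the following is a minimal sketch of an adversarial training loop in PyTorch. It is illustrative only: the toy one-dimensional data, network sizes, and hyperparameters are assumptions chosen for brevity, not components of any real deepfake system.

```python
# Minimal adversarial training loop on toy 1-D data (PyTorch).
# All sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores samples as real (1) or fake (0).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim) + 3.0   # stand-in for real training data
    fake = G(torch.randn(32, latent_dim))

    # Discriminator update: learn to separate real from generated samples.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: adjust G so the discriminator scores fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

Each round of this loop is one step of the arms race described above: as the discriminator improves, the generator is forced to produce progressively more realistic output.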

As deepfake technology continues to evolve, researchers recognize the need to develop defenses against it. One approach is to use the same AI technologies that generate deepfakes to detect them, creating anti-deepfake algorithms that can identify the subtle differences between real and fake content. Another approach is to develop government regulatory measures to promote transparency, accountability, and ethical use of deepfake technology. Overall, as deepfake technology evolves, it becomes increasingly urgent to create effective strategies to prevent its malicious use and educate the public about its potential risks and misuse.

Therefore, this work aims to define deepfakes, identify who creates them, examine their benefits and the harmful effects of the technology, give examples of recent deepfakes, and discuss how to fight them. In this context, this work examines many news reports on deepfakes obtained from the websites of news media. It adds to the growing literature on fake news and deepfakes by presenting a comprehensive review of deepfakes and by rooting the evolving topic in a scholarly argument that also identifies options for politicians, journalists, businesspeople, and others to fight against deepfakes (2020).

Through a review of the existing literature, this paper demonstrates the challenges of deepfakes in the American context. Section 2 outlines the contribution of the study. Section 3 defines deepfakes, and Section 4 explains the deepfake problem. Section 5 describes the benefits of deepfake technology, and Section 6 surveys the types of deepfakes. The section 'Regulating Deepfakes: The American Perspective' examines the regulation of deepfakes in the United States of America. Finally, Sections 8 and 9 propose solutions and recommendations to tackle deepfake technology offenses.

2. Contribution of the study

2.1. Legal complexities and regulatory framework

The current study adds insight into the legal intricacies surrounding deepfake technology in the United States by using deepfakes as a lens through which to analyze the existing legal structures and frameworks available to respond to the challenges the technology poses. Privacy law, intellectual property rights, defamation law, and cybersecurity, among other areas, are identified as important concerns. This comprehensive analysis sheds light on the legal landscape of the issue and emphasizes the need for a vigorous regulatory approach to address the intricate impact of deepfakes.

2.2. Ethical and societal implications

The research also examines the ethical and societal dimensions of deepfake technology and its possible consequences, including privacy violations, gender discrimination, violence, and political subversion. In doing so, the present study engages with major issues in the wider discussion of the ethical concerns and societal implications of the proliferation of narrative-shaping tools such as deepfakes, and it highlights the need for principled sensitivity toward the harmful impacts of deepfake technology.

2.3. Policy recommendations and legislative actions

The study also formulates suggestions for addressing the problems posed by deepfakes through policy action and legislative steps. Advocating proactive adaptation to a rapidly evolving technological scene, the study asserts the importance of policy action that sustains innovation while containing the risks deepfakes present. This contributes guidance for policymakers and regulators in developing appropriate mechanisms for governing deepfake technology.

2.4. Awareness and education

Public awareness also plays a critical role in reducing deepfake misuse. The study stresses the importance of continually educating individuals, regulatory bodies, and organizations, building digital literacy and cyber resilience toward a better, safer society. This focus on awareness and education is a preventive strategy that equips people with the skills to assess digital material critically and to reduce the influence of deepfakes.

2.5. Multistakeholder collaboration

The research highlights the importance of multistakeholder partnership in overcoming the issues deepfakes pose. The study advocates collaboration among all stakeholders, including legal scholars, technology experts, policymakers, and civil society groups, acknowledging that all have a stake in ensuring compliance with regulatory frameworks and ethical guidelines. This contribution underscores the significance of joint efforts in confronting a technology whose workings are often opaque.

3. What are deepfakes?

Deepfakes, an amalgamation of 'deep learning' and 'fake', entail manipulating hyper-realistic digital video to portray people saying and doing things that never actually took place (CNN, Citation2022). Deepfakes depend on neural networks that examine large sets of data samples to learn to imitate a person's behavior, facial expressions, inflections, and voice. Creating one entails feeding footage of two people into a deep learning algorithm and training it to swap their faces. In other words, deepfakes use AI-based methods to map or replace the face of one individual in a video with that of another (CNN, Citation2022; Dickson, Citation2022).
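
The face-swap training just described is commonly implemented with a shared encoder and two person-specific decoders. The sketch below illustrates that autoencoder idea in PyTorch; the flattened 64x64 inputs, layer sizes, and toy random tensors are assumptions for exposition, not a production face-swap system.

```python
# Hypothetical face-swap autoencoder sketch: one shared encoder, two
# person-specific decoders. Swapping = encode person A, decode as person B.
import torch
import torch.nn as nn

class FaceSwapper(nn.Module):
    def __init__(self, face_dim: int = 64 * 64, code_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(face_dim, 512), nn.ReLU(),
                                     nn.Linear(512, code_dim))
        self.decoder_a = nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(),
                                       nn.Linear(512, face_dim))
        self.decoder_b = nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(),
                                       nn.Linear(512, face_dim))

    def reconstruct(self, face: torch.Tensor, person: str) -> torch.Tensor:
        decoder = self.decoder_a if person == "a" else self.decoder_b
        return decoder(self.encoder(face))

model = FaceSwapper()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Training: each decoder learns to rebuild its own person's faces from the
# shared code (toy random tensors stand in for aligned face crops).
for step in range(100):
    faces_a, faces_b = torch.rand(8, 64 * 64), torch.rand(8, 64 * 64)
    loss = loss_fn(model.reconstruct(faces_a, "a"), faces_a) + \
           loss_fn(model.reconstruct(faces_b, "b"), faces_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The "swap": run a frame of person A through person B's decoder.
swapped = model.reconstruct(torch.rand(1, 64 * 64), "b")
```

The shared encoder is the design choice that makes the swap possible: because both decoders read the same latent code, the pose and expression learned from one person's footage can be rendered with the other person's identity.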

Deepfakes came to public attention in 2017, when a Reddit user posted videos that swapped celebrities' faces into pornographic content. Although the use of deepfakes in marketing may harm the average consumer, who can be swayed by misleading influencer marketing, the capacity to edit footage of prominent people has additional, potentially catastrophic implications, especially in the political sphere (Mostert & Perot, Citation2020).

Deepfakes are hard to identify because they use actual footage, can include genuine-sounding audio, and are optimized for rapid dissemination on social network platforms (Westerlund, Citation2019). Many people therefore assume that what they are watching is real. Deepfakes target social media platforms, where conspiracies, gossip, and lies spread quickly, as users tend to follow the crowd. Moreover, a continuous 'infopocalypse' drives individuals to believe they cannot trust any story unless it comes from within their own social networks and is endorsed by family, close friends, and preexisting beliefs. Indeed, many people accept anything that confirms their existing opinions, even if they suspect it is false.

Cheap fakes, low-quality videos with slightly manipulated genuine content, are prevalent because of the wide availability of inexpensive hardware such as capable graphical processing units. Free software for creating high-quality, convincing deepfakes is also available, allowing users with little technical skill and no artistic training to alter videos almost flawlessly, swap faces, change gestures, and synthesize speech (ibid).

4. Deepfakes problem

Deepfake technology can be used to show real individuals saying and doing things that never happened. Deepfakes are commonly used to entertain, as with DeepTomCruise (humorous videos that look like Tom Cruise) or the ReFace app, which lets a person quickly put their face into a superstar's music video or film clip (Laser, Citation2022). The technology is also used destructively. For instance, an estimated 90-95% of deepfakes online are non-consensual pornography, in which a victim's face is placed on pornographic content, causing potential emotional and reputational damage (ibid). The technology has been used to make non-consensual fake pornography and sexual images, and there is a danger that it may soon be used for politically subversive purposes (Gosse & Burkell, Citation2020). Deepfakes can transform existing audiovisual and artificially generated content to give the impression that it corresponds to actual events (Gregory, Citation2019). Journalism has portrayed deepfakes as a dangerous tool that may distort people's perceptions of sociopolitical facts (Yadlin-Segal & Oppenheim, Citation2021). The dissemination of deepfakes keeps reinforcing gender inequities in visual information (Afanasyeva & Yumasheva, Citation2022; Wagner & Blewer, Citation2019) and poses a major danger to psychological security (Pantserev, Citation2020). Deepfakes also threaten business and state security and are used by foreign actors in social influence campaigns. They can likewise be used to impersonate companies' staff for financial crime, according to a Private Industry Notification issued by the F.B.I. in March 2021.

Furthermore, deepfake technology has become a growing concern in recent years. It can be used to spread misinformation on a large scale, harming people's lives and reputations, to create fake news and sex videos, and to impersonate public figures. As deepfake technology continues to evolve and grow more sophisticated, it poses a serious threat to the integrity of political processes and democratic institutions. Policymakers and tech companies must work together to ensure that deepfakes can be identified and prevented from being used to influence public opinion. Addressing this issue is important to protect the integrity of information and to prevent the spread of false and misleading content.

5. The benefits of deepfake technology

The line between reality and fiction has blurred significantly, yet this has opened fascinating new possibilities in fields as diverse as the visual arts, commerce, film, and video games (Verdoliva, Citation2020). Deepfake tools are also put to positive use across industries and settings, including cinema, educational media, virtual interaction, games, entertainment, materials science, healthcare, social media, and areas of business such as fashion and e-commerce (Taulli, Citation2022). The integration of advanced imaging technology is not only prevalent but is measurably altering medical practice (Crystal et al., Citation2020). For example, a solution leveraging deepfake capability might safeguard privacy in medical videos (Zhu, Citation2020).

Deepfake technology can benefit the movie industry in many ways. For instance, it can produce virtual voices for actors whose voices have been damaged by disease, and it can update film footage instead of requiring the entire movie to be reshot (Dickson, Citation2022). Film producers can recreate classic scenes, produce new films featuring long-dead actors, use special effects and high-quality face editing in post-production, and upgrade substandard video to high quality (Kan, Citation2022; Laser, Citation2022). Deepfake programs can dub films automatically and realistically into any language, allowing diverse audiences to enjoy movies and educational media. In a 2019 global malaria awareness campaign, David Beckham appeared to speak multiple languages, breaking down communication barriers through an educational advert that used visual and voice-altering technology. Deepfake programs can similarly break down language barriers on video conference calls by translating speech and altering facial and mouth movements to improve eye contact and make everybody appear to be speaking the same language (Dickson, Citation2022).

The mechanism behind deepfakes enables multiplayer games and virtual chat worlds with heightened telepresence, natural-sounding and natural-looking smart assistants, and digital doubles of people, fostering better human relationships and interaction online. The technology can also be put to positive use in the medical and social fields. Deepfakes can help individuals cope with the loss of loved ones by virtually bringing a deceased person back, allowing grieving friends and relatives to say goodbye. It can digitally recreate an amputated limb, help transgender individuals see themselves in their preferred gender, and let Alzheimer's patients interact with a younger face they may remember. Researchers are also investigating how GANs can be used to detect abnormalities in X-rays, and their potential for creating virtual chemical molecules to accelerate materials science and medical discoveries (Westerlund, Citation2019).

Brands can also apply deepfake technology, which has the potential to transform e-commerce and advertising in remarkable ways. For instance, brands can present ordinary people as high-fashion models and display clothing on models of diverse skin tones, weights, and heights. Deepfakes can turn customers themselves into models through hyper-personal content: the technology allows virtual fitting, so shoppers can preview how a garment would look on them before buying, and it can generate targeted fashion adverts that vary with the weather, viewer, and time. The technology lets people try on outfits virtually in moments; it not only enables individuals to clone themselves digitally and take these personal avatars across e-stores, but also lets someone try on a wedding dress or suit digitally and then experience the wedding scene virtually. A.I. can even supply distinctive synthetic voices that make businesses and products easier to distinguish for branding purposes (Westerlund, Citation2019).

On the other hand, deepfakes can be used to produce videos in which a corporation's management appears to be harmed or to engage in unsavory behavior, in order to drive down the company's stock price. Deepfakes can also form part of spear-phishing attacks (attempts to manipulate a targeted recipient into disclosing confidential material or sending money to a malicious actor). Additionally, deepfakes can be employed to cause damage by falsely depicting politicians or military officers engaged in aggressive behavior. Moreover, as deepfakes become better known, they allow malicious actors to insist that genuine footage is fake, what Professors Chesney and Citron call 'the liar's dividend' (Afanasyeva & Yumasheva, Citation2022; Wagner & Blewer, Citation2019).

Using a pre-trained generative adversarial network (GAN), it is becoming ever easier to seamlessly swap one person's face in footage for another's (Korshunov & Marcel, Citation2019). Each of these uses has the potential to significantly affect modern culture, interpersonal relationships, political systems, and the foundations of law and order (van der Sloot & Wagensveld, Citation2022). The proliferation of deepfakes, in particular, raises questions about the adequacy of current legislation and brings new challenges to existing rules (Mostert & Perot, Citation2020). Perpetrators of domestic violence may use deepfakes to intimidate, extort, and abuse victims (Lucas, Citation2022), and deepfakes can reinforce rape myths, support for violence against women, victim-blaming and repression, sexual entitlement, and gender inequality (Laskovtsov, Citation2020).

6. Types of deepfakes

There are four main kinds of deepfake creators: (1) communities of deepfake hobbyists, (2) political actors such as foreign governments and various activists, (3) other malicious actors such as fraudsters, and (4) legitimate actors such as television companies. Members of deepfake hobbyist communities are hard to identify. In late 2017, after a user posted celebrity porn deepfakes on Reddit, the newly formed deepfake hobbyist community gained 90,000 members within a few months (Patterson, Citation2022). Many hobbyists concentrate on pornographic deepfakes, while others place famous actors in movies in which they never appeared, for comic effect. Generally, hobbyists regard AI-crafted videos as a new kind of online humor and see advancing the technology as solving an intellectual puzzle rather than as a means to deceive or intimidate people. Their deepfakes are mostly meant to entertain, amuse, or satirize governments, and they can help creators gain followers on social networks. Some hobbyists seek tangible benefits for themselves, such as raising awareness of the uses of deepfake technology in order to obtain paid deepfake-related work, for instance on television programs or music videos. Hobbyists and legitimate actors such as television companies can therefore work together (ibid).

While meme-like deepfakes created by hobbyists can amuse online audiences, more malevolent actors are also involved. Political actors, activists, hacktivists, extremists, and foreign states can use deepfakes in disinformation campaigns to manipulate public opinion and undermine confidence in a country's institutions. In this era of hybrid warfare, deepfakes are weapons of disinformation used to interfere with elections and sow civil unrest. We can expect state-sponsored 'troll farms', domestic and foreign, to use AI to produce and distribute false political videos tailored to exploit internet users' particular prejudices. Scammers also use deepfakes to manipulate markets and stocks and to commit other financial fraud: fraudsters have used AI-generated fake audio to mimic an executive on the phone and demand an urgent transfer of money, and in the future even live video calls may be falsified in real time. The visual material needed to impersonate executives, such as TED Talk videos on YouTube, is readily found online (ibid). The detection conundrum is being assessed by a diverse group of stakeholders, including experts from academia, technology platforms, media organizations, and civil society groups (Leibowicz et al., Citation2021).

Many deepfakes on social media platforms such as Facebook or YouTube can be seen as harmless fun or creative works featuring living or dead celebrities. However, deepfakes also have negative uses, including celebrity and revenge pornography and attempts at political and non-political manipulation. Many deepfakes focus on public figures, political elites, and business executives because their photos and videos flood the internet, providing the extensive image stockpiles required to train AI deepfake systems (Westerlund, Citation2019).

Many deepfakes are gags, pranks, and humorous memes. For example, a deepfake may depict Nicolas Cage in movies he never actually appeared in, such as 'Terminator 2' or 'Indiana Jones'. Other notable examples include a video that replaces Alden Ehrenreich with a young Harrison Ford in clips from 'Solo: A Star Wars Story', and a video of Bill Hader appearing on 'Late Night with David Letterman': as Hader talks about Tom Cruise, his face morphs into Cruise's. Some deepfakes even feature deceased public figures, such as the face of Freddie Mercury, the late vocalist of the band Queen, imposed on Rami Malek's face while singing Beyoncé's 'Halo' (ibid).

Deepfake technology has found various applications: a U.S. art museum used it to bring Salvador Dalí back to life to welcome guests, and an AI tool can make anyone dance like a prima ballerina by superimposing the moves of a real dancer onto the body of a target person, creating a video that presents the target as an expert dancer.

Dangerous deepfakes are increasing daily. Deepfake methods enable celebrity and revenge pornography, in which images of public and private figures are inserted into pornographic content and posted on social platforms without their consent. Celebrities such as Scarlett Johansson have been falsely portrayed in adult films, their faces superimposed over those of pornographic actors (ibid).

In politics, deepfakes have served various purposes. In 2018, the Hollywood filmmaker Jordan Peele created a deepfake video of former US President Obama to highlight the harm of misinformation and to mock President Trump. In 2019, a video of the American politician Nancy Pelosi was altered, slowed down to make her appear drunk, and spread quickly on the internet. In another manipulated video from 2018, Donald Trump appeared to advise Belgian citizens on climate change; the video, created by the Belgian political party 'sp.a', aimed to mobilize people to sign an appeal urging the Belgian government to act immediately on climate change, and it angered many who believed the US president was interfering in Belgium's climate policy. Tom Perez, the chairman of the US Democratic Party, was deepfaked in 2019 by his own party to highlight the dangers of deepfakes for the 2020 election. While these examples demonstrate limited political manipulation, other deepfakes can have lasting effects. In Gabon, in Central Africa, in 2018, a video address by President Ali Bongo, who had not been seen in public for a long time amid rumors that he was sick or dead, was suspected of being a deepfake and was cited by the Gabonese military in connection with a failed coup. In Malaysia, a video alleged to be a deepfake, featuring a man confessing to an affair with a local cabinet minister, sparked a political dispute. Deepfakes have also been created by people outside politics. In June 2019, two British creators produced a high-quality deepfake video of Mark Zuckerberg, the CEO of Facebook, which garnered millions of views. The video falsely portrayed Zuckerberg praising Spectre, the fictional villainous organization from the James Bond series, for gaining absolute control over the personal data, and thereby the lives, of countless people. It aimed to show how technology, including deepfake technology, voice manipulation, and news footage, can be deployed to manipulate information (ibid).

7. Regulating Deepfakes: The American Perspective

People who are the subjects of malicious deepfakes have few causes of action under the law. In addition, criminal codes, both federal and in many states, are not designed to handle the chimeric nature of deepfake videos, in which the person displayed, a particular face combined with a particular body, is not real. This analysis focuses on four separate causes of action: copyright infringement, right of publicity, defamation, and actions arising from non-consensual pornography laws.

Because the technology is so new, deepfakes raise several fundamental questions: Is it wrong to take a publicly available photo of a person's face and creatively transform it into something else for a non-monetary purpose? If the creation of deepfakes is wrong, would it still be wrong if the video were labeled as a work of fiction? And is the dissemination of a deepfake wrong if it is not defamatory? These questions matter because, unlike China, which has outright prohibited the dissemination of false speech, the First Amendment protects citizens from government attempts to impose content-based restrictions on speech. Within the framework of the First Amendment, there are several avenues by which rampant and flagrantly malicious uses of deepfakes can be curtailed. However, because of the technology behind deepfakes, some typical methods of curbing malicious media are not available.

Collectively, these tendencies point to an exaggerated belief in the reach and pervasiveness of 'fake news' and misleadingly edited visual media, as well as an increased propensity for people to spread such content (Altay et al., Citation2022), rising 'radical skepticism' toward mediated experience, and diminishing trust in the media and civil institutions (Chouliaraki, Citation2015). Reasonable and appropriately tailored state-based legislation is required to better secure democratic accountability (Henry & Witt, Citation2021), and to combat the widespread problem of fake news and other misinformation online, digital media must be verifiably legitimate (Kietzmann et al., Citation2020; Khodabakhsh et al., Citation2019).

7.1. Criminal statutes

Criminal statutes are available in the U.S.A. to prosecutors who wish to discourage rampant malicious deepfake use. Identity theft statutes are one interesting avenue in the very specific circumstances where fraud is involved, such as a deepfake video or voice recording used to request money or gain access to an account (Suslavich, n.d.). However, in cases like Ms. Martin's, where no actual fraud is involved, identity theft does not apply. If a malicious deepfake were interpreted as a threat, 18 U.S.C. § 875(c) permits fining or imprisoning a person for up to two years if the person:

  1. Knowingly transmits in interstate or foreign commerce a communication containing a true threat to injure another, and

  2. Intends the communication to be a true threat to injure another, or knows that its recipient would understand it to be such a threat.

In many cases, however, a prosecutor may have trouble proving that the deepfake is a true threat. Perhaps the most effective tool for prosecutors would be the federal cyberstalking statute. 18 U.S.C. § 2261A applies to conduct that 'places [a] person in reasonable fear of the death of, or serious bodily injury to […] (i) that person; (ii) an immediate family member; or (iii) a spouse or intimate partner of that person'. The defendant must have the mens rea to 'kill, injure, harass, intimidate or place [the victim] under surveillance'. As discussed below, while criminal prosecution has the advantage of being more of a deterrent than the prospect of civil liability alone, these statutes do not clearly prohibit the practice of creating malicious deepfakes. In addition, for reasons that will be discussed, simply prohibiting the creation of deepfakes might not, as a practical matter, remove deepfakes from the web (ibid). Moreover, deepfake evidence is currently not governed by any specific evidentiary approach, and the present legal criteria for authenticating evidence are inadequate. When dealing with deepfake evidence in court, judges and lawyers must avoid evidentiary traps while also contending with growing skepticism and mistrust of audiovisual proof (Delfino, Citation2022). A further problem lies in determining who created the deepfake, whether it was generated by artificial intelligence or by humans (Nema, Citation2021).

7.2. Copyright infringement

As a preliminary matter, copyright law is an avenue that would most likely not succeed in curbing malicious deepfakes. For the most part, deepfakes are made for non-commercial uses, and their results are likely to be found 'transformative'. In Dhillon v. Doe, the U.S. District Court for the Northern District of California held that using the headshot of a political figure on a noncommercial website to condemn that figure was 'exactly envisioned as a paradigmatic fair use by the Copyright Act' (Suslavich, n.d.). This analysis has held true for deepfakes as well. For instance, the music artist Jay-Z tried to use a copyright strike to remove deepfake audio of himself from YouTube. The strike was unsuccessful, and the videos of 'Jay-Z' reciting the 'To Be or Not To Be' soliloquy from Hamlet and 'We Didn't Start the Fire' by Billy Joel can still be found online. It is a central tenet of copyright law that only particular expressions, not ideas or facts, are eligible for protection. This raises an even deeper question about deepfakes: is a person's face or voice copyrightable? The answer is that a voice or a face is almost certainly unprotectable under copyright law. The court in Butler v. Target Corp. held that although the lyrics to a song are copyrightable, the underlying voice is not, since there is no limit to the number of words or phrases that a person may utter in their distinctive voice. In addition, deepfakes likely qualify as derivative works under 17 U.S.C. § 103, so any copyright protection that existed would not extend to encompass the deepfake. Copyright protection simply does not reach a person's inherent qualities (ibid).

7.3. Right of publicity

One method of fighting back against malicious deepfakes is grounded in privacy law, where the harm arises from the appropriation of an individual's likeness (Suslavich, n.d.). While this approach may work where a large corporation uses an individual's face for commercial gain, it would likely be ineffective at providing relief to the victim of a typical malicious deepfake. That said, as deepfake technology becomes more widespread and enters the marketplace, it is not inconceivable that a merchant would use deepfakes to gain a competitive advantage, for example through a false celebrity or political endorsement.

The right of publicity is likely to be an uphill battle for a litigant facing the typical non-commercial deepfake, since it protects only against the use of one's name, image, and likeness for commercial gain. Unfortunately for such a litigant, many deepfakes are artistic passion projects whose creators seek no commercial profit. When a work is expressive and artistic, the Rogers test applies to determine whether the First Amendment protects it. The test first asks whether the likeness used is artistically relevant to the underlying work. If it is, the test's second prong asks whether the use 'explicitly misleads as to the source or the content of the work'. This second part of the test is meant to filter out instances in which a celebrity's likeness is falsely presented as an endorsement. Applying this to deepfakes, one can see how a deepfake impersonating a political figure or other individual for comedic or shock value, or creating a nude (but not obscene) video of an individual, could be protected under the Rogers test as artistic expression that may be sold for profit (ibid).

Even if a deepfake were found to appropriate one's image and likeness for commercial gain, a plaintiff might still be unable to remedy the problem. Under § 230 of the Communications Decency Act, websites are not liable for content posted by third parties. If an individual creates a deepfake and posts it on a third-party site, that site is not legally responsible for the video. Courts have broadly interpreted the Communications Decency Act as insulating these platforms from virtually all civil liability other than liability for copyright infringement. For example, in Barnes v. Yahoo!, Inc., the Ninth Circuit determined that § 230 barred a claim for 'not providing services' when Yahoo failed to remove explicit pictures of the plaintiff that her ex-boyfriend had posted. This immunity poses a large problem for individuals like Ms. Martin since, as she experienced, the actual creators of deepfakes can be difficult or even impossible to find.

Even assuming that profits could be inferred from the advertising revenue gained from displaying the deepfake, any claim would still need to be drafted in accordance with state law. Roughly twenty states do not recognize the right of publicity at all, and of those that do, most have no language explicitly directed at deepfake technology. New York recently became the first state to explicitly extend an individual's right of publicity to computer-generated likenesses or digital replicas; there, the right also extends for forty years after the individual's death. Litigants should not expect most states to extend the right of publicity to computer-generated likenesses any time soon. Defamation claims, however, remain available to persons who are the subject of a malicious deepfake (ibid).

7.4. Defamation

The legal standards of a defamation suit change depending on whether the subject of the allegedly defamatory expression is a public figure or official or an ordinary individual (Suslavich, n.d.). The tort of defamation is typically broken down into two forms of speech not protected by the First Amendment: libel (written or similar communications, such as deepfakes) and slander (spoken communications). While defamation laws vary from state to state, the Second Restatement of Torts lists the following elements of liability: (a) a false and defamatory statement concerning another; (b) an unprivileged publication to a third party; (c) fault amounting at least to negligence on the part of the publisher; and (d) either actionability of the statement irrespective of special harm or the existence of special harm caused by the publication.

However, the third element of defamation, (c), differs for public officials and other persons in the public eye. In New York Times Co. v. Sullivan, the Supreme Court ruled that when the speech at issue is directed at a public figure or criticizes official conduct, the standard of negligence is replaced with the higher standard of proving that the speaker acted with actual malice, which the Court defined as speaking with knowledge that the statement 'was false or with reckless disregard of whether it was false or not'. Creating a convincing deepfake currently requires a certain level of computer proficiency: multiple facial images of the subject must be gathered, and the host video must be carefully selected to ensure a genuine appearance. With today's technology, therefore, it may be nearly impossible to create a malicious and convincing deepfake by accident. Creators of malicious deepfakes may nonetheless defend against defamation claims by arguing that the deepfake is a creative parody and not a convincing representation of the truth (Suslavich, n.d.).

In the USA, defamation law provides a framework for holding accountable those who use deepfake technology to spread false information. In some cases, individuals or institutions targeted by a deepfake may sue its creators for defamation and seek damages in court. This provides a means of deterrence and can help reduce the prevalence of deepfakes in political campaigns and media. However, the protection of free speech must be balanced against the need to prevent the spread of false information, and as deepfakes grow more sophisticated, more innovative methods of detecting and preventing their use will be needed. Overall, governmental agencies, technology companies, and the general public must work together to promote greater awareness of the risks posed by deepfakes and to take the necessary steps to prevent their misuse.

The only defamation lawsuit to explicitly mention a 'deepfake' was filed in June 2021. In that case, a middle-aged man named Mr. Kwon was running for a board position within his company. Mr. Kwon and his company sued for defamation after an unidentified individual posted 'a manipulated deepfake' photo depicting Mr. Kwon kissing a much younger woman. For private individuals like Mr. Kwon or Ms. Martin, the Supreme Court has applied fewer constitutional constraints to defamation law than public figures or officials receive. In Gertz v. Robert Welch, Inc., the Supreme Court held that states may define their own standards of liability for 'defamatory falsehood[s] injurious to a private individual' as long as they do not impose strict liability for such falsehoods. In cases involving individuals outside the public eye, therefore, plaintiffs will need to craft their claims carefully to demonstrate that the malicious deepfake is defamatory. This is a question of state law, and many factors bearing on whether the statement was truly false would apply (which may be problematic if the video explicitly stated that it was fake).

However, all hope is not lost for someone who has been the subject of a malicious deepfake. Many states have enacted nonconsensual pornography or 'revenge porn' laws that operate similarly to defamation laws. Nonconsensual pornography entails distributing sexually explicit pictures of people without their consent. Although sexually explicit deepfakes do not actually expose the victim's own body, they can have the same devastating effects as a real video: victims suffer stigmatization, shame, and humiliation, and may face difficulties in securing future employment. States should regulate nonconsensual deepfake pornography for the same reasons they regulate standard nonconsensual pornography, and they have the authority to do so, since the Supreme Court, in Miller v. California, recognized that the First Amendment does not protect obscene material.

7.5. Relevant policies

Several policies in the USA aim to counter the threat posed by deepfakes. In 2019, the US Congress passed the Deepfake Report Act, which requires the Department of Homeland Security to issue an annual report assessing the state of deepfake technology and its potential threats to national security. Furthermore, the National Institute of Standards and Technology has launched a program focused on developing standards and guidance for detecting and combating fake media. The social media platforms Facebook and Twitter have also instituted measures to flag and remove deepfake content: they have developed algorithms to detect manipulated media and applied fact-checking labels to content found to be false. These measures are significant steps toward limiting the circulation of deepfakes.

However, many scholars argue that more needs to be done to combat deepfakes and that regulatory responses should not be limited to defamation lawsuits. They recommend stronger regulation of social media platforms, particularly regarding algorithmic transparency, and suggest that regulatory agencies set clear technical standards for media authenticity and work closely with stakeholders to ensure those standards are met. Balancing the need for greater regulation against individual rights to free speech and expression remains a complex issue, but it is clear that only a multi-pronged approach involving different stakeholders will be effective in combating the spread of deepfakes.

In addition, several state-level initiatives have been launched in the US to combat deepfake technology. For example, California has established a Deepfake Advisory Task Force consisting of legal experts, technology experts, and representatives of law enforcement and academia. The task force is responsible for studying the implications of deepfake content for society and making recommendations for policy and legislation to address the issue. These efforts demonstrate the importance of a collaborative approach to fighting deepfakes through the implementation of comprehensive policy measures.

7.6. Detection tools

In the US, various research organizations, academic researchers, and tech companies are working on deepfake detection tools. One such tool is the DeepTrace project, led by researchers at Princeton University, which uses machine learning techniques to detect manipulated videos. The tool analyzes various features of a video, such as spatial and temporal characteristics, and identifies whether they have been manipulated; the project has also assembled a dataset of manipulated videos to train its machine learning algorithm to detect deepfakes more accurately. Another effort is the Deepfake Detection Challenge, launched by DARPA, Facebook, and Microsoft, among others: a competition to develop the most accurate deepfake detection algorithm, using a standardized dataset of deepfake videos to evaluate the accuracy of each entry.
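
As a rough illustration of how a learning-based detector can analyze the spatial features of frames, the sketch below fine-tunes a standard pretrained image classifier to label extracted video frames as real or fake. The directory layout, model choice, and hyperparameters are hypothetical; this is a generic sketch, not the pipeline of DeepTrace or any other named project. Temporal cues would additionally require sequence models over consecutive frames.

```python
# Hypothetical frame-level deepfake classifier: fine-tune a pretrained CNN
# to score single frames as real or manipulated. Generic sketch only.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Assumes extracted frames on disk: frames/real/*.jpg and frames/fake/*.jpg
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
dataset = ImageFolder("frames", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # real-vs-fake output head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(frames), labels)  # cross-entropy on frame labels
        loss.backward()
        optimizer.step()
```

A video-level verdict would then typically be produced by averaging the per-frame scores over a clip, which makes the decision more robust than judging any single frame.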

Further, tech companies such as Google have supported deepfake detection work, for example through the FaceForensics benchmark, which can identify both generative deepfakes and face swaps. Another tech company, Truepic, has developed a tool called Truepic Foresight, which leverages blockchain technology to verify the authenticity of photos and videos and to detect signs of manipulation. These detection tools are still improving in accuracy, but they represent important steps in combating the threat of deepfake technology by empowering individuals and organizations to identify manipulated content and take appropriate action to minimize its impact.

8. Recommendations to tackle deepfakes

Rules and regulations are an obvious means of combating deepfakes. Currently, criminal laws in many nations do not address deepfakes specifically, though legal authorities have recommended adapting existing rules on slander, defamation, identity fraud, or impersonating a public official so that they apply to deepfakes. Virginia recently extended its state law against revenge pornography to make the distribution of fake images and videos an offense, thereby bringing deepfakes within its reach. Increasingly sophisticated AI technology calls for new kinds of rules and regulatory structures. For instance, deepfakes raise concerns over privacy and copyright, because the visual depiction of a person in a deepfake video is not an exact copy of any existing content but a new depiction generated by AI. Regulators must therefore navigate a difficult legal terrain around free speech and ownership rules to properly govern the use of deepfake technology.

However, a sound legal response to the propagation of destructive deepfakes would not be to ban the technology entirely, which would be neither practical nor lawful. While new rules can be created to restrain deepfakes, means must also be provided to enforce them. Today's social network companies are largely shielded from liability for the material their users post. One legal solution would be to remove that immunity with respect to the content users display, so that both the platforms and their users become accountable for what is posted. Even so, laws have little effect on malicious actors, such as foreign states and terrorists, that can mount large disinformation campaigns against other countries on social networks.

Deepfakes can also be combated, to a great extent, through education and training. Although institutions have provided credible reporting, the public largely does not know about or appreciate the consequences of deepfakes. The public generally must be educated about AI's potential for misuse. Because deepfakes give digital fraudsters a new instrument for social engineering, firms and institutions should stay alert and develop cyber-resilience plans. Administrators, regulatory bodies, and citizens must understand that video, despite appearances, may not accurately represent what occurred, and they should learn which visual cues can help expose deepfakes. Schoolchildren should be taught to think critically and rationally and to be computer literate, so that they can identify false content and interact respectfully with one another on the internet.

Anti-deepfake technology offers various methods and instruments to (1) identify deepfakes, (2) authenticate content, and (3) prevent content from being used to produce deepfakes. In general, however, such methods cannot detect and authenticate fake content at scale, because those who produce deepfakes have more material to work with than the technology available to detect them. For example, users upload 500 hours of content to YouTube every minute, and Twitter battles roughly 8 million accounts per week that attempt to spread manipulated content, making it very difficult to check all uploads quickly with current technologies. Deepfake producers also appear to use published deepfake research to improve their tools and learn how to evade new detection technologies.

Detection techniques that leverage device fingerprints have emerged as one of the most promising ways to detect deepfakes. These techniques analyze small but distinctive traces left behind by the camera used to record the video, such as sensor noise, lens distortion, and compression artifacts. Like fingerprints, these traces act as an identification pattern unique to each camera, and researchers have found that they are retained even after a deepfake is generated. By comparing the fingerprints of the original and the suspect video, it is possible to identify deepfake content. Such machine-learning-based techniques focus on minute differences between real and fake video and use the fingerprinting approach to increase detection accuracy.
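
A simplified sketch of the fingerprinting idea follows: estimate a camera's noise residual from known-genuine frames, then measure how strongly a questioned frame carries the same pattern. The Gaussian-filter denoiser, the toy random frames, and the threshold are simplifying assumptions; real sensor-fingerprint (PRNU) pipelines use wavelet denoising and formal statistical tests.

```python
# Sketch of a sensor-fingerprint check in the spirit described above.
# Denoiser, threshold, and synthetic frames are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(frame: np.ndarray) -> np.ndarray:
    """Approximate sensor noise as the image minus a smoothed version."""
    return frame - gaussian_filter(frame, sigma=2)

def camera_fingerprint(reference_frames: list) -> np.ndarray:
    """Average residuals over many known-genuine frames from one camera."""
    return np.mean([noise_residual(f) for f in reference_frames], axis=0)

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two residual patterns."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy usage with synthetic grayscale frames standing in for real footage.
rng = np.random.default_rng(0)
reference = [rng.normal(size=(128, 128)) for _ in range(10)]
fingerprint = camera_fingerprint(reference)
questioned = rng.normal(size=(128, 128))
score = correlation(fingerprint, noise_residual(questioned))
print("consistent with camera" if score > 0.05 else "inconsistent", score)
```

A frame whose residual correlates poorly with the claimed source camera's fingerprint, or whose face region correlates worse than the background, is a candidate for manipulation.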

Scientists, for instance, discovered that early deepfake systems could not imitate the rate at which people naturally blink, although newer systems have since overcome the absence of blinking, or unnatural blinking, after those findings were published. While it is mainly national security agencies, such as the Defense Advanced Research Projects Agency (DARPA), that fund the development of deepfake detection technology, private digital security firms can also find ways to detect deepfakes, build trustworthy networks, remove illegal bots, and combat crime and cyber pollution. Developing anti-deepfake tools alone, however, will not solve the problem: institutions must actually deploy them.
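
The blink cue just mentioned can be checked with simple landmark geometry. The sketch below assumes that eye landmarks have already been extracted upstream (for example with dlib or MediaPipe, both hypothetical choices here) and estimates a blink rate from the eye aspect ratio; the threshold and the simulated data are illustrative assumptions.

```python
# Illustrative blink-frequency check via the eye aspect ratio (EAR), a
# common landmark-based heuristic. Landmark extraction is assumed upstream.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from six (x, y) eye landmarks; drops sharply when the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_rate(ear_series: list, fps: float, threshold: float = 0.2) -> float:
    """Blinks per minute: count downward crossings of the EAR threshold."""
    closed = np.asarray(ear_series) < threshold
    blinks = np.sum(closed[1:] & ~closed[:-1])  # open -> closed transitions
    return 60.0 * blinks / (len(ear_series) / fps)

# One simulated blink in about seven seconds of 30 fps footage.
ears = [0.3] * 100 + [0.15] * 3 + [0.3] * 100
print(blink_rate(ears, fps=30))  # roughly 8.9 blinks per minute
```

A clip whose subject blinks far less often than typical spontaneous human rates (roughly 15-20 blinks per minute) would warrant closer inspection, although, as noted above, newer generators have learned to mimic blinking, so this cue is no longer sufficient on its own.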

9. Discussion and conclusion

GANs are an artificial intelligence approach that can readily be used to construct deepfakes (Mirza & Osindero, Citation2014), enabling convincing human-mimicking audio and video content (Suwajanakorn et al., Citation2017). The average person may lack the tools necessary to spot a deepfake and avoid being deceived by one (Rössler, Citation2018). The revolutionary nature of deepfake technology necessitates a fresh examination of the moral, ethical, and psychological discourse surrounding it (Farish, Citation2020).

By constructing false social and political narratives, deepfake technology can be used to exploit the technological power and political discourse that dominate our global media landscape (Cox & Williams, Citation2021; Slater & Rastogi, Citation2022). Both the epistemic harm of deepfakes, which threatens the credibility and safety of frontline witnesses, and the responses to this 'information disorder' carry risks and can exacerbate existing inequalities (Gregory, Citation2021). Deepfake technology carries epistemic promises as well as perils that affect how we fare as enlightened individuals (Kerner & Risse, Citation2021), and decision-making in the digital era may benefit from a shift from instrumental to social rationality, a shift that deepfakes may facilitate (Etienne, Citation2021). Advertising strategies, including creative development, media buying, and audience segmentation, are all susceptible to radical transformation in the face of deepfakes (Campbell et al., Citation2022).

Existing literature on deepfakes suffers from a number of shortcomings, including definitional problems, insufficient demographic representation, and a lack of theoretical models (Vasist & Krishnan, Citation2022). Many of the issues deepfakes raise require, at least in theory, adjustments to the existing framework of international law (Maas, Citation2019). The local environment and social setting are also crucial when analyzing deepfakes and devising remedial actions (de Seta, Citation2021). More countries must swiftly pass laws akin to Europe's General Data Protection Regulation to safeguard individuals against the unethical and destructive uses of deepfakes (Truby & Brown, Citation2020).

In conclusion, deepfakes are a recent development that has created institutional vulnerabilities in various domains. This study has presented several aspects of deepfake technologies, their uses and abuses, and the legal challenges associated with them. Deepfakes can positively revolutionize entertainment, media, medicine, and other sectors. The technology also has a dark side, however, with a high potential for privacy abuse, gender discrimination, violence, and political propaganda. Because A.I. is often used to generate seemingly real videos filled with misinformation, traditional legal and personal precautions are inadequate for dealing with this abuse. This paper aims to engage expert and general readers alike by providing an overview of the legal framework, deepfake technology, authentication, and the means of regulating the use of deepfakes.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Mohamed Chawki

Dr. Mohamed Chawki holds a Ph.D. in cyberlaw from the University of Lyon III in France for a dissertation on the French, British, and American cybercrime legal systems, followed by three years of post-doctoral research at the Faculty of Law, Aix-Marseille University, France. In 2023, Dr. Chawki obtained the French Habilitation (HDR), the highest academic qualification a French university can confer. He is currently an Associate Professor of Cybercrime and Criminology at Naif Arab University for Security Sciences (NAUSS), Saudi Arabia, and serves as a Senior Judge in Egypt and Vice-president of the Council of State. Dr. Chawki has received over 25 prizes for his academic achievements, including the Medal of Excellence awarded by the President of the Arab Republic of Egypt in 1998, the International Prize Claire l'Heureux-Dubé from Canada in 2007, and the Distinguished Services Medal from the government of Brazil in 2009.

References

  • Afanasyeva, T., & Yumasheva, I. (2022). Research on the effects of “DeepFake” Technology for the modern digital space. In Challenges and solutions in the digital economy and finance (pp. 57–65). Springer.
  • Altay, S., Hacquin, A.-S., & Mercier, H. (2022). Why do so few people share fake news? It hurts their reputation. New Media & Society, 24(6), 1303–1324. https://doi.org/10.1177/1461444820969893
  • Campbell, C., Plangger, K., Sands, S., Kietzmann, J., & Bates, K. (2022). How deepfakes and artificial intelligence could reshape the advertising industry: The coming reality of AI fakes and their potential impact on consumer behavior. Journal of Advertising Research, 62(3), 241–251. https://doi.org/10.2501/JAR-2022-017
  • Chouliaraki, L. (2015). Digital witnessing in conflict zones: The politics of remediation. Information, Communication & Society, 18(11), 1362–1377. https://doi.org/10.1080/1369118X.2015.1070890
  • CNN. (2022). The fight to stay ahead of deepfake videos before the 2020 U.S. election. Retrieved November 27, 2022, from https://edition.cnn.com/2019/06/12/tech/deepfake-2020-detection/index.html
  • Cox, J., & Williams, H. (2021). The unavoidable technology: How artificial intelligence can strengthen nuclear stability. The Washington Quarterly, 44(1), 69–85. https://doi.org/10.1080/0163660X.2021.1893019
  • Crystal, D. T., Cuccolo, N. G., Ibrahim, A. M. S., Furnas, H., & Lin, S. J. (2020). Photographic and video deepfakes have arrived: How machine learning may influence plastic surgery. Plastic and Reconstructive Surgery, 145(4), 1079–1086. https://doi.org/10.1097/PRS.0000000000006697
  • de Seta, G. (2021). Huanlian, or changing faces: Deepfakes on Chinese digital media platforms. Convergence, 27(4), 935–953. https://doi.org/10.1177/13548565211030185
  • Delfino, R. (2022). Deepfakes on trial: A call to expand the trial judge’s gatekeeping role to protect legal proceedings from technological fakery. Available at SSRN 4032094.
  • Dickson, B. (2022). When AI blurs the line between reality and fiction. PCMag. Retrieved from https://www.pcmag.com/news/when-ai-blurs-the-line-between-reality-and-fiction
  • Etienne, H. (2021). The future of online trust (and why Deepfake is advancing it). AI and Ethics, 1(4), 553–562. https://doi.org/10.1007/s43681-021-00072-1
  • Farish, K. (2020). Do deepfakes pose a golden opportunity? Considering whether English law should adopt California’s publicity right in the age of the deepfake. Journal of Intellectual Property Law & Practice, 15(1), 40–48. https://doi.org/10.1093/jiplp/jpz139
  • Gosse, C., & Burkell, J. (2020). Politics and porn: how news media characterizes problems presented by deepfakes. Critical Studies in Media Communication, 37(5), 497–511. https://doi.org/10.1080/15295036.2020.1832697
  • Gregory, S. (2019). Cameras everywhere revisited: How digital technologies and social media aid and inhibit human rights documentation and advocacy. Journal of Human Rights Practice, 11(2), 373–392. https://doi.org/10.1093/jhuman/huz022
  • Gregory, S. (2021). Deepfakes, misinformation and disinformation and authenticity infrastructure responses: Impacts on frontline witnessing, distant witnessing, and civic journalism. Journalism, 23(3), 708–729. https://doi.org/10.1177/14648849211060644
  • Henry, N., & Witt, A. (2021). Governing image-based sexual abuse: Digital platform policies, tools, and practices. In J. Bailey, A. Flynn, & N. Henry (Eds.), The Emerald international handbook of technology-facilitated violence and abuse (pp. 749–768). Emerald Publishing Limited.
  • Kan, M. (2022, November 28). This A.I. can recreate podcast host Joe Rogan’s voice to say anything. PCMag. Retrieved from https://www.pcmag.com
  • Kerner, C., & Risse, M. (2021). Beyond porn and discreditation: Epistemic promises and perils of deepfake technology in digital lifeworlds. Moral Philosophy and Politics, 8(1), 81–108. https://doi.org/10.1515/mopp-2020-0024
  • Khodabakhsh, A., Ramachandra, R., & Busch, C. (2019). Subjective evaluation of media consumer vulnerability to fake audiovisual content. In 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany (pp. 1–6). IEEE. https://doi.org/10.1109/QoMEX.2019.8743316
  • Kietzmann, J., Lee, L. W., McCarthy, I. P., & Kietzmann, T. C. (2020). Deepfakes: Trick or treat? Business Horizons, 63(2), 135–146. https://doi.org/10.1016/j.bushor.2019.11.006
  • Korshunov, P., & Marcel, S. (2019). Vulnerability assessment and detection of deepfake videos. In 2019 International Conference on Biometrics (ICB), Crete, Greece (pp. 1–6). IEEE.
  • Laser, C. D. (2022). Deepfakes, privacy and freedom of speech. Retrieved November 27, 2022, from https://yourwitness.csulaw.org/uncategorized/deepfakes-privacy-and-freedom-of-speech/
  • Laskovtsov, A. (2020). Navigating the manosphere: An examination of the Incel movement’s attitudes toward sexual aggression and violence against women.
  • Leibowicz, C. R., McGregor, S., & Ovadya, A. (2021). The deepfake detection dilemma: A multistakeholder exploration of adversarial dynamics in synthetic media. In AIES ’21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 736–744). ACM.
  • Lucas, K. T. (2022). Deepfakes and domestic violence: Perpetrating intimate partner abuse using video technology. Victims & Offenders, 17(5), 647–659. https://doi.org/10.1080/15564886.2022.2036656
  • Maas, M. M. (2019). International law does not compute: Artificial intelligence and the development, displacement or destruction of the global legal order. Melbourne Journal of International Law, 20(1), 29–57.
  • Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.
  • Mostert, F., & Perot, E. (2020). Fake it till you make it: an examination of the US and English approaches to persona protection as applied to deepfakes on social media. Journal of Intellectual Property Law & Practice, 15(1), 32–39. https://doi.org/10.1093/jiplp/jpz164
  • Nema, P. (2021). Understanding copyright issues entailing deepfakes in India. International Journal of Law and Information Technology, 29(3), 241–254. https://doi.org/10.1093/ijlit/eaab007
  • Pantserev, K. A. (2020). The malicious use of AI-based deepfake technology as the new threat to psychological security and political stability. In Cyber defence in the age of AI, smart societies and augmented humanity (pp. 37–55). Springer.
  • Patterson, D. (2022). From deepfake to cheap fake, it’s getting harder to tell what’s true on your favorite apps and websites. CBS News. Retrieved November 30, 2022, from https://www.cbsnews.com/
  • Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., & Nießner, M. (2018). FaceForensics: A large-scale video dataset for forgery detection in human faces. arXiv preprint arXiv:1803.09179.
  • Slater, K., & Rastogi, A. (2022). Deep-Rooted Images: Situating (Extra) institutional appropriations of deepfakes in the US and India. Fast Capitalism, 19(1), 93–101. https://doi.org/10.32855/fcapital.202201.007
  • Suslavich, B. (n.d.). Nonconsensual deepfakes: A deep problem for victims. SSRN. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4137782
  • Suwajanakorn, S., Seitz, S. M., & Kemelmacher-Shlizerman, I. (2017). Synthesizing Obama: Learning lip sync from audio. ACM Transactions on Graphics, 36(4), 1–13. https://doi.org/10.1145/3072959.3073640
  • Taulli, T. (2019). Deepfake: What you need to know. Forbes. Retrieved November 27, 2022, from https://www.forbes.com/sites/tomtaulli/2019/06/15/deepfake-what-you-need-to-know/?sh=1298d58d704d
  • Truby, J., & Brown, R. (2020). Human digital thought clones: the Holy Grail of artificial intelligence for big data. Information & Communications Technology Law, 30(2), 140–168. https://doi.org/10.1080/13600834.2020.1850174
  • van der Sloot, B., & Wagensveld, Y. (2022). Deepfakes: regulatory challenges for the synthetic society. Computer Law & Security Review, 46, 105716. https://doi.org/10.1016/j.clsr.2022.105716
  • Vasist, P. N., & Krishnan, S. (2022). Deepfakes: an integrative review of the literature and an agenda for future research. Communications of the Association for Information Systems, 51(1), 590–636. https://doi.org/10.17705/1CAIS.05126
  • Verdoliva, L. (2020). Media Forensics and DeepFakes: An overview. IEEE Journal of Selected Topics in Signal Processing, 14(5), 910–932. https://doi.org/10.1109/JSTSP.2020.3002101
  • Wagner, T. L., & Blewer, A. (2019). “The word real is no longer real”: Deepfakes, gender, and the challenges of AI-altered video. Open Information Science, 3(1), 32–46. https://doi.org/10.1515/opis-2019-0003
  • Westerlund, M. (2019). The emergence of Deepfake technology: A review. Technology Innovation Management Review, 9(11), 39–52. https://doi.org/10.22215/timreview/1282
  • Yadlin-Segal, A., & Oppenheim, Y. (2021). Whose dystopia is it anyway? Deepfakes and social media regulation. Convergence, 27(1), 36–51. https://doi.org/10.1177/1354856520923963
  • Zhu, B., Fang, H., Sui, Y., & Li, L. (2020). Deepfakes for medical video de-identification: Privacy protection and diagnostic information preservation. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 414–420). ACM.