Protecting Against Sexual Violence Linked to Deepfake Technology

Scholars and researchers navigate the evolving challenges posed by deepfake technology.

By one widely cited estimate, over 95 percent of deepfakes are pornographic. In one prominent example, an explicit deepfake image of Taylor Swift was circulated online earlier this year. This “photo of Swift was viewed a reported 47 million times before being taken down.”

As digital technology evolves, so do the risks of deepfake technology. “Deepfake,” a term derived from “deep learning” and “fake,” refers to highly convincing digital manipulations in which individuals’ faces or bodies are superimposed onto existing images or videos without their consent.

This emerging form of “image-based sexual abuse” presents unprecedented challenges. In 2021, the United Nations declared this form of violence against women and girls a “shadow pandemic.”

Amid the rapid evolution of deepfake technology, current laws struggle to keep pace. Although some jurisdictions have recognized the non-consensual distribution of intimate images as a criminal offense, the specific phenomenon of deepfakes often goes unpoliced.

In addition, traditional legal frameworks designed to address privacy violations or copyright infringement lack the nuance to combat deepfake-related abuses effectively. The use of deepfake technology invades privacy, inflicts profound psychological harm on victims, damages reputations, and contributes to a culture of sexual violence.

Proponents of reform argue that existing legislation must be expanded to explicitly include deepfakes within the scope of “image-based sexual abuse.” Such reform would include recognizing the creation and distribution of deepfakes as a distinct form of abuse that undermines individuals’ sexual autonomy and dignity. To address deepfake abuse, experts recommend a multi-faceted approach that includes enhancing victim support services, raising public awareness about the implications of deepfakes, and fostering collaboration between technology companies, legal experts, and law enforcement agencies.

Furthermore, advocates of reform urge social media platforms and content distribution networks to implement more stringent procedures for detecting and removing deepfake content and to promote digital literacy to help individuals safely navigate the complexities of online spaces.

But navigating the complex landscape of deepfake regulation presents significant challenges, requiring nuanced approaches that balance privacy protection and free expression with the need to combat online abuse and exploitation. The global nature of the Internet, for example, allows deepfake content to cross national boundaries, complicating enforcement. Human rights advocates have noted the need for international cooperation and harmonized laws to protect victims across borders.

In this week’s Saturday Seminar, researchers and scholars explore the current landscape of deepfakes and sexual violence and the attempts to regulate this emerging technology.

  • Nonconsensual deepfakes are an “imminent threat” to both private individuals and public figures, argues judicial clerk Benjamin Suslavich in an article in the Albany Law Journal of Science & Technology. Deepfake technology can generate lifelike videos of a subject from just a single image, a capability often misused to create nonconsensual pornographic content, Suslavich notes. He argues that current legal protections are inadequate to provide recourse for victims. Suslavich calls for the adoption of legislative and regulatory frameworks that would enable individuals to reclaim their identities on the Internet. Specifically, Suslavich recommends reducing statutory protections for internet service providers—which currently have blanket immunity—if they fail to quickly remove identified nonconsensual pornographic deepfakes.
  • In an article in the New Journal of European Criminal Law, Carlotta Rigotti of Leiden University and Clare McGlynn of Durham University discuss the European Commission’s proposal for a “landmark” directive to combat “image-based sexual abuse” by criminalizing non-consensual distribution of intimate images. Rigotti and McGlynn explain that this form of abuse includes creating, taking, sharing, and manipulating intimate images or videos without consent. Although they find the Commission’s proposal ambitious, they critique the narrow scope of its protections. To better protect women and girls, Rigotti and McGlynn urge the Commission to revise its approach toward online violence by removing the limiting language in the proposal and adding broader terms that encompass the evolving technological landscape.
  • Deepfake pornography can constitute a form of image-based sexual abuse, argues practitioner Chidera Okolie in an article for the Journal of International Women’s Studies. Like other types of legally recognized sexual abuse, deepfake pornography inflicts psychological and reputational damage on its victims, Okolie emphasizes. Although many countries have moved to regulate deepfake pornography, Okolie criticizes recently enacted laws for being overbroad and encompassing otherwise legitimate and legal content. To address the ambiguity, Okolie suggests that legislators enact laws that target technologies and practices specific to deepfake pornography. She also urges governments to enforce laws that are already in place to protect victims of sexual violence.
  • Collective, international effort is necessary to combat the global dissemination of deepfake pornography, contends practitioner Yi Yan in an article for the Brooklyn Journal of International Law. Yan argues that efforts to regulate deepfakes on an international scale are ineffective because of their fragmented nature. Instead, nations should target deepfake technology by focusing on extraterritorial jurisdiction and cooperation between nation-states, Yan argues. As a first step, Yan suggests that nations adopt language into international law that explicitly criminalizes AI-generated revenge pornography, a subject on which international law is currently silent.
  • Instead of relying on a patchwork of state laws, legislators should implement a federal law punishing the publication of technology-facilitated sexual abuse, proposes Kweilin T. Lucas of Mars Hill University in an article in Victims and Offenders. Even though most states have enacted laws to curtail non-consensual pornography, deepfakes fall outside existing regulations because the victim’s own nudity is not displayed in such videos, explains Lucas. Creators of deepfake pornography can also evade punishment under existing state revenge porn laws because their intent is not to harm or harass the victim, notes Lucas. To protect people’s images from being manipulated, federal law should punish the publication of non-consensual deepfakes that humiliate or harass the victim or facilitate violence, suggests Lucas.
  • In a British Journal of Criminology article, Asher Flynn of Monash University and several coauthors interviewed survivors of online image-based violence to identify whether certain populations are targeted for exploitation. The Flynn team examines the harm that the spread of non-consensual sexual imagery inflicts on certain groups. Flynn and her coauthors find that individuals with mobility needs, members of the LGBT+ community, and racial minorities are more vulnerable to image-based abuse. Victims reported experiencing severe trauma and significant changes in their lives, such as limiting their online or public engagement, notes the Flynn team. Image-based sexual violence prevention efforts should consider factors like racism, ableism, and heterosexism to better protect groups disproportionately targeted, suggest Flynn and her coauthors.