Gender Violence in the Digital Age

Rangita de Silva de Alwis discusses how technology enables online violence toward women and girls.

In a recent discussion with The Regulatory Review, Rangita de Silva de Alwis, a renowned women’s human rights expert, delves into the complexities of online gender-based violence.

De Silva de Alwis argues that online platforms foster spaces for community and connection but may also provide new venues for manipulation and abuse. She claims that online gender-based violence perpetuates power imbalances that adversely impact women and girls. And, although online abuse shares features with offline abuse, she cautions that the speed and anonymity of technology create unique regulatory challenges.

De Silva de Alwis warns that new technologies, such as artificial intelligence (AI), are being weaponized to produce deepfakes and objectify women’s bodies. Beyond individual harms, de Silva de Alwis cautions that the societal impacts of online gender-based violence are far-reaching and could even discourage women from running for public office.

De Silva de Alwis underscores the importance of integrating a gendered perspective into international legal frameworks to address the unique vulnerabilities that women face in the digital realm. In support of her position, she is now engaged in drafting the first global treaty on cybercrime, the United Nations’ Comprehensive International Convention on Countering the Use of Information and Communications Technologies for Criminal Purposes, which is commonly known as the Cybercrime Convention.

De Silva de Alwis is on the faculty of the University of Pennsylvania Carey Law School as well as an expert member on the treaty body to the U.N. Convention on the Elimination of All Forms of Discrimination Against Women and the Women, Peace and Security Focal Points Network. She is also the Hillary Rodham Clinton Fellow on Global Gender Equity at Georgetown University’s Institute for Women, Peace and Security. Before entering academia, de Silva de Alwis was the inaugural director of the Global Women’s Leadership Initiative and the Women in Public Service Project launched by former U.S. Secretary of State Hillary Rodham Clinton. She serves on the U.N. Secretary General’s Task Force on Poverty Eradication and the Advisory Board on Gender Equality to the President of the U.N. General Assembly and as Vice Chair of the International Bar Association’s Human Rights Institute.

The Regulatory Review is pleased to share the following interview with Rangita de Silva de Alwis.

The Regulatory Review: What is online gender-based violence? What are some examples?

De Silva de Alwis: Online gender-based violence harnesses new and growing digital technology to instigate threats, intimidation, and harassment. Some examples include deepfakes, synthetic media, cyberstalking, online harassment, non-consensual dissemination of intimate images, doxing, slut-shaming, trolling, cyber-flashing, gendered hate speech, disinformation, misinformation, cyber smear campaigns, threats of sexual violence and murder, and morphing.

TRR: What role does AI play in online gender-based violence?

De Silva de Alwis: There is a concerning trend of AI-powered chatbots and online forums providing spaces for abusers to share tactics and strategies for further harming virtual partners. The proliferation of AI-generated images, videos, and other media content against women is another emerging category of violence that must be addressed. AI and facial mapping technology merge, combine, and superimpose images and videos to generate authentic-looking media called deepfakes. Pornographic deepfakes reinforce a culture that commodifies and objectifies women’s bodies. Pornographic deepfakes have become the new sites for gender-based violence against women and technology-facilitated abuse.

TRR: How does online gender-based violence relate to other forms of gender-based violence?

De Silva de Alwis: Although the direct physical act of sexual violence is different from online violence, there are also similarities. First, both acts share the structural gender and intersectional inequities that lie at the root of such conduct. Second, paralleling the defense that women and girls are free to leave an abusive relationship, the defense that women and girls are free to leave an abusive online environment denies their right to assembly and expression in the online public square. Finally, technology-facilitated gender-based violence, like other forms of gender-based violence against women, is about power, control, and power imbalances. The AI revolution, including large language models and generative AI, reproduces old power disparities and creates new ones both online and offline.

But online gender-based violence also differs from direct physical violence. The anonymity provided by the digital realm facilitates violence, and the automation capabilities offered by technology amplify the scope and impact of abusive behavior. Moreover, geographic distance emboldens the pile-on effect in the online space, whereby multiple offenders from disparate locations can join forces to harass and bully a single woman, shaping a culture of sexism and misogyny.

TRR: What have been the impacts of these online threats on women?

De Silva de Alwis: These online threats against women have resulted in physical and mental health challenges, including self-harm and suicide. Moreover, gendered forms of online hate speech have indirectly resulted in the shrinking of civic space and the erasure of women in public life. Women have sometimes left public office or have been discouraged from running for political or public office due to potential online threats to them and their families. Women parliamentarians, women in political life and decision-making, women journalists, and women human rights defenders may be doubly attacked because of their positions against power and patriarchy.

TRR: You have been involved in the Cybercrime Convention. How do you bring a gender perspective to these important conversations?

De Silva de Alwis: My comments to the drafters of the Cybercrime Convention have highlighted three points.

First, although data breaches are not gendered, they may carry differentiated gendered impacts. In July 2016, the municipality of São Paulo, Brazil, was crippled by a data breach exposing the personal data of an estimated 650,000 patients from the Brazilian public health system. In a country where abortion is illegal, the data breach included information on 4,237 abortions, exposing doctors and women to potential criminal charges. In another data breach, this time in Chile in 2016, more than 3 million health records, including the names, ID numbers, and addresses of women and girls who asked for the morning-after pill in a public hospital, as well as of people living with HIV, were exposed to the public. Breaches of women’s personal information, especially in relation to reproductive care, might have harmful gendered consequences.

Second, over the last few decades, the internet has given rise to new tools for human trafficking, especially the trafficking of women and girls on social media platforms. Facebook, Snapchat, WhatsApp, and Xbox Live are used to recruit victims through either direct messaging or “catfishing.” New forms of internet-based exploitation, such as cybersex dens where sexual performances are livestreamed via webcam, were revealed in the Nth Room case in South Korea.

Finally, while acknowledging the importance of the new Cybercrime Convention, we also need to recognize that national security law may have the indirect result of imperiling the rights of women human rights defenders and their freedom of association, opinion, and expression. The draft Cybercrime Convention does not treat human rights and national security as dichotomized goals in fixed opposition to each other. Rather than trumping human rights, national security measures under the new Cybercrime Convention must complement the human rights of the at-risk communities that will be disadvantaged in different ways by cyberattacks. Article 5 of the draft Cybercrime Convention is an important step in enshrining state parties’ human rights obligations through a gender lens.

TRR: Are there certain legal or non-legal actors that you think would be particularly adept at solving these challenges from digital technologies?

De Silva de Alwis: In my class, the AI and Implicit Bias Policy Lab, I bring together a diverse group of experts from across disciplinary boundaries and regional borders to cross-fertilize ideas. These experts—including policymakers, technologists, business leaders, venture capitalists, innovators, medical experts, philosophers, journalists, human rights experts, and artists—engage in dialogue with my students. The hope is to bring stakeholders from multidisciplinary fields, including creative thinkers like my students, to the problem-solving table.

TRR: What is the next great challenge in technology?

De Silva de Alwis: The next great challenge is AI-driven lethal autonomous weapons systems (LAWS). U.N. Secretary-General António Guterres has maintained that LAWS are “morally repugnant.” In a recent report, Guterres called for a legally binding instrument by 2026 to prohibit LAWS that function without human control and violate international humanitarian law and human rights.

In my ongoing research of the regulatory approaches of states and multilateral organizations, the European Union stands out in its position by recognizing that “it is important to take into account a gender perspective when discussing the issue of lethal autonomous weapons systems, given the nexus between gender equality and emerging technologies.”