Scholars debate current challenges in regulating offensive and harmful speech on the internet.
Hundreds of rioters stormed the U.S. Capitol Building on January 6, 2021, motivated in part by an online campaign calling for people to “stop the steal” of the 2020 election by any means necessary.
Beginning in 2014, a months-long coordinated online campaign of misogynistic threats and sexual harassment, known as “GamerGate,” forced a trio of high-profile women in the video game industry offline.
In one ongoing criminal trial against former President Donald J. Trump, the judge has excused jurors due to fears that they may be “doxed”—a term that refers to publishing someone’s sensitive personal information online without their consent.
As people increasingly interact with one another online rather than offline, the impact of harmful speech on the internet has likewise increased. The growth of internet-based hate speech and harassment has prompted a debate among scholars and regulators over how best to regulate internet speech while upholding First Amendment principles.
Some scholars argue that courts should reform their approach to regulating violent speech to better address internet-specific concerns. These scholars emphasize that courts’ current practice of prohibiting only violent speech that poses an immediate threat allows most violent internet speech to go unregulated.
Other scholars blame Section 230 of the Communications Decency Act for the proliferation of harmful internet speech. Section 230 provides online communication platforms, including social media sites, with broad protection from liability for content that users post on their platforms. Critics argue that Section 230 gives internet platforms too much protection and allows large internet companies to profit from the harmful speech they host.
But other scholars worry that proposed regulations of internet speech ultimately will infringe upon constitutionally protected speech. These scholars emphasize that concerns about harmful internet speech can often serve as a cover for attempts to suppress constitutionally protected endorsements of unpopular political opinions.
Free speech advocates also argue that criminalizing or banning doxing would violate the First Amendment because it would require censoring entirely truthful speech.
Yet concerns over harmful internet speech have prompted regulatory interventions from both private and public actors. The social media site X—formerly known as Twitter—banned former President Trump for more than a year because of his actions on January 6th, and Reddit has banned users who publish revenge porn on the site.
In each of the last two U.S. congressional sessions, Senator Mark R. Warner (D-Va.) introduced the SAFE TECH Act, which would reform Section 230’s immunity shield for social media companies. The bill has not gained much traction, however. At the state level, an Illinois anti-doxing law took effect in January 2024, making it illegal to publish another person’s personal information online without their consent and with the intent to cause bodily harm, economic injury, or emotional distress. The effects of these policy interventions on both harmful and inoffensive speech, however, remain unclear.
In this week’s Saturday Seminar, scholars discuss the current regulatory boundaries, or lack thereof, for hateful and harassing internet speech and propose solutions to protect internet users.
- In a forthcoming article in the New York University Law Review, Howard Schweber of the University of Wisconsin-Madison and graduate student Rebecca Anderson suggest that courts eliminate the First Amendment protections afforded to calls for political violence. Schweber and Anderson explain that, under the test established in Brandenburg v. Ohio, the First Amendment protects calls for criminal wrongdoing unless that wrongdoing is imminent. The authors conclude that this test fails to adequately address increasingly common calls for political violence that take place online, far removed in time and space from their targets. Schweber and Anderson propose that courts develop an internet-specific Brandenburg standard to account for the unique dangers posed by web-based calls for lawlessness.
- The First Amendment should not protect the sharing of sexually explicit content depicting candidates for political office, argue recent University of Virginia Law School graduates Zachary Starks-Taylor and Jamie Miller in a forthcoming article in the New York University Law Review. Starks-Taylor and Miller explain that internet and social media companies allow members of the public to share sexually explicit images and videos of candidates they dislike. The authors note, however, that 48 states have recently adopted laws criminalizing the nonconsensual sharing of sexually explicit content, known as revenge porn. Starks-Taylor and Miller conclude that such statutes adequately protect political speech and do not erode the First Amendment’s traditionally robust protections for election-related speech.
- Seeking to better understand public perceptions of online sexual harassment, Inbal Lam of Israel’s Western Galilee College and Gustavo S. Mesch of Israel’s University of Haifa discuss their survey of laypeople in a working paper. The results show that respondents perceived online sexual harassment as less serious than offline sexual harassment. Lam and Mesch hypothesize that this perception may stem from the low levels of fear online harassment provokes in most people, given the physical separation between victim and perpetrator. They argue that permissive public attitudes toward online sexual harassment, especially among men, allow such behavior to persist despite its negative effect on mental health.
- In a U.S. Government Accountability Office (GAO) report, Triana McNeil and several co-authors recommend that the Bureau of Justice Statistics take steps to measure “bias-motivated criminal victimization” on the internet. The GAO team finds that about one-third of internet users experience hate speech, which is often associated with hate crimes and domestic violent extremism. Yet the Department of Justice has incomplete information about such victimization because state law enforcement agencies are not required to report data, targeted communities often distrust law enforcement, and internet hate crimes are under-investigated due to the difficulty of proving that offenders are motivated by bias. The GAO team reports that the Bureau of Justice Statistics has agreed to pursue better measurement of bias-motivated criminal victimization.
- Most state anti-doxing laws violate the First Amendment while failing to protect the most vulnerable potential victims of doxing, Frank D. LoMonte of the University of Georgia School of Law and Paola Fiku of the Brechner Center for the Advancement of the First Amendment contend in an article in the University of Missouri-Kansas City Law Review. LoMonte and Fiku observe that most of the anti-doxing laws states have passed focus on protecting politicians and public officials from exposure while paying little attention to vulnerable populations, such as women subjected to misogynistic internet attacks. Furthermore, because most state statutes purport to criminalize the publication of even mundane information, such as an office phone number, LoMonte and Fiku conclude that they constitute an overbroad restriction on speech under the First Amendment.
- In a recent paper, Avery Bartagna, a law student at the Texas Tech University School of Law, discusses the need to increase internet platforms’ responsibility for content moderation in the Metaverse. The Metaverse—a combination of virtual reality user interfaces that companies such as Meta are currently developing—expands the potential for harassing and harmful internet behavior, Bartagna argues. She points to the problems users experienced in Second Life, an existing online virtual world with similarities to the Metaverse, to illustrate her point. Bartagna proposes that Meta and other platforms developing virtual reality interfaces adopt a “notice and takedown” system under which a platform would remove offensive content once a user flagged it.