Election Security and Misinformation Regulation

Scholars discuss legal solutions for targeting election misinformation and the limits of those approaches.

As the 2024 U.S. presidential election draws near, the spread of misinformation has intensified, posing serious risks to democratic processes both within the United States and worldwide.

Recently, Facebook users disseminated artificial intelligence (AI)-generated images of Donald Trump wading through floodwater, seeking to bolster his image by exploiting the damage and chaos caused by Hurricane Helene. The list of concerns surrounding election misinformation has grown to include the dissemination of AI-generated deepfakes, social media-based voter disenfranchisement efforts, and fundraising scams.

In response to these concerns, scholars have analyzed the viability of using the legal system to prevent the spread of misleading or outright false information. U.S. lawmakers face an uphill battle against existing First Amendment caselaw, which limits the government’s ability to regulate communication and could make banning deceptive political speech difficult, even at the state level.

Experts have also warned of the heightened risks marginalized communities may face, particularly immigrants who have recently fled authoritarian, corrupt, or repressive regimes and who communicate on lightly moderated services such as WhatsApp, where misinformation can flourish. Similarly, malicious actors often target the African-American community with voter misinformation.

In recent years, the rise of digital media has fueled the spread of election misinformation. Some commentators, for example, have noted an increase in misinformation and scaremongering content on X, formerly Twitter, since Elon Musk purchased the platform. After the purchase, Musk unbanned many high-profile and controversial political figures, and X users went on to promote the baseless conspiracy theory that the federal government engineered Hurricane Helene. In addition, five secretaries of state have accused X’s artificial intelligence chatbot, known as Grok, of generating false information about ballot deadlines.

The problem of election misinformation is not limited to the United States. In 2022, the European Union implemented the Digital Services Act to protect the “fundamental rights” of internet users and foster a competitive market. Regulators recently invoked the Act’s authority to prevent the spread of false information about this year’s European Parliament elections, focusing on platforms such as X, YouTube, and TikTok.

And prior to recent elections in the United Kingdom and France, reporters noticed an increase in AI-generated video and audio content spreading false claims of corruption against Ukrainian President Volodymyr Zelenskyy. These claims, like false claims about U.S. elections, were propagated via lightly moderated platforms such as Telegram and Rumble.

Some scholars have proposed new international legal regimes to combat the spread of false information, reasoning that it diminishes nations’ sovereignty over their own electoral systems. Relatedly, several U.S. states have put forward bills to limit foreign spending in U.S. elections, arguing that foreign influence strips voters of their right to self-determination.

This week’s Saturday Seminar explores potential regulatory solutions to the problem of election misinformation both within the United States and internationally.

  • In an article in the First Amendment Law Review, David S. Ardia of the University of North Carolina School of Law and Evan Ringel of the University of North Carolina School of Journalism discuss the limits of state law in addressing election misinformation. The authors highlight that federal laws mandating “disclosure and recordkeeping requirements” for broadcast advertising do not extend to online political advertisements, leaving election-related speech largely unregulated outside of broadcasting. The authors argue that social media companies should be encouraged to self-regulate and promote transparency because, as private entities, their actions are not subject to the First Amendment limitations imposed on government actors.
  • In an article in the UCLA Law Review, Seana Shiffrin of the UCLA School of Law argues that social media companies should have the ability to restrict the accounts of government officials who disseminate “unconstitutional government speech.” Shiffrin emphasizes that courts would need to alter current Free Speech Clause doctrine, which restricts government officials’ speech only when it is both targeted at and harmful to particular individuals. The doctrine would also have to be expanded to include “government lies regarding public affairs”—a type of speech that, according to Shiffrin, is antithetical to the furtherance of self-governance and governmental accountability. Thus, Shiffrin concludes that such speech should not receive First Amendment protection.
  • In a recent article in the European Convention on Human Rights Law Review, Paolo Cavaliere of the University of Edinburgh Law School discusses the EU’s legal framework for curbing the spread of misinformation. Cavaliere notes that the primary document defining social media platforms’ obligations in the EU is the Code of Practice on Disinformation, which requires companies to take action against the dissemination of disinformation. Cavaliere observes, however, that the Code may clash with existing caselaw under the European Convention on Human Rights. Cavaliere concludes that as EU authorities continue to refine their regulations, they should ensure alignment with the Convention and avoid imposing obligations on platforms and key industry stakeholders that conflict with existing jurisprudence.
  • In an article in the Human Rights Law Review, Katie Pentney, a Ph.D. candidate at the University of Oxford, argues that governments are not merely regulators of disinformation but may also actively facilitate and spread it. Pentney explains that governments interfere with the flow of information and ideas in various ways, including overt censorship, withholding information, “fake news” accusations, and deceptive statements by officials on matters of public importance. Pentney notes that the European Court of Human Rights considers withholding information to be an interference with freedom of expression. She concludes that, to safeguard democracy, the European Court should expand this definition to include intentional misrepresentations by government officials.
  • In an article in the Journal of Strategic Security, Irem Işik, Ömer F. Bildik, and Tayanç T. Molla of Turkey’s Galatasaray University argue that states can use existing principles of international law to combat state-sponsored disinformation campaigns targeting elections. Işik, Bildik, and Molla suggest that these operations violate sovereignty by interfering with key governmental functions, breach the non-intervention principle through coercive manipulation of the electorate, and infringe upon states’ rights to self-determination by undermining voters’ free political will. Işik, Bildik, and Molla advise states to invoke these principles to hold perpetrators accountable, demand cessation, and deter future operations, thereby strengthening election security and protecting democratic processes without the need for new international laws.
  • In an article in the American Journal of Comparative Law, Leslie Gielow Jacobs, a professor at the University of the Pacific McGeorge School of Law, argues that although misinformation and disinformation pose significant threats to democracy, the U.S. Supreme Court’s broad protection of speech under the First Amendment restricts responsive government action. Existing laws address harmful false speech such as fraud and defamation, but broader regulation of misinformation on digital platforms presents constitutional challenges, Jacobs explains. Jacobs concludes that a multifaceted approach—combining legal measures, platform accountability, and public education—is necessary to effectively counter misinformation, safeguard election integrity, and protect democratic values.