The Federal Election Commission will not propose a rule banning deepfakes in campaign communications.
The Federal Election Commission (FEC) announced that it would not revise its existing rules to regulate the use of artificial intelligence (“AI”) in political campaign communications. This announcement responded to a petition for rulemaking submitted in July 2023 by the self-described consumer advocacy group Public Citizen.
Public Citizen requested that the FEC issue a rule clarifying that deepfakes—realistic, AI-generated visual or audio content—constitute “fraudulent misrepresentation” when used maliciously to depict a candidate saying or doing something they did not.
The Federal Election Campaign Act (FECA) and the FEC’s current rules prohibit federal candidates or their agents from “fraudulently misrepresenting” themselves as “speaking or writing or otherwise acting for or on behalf of any other candidate or political party” for purposes of damaging another candidate or for fundraising.
In its petition, Public Citizen expressed concerns about rapid advances in AI technology and the extent of its existing usage. For instance, the organization pointed to a deepfake from last year depicting then-Chicago mayoral candidate Paul Vallas expressing indifference toward police killings.
The FEC declined Public Citizen’s petition to undertake notice-and-comment rulemaking, in which the agency would have offered an opportunity for public comment before issuing a rule carrying the force of law. Instead, the FEC adopted an interpretive rule—an agency guidance document that does not bind regulated entities—clarifying that existing laws against fraudulent misrepresentation apply “irrespective of the technology used.”
The interpretive rule does not specify, however, whether deepfakes falsely depicting a candidate’s words or actions constitute fraudulent misrepresentation. The agency will make case-by-case decisions in adjudications rather than categorically regulating deepfake content.
The FEC’s disposition of the request for rulemaking followed nearly a year of deliberation. In response to Public Citizen’s petition, the FEC sought public comment on a potential rule. The agency received more than 2,000 comments, including from lawmakers and political parties as well as advocacy groups.
Commenters disagreed about the limits of the FEC’s statutory authority to regulate deepfakes. Even supporters of a proposed rule acknowledged that the FECA’s language prohibiting fraudulent misrepresentation applies specifically to candidates and their agents, not to political action committees or other actors.
Public Citizen still maintained that even a limited rule would be beneficial because federal candidates had already distributed deepfake content, citing an instance in which Florida Governor Ron DeSantis’s presidential campaign shared AI-generated images of President Donald Trump hugging former director of the National Institute of Allergy and Infectious Diseases Dr. Anthony Fauci.
In contrast, the Republican National Committee submitted a comment arguing that the FECA’s language addressing fraudulent misrepresentation only prohibits impersonation, not deception more broadly. Using the example of deepfake imagery portraying President Trump hugging Dr. Fauci, the RNC contended that so long as the DeSantis campaign did not impersonate Trump or claim to act on his campaign’s behalf, this content did not violate the FECA.
Commenters also debated a rule’s potential First Amendment implications. Public Citizen did not request a blanket ban on deepfake content in campaign communications. Rather, it contended that a FECA violation requires intent to deceive someone for political sabotage or fundraising. For example, Public Citizen acknowledged potential exceptions where a campaign discloses the use of AI or where AI-generated content constitutes parody. In addition, nonpartisan advocacy organizations Campaign Legal Center and Protect Democracy commented that a rule prohibiting fraudulent electoral communications would enhance rather than detract from First Amendment values.
The Institute for Free Speech, a nonprofit advocacy organization, commented that requiring the FEC to determine whether free speech is intentionally damaging would establish an unworkable regulatory standard if the material does not explicitly attack or promote a candidate. For example, the Institute contended that finding the deepfake depicting President Trump hugging Dr. Fauci intentionally damaging required too much context and individual understanding to survive judicial scrutiny. The Institute also argued that the difficulty of distinguishing misrepresentation and parody could chill protected speech.
The day the FEC denied Public Citizen’s petition, Republican Commissioner and FEC Chair Sean Cooksey released a statement defending the commission’s vote against initiating a rulemaking. Citing several recent bills seeking to regulate deceptive AI in elections, he echoed concerns that a proposed rule would exceed the FEC’s statutory authority. Chairman Cooksey also maintained that the issue’s novelty and the agency’s lack of experience made case-by-case adjudication preferable to a rulemaking.
Democratic Commissioner and Vice Chair Ellen Weintraub released a separate statement, conveying alarm at the proliferation of deepfakes in political communications and contending that the FEC could have issued a rule before the November 2024 elections had it reacted more quickly. Although she acknowledged the agency’s statutory limitations, Commissioner Weintraub maintained that the FEC could nonetheless have addressed key questions that could arise in enforcing the FECA, such as criteria for identifying fraudulent misrepresentation, the role of disclaimers, and whether deepfakes must be “deliberately deceptive” to violate the law.
After the FEC rejected Public Citizen’s petition, a bipartisan group of legislators in the House of Representatives introduced the NO FAKES Act, which would grant an individual right to one’s likeness and voice and establish penalties for unauthorized AI imitations. Senators had already introduced a companion bill in July 2024.
One of the House bill’s sponsors, Pennsylvania Representative Brian Fitzpatrick, contended that the NO FAKES Act would expand the FEC’s authority to regulate deepfakes in campaign communications. Despite bipartisan support, however, he and other cosponsors do not expect Congress to pass the bill this year.
This summer, the Federal Communications Commission also proposed a rule regulating AI in political advertising. The proposed rule focuses on disclosure, requiring television and radio broadcasters to issue an on-air announcement for all political advertisements containing AI-generated content. The agency stated that this rule would complement, rather than replace, the FEC’s efforts.