Scholars argue that video and audio fabrications could threaten modern government, yet satisfactory regulatory solutions remain out of reach.
Emma Gonzalez, a gun control advocate and survivor of the Parkland, Florida school shooting, defiantly ripped up the U.S. Constitution on camera. Or did she?
A grainy animated image of Gonzalez’s supposed act of subversion circulated across the Internet before multiple users corrected the record. It was a fake: a deep fake, because the video looked so convincingly real.
For law professors Robert Chesney and Danielle Citron, the video manipulation of Gonzalez represents only the beginning of the significant threat deep fakes pose to democracy.
In a recent paper, they explain that a web of emerging technologies can, in the wrong hands, facilitate the fabrication of fake but convincing “video or audio making it appear that someone said or did something.” According to Chesney and Citron, these deep fakes pose serious risks to individuals’ reputations, democratic discourse, and well-functioning government. Despite these significant problems, Chesney and Citron say that no effective legal solutions yet exist.
Experts have developed a variety of methods for creating deep fakes, many of which rely on neural networks—computer code that, like the human brain, learns from experience to get better at a given task. In the deep fake context, neural networks can learn to create fake pictures and videos by sampling original images and video clips. And as Chesney and Citron explain, academics and various businesses are driving similar research in the generation of fake audio.
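To make the neural-network idea concrete, the sketch below illustrates the adversarial-training approach (a generative adversarial network, or GAN) that underlies many image-generating deep-fake tools. It is only an assumption-laden illustration, not the specific method of the researchers Chesney and Citron cite: the network sizes, learning rates, and the random tensors standing in for real images are all placeholders.

```python
# Minimal, illustrative GAN training loop in PyTorch. Real deep-fake systems
# train far larger networks on real photographs or video frames; here random
# noise stands in for the training images, and all sizes are toy values.
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 64          # placeholder dimensions

generator = nn.Sequential(              # maps random noise -> fake "image"
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, image_dim), nn.Tanh())

discriminator = nn.Sequential(          # scores how "real" an input looks
    nn.Linear(image_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(200):
    real = torch.randn(32, image_dim)   # placeholder for sampled real images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator learns to tell real samples from generated ones.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to produce samples the discriminator labels "real."
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

However the networks are built, the basic loop is the same: a generator improves by learning to fool a discriminator that has seen real examples, which is why the resulting fakes can look so persuasive.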
Researchers and industry are constantly improving these methods. Scholars at the University of Washington fabricated a video—albeit using real audio—purporting to show an address by President Barack Obama after a 2016 mass shooting. Shortly after, another team of researchers claimed to have developed an even better method that allowed for more robust simulation of head movement and facial expression. And in a startling video for BuzzFeed, Jordan Peele convincingly impersonated President Obama, making him appear to say that “Ben Carson is in the ‘sunken place.’”
Although at present many non-experts would likely find this technology too complex to use, Chesney and Citron anticipate that, much as Photoshop democratized photo manipulation, consumer-friendly software is on the horizon. And once individuals begin working with such applications, deep-fake content will likely spread through social media in familiar ways. Without a centralized content moderator, the Internet allows broad dissemination of unverified media; users’ psychological tendencies make them more likely to share and remember negative stories; and echo chambers could keep them from seeing real news that debunks the fakes.
Chesney and Citron imagine a wide range of potential misuses of deep-fake technology. Perpetrators might sabotage individuals’ careers and relationships, blackmail them with fabricated footage, or commandeer their likenesses to portray false endorsements of products or political positions.
Malicious individuals or organizations could also use deep fakes to disrupt society at large by attempting to manipulate public opinion, impede constructive discourse, improperly sway elections, erode trust in the neutrality and efficacy of government institutions, and turn segments of the population against one another.
And even if the public were to become aware of the deep-fake problem, the threat would likely persist. Wrongdoers who actually committed an alleged act could dismiss genuine, inculpatory footage as a deep fake. As Chesney and Citron warn, “beware the cry of deep-fake news.”
Unfortunately, experts are not making sufficient progress on technology that detects deep fakes, Chesney and Citron claim. Although some researchers and businesses are developing anti-counterfeit programs, such efforts “are tailored to particular products rather than video or audio technologies generally.” Lacking reliability and scalability, these programs do not resolve the deep-fake problem on their own.
Instead, Chesney and Citron suggest that social media platforms may be the most effective target for legal action. Rather than pursuing deep-fake creators, victims could sue the platforms that host the harmful content. This potential liability would encourage platforms to ensure that their users follow policies concerning harassment, impersonation, and the like.
But this approach would require changes to a provision governing online communications, Section 230 of the Communications Decency Act (CDA), which provides immunity to platforms for the content their users create and post. The CDA further provides that even if a platform does attempt to moderate user content, it cannot be held liable for shortcomings in its efforts, so long as they are made in good faith.
Chesney and Citron propose instead that immunity for platforms be conditioned on a website taking “‘reasonable’ steps to ensure that its platform is not being used for illegal ends.” Congress has recently proved its willingness to amend the CDA to fight sex trafficking, although not without controversy.
Alternatives to amending the CDA are bleak, as Chesney and Citron argue that regulatory agencies are ill-suited to meet the deep-fake challenge. Although the Federal Trade Commission (FTC) regulates deceptive advertising, “most deep fakes will not take the form of advertising,” Chesney and Citron say. And although the FTC also has jurisdiction over “unfair” business practices, the agency may overstep its boundaries if it regulates on this basis. According to Chesney and Citron, most deep fakes will try to “sell” people a belief, not a product.
The Federal Election Commission (FEC) might regulate deep fakes to the extent that they interfere with elections, but Chesney and Citron suggest that such an approach could run into significant First Amendment issues. Because of these free speech concerns, the FEC tends to focus on transparency in the financing of campaigns, not the truthfulness of statements about such campaigns.
The First Amendment poses the same problem for an outright ban on deep fakes through congressional legislation. That said, a patchwork of existing laws offers some hope, Chesney and Citron say. They point to various tort standards, such as defamation or the prohibition on portraying someone in a “false light.” But these laws remedy only individual harms, not societal ones. Furthermore, private plaintiffs will likely have great difficulty identifying the individual who made a deep fake, and victims may not want to expose the embarrassing fake video to even more publicity by going to trial.
Criminal liability, enforced by public actors, could overcome some of these limitations, but Chesney and Citron point out that law enforcement often lacks sufficient training to police online abuse effectively. Furthermore, prosecuting speech-related crimes such as incitement to violence raises its own First Amendment hurdles.
If Chesney and Citron are right that the CDA should be modified to expose platforms to legal liability, then the terms of service and internal review procedures that social media platforms already use to regulate content on their sites may be the primary, if not the only, viable option for policing deep fakes.