Recent lawsuits challenge artificial intelligence tools’ use of original works as copyright infringement.
In a class action lawsuit filed last September, professional writers claim that ChatGPT is destroying their livelihoods and threatening human creativity. As use of artificial intelligence (AI) tools such as ChatGPT rapidly grows, these plaintiffs hold out hope that copyright law can protect their work.
The lawsuit, Authors Guild v. OpenAI, is the latest in a series of class actions alleging that OpenAI, Google, Facebook, and other companies have stolen “vast troves” of copyrighted works to create AI programs that steal jobs from human creators, and far worse, threaten the future of art and literature. The tech giants contend, in response, that their AI technology uses copyrighted material fairly and is highly creative in its own right.
Cases such as Authors Guild mark a high-stakes battle over copyright law’s unsettled role in regulating AI’s use of original works.
Copyright law flows from the Intellectual Property Clause of the U.S. Constitution, which empowers Congress to grant “authors and inventors” exclusive rights to their work to “promote the progress of science and useful arts.” The main aim of copyright law, as the U.S. Patent and Trademark Office explains, is to encourage the creation and distribution of works for public benefit.
Federal copyright law protects creative works from exploitation but also allows for “fair use” of those works—that is, copying them for innovative and socially valuable purposes.
Generative AI programs, such as ChatGPT, copy the information contained in protected works and then “train” themselves by processing millions of these works. In fact, AI companies acknowledge that training a generative AI tool “necessarily involves first making copies” of “human-generated data,” which includes copyrighted works.
The question of whether these tools are mere plagiarists or instead make fair use of protected material to create something new is at the root of these class action suits.
Plaintiffs in the recent class actions argue that generative AI tools and their outputs are merely “derivative works” built on the creative labor of others. They claim that, but for these original, copyrighted works, AI would have nothing to say.
The plaintiffs in the Authors Guild case allege that ChatGPT can summarize the plaintiffs’ books and churn out new stories with the same style and characters. These abilities suggest that the chatbot ingested “the entirety of these books.” This large-scale theft, as the plaintiffs characterize it, is what enables ChatGPT to produce “human-seeming text.” In turn, by “flooding the market with mediocre, machine-written” texts based on writers’ work, ChatGPT and similar programs threaten writers’ jobs, assault human creativity, and infringe on writers’ copyrights.
But generative AI makers tell a different story—and courts so far seem to agree.
Generative AI makers emphasize the creativity of generative AI programs and how they repurpose copyrighted materials to create new material.
In an influential 1990 article, Judge Pierre N. Leval argued that uses of copyrighted works that add “new information, new aesthetics, new insights and understandings” to the original are at the heart of fair use protection. Citing Judge Leval’s article, the U.S. Supreme Court held in Campbell v. Acuff-Rose Music that creations that depend on copyrighted works for raw inputs or inspiration but substantially transform their purpose and meaning are fair uses that do not infringe upon the original work. The key factor is how “transformative” the secondary use is.
Generative AI makers contend that their tools satisfy this fair use test and that courts should apply the test to AI creations. Indeed, holding that the tools infringe copyrighted works would severely limit the possible uses of AI, “stifling the very creativity copyright is supposed to protect,” the tech companies assert.
Generative AI tools learn the patterns that emerge from massive samples of human works and, from there, produce wholly new media, according to the companies. They say that because generative AI outputs result from a complex analysis of millions of datapoints, the outputs do not replicate any given creator’s expressive choices and therefore do not infringe on copyrights.
Stanford Law Professor Mark Lemley has likewise argued for fair use protection for generative AI. Lemley concedes that an infringement claim is viable when an AI reproduces large portions of a protected work or generates works in the style of a certain artist. He argues, however, that machines deserve broad license to use copyrighted materials to learn how to create, on their own, transformative works. Lemley explains that, in doing so, an AI program “changes the purpose for which the work is used.” Barring the technology from using copyrighted materials would ultimately harm the development and progression of ideas.
But a recent Supreme Court case may weaken generative AI companies’ best argument for fair use and could shift future decisions in favor of plaintiffs.
In Andy Warhol Foundation v. Goldsmith, decided last term, the Supreme Court narrowed its transformative use doctrine. Ruling in a case involving photographic artwork, the Court held that not just “any use that adds some new expression, meaning, or message” will pass the test for what constitutes a permissible transformation.
At least until the lower courts begin to rule on the current cases before them, the question remains open whether generative AI constitutes the permissible transformation that qualifies for the fair use exception to copyright. Ultimately, the courts may determine whether generative AI will amount to a boon or bane for the progress of science and the useful arts.