
Scholars discuss how copyright law can manage AI art.
Art generated by artificial intelligence (AI) first became mainstream when Refik Anadol’s Unsupervised was featured in the Museum of Modern Art in 2022. The AI art industry was valued at approximately $257 million that year, and is projected to reach over $900 million by 2030.
Legal challenges to AI-generated art, however, threaten the entire industry.
Copyright law guarantees artists the exclusive rights to their works for the life of the author plus 70 years. That exclusivity encourages artists to be more productive by granting them a powerful economic incentive to create and sell new works of art.
Although generative AIs appear to create new pieces of art from nothing, AI algorithms are typically trained on millions of annotated images. Unsupervised, for example, was a constantly changing image that used every piece of art featured in the Museum of Modern Art over the last 200 years as fodder.
Because generative AI uses other images to train itself and create new pieces of art, it presents several potential and pressing copyright issues.
Legal experts’ and artists’ primary concern is whether generative AI software can be trained with copyrighted material. Many artists and authors have sued AI companies claiming that the companies’ models were trained with their copyrighted material in breach of their exclusive use rights. Although a few courts have held that machine learning software may use copyrighted material under the “fair use exception,” no court has applied that exception to a generative AI art program.
Commentators also question whether art created by generative AI can itself be copyrighted. Courts and the U.S. Copyright Office have interpreted the Copyright Act of 1976 to require that a work be created by a human to receive copyright protection.
Finally, even assuming that AI-generated art can be copyrighted, experts are uncertain who would own the resulting copyright. The Copyright Office stands by the human authorship requirement and has stated that it will register copyrights only for works in which the traditional elements of authorship, such as “literary, artistic, or musical expression or elements of selection [or] arrangement,” are conceived and executed by a human. The Office has explained, for example, that an AI art generator producing a complex image from a single human prompt does not satisfy this requirement.
But the Office acknowledges that many issues will arise as the technology develops, and it has launched an agency-wide investigation into generative AI to determine how the technology can fit into the current U.S. copyright regime.
This week’s Saturday Seminar examines scholars’ suggestions on how AI art can be regulated through copyright law or other avenues.
- Generative AI boasts the potential to enhance creativity but threatens the knowledge ecosystem it depends on, Frank Pasquale of Cornell Law School and Haochen Sun of the University of Hong Kong Faculty of Law contend in a Virginia Law Review essay. AI companies often exploit authors, artists, and journalists by training models on these creators’ content without their consent, Pasquale and Sun argue. They propose two solutions: first, an opt-out mechanism allowing creatives to forbid nonconsensual use of their work, and second, a levy on AI providers to compensate creators whose content they use without licensing. This approach balances the interests of creatives and AI firms, ensuring the sustainability of human creativity and technological development, Pasquale and Sun conclude.
- To address the dilemma of identifying the “authors” of AI art, practitioner Mackenzie Caldwell of Latham & Watkins argues in a note in the Houston Law Review that lawmakers should expand copyright law to allow AI users to obtain rights to AI-produced art. Caldwell compares a user inputting creative ideas into AI software to a photographer’s use of a camera to capture a creative vision. Classifying AI users as authors, rather than the creators of the AI software, is desirable because developers have existing protections for their own work, Caldwell contends. Caldwell suggests that this approach to copyright protections for AI users may also make AI art more ethical.
- In a recent article in the Iowa Law Review, Haochen Sun of the University of Hong Kong Faculty of Law contends that traditional copyright laws inadequately address the contributions of AI developers, who often invest substantial time and resources in training and optimizing models that generate creative works. Sun argues for the creation of a “sui generis rights” model that would grant AI developers exclusive rights over the outputs of their AI systems, thus incentivizing innovation while respecting the rights of human creators. The model would protect AI-generated works without undermining existing copyright laws, thereby promoting a collaborative ecosystem where both AI developers and human creators can thrive, Sun concludes.
- AI-generated artwork is best considered “pseudo art” that should remain in the public domain, argues Ioan-Radu Motoarcă in an article in the Yale Journal of Law and Technology. An artist’s intention to situate a work within the context of art history is what makes the work art, Motoarcă explains. Motoarcă contends that AI pseudo art lacks this historical context. AI pseudo art, Motoarcă claims, is also not intended to be interpreted or to provoke thought in the way that traditional art is. Motoarcă concludes that a product’s eligibility for copyright protection depends on society’s willingness to consider it within the popular understanding of art practices, and that integrating AI pseudo art into this understanding remains problematic.
- In an article in the Yale Law Journal, Micaela Mantegna of the Berkman Klein Center for Internet & Society argues that a holistic approach to AI art regulation is necessary to prevent an “ouroboros copyright.” Mantegna explains that if a generative AI has even a single piece of copyrighted input material, then all of its output potentially could be exposed to copyright litigation. This issue is further complicated if AI output can itself be copyrighted and fed back into other generative AIs, Mantegna notes. She argues that this complicated web of copyrights highlights why copyright law is insufficient to regulate AI art. Mantegna suggests instead a holistic approach to regulation that employs principles from human rights law, labor law, and data protection law.
- Copyright law is ill-equipped to regulate AI art, argues Eleni Polymenopoulou of Qatar’s Hamad Bin Khalifa University College of Law in an article in the Washington Journal of Law, Technology & Arts. Polymenopoulou contends that the “human authorship” requirement in copyright law will stifle the innovation it seeks to promote. The missing piece, she claims, is human rights law. Polymenopoulou suggests that states should focus AI art laws on the human rights of authors and creators by imposing due diligence obligations. This approach would give states more tools to balance the rights of artists with the rights of the public, Polymenopoulou explains.
The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.