The legal ambiguity of art created by artificial intelligence adds confusion to controversy.
A picture may be worth a thousand words. But what about a picture generated entirely by a machine?
That is the question scholars, advocates, and internet users have been considering lately, as art generated by artificial intelligence (AI) has exploded in popularity. Some commentators have asked who regulates this digitally created art and whether courts can prevent the theft of creative ideas and techniques during its generation.
But the reality is that little regulation protects the copyrighted works used to train these AI-based technologies, and privacy protections for images used in the creation of AI-based art are scant. Advocates have called for regulatory solutions rooted in copyright and privacy law.
Toward the end of last year, widespread use of the Lensa AI app, which generates stylized portraits from users’ uploaded selfies, spurred the latest round of controversy over the ethics of AI-generated art. Debate had been raging since earlier in the year, when other AI models, such as DALL-E 2 and Stable Diffusion, rapidly gained traction.
Some commentators have noted that these programs have made art more accessible. Stable Diffusion generates images for free based on text prompts entered by users, and Lensa sells its portraits for as little as $3.99. Queer users of Lensa have shared that the avatars created by the app, which allows users to specify their gender, have made them feel joyful and aligned with their true gender identity.
But many others have voiced concerns that stem from the mechanisms that such algorithms use to generate new images. Their creators collect captioned images and use them to train the underlying models on the relationships between textual and visual representations. For example, Stable Diffusion trained its model on data sets assembled by the German nonprofit LAION, which has amassed billions of captioned images from art shopping sites and websites such as Pinterest.
And it has done so without consent, causing artists and advocates to raise copyright concerns.
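To make the mechanism concrete, the sketch below shows one common way such text–image relationships are learned: a CLIP-style contrastive objective that pulls each scraped image toward its own caption in a shared embedding space. It is a toy illustration under assumed module names and dimensions, not the actual training code of Stable Diffusion, LAION, or any other system discussed here.

```python
# Conceptual sketch only: a CLIP-style contrastive training step over a
# batch of (image, caption) pairs. Every class, dimension, and value here
# is an illustrative assumption, not any vendor's real implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyImageEncoder(nn.Module):
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        # Flatten a small 3x32x32 image and project it into the shared space.
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, embed_dim))

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(images), dim=-1)

class ToyTextEncoder(nn.Module):
    def __init__(self, vocab_size: int = 1000, embed_dim: int = 64):
        super().__init__()
        # Average token embeddings as a stand-in for a real language model.
        self.tokens = nn.Embedding(vocab_size, embed_dim)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.tokens(token_ids).mean(dim=1), dim=-1)

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Similarity of every image to every caption in the batch; the
    # diagonal holds the true scraped image-caption pairings.
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(len(img_emb))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy "scraped" batch: 8 random images paired with 8 random 5-token captions.
images = torch.randn(8, 3, 32, 32)
captions = torch.randint(0, 1000, (8, 5))

img_enc, txt_enc = ToyImageEncoder(), ToyTextEncoder()
optimizer = torch.optim.Adam(
    list(img_enc.parameters()) + list(txt_enc.parameters()), lr=1e-3)

loss = contrastive_loss(img_enc(images), txt_enc(captions))
loss.backward()
optimizer.step()
print(f"contrastive loss on one toy batch: {loss.item():.3f}")
```

The point relevant to the legal debate is visible in the loss itself: the model learns only from the pairing of scraped images with their captions, so whatever capabilities it acquires are built directly on the underlying works.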
One artist, Greg Rutkowski, has reportedly complained that AI-generated images mimicking his art are drowning out his own work. Users had apparently prompted Stable Diffusion with text including Rutkowski’s name nearly a hundred thousand times as of September 2022.
But LAION disclaims copyright liability for its use of the images, and whether it is correct is unclear. The Copyright Act of 1976 grants copyright owners of artistic works exclusive rights to reproduce and adapt their works. But to be liable for violating the reproduction right, a person must create copies that are “fixed” in a tangible medium.
And courts have found intermediate copies that existed for only 1.2 seconds to be insufficiently fixed, raising questions as to whether the intermediate copies used to train AI models can give rise to liability.
Copyright owners may not have better luck alleging violations of adaptation rights because of the doctrine of fair use. Fair use allows for the creation of a new work based on a copyrighted work—without the copyright owner’s permission—if the work is sufficiently transformative, meaning it somehow changes the work’s meaning or message or carries a different purpose.
Traditionally, fair use has presented a relatively low bar for those claiming it. Soon, however, the U.S. Supreme Court will decide a fair use case, Andy Warhol Foundation for the Visual Arts v. Goldsmith, and it could adopt a stricter standard for what counts as transformative.
In addition to copyright concerns, commentators have also raised privacy concerns over the use of personal and private images to train AI models.
DALL-E 2 recently began allowing users to upload images of real people’s faces, and Stable Diffusion has operated without any such limitations or moderation since its inception. Users have expressed concern about how their personal data is being used. One person, for example, discovered that LAION had collected her personal medical images when she looked herself up on Have I Been Trained, a website built by German artists to help people identify artwork and personal images used to train AI models.
Lensa has also garnered scrutiny over its privacy policy, which permitted the company to use user-uploaded images to train its algorithms. Prisma, the company that owns Lensa, claimed that it permanently deletes user images after creating avatars, and in December 2022 it updated its privacy policy to state that it does not use personal data to train Prisma’s other AI tools.
Individuals with personal privacy concerns have few options. European Union residents can file complaints under the General Data Protection Regulation (GDPR) to request the removal of their images from LAION’s data sets. But this remedy is only forward-looking: it prevents future use of the images and does not undo past usage or training. U.S. users lack even that limited legal recourse. The LAION website permits takedown requests only by email, subject to the same forward-looking limitation, and no federal privacy protections akin to the GDPR exist in the United States.
Advocates have proposed a variety of regulatory solutions. To address copyright concerns, some experts have argued that the U.S. Congress should pass legislation aimed specifically at AI. Others have likened the problem to illegal file-sharing in the early 2000s and have suggested that legislators pursue a broad licensing scheme for underlying works.
To mitigate privacy risks, companies running AI generators could self-regulate to prevent the use of personal data. But some commentators have argued that self-regulation alone is insufficient.
Other experts have pushed for federal statutory privacy protections that would allow people to protest the use of their images by these platforms. Still others have suggested that the Federal Trade Commission employ algorithmic destruction, an enforcement tool that it has used to address illegal or bad faith collection of personal data.
Regulation often lags behind technology. Even now, just a few months into AI-generated art’s explosion in popularity, the regulatory path ahead remains hazy.