The NO FAKES ACT may undermine performers’ control over their intellectual property rights.
The NO FAKES ACT, recently released as a discussion draft by a bipartisan group of senators, proposes establishing a new federal digital replica right that would extend for 70 years after a person’s death. The one-pager accompanying the draft highlights that the proposed legislation is driven by concerns that “unauthorized recreations from generative artificial intelligence” will substitute for performances by the artists themselves. Unfortunately, this laudable goal is undercut by some of the provisions contained within the current working draft.
I appreciate that what was released is a discussion draft and that the sponsoring senators are seeking feedback on it. With that in mind, it is important to recognize that, as currently drafted, this legislation could make things worse for living performers by making it easier for them to lose control of their performance rights, encouraging the use of deceased performers, and creating conflicts with existing state rights that already apply to both living and deceased individuals.
State right of publicity laws already protect against unauthorized digital performances, and state and federal trademark and unfair competition laws restrict many of the possible uses and promotion of computer-generated performances. Copyright law may also limit some of these new audiovisual works. So, the bar to adding a new layer on top of the existing legal structure should be high and should certainly not make anyone worse off than under the status quo.
The draft legislation proposes creating a new federal right in the work of dead performers—and more broadly, in all dead people’s digital replicas. This will further encourage a market in digital replicas of the dead that can displace work for the living by substituting newly created performances by famous deceased celebrities. Artificial intelligence (AI) poses a monumental threat to employment opportunities for everyone, including performers. The inclusion of a postmortem provision exacerbates rather than protects against such a threat.
Furthermore, the proposed legislation seeks to protect individual, living performers from the harms that flow from substitutionary performances made possible by ever-improving AI technology. The current draft, however, leaves performers potentially worse off by empowering record labels, movie studios, managers, agents, and others to control a person’s performance rights, not just in a particular recording or movie, but in any future “computer-generated” contexts.
As written, the proposed legislation would set up a world in which AI-generated performances can be created without any specific involvement or approval of the individuals—beyond some broad unlimited license. Nor does it require any disclosure that the performances were AI-generated or that those depicted did not agree to the specific performances. AI-generated performances will be able to portray individuals saying, doing, and singing things they never said, did, or sang. This will exacerbate the dangers of misinformation, false endorsements, and deceptive “recordings,” rather than combat them. As U.S. Senator Amy Klobuchar (D-MN) and others have warned, such deceptive performances pose one of the greatest current threats to democracy and truth.
Another key challenge with the current draft is that it has conflicting objectives. Its stated objective is to “protect” the rights of performers. Its unstated but implicit objective is to protect those who hold copyrights in sound recordings—the recording industry. These goals are in tension, at least as currently addressed in the draft. If the primary concerns are those of the recording industry, the bill could tackle them more directly and more narrowly. Alternatively, if the primary goal is to protect the interests of performers, the legislation should engage more with state publicity laws.
The one-pager released along with the draft legislation highlights concerns over the computer-generation of performances by real people without their permission. The document points to two recent examples. The first is the viral AI-generated song, “Heart on My Sleeve,” which imitated the voices of Drake and The Weeknd. The song became a hit—until it was removed from various platforms—and the public initially thought it was a real song by the performers. The second example is the recent AI-generated version of Tom Hanks used without authorization in an advertisement for a dental plan.
Combating such creations—especially fabricated performances of politicians making public statements—is a compelling goal. But an important point missing from the one-pager is that Tom Hanks, Drake, and The Weeknd are not left adrift under current law. Absent jurisdictional hurdles, each would have straightforward lawsuits under state right of publicity laws for these types of uses of their identities. Federal trademark, unfair competition, and false advertising laws under the Lanham Act and similar state laws would also provide claims.
Having a new federal law might send an additional signal and make filing in federal court in a preferred jurisdiction easier, but it is not filling a gap in the law. A broader federal right of publicity law could harmonize and clarify state right of publicity laws and exceptions to them—but this legislation does not do this.
One benefit for performers of the proposed legislation is that it explicitly designates the new performance right as “intellectual property” for purposes of Section 230 of the Communications Decency Act, which would facilitate the removal of unauthorized performances and the ability to obtain damages for them from online platforms. Federal appellate courts are currently split about whether right of publicity claims fall within the immunity provisions of Section 230 or instead fall under the IP exception to it. This matters because if the exception does not apply, it is difficult to get platforms to take down infringing content. The Section 230 problem, however, could be addressed more directly by amending Section 230 itself, without the need to create a new performance right.
For these reasons, the draft bill instead makes more sense when understood as primarily addressing the concerns of the record labels about AI-generated songs that might substitute for their legitimate releases.
This focus is evident in the explicit extension of the digital replica rights the bill would give to any person or entity that has an “exclusive personal services” contract with “a sound recording artist as a sound recording artist.” This means that record companies—not just the performers themselves—get, and can enforce, rights to performers’ digital replicas. This opens the door for record labels to cheaply create AI-generated performances, including by dead celebrities, and exploit this lucrative option over more costly performances by living humans. There are many ways to address the recording industry’s concerns, but giving record labels a federal right to digital replicas of individual people may not be the best way to do so.
The recording industry has some legitimate concerns, especially when newly generated performances are passed off as authentic ones by artists with whom they have exclusive contracts or that unfairly disrupt the release of their sound recordings. The recording industry, however, should not be able to block nondeceptive computer-generated musical tracks that simply emulate the style but do not replicate the voice of known performers or otherwise cause confusion as to a performer’s involvement with the work. Because the recording industry is not certain of how copyright litigation in this area will play out, it is looking for a straightforward fix. But this cure may be worse than the disease.
Looking ahead, rather than the senators going forward with their proposed legislation as currently drafted, I would offer them and other legislators five recommendations.
- Consider Whether This Is the Right Fix for the Problem. If the primary focus is not the inadequacy of current law for performers, but instead the challenges posed by Section 230 and concerns about the effectiveness of copyright law to protect the recording industry, the proposed digital replica right may not be the best way to tackle these concerns. Instead, amending Section 230 to clarify that state right of publicity and appropriation-based claims may proceed against interactive computer services would help performers, the recording industry, and the broader public. A more targeted sound recording right also could be drafted that expressly focuses on the recording industry’s concerns without jeopardizing performers and their control over future performances.
- Limit to Rights for the Living. A bill seeking to protect the livelihood of performers is not the proper place to create a novel federal right in the work of dead performers that, for 70 years after their deaths, can substitute for living performers. This aspect of the proposed bill seems at odds with its stated purposes and instead will shore up reanimated replacements for up-and-coming performers. To the extent that the postmortem provision is driven by the recording industry wanting to protect recordings by deceased artists, that can be done in a more limited fashion.
There may be reasons to provide federal postmortem rights, particularly to harmonize state laws in this area, but doing so requires diving deeper into why we are doing so, tackling the variety of state laws in the area, and giving consideration to who should be able to own and profit from these dead performers’ rights. The postmortem provision has little to do with addressing the problems with AI and will primarily enrich companies that own and manage the rights of dead people while doing little to address the concerns of the living.
- Better Protect Performers by Restricting the Scope and Duration of Licenses and Adding Disclosures. If legislation similar to this draft does proceed, the licensing of performance rights should be far more limited so as not to be a subterfuge for long-term, perpetual, or global licenses that are akin to transferring all future rights in a person’s performances to others.
Licenses should not exceed seven years, an outer limit we see in the regulation of personal services contracts. Any licenses involving children should expire when they turn 18 and should be reviewable by a court. Any income earned under these contracts should be held in a trust for the child performer.
Any license should only authorize a specific performance or set of performances over which a person has control. This will have the important bonus of protecting against deceptive performances in which a performer played no role. In the absence of such individualized approval, the law should mandate clear and prominent disclosures that the performances in question were not by the individual nor specifically reviewed or approved by the depicted persons.
- Clarify Protections for Free Expression. The First Amendment provides latitude for works in the style or genre of others. Accordingly, unlike what the draft bill provides, there should not be strict liability for works merely in the style of a Taylor Swift or Drake song. Liability should require more. For example, legislation could require liability to turn on the use of a person’s identity to advertise or promote the works or a showing that there is confusion as to the use of the performer’s voice or participation in the new work. Disclaimers that a person’s voice or performance was not used should not insulate a defendant from liability.
Some of the current exclusions would benefit from greater clarity, especially around the use of real people in fictional works. The current exclusions do not provide sufficient protection to works outside of the traditional film and television model of audiovisual works—including musical works, video games, and interactive computer programs, such as those that are educational in nature and part of school curricula.
- Address Conflicts with Potential Plaintiffs, Existing Rights-Holders, and Licensees. Finally, the current draft does not address conflicts with existing rights and contracts. For example, what happens if a performer approves of a computer-generated recording, but a licensee or holder of a personal services contract for sound recordings with that person does not approve of the use? The draft bill suggests that a record label could sue performers themselves for violating the legislation in such a context.
The draft also leaves state laws in place but does not clarify what happens to existing licensing agreements that already cover the same rights at issue and that may conflict with this newly created right. Similarly, the draft does not address what happens when someone uses computer-based technology to create authorized derivative works based on copyrighted works in which performers initially agreed to appear—something allowed under copyright law, but barred by the terms of the draft legislation. If the legislation proceeds, future drafts should provide guidance on how these conflicts should be addressed.
This essay draws on the author’s blog post, “Draft Digital Replica Bill Risks Living Performers’ Rights over AI-Generated Replacements.” A more detailed summary and analysis of the key provisions of the draft legislation by the author can also be found online at the author’s website.
Copyright © 2024 Jennifer E. Rothman