Snapshot: The Copyright Office’s First Generative AI Report

The U.S. Copyright Office has released the first part of a series of reports that examine copyright law and policy issues raised by artificial intelligence (“AI”), including the scope of copyright in AI-generated works and the use of copyrighted materials in AI training. In Part 1, the Copyright Office takes on the topic of digital replicas – or “the use of digital technology to realistically replicate an individual’s voice or appearance.”

Setting the stage in its initial report, the U.S. Copyright Office (the “Copyright Office” or “USCO”), reflecting on the rise and widespread adoption of generative AI, states that artists, for example, “have harnessed the power of AI to find new ways to express themselves and new ways of connecting with audiences.” At the same time, it states that “AI-generated deepfakes have proliferated online – from celebrities’ images endorsing products to politicians’ likenesses seeking to affect voter behavior.”

Delving into digital replicas, the Copyright Office asserts at the outset that they do “not fall neatly under any one area of existing law.” While some experts characterize digital replicas as a form of intellectual property, the Office says that protection against unauthorized digital replicas raises “overlapping issues” in the realms of privacy, unfair competition, consumer protection, and fraud. The issue relates to copyright, in particular, in a number of ways: (1) creators such as artists and performers are particularly affected; (2) copyrighted works are often used to produce digital replicas; and (3) the replicas are often disseminated as part of larger copyrighted works.

At the same time, the USCO asserts that “the copying of an individual’s identity is not an entirely new topic for the Copyright Office, [which] published a report on the moral rights of attribution and integrity in the U.S. in 2019, in which [it] recommended that Congress consider adopting a federal right of publicity.” As distinct from that, the Copyright Office says that “the current study has a narrower focus – assessing the need for federal protection specifically with respect to unauthorized digital replicas.”

> Some Background: Much of the basis of the USCO’s AI report comes from comments that it solicited by way of a Notice of Inquiry (“NOI”) on AI and Copyright that it published in August 2023. In furtherance of the NOI, the USCO sought input on questions related to digital replicas, among other AI-specific issues. In particular, the NOI asked “what existing laws apply to AI-generated material that features the voice or likeness of a particular person; whether Congress should enact a new federal law that would protect against unauthorized digital replicas; and, if so, what its contours should be.”

> Per the report, the USCO “also inquired whether there are or should be protections against AI systems generating outputs that imitate artistic style,” as well as “how, for sound recordings, section 114(b) of the Copyright Act relates to state laws protecting against the imitation of an individual’s voice.”

New Federal Legislation Is Needed

The Copyright Office says that it received approximately one thousand comments responding to this group of questions, 90 percent of which came from individuals, with the majority of commenters advocating for the enactment of new federal legislation. Based on its analysis of “the comments received, independent research, and a review of work being done at other agencies,” the Office concludes that there is “an urgent need for a robust nationwide remedy beyond those that already exist.” It cited the “speed, precision, and scale of AI-created digital replicas,” which it says “calls for prompt federal action.” Without a “robust nationwide remedy, their unauthorized publication and distribution threaten substantial harm not only in the entertainment and political arenas, but also for private individuals.”

Individuals are not entirely without remedy, the USCO notes, as a “variety of legal frameworks provide protection against the unauthorized use of aspects of an individual’s persona,” including state-level rights of privacy and publicity, as well as federal laws, such as the Copyright Act, the Federal Trade Commission Act, the Lanham Act, and the Communications Act. However, despite these existing protections, the USCO maintains that new legislation is necessary, as “state laws are both inconsistent and insufficient in various respects,” with some states currently not providing rights of publicity and privacy, while others protect only certain categories of individuals.

Meanwhile, existing federal laws are “too narrowly drawn to fully address the harm from today’s sophisticated digital replicas,” according to the USCO’s report.

What a New Law Would Look Like

As for what new legislation might look like, the USCO suggests that a law specifically aimed at addressing digital replicas should include the following …

> Subject Matter: The statute should target those digital replicas, whether generated by AI or otherwise, that are so realistic that they are difficult to distinguish from authentic depictions. Protection should be narrower than, and distinct from, the broader “name, image, and likeness” protections offered by many states.

> Persons Protected: The statute should cover all individuals, not just celebrities, public figures, or those whose identities have commercial value. Everyone is vulnerable to the harms that unauthorized digital replicas can cause, regardless of their level of fame or prior commercial exposure.

> Term of Protection: Protection should endure at least for the individual’s lifetime. Any postmortem protection should be limited in duration, potentially with the option to extend the term if the individual’s persona continues to be exploited.

> Infringing Acts: Liability should arise from the distribution or making available of an unauthorized digital replica, but not the act of creation alone. It should not be limited to commercial uses, as the harms caused are often personal in nature. It should require actual knowledge both that the representation was a digital replica of a particular individual and that it was unauthorized.

> Secondary Liability: Traditional tort principles of secondary liability should apply. The statute should include a safe harbor mechanism that incentivizes online service providers to remove unauthorized digital replicas after receiving effective notice or otherwise obtaining knowledge that they are unauthorized.

> Licensing & Assignment: Individuals should be able to license and monetize their digital replica rights, subject to guardrails, but not to assign them outright. Licenses of the rights of minors should require additional safeguards.

> First Amendment Concerns: Free speech concerns should expressly be addressed in the statute. The use of a balancing framework, rather than categorical exemptions, would avoid overbreadth and allow greater flexibility.

> Remedies: Effective remedies should be provided, including both injunctive relief and monetary damages. The inclusion of statutory damages and/or prevailing party attorney’s fees provisions would ensure that protection is available to individuals regardless of their financial resources. In some circumstances, criminal liability would be appropriate.

> Relationship to State Laws: Given well-established state rights of publicity and privacy, the Office does not recommend full federal preemption. Federal law should provide a floor of consistent protection nationwide, with states continuing to be able to provide additional protections. It should be clarified that section 114(b) of the Copyright Act does not preempt or conflict with laws restricting unauthorized voice digital replicas.

Protection of Artistic Style

In its report, the USCO stated that it received “many comments” seeking protection against AI “outputs that imitate the artistic style of a human creator.” As for such requests, the USCO said that “while [it] acknowledges the seriousness of this concern,” it believes that “existing laws may provide sufficient protection at this time.”

Among those existing laws, the USCO acknowledged that copyright law’s application in this area is “limited, as it does not protect artistic style as a separate element of a work.” It said that the Copyright Act “may, however, provide a remedy where the output of an ‘in the style of’ request ends up replicating not just the artist’s style but protectible elements of a particular work.” Additionally, “as future Parts of this Report will discuss, there may be situations where the use of an artist’s own works to train AI systems to produce material imitating their style can support an infringement claim.”

Moreover, the Copyright Office asserts in the report that “although state right of publicity statutes do not explicitly refer to style, where a particular style is closely identified with an individual performer, it may be protected.”

USPTO’s Comment & Next Steps

On the heels of the USCO releasing Part 1 of its report, Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the U.S. Patent and Trademark Office (“USPTO”), released a statement saying, “There is almost nothing more personal, and from artists to athletes almost nothing more valuable, than an individual’s name, voice, and likeness.” Vidal further stated that the USPTO “will consider the report’s findings as we prepare recommendations for potential executive action on these issues to ensure the safe, secure, and trustworthy development and use of AI technologies.”

As for additional reports, the Copyright Office will turn to other issues raised in the NOI, including the copyrightability of works created using generative AI, training of AI models on copyrighted works, licensing considerations, and allocation of any potential liability.