With the Rise of GenAI, a Federal Right of Publicity is Taking Center Stage

October 16, 2023 - By TFL

The rising adoption of generative artificial intelligence (“AI”) has brought with it no shortage of legal questions and concerns. As deepfakes, face-swapping apps, and AI-powered avatars become more prominent, one recurring issue centers on how individuals can protect their likenesses from being used without their authorization. This is proving to be an issue across industries, with actors highlighting the availability of increasingly sophisticated deepfakes in connection with the Screen Actors Guild strike. You will likely also recall that Drake and The Weeknd’s voices were used in the AI-generated song “Heart on My Sleeve” without their authorization, and more recently, a deepfake of Tom Hanks appeared in ads for a dental plan that he has not endorsed.

All the while, concerns about the ability of generative AI platforms to replicate artists’ works have given rise to litigation, including the case that a trio of artists is waging against Stability AI for allegedly engaging in “blatant and enormous infringement” by using their artworks – without authorization – to enable AI-image generators, including Stable Diffusion, to create works in their signature styles and “in the style of” other artists without their consent. At the same time, deepfakes are at the heart of a separate lawsuit, in which reality TV personality Kyland Young claims that his likeness and the likenesses of other stars have been used in connection with a face-swap app without authorization.

With generative AI at play, existing ambitions to adopt a statute that establishes a federal right of publicity – as a way to protect against unauthorized uses of individuals’ likenesses (i.e., their names, voices, photographs, etc.), as well as artists’ signature “styles” – have gained new relevance. The most recent effort on this front comes by way of a generative AI-centric proposal that was released last week by Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN), and Thom Tillis (R-NC).

Called the Nurture Originals, Foster Art, and Keep Entertainment Safe (“NO FAKES”) Act of 2023, the bill “would protect the voice and visual likeness of all individuals from unauthorized recreations from generative AI.” The senators revealed that the NO FAKES Act aims to address “the use of non-consensual digital replications in … audiovisual works or sound recordings” by holding individuals, companies, and third-party platforms liable if they produce and/or host an unauthorized digital replica of an individual in a performance. There would be an exception for “certain digital replicas from coverage based on recognized First Amendment protections.” 

Still in the early stages, the NO FAKES Act has already won the approval of SAG-AFTRA, whose president Fran Drescher said in a statement last week, “A performer’s voice and their appearance are all part of their unique essence, and it’s not ok when those are used without their permission. Consent is key, and I’m grateful that Sens. Coons, Blackburn, Klobuchar and Tillis are working to give performers recourse and providing tools to remove harmful material.” 

The Recording Industry Association of America similarly welcomed the introduction of the NO FAKES Act, saying, “Our industry has long embraced technology and innovation, including AI, but many of the recent generative AI models infringe on rights – essentially instruments of theft rather than constructive tools aiding human creativity.” The organization further asserted that “unauthorized uses of one’s name, image, likeness, and voice are a clear threat to artists, songwriters, performers, authors, journalists, photographers, and the entire creative community, [and] we look forward to engaging in a robust bipartisan process with a strong bill that effectively protects against this illegal and immoral misappropriation of fundamental rights that protect human achievement.”

Right of Publicity & AI: The Bigger Picture

The introduction of the proposed NO FAKES Act comes amid a larger, AI-focused push for a federalized right of publicity among lawmakers and individual organizations, alike. While about half of U.S. states have recognized the right of publicity, a lack of uniformity among such state laws – and the resulting unpredictability – has prompted renewed calls for a federal statute in light of the rapid rise of deepfakes. 

In September, for instance, Adobe’s EVP, General Counsel, and “Chief Trust Officer” Dana Rao published a blog post, in which he advocates for “an anti-impersonation right [to] protect artists from economic harm from the misuse of AI tools.” He is essentially pushing for a federal cause of action that provides protections against “misusing AI to intentionally impersonate [artists’] style for commercial gain.” 

Rao has been joined by the likes of the Copyright Office, for instance, which appears to be considering where the right of publicity (and potentially, a federal right) fits into the AI equation. In a question that it posed to commenters in a notice of inquiry this summer, the Copyright Office asked: “What legal rights, if any, currently apply to AI-generated material that features the name or likeness, including vocal likeness, of a particular person?” Around the same time, the issue of a federal right of publicity dominated a bit of the discussion in a copyright and AI-specific hearing before the Senate Judiciary Committee’s Subcommittee on Intellectual Property.

Emory Law professor Matthew Sag, for instance, stated that there are “limits to what Congress can do to address” the issues posed by generative AI, asserting that he believes that “a national right of publicity law is needed to replace the current hodgepodge of state laws, and that we are overdue for a national data privacy law.” If generative AI can “recreate someone’s distinctive appearance or voice, that person should have recourse under right of publicity,” he asserted, arguing that Congress should enact a national right of publicity law “to ensure nationwide and uniform protection of individuals’ inherently personal characteristics.”

And still yet, Sen. Coons, who is co-sponsoring the NO FAKES Act, has been raising the idea of a federal right of publicity statute in connection with AI-specific hearings hosted by the Senate Judiciary Committee’s Subcommittee on IP, which he chairs.