The intersection of right of publicity and generative artificial intelligence (“AI”) came to the fore again this week thanks to a headline-making clash between actress Scarlett Johansson and OpenAI. On the heels of OpenAI introducing GPT-4o, its “newest flagship model that provides GPT-4-level intelligence,” complete with audio capabilities that enable users to speak to the chatbot and receive real-time responses, Johansson accused the AI giant of creating a voice – called “Sky” – that sounds exactly like her voice, even though she said she declined an offer from OpenAI to voice the chatbot herself.
While OpenAI paused the use of Sky on Sunday and published a blog post detailing how it developed five different AI voices, including “Sky,” that did not stop Johansson from publishing a widely read statement of her own on Monday, in which she stated …
“Last September, I received an offer from [OpenAI CEO] Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people.
After much consideration and for personal reasons, I declined the offer. Nine months later, my friends, family and the general public all noted how much the newest system named ‘Sky’ sounded like me.
When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference. Mr. Altman even insinuated that the similarity was intentional, tweeting a single word ‘her’ – a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.
Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there.
As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr. Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the ‘Sky’ voice. Consequently, OpenAI reluctantly agreed to take down the ‘Sky’ voice.
In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”
In response to Johansson’s claims, Altman said in a statement that OpenAI “never intended” for the Sky voice to mirror Johansson’s, and the Washington Post has since revealed that “while many hear an eerie resemblance between ‘Sky’ and Johansson’s ‘Her’ character, a [different] actress was hired in June to create the Sky voice, months before Altman contacted Johansson, according to documents, recordings, casting directors and the actress’s agent.”
As for whether that technicality is meaningful from a legal point of view, it probably is not, since the issue would center on whether consumers are likely to believe the voice is Johansson’s.
A number of cases come to mind here – from Waits v. Frito-Lay and Santana v. Miller Brewing (the latter of which settled) to the suit that Bette Midler successfully waged against Ford and advertising agency Young & Rubicam back in the 1980s. You may recall that Midler brought her right of publicity case after the automaker and ad agency, having failed to sign the music star to sing in the commercial, aired a spot that featured a Midler song performed by a Midler “sound-alike.”
In 1992, the Supreme Court declined to review – and thereby left standing – the lower court’s finding that copying a well-known singer’s voice for commercial purposes violates her right of publicity, enabling Midler to collect a $400,000 judgment from the advertising agency responsible for the commercial in a decision that was characterized at the time as “stand[ing] to represent a major expansion of the right of publicity.”
Johansson has not filed suit against OpenAI, and instead, her counsel is reportedly going back and forth with OpenAI as we speak.
For a take on how such an as-of-now-purely-hypothetical case could play out (although it very well may stay out of court), University of Colorado Law School professor Harry Surden stated in a thread this week …
The bigger picture here is, of course, the looming legal tension between creators and creatives, on the one hand, and generative AI platform developers like OpenAI, on the other – developers that are being plagued with copyright-centric litigation over their alleged practice of using others’ works to train the language models that power their generative AI platforms. As the Washington Post put it recently, “Johansson’s claim – that her likeness was stolen without consent – echoes growing scrutiny of [OpenAI’s] practice of scraping copyrighted content and creative work from the internet to train tools such as AI chatbots.”
After all, it notes that “tech companies need massive amounts of data to make their products sound human but have only recently begun getting permission.”
The scenario (and hypothetical case) also sheds light on the rising relevance of the right of publicity when it comes to generative AI. Stakeholders in the U.S. have been pushing for the adoption of a federal right of publicity cause of action, which we previously dove into here. More recently, the right of publicity came up in the hotly anticipated Bipartisan Senate AI Working Group’s report. In a clear nod to right of publicity concerns, the Chuck Schumer-led working group of U.S. Senators – in its policy roadmap identifying key priorities in the realm of AI – called on the relevant subcommittees to “consider whether there is a need for legislation that protects against the unauthorized use of one’s name, image, likeness, and voice.”