The intersection of right of publicity and generative artificial intelligence (“AI”) came to the fore again this week thanks to a headline-making clash between actress Scarlett Johansson and OpenAI. On the heels of OpenAI introducing GPT-4o, its “newest flagship model that provides GPT-4-level intelligence,” complete with audio capabilities that enable users to speak to the chatbot and receive real-time responses, Johansson accused the AI giant of creating a voice – called “Sky” – that sounds exactly like her own, even though she said she declined an offer from OpenAI to voice the chatbot herself.
OpenAI paused the use of Sky on Sunday and published a blog post detailing how it developed five different AI voices, including “Sky,” but that did not stop Johansson from publishing a now widely read statement of her own on Monday.
In response to Johansson’s claims, OpenAI CEO Sam Altman said in a statement that the company “never intended” for the Sky voice to mirror Johansson’s, and the Washington Post has since revealed that “while many hear an eerie resemblance between ‘Sky’ and Johansson’s ‘Her’ character, a [different] actress was hired in June to create the Sky voice, months before Altman contacted Johansson, according to documents, recordings, casting directors and the actress’s agent.”
As for whether that technicality is meaningful from a legal point of view, it probably is not, since the right of publicity inquiry would center on whether consumers are likely to believe the voice is Johansson’s.