“When there are so many haters, I really don’t care because their data has made me rich beyond my wildest dreams,” Kim Kardashian says in a new clip that, at first glance, appeared to be an excerpt from the 73 questions video she did for Vogue in April. But unlike the Vogue-produced video, in which Kardashian discussed everything from paparazzi and negative press to her hidden talents while strolling around her sprawling Axel Vervoordt-designed house, this one was created without Kardashian’s participation or authorization.
The video is a deepfake. Created and uploaded to YouTube on May 29 by artists Bill Posters and Daniel Howe as part of a visual art installation called “Alternate Realities” at Site Gallery in Sheffield, England, it was removed after Condé Nast filed a copyright claim with YouTube. A portmanteau of "deep learning" and "fake,” deepfakes are a “class of synthetic media generated by artificial intelligence,” per MIT Technology Review. In short: they are video and audio recordings created by merging existing footage of the subject with a computer-generated image of the person’s face, paired with a machine-learning model (or, in the case of the Kim K video, an actor) that copies the sound of the subject’s voice and applies it to a text script.
The result? A video that looks and sounds a lot like the real thing.
While publishing giant Condé Nast, which owns Vogue and thus the original video of Kardashian, succeeded in getting the video removed from YouTube on copyright infringement grounds (Facebook, Inc. has refused to remove the videos, saying, "If third-party fact-checkers mark it as false, we will filter it from Instagram's recommendation surfaces like Explore and hashtag pages”), some legal experts are skeptical that this will actually be an effective approach going forward.
“The problem with using a copyright claim against the Kardashian deepfake was that its creators didn't just reupload Vogue's original [11-minute] video,” Samantha Cole wrote for Motherboard. “They deliberately manipulated the video to make a statement,” one that centers on “global scandals about data privacy and influence-peddling, such as that concerning Cambridge Analytica,” and that “delivers a cautionary message about the digital influence industry, technology, and democracy,” according to Artnet.
That statement-making element is precisely what could shield the video’s creators from copyright liability, since copyright law treats at least some unauthorized uses as fair use (and thus, not infringement) when the work is transformative in nature, such as when, as Cole aptly notes, the use is for the purpose of making a cultural commentary.
"The deepfake video transforms [the original] substantially,” Electronic Frontier Foundation policy analyst Joe Mullin told Motherboard. More than that, a potential fair use argument is bolstered by the fact that the less-than one minute-long deepfake video makes use of only a small portion of Vogue’s 11 minute version, does not serve “as a replacement for the original video and it's hard to imagine that it would hurt the market value of the original.”
Nonetheless, "Copyright owners unfortunately don't always consider possible fair use cases before sending Digital Millennium Copyright Act takedowns that censor speech—even though the 9th Circuit's Lenz decision makes it clear that they must do so,” Mullin says, noting that right of publicity claims – which enables an individual to prevent unauthorized commercial uses of their likeness – can also be a tool to fight deepfakes.
Kardashian is just the tip of the iceberg. The fight against deepfakes, and against what the U.S. House of Representatives Intelligence Committee called “a potentially grim, 'post-truth' future,” becomes increasingly critical when it comes to politically motivated messages and national security.
When deepfakes first emerged, dating back to at least 1997 with the Video Rewrite program, which could modify existing video footage “to create automatically new video of a person mouthing words that she did not speak in the original footage,” Cole asserts that the challenges the technology posed were “very clear,” while the law was not. This imbalance is unsurprising given that technology almost always develops at a rate much faster than the laws needed to regulate it.
As of now, the law is still developing. The House Intelligence Committee recently held its first-ever hearing on the national security challenge of artificial intelligence, manipulated media, and “deepfakes,” noting that “advances in machine learning algorithms have made it cheaper and easier to create deepfakes – convincing videos where people say or do things that never happened.” Meanwhile, Representative Yvette Clarke, the Democratic congresswoman representing New York’s 9th District, introduced proposed legislation aimed at combating the spread of disinformation by restricting deepfake video technology, requiring that deepfakes include disclaimers about their nature.
"There is conversation. There is a certain level of awareness. There just hasn't been action," Clarke told Politico. "And I think that what we're not as conscious of how quickly this type of technology can be deployed."
Still, because technology does tend to routinely outpace legislation, researchers and developers are racing to develop digital forensics tools that can unmask deepfakes. For instance, Hany Farid, a professor and image-forensics expert, and Shruti Agarwal of the University of California, Berkeley, are building software that can detect deepfakes, and perhaps authenticate genuine videos called out as fakes, “in hopes of stopping deepfake-related misinformation from circulating,” particularly misinformation of a political nature, per CNN.
Farid told CNN that they hope to roll the tools out to journalists in December by way of a website where they can check the authenticity of a video.