How Do You Solve a Problem Like Deepfakes?

Image: Instagram

January 31, 2020 - By TFL

“When there are so many haters, I really don’t care because their data has made me rich beyond my wildest dreams.” That is what Kim Kardashian says in a brief video clip that appears to have been taken from the 73 Questions video she did for Vogue in April 2019. However, unlike the Vogue-produced video, in which Kardashian discusses everything from her feelings about incessant paparazzi to the most rewarding thing about being a mother, while strolling around her sprawling, beige-hued Axel Vervoordt-designed house, this excerpt was created without Kardashian’s participation or authorization.

Created by artists Bill Posters and Daniel Howe as part of a visual art installation called “Alternate Realities,” which was hosted at the Site Gallery in Sheffield, England, the video was subsequently uploaded to YouTube in May 2019, where it remained until it was removed in response to a copyright claim that counsel for Vogue’s parent company Condé Nast filed with YouTube. As it turns out, the video was a deepfake.

A portmanteau of “deep learning” and “fake,” deepfakes are a “class of synthetic media generated by artificial intelligence,” according to MIT Technology Review. In short, they are video and audio recordings created by merging existing footage of the subject (Kardashian, in the instance described above) with a computer-generated image of the person’s face, typically paired with a machine-learning model that copies the sound of the subject’s voice and applies it to a text script.

The result? A video that looks and sounds a whole lot like the real thing. 
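For the technically curious, the sketch below illustrates the shared-encoder, dual-decoder autoencoder design popularized by early face-swap tools – one common architecture behind such videos, not necessarily the pipeline Posters and Howe used. A single encoder learns a face representation common to two people, each person gets their own decoder, and swapping decoders at generation time maps one person’s expression onto the other’s face. All layer sizes, tensor shapes, and the training snippet are illustrative assumptions.

```python
# A minimal, hedged sketch of the classic face-swap autoencoder idea.
# Shapes and sizes are illustrative; real tools work at higher resolution
# with far deeper networks and extensive training data.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a 64x64 RGB face crop to a latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs one person's face from the latent."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (sketch): reconstruct each person's faces through the SHARED
# encoder, so the latent space captures pose and expression for both.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person A
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The "swap": encode person A's frame, decode with person B's decoder,
# yielding B's face wearing A's expression.
fake_b = decoder_b(encoder(faces_a))
```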

While publishing giant Condé Nast – which owns a whole roster of media properties, including Vogue, and thus, maintains the exclusive rights in the fashion magazine’s original video of Kardashian – successfully had the video removed from YouTube on copyright infringement grounds, it remains to be seen whether copyright claims will prove an effective approach to the issue of deepfakes going forward. (Facebook, Inc., for its part, refused to remove the videos, saying, “If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”)

“The problem with using a copyright claim against the Kardashian deepfake” – at least in theory – is that “its creators didn’t just reupload Vogue’s original [11 minute] video,” Samantha Cole wrote for Motherboard. “They deliberately manipulated the video to make a statement,” one that centers on “global scandals about data privacy and influence-peddling,” and that “delivers a cautionary message about the digital influence industry, technology, and democracy.” 

That level of transformativeness between the original video and the one created by Posters and Howe is precisely what could shield the video’s creators from copyright liability, since copyright law treats unauthorized uses of another party’s creative work as fair use, not infringement, when the use is transformative in nature, i.e., when the subsequent work “adds something new” to the original and does not merely substitute for it.

A key example of a transformative use? One that is done for the purpose of social or cultural commentary. 

The deepfake video of Kardashian “transforms [the original] substantially,” Electronic Frontier Foundation policy analyst Joe Mullin told Motherboard. More than that, a potential fair use argument is bolstered by the fact that the deepfake, which runs for less than a minute, makes use of only a small portion of Vogue’s 11-minute version, and does not serve “as a replacement for the original video.”

Kardashian and other art-centric examples of deepfakes are, of course, just the tip of the iceberg, as are the scenarios in which copyright infringement, and potentially even right of publicity, causes of action will prove effective. In reality, the fight against deepfakes becomes increasingly critical when it moves beyond videos of Kim Kardashian to politically-motivated messages that could pose national security risks, and that are part of what the U.S. House of Representatives Intelligence Committee has called “a potentially grim, ‘post-truth’ future.”

Deepfakes are not an entirely novel phenomenon; their roots date back to at least 1997, when “Video Rewrite,” an academic project that “proposed solutions for manipulating video and for syncing audio with video,” was released. Yet, even when the “landmark” project was first published by Christoph Bregler, Michele Covell, and Malcolm Slaney (who were working together at Palo Alto technology incubator, Interval Research Corporation, at the time), the challenges tied to the capability of modifying existing video footage “to create automatically new video of a person mouthing words that she did not speak in the original footage” were “very clear,” according to Cole.

The laws necessary to regulate such a program were, unsurprisingly, not as straightforward, an imbalance that persists because technology almost always develops at a rate much faster than the laws needed to regulate it.

Now, more than 20 years later, the law is still developing when it comes to deepfakes. 

The House Intelligence Committee held its first-ever hearing on the national security challenges of artificial intelligence, manipulated media, and “deepfakes” in June 2019, noting that “advances in machine learning algorithms have made it cheaper and easier to create deepfakes – convincing videos where people say or do things that never happened.” Around the same time, Representative Yvette Clarke, the Democratic congresswoman representing New York’s 9th District, introduced legislation aimed at combating the spread of disinformation through restrictions on deepfake video technology, namely, by requiring that deepfakes include disclaimers about their nature.

“There is conversation. There is a certain level of awareness. There just hasn’t been action,” Clarke told Politico. “And I think that we’re not as conscious of how quickly this type of technology can be deployed.”

More recently, the House of Representatives Ethics Committee has been “advising lawmakers against posting manipulated videos and photos on their social media accounts,” political news site The Hill reported this month, citing a memo from the Committee informing its members, as well as their officers and employees, that “posting deep fakes or other audio-visual distortions intended to mislead the public may be in violation of the Code of Official Conduct.”

Still, because technology does tend to routinely outpace legislation, researchers and developers are racing to build digital forensics tools that can unmask deepfakes. For instance, Hany Farid, a professor and image-forensics expert, and Shruti Agarwal of the University of California, Berkeley, are building software that can detect deepfakes, and perhaps authenticate genuine videos called out as fakes, CNN reported this spring, “in hopes of stopping deepfake-related misinformation from circulating,” particularly when it is of a political nature.
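To give a rough sense of how one class of forensic tools operates, the sketch below scores a clip frame by frame with a small real-versus-fake classifier and averages the results. This is a deliberately simplified illustration, not the Farid/Agarwal approach (which models a speaker’s characteristic facial and head movements); the model, shapes, and threshold are all illustrative assumptions, and the network would need to be trained on labeled real and synthetic face crops before its scores meant anything.

```python
# A hedged sketch of frame-level deepfake detection: classify each face
# crop in a clip, then average the per-frame "fakeness" scores.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Scores a single 128x128 face crop; higher logit = more likely synthetic."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.AdaptiveAvgPool2d(1),                               # global average pool
        )
        self.head = nn.Linear(32, 1)

    def forward(self, frames):  # frames: (N, 3, 128, 128)
        return self.head(self.features(frames).flatten(1)).squeeze(1)

def video_fake_score(model, frames, threshold=0.5):
    """Average per-frame fakeness over a clip; flag the clip if it exceeds the threshold."""
    with torch.no_grad():
        probs = torch.sigmoid(model(frames))
    score = probs.mean().item()
    return score, score > threshold

model = FrameClassifier()           # in practice, trained on labeled real/fake crops
clip = torch.rand(30, 3, 128, 128)  # stand-in for 30 face crops pulled from a video
score, flagged = video_fake_score(model, clip)
```

Averaging over many frames is one simple way such tools trade a single noisy prediction for a more stable clip-level judgment; production systems layer on far more signal, from compression artifacts to the behavioral cues the Berkeley researchers study.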
