A Snapshot of Fashion Industry Concerns Over Generative AI

The Federal Trade Commission hosted a roundtable discussion, entitled "Creative Economy and Generative AI," on Wednesday in order to "better understand the impact of generative artificial intelligence on creative fields." Ahead of the discussion, FTC chair Lina Khan stated that the web-based roundtable comes as "we see growing use of automated systems, including those sometimes marketed as artificial intelligence," and as the consumer protection agency "want[s] to make sure that we're fully understanding how these new tools could be affecting people on the ground in positive ways, but also in harmful and potentially unlawful ways."

Among the speakers on Wednesday (who included authors, artists, musicians, and representatives from the Artists Rights Alliance, SAG-AFTRA, and the Software Freedom Conservancy, among others) was Sara Ziff, the founder and executive director of the Model Alliance, a New York-based advocacy group focused on research and policy for models and others employed in the fashion industry. Here is a quick dive into some of the fashion industry-specific points she made …

– There are rising concerns about the use of generative AI among models, who have "very little insight into how their work or likeness is being used in general, let alone in the context of generative AI." This is because, "normally, they do not see their contracts with brands and often do not know how their image will be used or how much they will be paid." As a result, generative AI "introduces the potential for further exploitation in an already exploitative work environment," Ziff said. In discussing how generative AI is impacting workers, she added, "We need to consider the context of an industry that is truly like the Wild West where workers have fewer protections at baseline and [as independent contractors] cannot collectively bargain here in the U.S."

– A recent poll conducted by the Model Alliance revealed that concerns among models and other fashion industry creators generally fall into two key areas: the first centers on the use of 3D body scans in connection with generative AI, and the second on the creation of AI-generated models, particularly AI-generated models of color.

– Increasingly, companies are asking models to undergo scans that generate a 3D model of their body or face. In its poll, the Model Alliance found that "nearly 18 percent of models have already been asked to undergo a scan by a brand or management company." As for those who have been scanned, Ziff said that they "described not being given information about how their scans would be used, unknowingly handing away rights to their image and not being fairly compensated. For people whose livelihoods are their image, this is particularly troubling in light of the rise of deepfake technology, specifically, deepfake pornography."

– As for AI-generated models and influencers, Ziff asserted that individuals in the fashion industry are worried that this use of AI could replace jobs not only for models, but also for photographers, stylists, and hair and makeup artists, among others. "Members of our community are particularly concerned about brands' use of AI generated models as part of their diversity and inclusion initiatives," she said. One example, according to Ziff, comes by way of Levi's, which announced earlier this year that it is "creating AI generated models to increase the number and diversity of their models."

– With the FTC’s aim of protecting consumers from fraudulent, deceptive, and unfair business practices in mind, Ziff asserted that “there is a real risk that AI may be used to deceive investors and consumers into believing that a company engages in fair and equitable hiring practices and is diverse and inclusive when they are not.”

In terms of transparency, Ziff said that the Alliance's members want: "(1) requirements for explicit consent [for the use of their likeness for AI purposes], (2) notification of use [of their likenesses], (3) compensation [for such use], and (4) liability for misrepresentation."

The Bottom Line: These elements mirror those cited by many of the other speakers, who seemed to agree on what they are seeking: control over their creative works, fair compensation, opt-in consent, and labeling/disclosure of AI-generated content to ensure that consumers are fairly informed.