Using AI-Generated Content? You Likely Need to Disclose That

Image: Unreal Engine

September 29, 2023 - By TFL

“Imagine browsing an e-commerce site while hunting for new clothes. You click on a sweater that catches your eye and see it displayed on a range of different models. Then you look closer: Is that a person, or the work of artificial intelligence?” That is the scenario that the Wall Street Journal recently posed, noting that AI-generated models – which can be created using generative AI platforms like Midjourney and Dall-E – “offer brands and retailers a fast, cost-effective alternative to traditional, resource-intensive photo shoots.”

The potential of AI-generated models has already been put into action by companies like Levi’s, which announced in March that it was planning to test uses of AI-generated models on its e-commerce channels. The denim giant stated at the time that digital fashion studio “Lalaland.ai’s technology – and AI more broadly – can assist us by allowing us to publish more images of our products on a range of body types more quickly.” Around the same time, supermodel Eva Herzigová made headlines when she debuted a digital twin. Created with Epic Games’ Unreal Engine suite, Herzigová’s avatar – which replicates her appearance, as well as her face and body movements – can be used for virtual fashion shows and ad campaigns.

Meanwhile, AI startup Metaphysic is among the growing number of companies looking to enable famous figures to create – and profit from – digital versions of their likenesses; the company is working with Anne Hathaway, Octavia Spencer, Tom Hanks, Rita Wilson, Paris Hilton, and Maria Sharapova to build “a portfolio of their most valuable digital assets and AI training datasets.”

A Dive into AI Disclosure

The seemingly budding market for AI models and virtual avatars brings with it a number of practical questions and regulatory concerns. For example, will consumers be able to tell the difference between an ad campaign that features a “real-life” Eva Herzigová and one created using her digital twin? Does it matter if they cannot? And if the difference is not immediately obvious to consumers, should they be alerted? 

While the use of AI-generated imagery and digital twins is still in a very nascent stage, early indications from Washington, companies, and even the recently concluded Writers Guild strike suggest that disclosure will play an important role with regard to AI – and may be on its way.

It is worth noting that the risk of consumers confusing a “real-life” supermodel and her digital twin is almost certainly not what is driving lawmakers to push for transparency in the realm of AI; more pressing matters – such as the use of deepfakes for political messaging purposes – are garnering lawmakers’ attention. Even so, the potential for sweeping rules that mandate disclosures for AI-generated content could go beyond the political arena and impact consumer goods brands.

So, what does the state of AI-focused disclosures look like now? 

The legislative perspective: A number of bills have been introduced that call for AI-generated materials to be identified as such. Among them are … 

(1) Advisory for AI-Generated Content Act. Introduced in the Senate in September, this bill would make it unlawful for “an AI-generating entity to create covered AI-generated material unless such material includes a watermark that meets the standards established by the Federal Trade Commission.”

(2) AI Labeling Act of 2023. Introduced in the Senate in July, this bill would require AI systems to include a clear and conspicuous disclosure identifying content as AI-generated.

(3) AI Disclosure Act of 2023. Introduced in the House in June, this bill would require all material generated by artificial intelligence technology to include the following – “DISCLAIMER: this output has been generated by artificial intelligence.” 
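
Purely by way of illustration, a labeling mandate along these lines could be satisfied by appending the required disclaimer to every piece of generated output. The minimal Python sketch below is an assumption about how a service might comply, not language from the bill; the helper function and output format are hypothetical, and only the disclaimer string itself is taken from the bill as quoted above.

```python
# Hypothetical sketch of complying with a disclaimer mandate like the
# AI Disclosure Act of 2023. Only the disclaimer text comes from the bill
# as quoted in this article; everything else is illustrative.

AI_DISCLAIMER = (
    "DISCLAIMER: this output has been generated by artificial intelligence."
)

def label_ai_output(generated_text: str) -> str:
    """Return AI-generated text with the required disclaimer appended."""
    return f"{generated_text}\n\n{AI_DISCLAIMER}"

if __name__ == "__main__":
    # 'sample' stands in for output from any generative AI system.
    sample = "A product description drafted by a language model."
    print(label_ai_output(sample))
```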

The regulatory perspective: The Federal Trade Commission (“FTC”) has not issued new AI disclosure rules amid the rising use of generative AI platforms by both consumers and companies. That is neither surprising nor strictly necessary, since the agency’s existing rules apply across different mediums and types of technology. In other words, even without AI-specific rules, the use of generative AI in ways that are “unfair or deceptive” may land parties on the wrong side of the FTC Act and other existing legislation and regulations.

Nonetheless, the FTC has provided an array of AI-centric guidance from which insight can be garnered. For example, the consumer protection regulator stated in a blog post in May that, in addition to refraining from using AI chatbots to manipulate consumers’ purchasing decisions and behaviors, companies need to make sure that consumers “know if they are communicating with a real person or a machine,” which suggests a level of necessary disclosure.

More recently, the FTC asserted in a generative AI-focused blog post that “selling digital items created via AI tools is obviously not okay if you’re trying to fool people into thinking that the items are the work of particular human creators,” and “when offering a generative AI product, you may need to tell customers whether and the extent to which the training data includes copyrighted or otherwise protected material.” These statements likewise drive home that disclosure is necessary in certain situations in order to avoid running afoul of the FTC Act. (The first point also hints at right of publicity issues, which we delve into in a follow-up article.)

Companies’ approaches: An increasing number of companies are requiring – or preparing to require – users to label AI-generated content as such. Google, for one, announced this month that, beginning in November, it will require verified election advertisers to include clear and conspicuous disclosures when ads contain AI-generated content. Specifically, Google says that “prominent” disclosure will be necessary when ads contain “synthetic content that inauthentically depicts real or realistic-looking people or events.”

According to Google, examples of ad content that would require a clear and conspicuous disclosure include: “An ad with synthetic content that makes it appear as if a person is saying or doing something they didn’t say or do, and an ad with synthetic content that alters footage of a real event or generates a realistic portrayal of an event to depict scenes that did not actually take place.” The election disclosure requirements follow Google DeepMind’s August launch of SynthID, a watermarking tool that labels images created using generative AI. The debut of SynthID came after CEO Sundar Pichai stated at the company’s annual I/O conference in May that it was looking to build large language models “to include watermarking and other techniques from the start.”

Additionally, on September 19, TikTok announced the launch of a new tool “to help creators label their AI-generated content,” noting that it is also starting to test ways to label AI-generated content automatically. “Our goal with these efforts is to build on existing content disclosures – such as our TikTok effects labels – and find a clear, intuitive and nuanced way to keep our community informed about AI-generated content,” TikTok said in a statement.

And finally, the writers’ strike is worth mentioning. The 148-day Hollywood strike is over thanks to the Writers Guild of America reaching a new deal with major Hollywood studios. Among the terms in the deal: Companies “must disclose to writer if any material given to writer has been generated by AI or incorporates AI-generated material.” (This serves to “curb the more likely scenario—that writers would be asked to adapt or edit something written by a large language model or tool like ChatGPT, for less pay than producing an original work, possibly without their knowledge,” per Wired.)

THE BOTTOM LINE: Despite the relative novelty of the technologies at play, InfoLawGroup’s Rosanne Yang states that “one thing that government and industries seem to agree on is that transparency is a key factor in the use of AI, and in particular generative AI, in consumer-facing services.” At the same time, Yang asserts that “this is not the wild west, [as] a myriad of existing laws and regulations currently apply to AI operations.” As such, as companies incorporate AI into their services and features, existing “transparency considerations and understandings of current laws and regulations should be taken into account when designing them, as part of the communications plan, and in ongoing operations.”
