On the heels of warning companies not to make unsupported claims about “new tools and devices that supposedly reflect the abilities and benefits of AI,” the Federal Trade Commission (“FTC”) released new guidance on Monday that focuses on potentially harmful or deceptive uses of generative AI tools, noting that “companies are always looking for new ways – such as the use of generative AI tools – to better persuade people and change their behavior.” According to the regulator’s latest commentary, a “new wave of generative AI tools” is “expanding rapidly” and includes tools like chatbots that are designed to provide outputs (in response to user prompts) that can appear to have been created by a human.
Delving into part of what makes chatbots attractive to the companies that are making use of them, FTC Division of Advertising Practices attorney Michael Atleson states that many of these models are “effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional.” Because consumers tend to trust the output of these tools – due in part to “automation bias,” whereby people may be unduly trusting of answers from machines that seem neutral or impartial – “many commercial actors are interested in these generative AI tools and their built-in advantage of tapping into [such] unearned human trust.”
Against this background, Atleson states that companies that are thinking about novel uses of generative AI – such as “customizing ads to specific people or groups” – should know that “design elements that trick people into making harmful choices are a common element in FTC cases, such as recent actions relating to financial offers, in-game purchases, and attempts to cancel services.” As such, companies are essentially being put on notice that manipulation – via generative AI bots – can be a deceptive or unfair practice under the FTC Act “even if not all customers are harmed and even if those harmed do not comprise a class of people protected by anti-discrimination laws.”
Beyond the potential for companies to use AI chatbots to manipulate consumers’ purchasing decisions and behaviors, the FTC warns marketers about the use of generative AI tools and their manipulative abilities “to place ads within a generative AI feature,” just as ads are placed within search results. “The FTC has repeatedly studied and provided guidance on presenting online ads, both in search results and [other native advertising scenarios], to avoid deception or unfairness.” Among other things, the FTC states that “it should always be clear that an ad is an ad, and search results or any generative AI output should distinguish clearly between what is organic output and what is paid [for].” Specifically, “people should know if an AI product’s response is steering them to a particular website, service provider, or product because of a commercial relationship,” per Atleson.
“And, certainly, people should know if they are communicating with a real person or a machine.”
THE BOTTOM LINE: In a very clear nod to the agency’s current focus, Atleson states that “if we have not made it obvious yet, FTC staff is focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers.” In the event that the “FTC comes calling,” companies will “want to convince us that you adequately assessed risks and mitigated harms,” he states, noting that, among other things, risk assessments and corresponding mitigations “should factor in foreseeable downstream uses and the need to train staff and contractors, as well as monitoring and addressing the actual use and impact of any tools eventually deployed.”