
The Federal Trade Commission (“FTC”) is sending its latest message about artificial intelligence (“AI”), alerting marketers to avoid making unsupported claims about “new tools and devices that supposedly reflect the abilities and benefits of AI.” In a post published on the FTC’s site on Monday, the FTC Division of Advertising Practices’ Michael Atleson stated that while AI is “an ambiguous term with many possible definitions,” often referring to a variety of “tools and techniques that use computation to perform tasks such as predictions, decisions, or recommendation,” one thing is for sure: AI is currently a hot marketing term that will inevitably be “overus[ed] and abus[ed].”  

Against that background and in light of the “AI hype [that] is playing out today across many products – from toys to cars to chatbots” like OpenAI-created ChatGPT (and even fashion), Atleson sheds some light on what the FTC “may be wondering” when it comes to companies putting AI at the center of their advertising. Among the questions that are on the minds of those at the FTC …  

Are you exaggerating what your AI product can do? Or even claiming it can do something beyond the current capability of any AI or automated technology? For example, we’re not yet living in the realm of science fiction, where computers can generally make trustworthy predictions of human behavior. Your performance claims would be deceptive if they lack scientific support or if they apply only to certain types of users or under certain conditions.

Are you promising that your AI product does something better than a non-AI product? It’s not uncommon for advertisers to say that some new-fangled technology makes their product better – perhaps to justify a higher price or influence labor decisions. You need adequate proof for that kind of comparative claim, too, and if such proof is impossible to get, then don’t make the claim.

Are you aware of the risks? You need to know about the reasonably foreseeable risks and impact of your AI product before putting it on the market. If something goes wrong – maybe it fails or yields biased results – you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a “black box” you can’t understand or didn’t know how to test. 

Does the product actually use AI at all? If you think you can get away with baseless claims that your product is AI-enabled, think again. In an investigation, FTC technologists and others can look under the hood and analyze other materials to see if what’s inside matches up with your claims. Before labeling your product as AI-powered, note also that merely using an AI tool in the development process is not the same as a product having AI in it.

The FTC’s post builds on earlier business guidance on the AI front, including an April 2021 post that addresses how companies can “harness the benefits of AI” without “inadvertently introducing bias or other unfair outcomes,” and in which the agency “warned businesses to avoid using automated tools that have biased or discriminatory impacts.”