Snapshot: The ESG, AI Risk Disclosure Nexus

Dovetailing from our recent deep dive, which provided a look at some of the most critical pieces of environmental, social, and corporate governance (“ESG”)-centric legislation in the EU, it is worth keeping an eye on trends in the ESG litigation space, as cases in this realm are expected to continue to climb in number (and importance) this year. Speaking on a panel recently, Perkins Coie partner Brian Sylvester said that “ESG claims made against companies have gained momentum in recent years as the focus on ESG issues increase,” and likely will continue to increase ahead of regulatory efforts, such as the impending release of the updated Green Guides by the Federal Trade Commission. (Stakeholders have urged the FTC to provide clarity regarding many of the most commonly used “green” buzzwords, which largely lack legal definitions, and to establish “consistent, multi-geographical standards” around environmental marketing claims.)

Sylvester specifically pointed to companies’ use of climate-specific claims, including their marketing of themselves and/or their products as “carbon neutral,” “net zero,” etc., as well as interest among consumers in eco-friendly products and services, as driving some of the legal squabbles that have landed before courts in recent years. But beyond the claims that companies are making to cater to climate-conscious consumers, Sylvester noted that companies’ failure to disclose material ESG risks is giving rise to shareholder-initiated litigation, specifically, litigation stemming from inadequate disclosure of the “environmental liabilities, supply chain risks, labor issues or climate change related risks” that come hand-in-hand with their operations.

And speaking of ESG risk reporting, there is an interesting nexus here with artificial intelligence that is worth paying attention to. As AI, and generative AI, in particular, continues to play an increasingly large role in the workings of companies across industries, the issue of risk – and risk reporting – is becoming a topic of interest and importance.

As we have seen by way of a number of cases, the use of generative AI brings with it the risk of infringement and other IP-related claims, with copyright and trademark holders arguing (primarily in cases waged against AI platform providers like OpenAI) that generative AI platforms’ outputs infringe their copyright-protected works – from visual artworks to books and articles. It also carries the risk that companies will provide inaccurate outputs to consumers, investors, etc., in light of the widely reported tendency of AI models to “hallucinate,” or generate false information.

More broadly, as Orrick’s Amy Walsh and Stephen Cazares stated in a note, companies’ use of AI stands to “increase compliance risk” in the event that: (1) Companies and their employees “disclose any untrue statement of material fact or omit a material fact about the company in connection with the purchase or sale of the company’s securities;” and/or (2) Publicly traded companies “selectively disclose material non-public information about the company to certain third parties without sharing that information broadly with the investing public.” Both practices are strictly prohibited under U.S. securities laws.

Meanwhile, Weil, Gotshal & Manges attorneys stated in a note of their own that the rising usage of AI in-house poses a slew of risks for companies, with operational risk factors including “the impact of unpredictable disruptions, technical challenges and errors in AI projects that could affect the companies’ financial results and business operations” and competition centric risks coming in the form of “delay[s] in investing in, adopting and integrating AI, [which] could lead to an erosion of market share.” Other risks include those on the regulatory, cybersecurity and ethics fronts, and also take the form of risks that come about as a result of companies’ reliance on third-party service providers, especially those involving third-party AI systems.

With the foregoing in mind, an informal survey conducted by Weil found that as of Q4 2023, “approximately 18% of S&P 500 companies and 12% of Russell 3000 companies discussed or mentioned AI in their risk factors on Form 10-K.”

A regulatory risk disclosure example (courtesy of Weil) …

An operational risk disclosure example (courtesy of Weil) …

THE BIGGER PICTURE: It will not be news to most readers that companies across various market segments operate without much standardization in terms of disclosure standards and metrics when it comes to measuring and reporting on ESG issues. This has garnered attention from the Federal Trade Commission and the Securities and Exchange Commission alike, both of which aim to stamp out the widespread cherry-picking of information/metrics by companies and to ensure that stakeholders can not only accurately gauge companies’ ESG progress (or lack thereof), but can also make reasonable comparisons among different market players.

Unsurprisingly, the same is currently true of the state of AI-related risk reporting.

Uniform ESG reporting/disclosure standards are expected in the U.S. and beyond, with the SEC, for instance, publishing its Sunshine Act Notice for the open meeting at which its climate disclosure rules will be considered. (That meeting is set for March 6.) Something similar will likely follow for AI. Stay tuned.