ChatGPT Owner Is Being Sued for Libel Over AI Generator’s “Hallucinations”

June 12, 2023 - By TFL

In one of the latest lawsuits to be filed over artificial intelligence (“AI”)-generated output, OpenAI is being sued for libel. According to the complaint that he filed in a Georgia state court on June 5, Plaintiff Mark Walters asserts that the company behind ChatGPT is on the hook for libel as a result of misinformation that it provided to a journalist in connection with his reporting on a federal civil rights lawsuit filed against Washington Attorney General Bob Ferguson and members of his staff, alleging that Ferguson used the power of his office to chill the activities of the Second Amendment Foundation.

According to the complaint that he filed early this month, Walters asserts that journalist Fred Riehl interacted with ChatGPT about the Second Amendment Foundation v. Robert Ferguson case, which he was reporting on. “In the interaction with ChatGPT,” Walters claims that “Riehl provided a (correct) URL of a link to the [Second Amendment Foundation v. Ferguson] complaint, [and] asked ChatGPT to provide a summary of the accusations in the complaint.” In response, ChatGPT described the document as “a legal complaint filed by Alan Gottlieb, the founder and executive vice president of the Second Amendment Foundation (‘SAF’), against Mark Walters, who is accused of defrauding and embezzling funds from the SAF.” 

ChatGPT further stated that “the complaint alleges that Walters, who served as the organization’s treasurer and chief financial officer, misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF’s leadership.”

The problem with that, according to Walters’s lawsuit, is that he is “neither a plaintiff nor a defendant in the [SAF] lawsuit,” and in fact, “every statement of fact” in the ChatGPT summary that pertains to him is false. Not only is he “not [being] accused of defrauding and embezzling funds from the Second Amendment Foundation,” Walters contends that he is not even a party to that litigation. In short: “None of ChatGPT’s statements concerning [him] are in the actual complaint.” 

Walters asserts that OpenAI is “aware that ChatGPT sometimes makes up facts, and refers to this phenomenon as a ‘hallucination.’” Against that background, Walters alleges that not only did OpenAI “publish libelous matter regarding Walters … by sending the allegations to Riehl,” the company “knew or should have known its communication to Riehl regarding Walters was false, or recklessly disregarded the falsity of the communication.” And since ChatGPT’s characterization of the case included allegations that he was involved in behavior incompatible with the proper conduct of his business, trade, or profession, Walters argues in his lawsuit that OpenAI’s claims were “libelous per se.” He is seeking “general damages in an amount to be determined at trial,” as well as punitive damages and attorney’s fees.

A Bigger Issue for Lawyers, Courts

This is not the first time that so-called generative AI “hallucinations” have come up as of late. In fact, it follows closely on the heels of Steven Schwartz, an attorney with Levidow, Levidow & Oberman, coming under fire (and making national headlines) for submitting a ChatGPT-generated brief to the U.S. District Court for the Southern District of New York that contained citations to six non-existent cases. 

Shortly thereafter, Judge Brantley Starr of the U.S. District Court for the Northern District of Texas addressed the potential for generative AI platforms to engage in such hallucinations, becoming the first federal judge to explicitly ban the use of generative AI – “such as ChatGPT, Harvey.AI, or Google Bard” – for filings unless the content of those filings has been checked by a human. According to Judge Starr’s mandate, “These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up – even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath.”

As such, the judge warned that he will “strike any filing from a party who fails to file a certificate on the docket attesting that they have read the Court’s judge-specific requirements and understand that they will be held responsible under Rule 11 for the contents of any filing that they sign and submit to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing.” 

Magistrate Judge Gabriel Fuentes of the U.S. District Court for the Northern District of Illinois has since issued a revised standing order, dated June 5, requiring that “any party using any generative AI tool in the preparation of drafting documents for filing with the Court must disclose in the filing that AI was used,” with the disclosure identifying the specific AI tool and the manner in which it was used. The judge’s order also mandates that parties not only disclose whether they used generative AI to draft filings but, more fundamentally, whether they used generative AI to conduct the corresponding legal research.

Other judges are expected to follow suit and require parties to make certifications regarding the use of generative AI in connection with their research and/or filings. 

What About the Media?

All the while, no shortage of media outlets are busy employing generative AI with the aim of cutting costs and boosting productivity. BuzzFeed, for one, garnered attention early this year after it distributed a memo to staffers about the company’s plans to use generative AI tech to create content. CEO Jonah Peretti stated in the memo that the New York-based media company “see[s] the breakthroughs in AI opening up a new era of creativity that will allow humans to harness creativity in new ways with endless opportunities and applications for good.” Fast forward to May 2023, and VentureBeat confirmed in a “Letter from the Editor” that it will use generative AI “to inspire and strengthen our work — overcoming writer’s block, improving our storytelling, and exploring new angles,” noting that its “human copy-editors will always review, edit and fact-check VentureBeat stories, whether they contain AI-influenced content or are written without any generative AI assistance.”

BuzzFeed and VentureBeat are hardly outliers. An AI-focused survey of global newsroom executives conducted this spring by WAN-IFRA and Germany-based SCHICKLER Consulting Group revealed that 49 percent of participants said their newsrooms are already using tools like ChatGPT for text creation, research, spelling/grammar, and/or workflow purposes, with 70 percent saying that they expect generative AI tools to be helpful for their journalists and newsrooms. It was not all upside, though: the survey also shed light on potential red flags, including concerns over the accuracy/quality of AI-generated outputs and an overarching lack of oversight. WAN-IFRA and SCHICKLER found that “almost half of survey participants said that their journalists have the freedom to use the technology as they see fit, [while] only a fifth of respondents said that they have guidelines in place on when and how to use GenAI tools.”

Much as courts are catching up to the use of generative AI tools, newsrooms are doing the same. “As newsrooms grapple with the many complex questions related to GenAI,” WAN-IFRA researcher Teemu Henriksson said in connection with the survey, “it seems safe to assume that more and more publishers will establish specific AI policies on how to use the technology (or perhaps forbid its use entirely).” 

The case is Walters v. OpenAI LLC, 23-A-04860-2 (Ga. Super. Ct.).