AI and Defamation: Who Do You Sue?

By Grayson Horton

Artificial intelligence (AI) has the potential to alter many facets of the law, but one area that will present unique challenges is defamation. Anyone who has used AI has likely encountered a scenario where it produced a false result. Take the example of Google’s Bard chatbot, which falsely claimed during its first demo that the James Webb Space Telescope took the first photograph of a planet outside the solar system.1 In another instance, lawyers were sanctioned by a federal judge in New York for using ChatGPT to write filings that relied on fictitious cases.2 As these examples illustrate, AI can provide incorrect information and do so convincingly. But what happens when AI produces false information that damages a person’s reputation? Can a defamation suit be brought? Does AI “understand” that the information it is creating is false? Will AI repeat the same false information to other users? These are just a few of the questions that AI raises as it relates to defamation.

The first question that must be asked when determining whether AI can be liable for defamation is whether the content created by AI is original content or content produced by another party.3 Under existing law, 47 U.S.C. § 230 states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”4 This has generally meant that tech companies are shielded from liability for the content posted on their sites.5 However, AI does not simply retrieve information and display the results. AI’s “[c]urrent systems are the product of data-driven training processes: They learn to extract patterns from records of prior experiences, and then to apply that capability in new settings.”6 AI “platforms are responsible for the way in which these words are assembled in their output.”7 This arrangement of words in a unique way is what makes AI cases challenging:

[An] AI company, by making and distributing an AI program that creates false and reputation damaging accusations out of text that entirely lacks such accusations, is surely “materially contribut[ing] to [the] alleged unlawfulness” of that created material. The program is not a mere “neutral conduit for [the actionable] content”—indeed, it is not a conduit at all.8

In essence, AI has the capability to create defamatory speech because it can arrange words and phrases in unique ways that could harm someone’s reputation.

A recent case filed against OpenAI, the creator of ChatGPT, concerns a situation where AI created false allegations against a man named Mark Walters.9 Specifically, a journalist named Fred Riehl asked ChatGPT to summarize a case he was reporting on for his website.10 ChatGPT stated that Mark Walters, while serving as the treasurer and chief financial officer of the Second Amendment Foundation, had “defrauded” and “embezzled” funds.11 Mark Walters has spoken at Second Amendment Foundation events and aligns with its beliefs, but he has never worked for the organization.12 ChatGPT created an entire fictitious legal complaint, complete with a fake case number.13 The judge in the case recently denied OpenAI’s motion to dismiss.14

While Mark Walters’s case can go forward, it is full of tricky issues that must be resolved. First, the plaintiff will have to prove that he is the Mark Walters the AI was referring to.15 Second, the plaintiff will have to prove that the AI made a statement of fact.16 This could be difficult: can AI be said to “understand” the assertions it makes? Lastly, the damages appear minimal because only Fred Riehl received the false information from ChatGPT.17 How much damage did ChatGPT really inflict on the plaintiff’s reputation if only one individual received the false information? It will be interesting to see how the Georgia court handles these issues.

Finally, liability for defamation presupposes intentionality.18 Yet can AI act intentionally? Some believe that AI, by its very nature, cannot act intentionally because it is trained on linguistic “form” alone, which does not produce an “understanding” of the data.19 By analogy, it would be an “impossibility for a non-speaker of Chinese to learn the meanings of Chinese words from Chinese dictionary definitions alone.”20 Others say AI companies, like other non-human entities, should be liable for their actions.21 Dogs, for example, are protected from cruel treatment but are held accountable for unruly behavior.22 Corporations are held liable as organizations even when the bad actions of one person or a group of people are hard to trace.23 AI companies could similarly be held liable for the content their AI programs create because they trained the AI and act as its supervisor.

AI is a paradox. While much of the discussion about AI has focused on its processing capabilities and efficiency, the more interesting question is just how human AI is. The goal of AI is to replicate human intelligence and improve upon it, yet the programs that AI companies create are themselves the creations of humans. Humans are imperfect, and AI will accordingly develop human-like intricacies and complexities to which the law will have to learn and adapt. Defamation is one area of the law where it will be fascinating to see how current laws are adapted to handle this fast-developing technology.

References:

1 James Vincent, Google’s AI Chatbot Bard Makes Factual Error in First Demo, THE VERGE (Feb. 8, 2023), https://www.theverge.com/2023/2/8/23590864/google-ai-chatbot-bard-mistake-error-exoplanet-demo.

2 Larry Neumeister, Lawyers Submitted Bogus Case Law Created by ChatGPT. A Judge Fined Them $5,000, AP NEWS (June 22, 2023), https://apnews.com/article/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c.

3 Sydney Coker, Can Artificial Intelligence Platforms Be Held Liable for Defamation?, RICHMOND JOURNAL OF LAW AND TECHNOLOGY (2023), https://jolt.richmond.edu/2023/10/30/can-artificial-intelligence-platforms-be-held-liable-for-defamation/.

4 47 U.S.C. § 230.

5 Coker, supra note 3. 

6 Christopher Potts et al., When Artificial Agents Lie, Defame, and Defraud, Who Is to Blame?, STANFORD HAI (2021), https://hai.stanford.edu/news/when-artificial-agents-lie-defame-and-defraud-who-blame.

7 Coker, supra note 3.

8 Eugene Volokh, Large Libel Models? Liability for AI Output, 3 JOURNAL OF FREE SPEECH LAW 489, 495 (2023).

9 Complaint at 32, Walters v. OpenAI, L.L.C., 23-A-04860-2 (June 5, 2023).

10 Id. at 9.

11 Id. at 16.

12 Miles Klee, ChatGPT Is Making Up Lies — Now It’s Being Sued for Defamation, ROLLING STONE (June 9, 2023), https://www.rollingstone.com/culture/culture-features/chatgpt-defamation-lawsuit-openai-1234766693/.

13 Exhibit 1, Walters v. OpenAI, L.L.C., 23-A-04860-2 (June 5, 2023). 

14 Order Denying Defendant’s Motion to Dismiss Plaintiff’s Amended Complaint, Walters v. OpenAI, L.L.C., 23-A-04860-2 (January 11, 2024).

15 Klee, supra note 12.

16 Id.

17 Id. 

18 Potts et al., supra note 6.

19 Emily M. Bender and Alexander Koller, Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data (2020), https://aclanthology.org/2020.acl-main.463. 

20 Id. 

21 Potts et al., supra note 6.

22 Id.

23 Id.
