Why ChatGPT Output Could Be Libelous

Eugene Volokh

My "Large Libel Models: ChatGPT-4 Erroneously Reporting Supposed Felony Pleas, Complete with Made-Up Media Quotes?" post has led to a good deal of discussion, both in the comments here and elsewhere. I much appreciate that, since I'm writing a law review article on the subject, and would love to hear people's thoughts about the matter. (I hope to post a draft here soon.)

One common response, especially among the more technically savvy, is that ChatGPT output shouldn't be treated as libel for legal purposes: Such output shouldn't be seen by the law as a factual claim, the theory goes, given that it's just the result of a predictive algorithm that chooses each next word based on how often it appears alongside the neighboring words in the training data. I've seen analogies to Ouija boards, Boggle, "pulling Scrabble tiles from the bag one at a time," and a "typewriter (with or without an infinite supply of monkeys)."

But I don't think that's right. In libel cases, the threshold "key inquiry is whether the challenged expression, however labeled by defendant, would reasonably appear to state or imply assertions of objective fact." OpenAI has touted ChatGPT as a reliable source of assertions of fact, not just as a source of entertaining nonsense. Its current and future business model rests entirely on ChatGPT's credibility for producing reasonably accurate summaries of the facts. When OpenAI promotes ChatGPT's ability to get high scores on bar exams or the SAT, it's similarly trying to get the public to view ChatGPT's output as reliable. It can't then turn around and, in a libel lawsuit, raise a defense that it's all just Jabberwocky.

Naturally, everyone understands that ChatGPT isn't perfect. But everyone understands that newspapers aren't perfect, either, and that isn't enough to give newspapers immunity from defamation liability. The same goes for lawsuits against OpenAI over ChatGPT output, assuming knowledge or negligence (depending on the circumstances) can be shown on OpenAI's part. And that's especially so when ChatGPT's output is framed in quite definite language, complete with purported (but actually bogus) quotes from respected publications.

To be sure, if OpenAI billed ChatGPT as just a fun toy, a sort of verbal kaleidoscope, matters might be different. But it probably wouldn't have been able to raise $13 billion for that.
