The Street
Ian Krietzberg

OpenAI accuses New York Times of paying someone to hack ChatGPT

OpenAI, requesting the dismissal of several parts of the New York Times' December copyright lawsuit against it, said in a filing in Manhattan federal court Monday that the Times "paid someone to hack OpenAI's products."

OpenAI said in the filing that contrary to the Times' assertions, ChatGPT "is not in any way a substitute for a subscription" to the Times, adding that "In the real world, people do not use ChatGPT or any other OpenAI product for that purpose. Nor could they."

"In the ordinary course, one cannot use ChatGPT to serve up Times articles at will," the filing says. 

Related: Copyright expert predicts result of NY Times lawsuit against Microsoft, OpenAI

OpenAI said that the Times' allegations, which include more than 100 examples in which ChatGPT allegedly reproduced word-for-word paragraphs of Times content and passed them off as its own, "do not meet its famously rigorous journalistic standards."

OpenAI claimed it took the Times (NYT) "tens of thousands of attempts" to generate the results, adding that "they were able to do so only by targeting and exploiting a bug by using deceptive prompts that blatantly violate OpenAI’s terms of use."

"Normal people do not use OpenAI’s products in this way," OpenAI said. 

OpenAI is seeking the dismissal of four of the Times' claims: direct copyright infringement, contributory infringement, removal of copyright management information and unfair competition by misappropriation.

According to OpenAI, the move is intended to "focus the litigation on the core issues that really matter": the question of whether it is fair use to train AI models on copyrighted content.

OpenAI did not respond to a request for comment. 

Ian Crosby, the Times' lead counsel, said in an emailed statement that OpenAI's filing importantly does not dispute that the company "copied millions of The Times' works to build and power its commercial products without our permission."

“What OpenAI bizarrely mischaracterizes as ‘hacking’ is simply using OpenAI’s products to look for evidence that they stole and reproduced The Times’ copyrighted works. And that is exactly what we found," Crosby said. "In fact, the scale of OpenAI’s copying is much larger than the 100-plus examples set forth in the complaint."

Crosby added that OpenAI's decision to enter into licensing deals with other news publishers "only confirms that they know their unauthorized use of copyrighted work is far from ‘fair.’"

Related: ChatGPT maker has a strong response to New York Times lawsuit

AI and copyright challenges

The Times sued OpenAI and its top investor, Microsoft (MSFT), in December, alleging rampant copyright infringement in both the input and output of the company's AI models.

"If Microsoft and OpenAI want to use our work for commercial purposes, the law requires that they first obtain our permission," a Times spokesperson said at the time. "They have not done so."'

Through the suit, the Times is seeking to hold the two companies accountable for "billions of dollars in statutory and actual damages that they owe for the unlawful copying and use of The Times’s uniquely valuable works."

The core of the suit, however, is the same as that of every author and artist lawsuit against AI companies: the question of whether it is fair use for AI companies to train their models on copyrighted content without credit, compensation or permission.

OpenAI has regularly taken the stance that it is fair use, saying in Monday's filing: "OpenAI and the other defendants in these lawsuits will ultimately prevail because no one — not even the New York Times — gets to monopolize facts or the rules of language."

Still, the company has said in the past that "it would be impossible to train today's leading AI models without using copyrighted materials." 

Related: New York Times slaps Microsoft, ChatGPT maker with copyright lawsuit

"OpenAI’s lobbying campaign, simply put, is based on a false dichotomy (give everything to us free or we will die) — and also a threat: either we get to use all the existing IP we want for free, or you won’t get to generative AI anymore," AI researcher Gary Marcus said in January. "But the argument is hugely flawed."

The U.S. Copyright Office has yet to weigh in on whether AI training qualifies under the fair use doctrine, and nascent regulatory efforts have not settled the question either.

"The AI companies are working in a mental space where putting things into technology blenders is always okay," Copyright expert and Cornell professor of digital and information law James Grimmelmann told TheStreet in January. "The media companies have never fully accepted that. They've always taken the view that 'if you're training or doing something with our works that generates value we should be entitled to part of it.'"

Contact Ian with AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.

Related: Human creativity persists in the era of generative AI
