On April 30, 2024, eight of the nation's largest daily newspapers, including the New York Daily News and the Chicago Tribune, filed a lawsuit against OpenAI and Microsoft, claiming both companies used news articles without permission to train their AI models.
It's among the most significant cases launched against AI developers to date and follows a separate, ongoing lawsuit filed on December 27, 2023, in which the New York Times claimed OpenAI committed copyright infringement through ChatGPT.
When I asked ChatGPT about these cases, it dutifully gave me a summary of the 2023 case, including three main points: copyright infringement, a lack of compensation for using copyrighted content, and the ongoing nature of the case.
Ironically, in explaining this case to me, ChatGPT may have pulled from New York Times articles, highlighting the exact issue at hand — or, at the very least, the issue that the newspaper has.
But is this case as black and white as The Times' own masthead? Given that its outcome could determine how companies handle the training of future models for potentially decades to come, it's worth taking a closer look.
A pressing matter: The case for The New York Times
At the time of writing, OpenAI is just one of several AI companies facing dozens of lawsuits. These cases often focus on some form of copyright infringement, with plaintiffs ranging from individual YouTube creators to media conglomerates and music studios.
When claims of copyright infringement by AI come up, musicians, actors, and writers may be the first to come to mind. Hollywood has been at the center of tensions with the AI industry for years now, but the press and news outlets also play an important role in the battle against the misuse of AI.
Cases like the 2023 New York Times lawsuit and the 2024 newspaper lawsuit aren't just about protecting journalists from losing their jobs to AI. Unlike AI-related cases in Hollywood, these cases focus more on the exploitation of news articles by AI chatbots and the risks of those chatbots spreading misinformation under the guise of legitimate news sources.
The possibility of generative AI fabricating news isn't hypothetical, either. It's already happening. For example, in 2023, The Guardian blocked OpenAI from scraping its website for training data after readers reported that the chatbot was generating news articles under Guardian reporters' bylines that those reporters never actually wrote.
Similarly, the 2024 newspaper lawsuit against OpenAI and Microsoft cited multiple examples of ChatGPT generating false product recommendations attributed to the plaintiffs' outlets.
This fabricated information isn't just an inconvenient AI blunder. It may cause serious harm to readers who believe they're getting information from a trustworthy source. As the lawsuits claim, it can also damage newspapers' reputations if unsuspecting readers don't realize that a recommendation or story is completely made up.
To make matters even more complicated, these lawsuits also raise the issue of lost revenue. It's no secret that newspapers rely on paid subscriptions to stay in business, and AI models can give readers access to text from copyrighted news articles without paying for that content. So, the New York Times and the newspapers behind the 2024 lawsuit alike claim they are owed compensation for OpenAI's use of their content.
What all of this amounts to is a question of accountability. Can AI companies profit from content they weren't authorized to use? Are they responsible for the damage done when their models generate false or misleading content?
Generating headlines: The case for OpenAI
The Times' argument is sound enough, but as with any lawsuit, there are two sides to every story.
In response to the lawsuit, OpenAI accused The Times of incomplete reporting in a post titled "OpenAI and Journalism," published on its website in January 2024, stating, "We support journalism, partner with news organizations, and believe the New York Times lawsuit is without merit."
On the topic of copyrighted works appearing within ChatGPT, the company claimed, "[The New York Times] had mentioned seeing some regurgitation of their content but repeatedly refused to share any examples, despite our commitment to investigate and fix any issues."
Expanding on these "regurgitations," OpenAI's term for the bug in which its models serve up training data as part of an answer, the company states, "Interestingly, the regurgitations The New York Times induced appear to be from years-old articles that have proliferated on multiple third-party websites.
"It seems they intentionally manipulated prompts, often including lengthy excerpts of articles, in order to get our model to regurgitate."
From OpenAI's perspective, The Times' claims result from "jailbreaking" attempts meant to force ChatGPT to reveal training data. As the company puts it, "this misuse is not typical or allowed user activity ... we are continually making our systems more resistant to adversarial attacks to regurgitate training data, and have already made much progress in our recent models."
Reinforcing his company's stance, OpenAI CEO Sam Altman commented on the lawsuit at the DealBook Summit in December 2024, stating, "I think the New York Times is on the wrong side of history in many ways."
A case that's not so black and white
There's more than copyright infringement on the table in these landmark AI lawsuits. Their outcomes, particularly on the copyright claims, could completely reshape the way AI models are trained in the future.
To some, LLMs like ChatGPT simply spew out pieces of pre-learned information scraped from the training materials they were initially fed, raising alarm bells over how much of this "new" information contains someone else's work.
However, AI developers claim that models such as ChatGPT don't work that way. To them, even if a model is trained on copyright-protected data, it's no different than somebody learning to play guitar by playing riffs from their favorite bands. If that person went on to create music of their own, they wouldn't be breaking copyright simply because they learned by listening to and learning from players who already could.
The jury is still out on which side of the argument will prevail in court, with both sides making strong cases.
However, perhaps ChatGPT's knack for delivering such human-like content will be its downfall, bringing to mind the quote widely attributed to Pablo Picasso (which, ironically, may itself be misattributed): "Good artists copy, great artists steal."
Whether you're a dedicated AI enthusiast or an average AI tinkerer (or simply seeking out some of the additional features offered through Windows Copilot+ PCs or Apple Intelligence on Macs), you'll need a powerful, high-performing laptop to keep up with your needs.
At Laptop Mag, we review laptops year-round to ensure we're giving you expert-backed and up-to-date recommendations on which notebook is right for you. When it comes to the best AI PC category, our top picks are the excellent Asus Zenbook S 14 (UX5406) for Windows users and the impressive Apple MacBook Air M3 for those running macOS.
So, if you're shopping for a new laptop and looking to invest in an AI PC (or just a great laptop in general), check out our current top-tier picks below.
Best Mac for AI
We love the MacBook Air 13 M3. Starting at just $1,099 (MSRP), with education pricing dropping to $999, the Air is a laptop we can recommend for just about any purpose. It's affordable, especially by Apple standards, and it features an excellent keyboard, fantastic performance, and outstanding endurance (over 15 hours of battery life), making it a great fit for just about anyone, especially those interested in getting to grips with the latest Apple Intelligence features.
Best Windows AI PC
The Asus Zenbook S 14 (UX5406) has quickly become our favorite AI PC of the year, offering all the hallmarks of a great buy, including exceptional performance and battery life. It's one of the first laptops to feature an Intel Core Ultra 200V series processor, and at just $1,499 (MSRP), it delivers a fantastic balance of power, a stunning 14-inch OLED display, effortless multitasking, NPU-enhanced performance for AI tasks, and all of the additional Copilot+ features available with Windows 11.