DeepSeek, the Chinese AI startup, shocked the world last week when it unveiled an AI model that rivals leading models like OpenAI’s o1, claimed it cost far less to develop and required far fewer Nvidia chips, and gave it away for free. The fallout sent Nvidia’s stock plummeting today and left observers wondering: What does it mean for the most deep-pocketed AI startups, OpenAI and Anthropic, which sell their models to consumers and companies, as well as for highly funded competitors like Mistral and Cohere?
The current moment is deeply ironic, Toronto-based AI developer and consultant Reuven Cohen told Fortune. DeepSeek released its AI model as open-source, meaning the company allowed researchers, developers, and other users to access the underlying code and its “weights” (which determine how the model processes information) to use, modify, or improve. That sounds a lot like what OpenAI said it would do when it was founded in 2015 as a nonprofit company that shared its research and techniques openly (as its name suggests). But OpenAI is now “by far, the most closed in every way possible,” Cohen said.
Though DeepSeek did not release the data it used to train its R1 model, there are indications that it may have used outputs from OpenAI’s o1 to kick-start the training of the model’s reasoning abilities. This process of training a model on another model’s outputs is commonly known as “distillation,” though it is sometimes described as a form of reverse engineering.
Open-source developers have been reverse-engineering OpenAI models like o1 for months, Cohen said. DeepSeek’s efforts make it clear that models can self-improve by learning from other models released by OpenAI, Anthropic, and others—which puts those companies’ existing business models, cost structures, and technological assumptions at risk.
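To make that concrete, here is a minimal, hypothetical sketch of output-based distillation: a “teacher” model generates answers to prompts, and a smaller “student” model is fine-tuned on that generated text. The checkpoint names (gpt2, distilgpt2), the prompts, and the training settings are small stand-ins chosen purely for illustration; DeepSeek has not published its data or pipeline, so this is not its actual recipe.

```python
# Minimal, hypothetical sketch of output-based distillation, using small stand-in
# checkpoints (gpt2 as "teacher", distilgpt2 as "student"); not DeepSeek's recipe.
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token                      # gpt2 has no pad token by default
teacher = AutoModelForCausalLM.from_pretrained("gpt2")
student = AutoModelForCausalLM.from_pretrained("distilgpt2")

prompts = [
    "Explain why the sky is blue. Think step by step:",
    "What is 17 multiplied by 24? Think step by step:",
]

# Step 1: collect the teacher's outputs for each prompt.
teacher.eval()
records = []
with torch.no_grad():
    for prompt in prompts:
        ids = tok(prompt, return_tensors="pt").input_ids
        out = teacher.generate(ids, max_new_tokens=64, do_sample=False,
                               pad_token_id=tok.eos_token_id)
        records.append({"text": tok.decode(out[0], skip_special_tokens=True)})

# Step 2: fine-tune the student on the prompt-plus-teacher-completion text.
dataset = Dataset.from_list(records).map(
    lambda example: tok(example["text"], truncation=True, max_length=256),
    remove_columns=["text"],
)
trainer = Trainer(
    model=student,
    args=TrainingArguments(output_dir="distilled-student", num_train_epochs=1,
                           per_device_train_batch_size=1, report_to=[]),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```

The pattern scales with the quality of the teacher and the breadth of the prompt set, which is why open-source developers can lean on the outputs of frontier models rather than rediscovering every technique from scratch.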
“The problem is that the companies have momentary advantages but haven't built durable moats,” said Patrick Moorhead, founder of Moor Insights & Strategy. “Companies with proprietary leanings need a scale, time to market, cost, or 5X utility advantage to be successful. Both OpenAI and Anthropic are being outmaneuvered by open [source AI].”
Many proponents of open-source AI have long predicted the commoditization of AI models. “If these models turn out to be pretty capable, which they really are looking like, and they're very cheap, then there's a world where companies stop using OpenAI at scale,” said William Falcon, CEO of Lightning AI, a software platform that allows users to train and deploy open-source AI models, including DeepSeek’s.
“That also brings into question the valuation of all these companies,” he said, though he pointed out that OpenAI, which as of October 2024 was valued at $157 billion, and Anthropic, which is currently raising money on a $60 billion valuation, do have billions in revenue and are less speculative than other startups like Cohere and Mistral, which he said are “going to be the ones impacted the most by this.”
In addition, DeepSeek’s success shows that open-source developers don’t even have to figure out the entire secret recipe created by a closed-model company like OpenAI, Falcon added. They just need a few improved techniques that make model training more efficient.
Those improvements, he added, will quickly be implemented by other companies, including OpenAI, Anthropic, Meta, and Google. “I would be shocked if they haven't already, since Friday, grabbed that stuff, implemented it, and probably already applied it,” he said.
However, while this is definitely a moment for introspection about why top U.S. AI researchers did not discover these techniques on their own, it doesn’t mean America’s position in the AI market has been upended, or that OpenAI’s or Anthropic’s future is suddenly shaky.
“I'm skeptical that we're going to go from [billions of Nvidia chip commitments] from Microsoft and everyone else to, oh, we only need hundreds to train these frontier models,” said Daniel Newman, CEO and analyst at the Futurum Group, adding that OpenAI and others will be researching the accuracy of the DeepSeek techniques, and deciding whether their results can be replicated and implemented.
Vaibhav Srivastav, a researcher at open-source platform Hugging Face, emphasized that he did not think OpenAI, Anthropic, and other model companies are in deep trouble. “I think the real moat is in the application layer,” he said, meaning that the value for these companies lies not just in building models but in how those models are integrated into applications. However, he added, “I do think DeepSeek must be a humbling moment for them.”
Open-source AI experts say there is no schadenfreude, however. In fact, said Falcon, it’s all about moving the AI ball forward—including with OpenAI. If OpenAI had not “gone dark” in terms of sharing its research openly since the release of ChatGPT, he said, the U.S. would likely be further along capability-wise, since open-source collaboration drives progress.
“But, of course, OpenAI would not have been as big a company,” he said. “And China would be just as far ahead.”
But there is one more ironic twist at play in the DeepSeek narrative, said Cohen. What about Meta, which has spent the past two years touting Llama, its family of free open AI models? After all, Meta has positioned itself as the antithesis of OpenAI and Anthropic, yet DeepSeek has suddenly emerged as the real open-source disrupter. Meta has reportedly assembled four “war rooms” of engineers to respond to DeepSeek’s potential breakthrough AI developments.
“OpenAI might be expensive and proprietary, but they're still the most used platform by orders of magnitude,” he said. “Regardless, they're gonna do well for a while.” The real question is, he said, “what the hell is Meta doing? This was theirs to lose.”