- DeepSeek’s reasoning model R1 not only destroys the myth that China can only copy the West; it suggests the country may even be poised to eclipse the West in the AI tech race.
China’s artificial intelligence breakthrough is shaking the foundation of the West’s dominance in this technological arms race, conjuring comparisons to one of the USSR’s greatest achievements.
U.S. equity markets are set to open deep in the red as investors begin to digest what DeepSeek’s R1 AI reasoning model, published fully open source last week, means for the business of megacaps like chip giant Nvidia.
Not only does it suggest the Communist Party–controlled country has caught up to the United States, but it may also be on the cusp of eclipsing it.
R1’s more cost-efficient approach to AI training and inference also risks calling into question the thesis underpinning the sky-high valuations of most Magnificent Seven stocks.
Now Marc Andreessen, a tech community legend whose Netscape Navigator enabled the internet revolution, is reaching back decades to find what he believes is the most appropriate historical parallel—the day in 1957 when the Soviet Union beat the U.S. into space with the launch of the first satellite into orbit.
“DeepSeek-R1 is AI’s Sputnik moment,” Andreessen posted on Sunday.
The release of OpenAI’s ChatGPT in November 2022 took the world by storm, igniting a frenzy in AI stocks comparable only to the dotcom era.
Investors have bid up the prices of companies like Nvidia, Microsoft, and Tesla on the expectation that a select few will be in a position to rake in billions upon billions of dollars in AI-related profits.
DeepSeek rockets to No. 1 on App Store chart
Whether it was Microsoft partner OpenAI’s GPT, Alphabet’s Gemini, or Claude, designed by Amazon-backed Anthropic, it seemed beyond question that only a handful of companies had the financial and technological resources to compete in this space.
They would then monetize their advantage by charging customers to use their proprietary, closed-source AI models.
Only Elon Musk has proved capable of playing catch-up, investing billions to build, in record time, the largest single compute cluster in the United States, in Memphis, to train his new Grok AI model.
Yet R1 suggests that the thesis may be wrong.
In no time, DeepSeek’s free-to-download app rocketed to the top of the App Store charts.
Its success follows the new Trump administration’s unveiling last week of its ambitious $500 billion Stargate AI program, which was designed to enshrine American dominance in technology for years to come.
DeepSeek’s decision to follow the lead of Meta, whose Llama model is open-source, by publishing its research, means anyone can build their own version of R1 and tailor it to their needs.
This could potentially wipe out the technological advantage of AI training that megacap stocks like Microsoft, Alphabet, and Amazon have.
Nightmare for Nvidia
For Nvidia CEO Jensen Huang, the news comes just as he is ramping up production of his vaunted Blackwell microchip, a more advanced version of his industry-leading Hopper series H100s that control 90% of the AI semiconductor market.
Nvidia’s circuitry is so cutting-edge that its products are subject to export controls imposed by the U.S. government, with only a dumbed-down version allowed shipment to China. Now, it turns out, AI models may not necessarily need tens of thousands of the latest-generation AI chips sold at full price. As a result, Nvidia shares are expected to open more than 10% down, losing the equivalent of $350 billion in value when trading opens.
deepseek is a ccp state psyop + economic warfare to make american ai unprofitable
— Neal Khosla (@nealkhosla) January 24, 2025
they are faking the cost was low to justify setting price low and hoping everyone switches to it damage AI competitiveness in the us
dont take the bait
Speaking to CNBC, Perplexity AI founder Aravind Srinivas praised DeepSeek’s team for catching up to the West by employing clever solutions.
This includes cutting compute requirements roughly in half without sacrificing accuracy by switching from the more conventional 16-bit floating-point encoding, known as FP16, to the more compact 8-bit FP8 format.
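To see why lower precision matters, consider the storage side of the trade-off: halving the bits per parameter halves the memory and bandwidth needed to hold a model's weights. The sketch below is purely illustrative back-of-the-envelope arithmetic, not DeepSeek's actual method, and the 70-billion-parameter model size is a hypothetical example.

```python
# Illustrative sketch (not DeepSeek's code): halving bits per parameter
# halves the memory and bandwidth needed to store model weights.

def weight_memory_gb(num_params: int, bits_per_param: int) -> float:
    """Memory needed to hold the weights, in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9

params = 70_000_000_000  # hypothetical 70B-parameter model

fp16_gb = weight_memory_gb(params, 16)  # conventional 16-bit floats
fp8_gb = weight_memory_gb(params, 8)    # compact 8-bit floats

print(f"FP16: {fp16_gb:.0f} GB, FP8: {fp8_gb:.0f} GB")
# FP16: 140 GB, FP8: 70 GB
```

In practice the savings extend beyond storage: moving half as many bytes through memory and across chips also speeds up training and inference, which is part of why lower-precision formats reduce overall compute cost.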
“Usually the myth is that Chinese are just good at copying,” Srinivas told CNBC.
“The point is it’s changing,” he continued. “It’s not like China is a copycat—they are also innovating.”