OpenAI CEO Sam Altman joined President Donald Trump and leaders of SoftBank and Oracle yesterday to tout Stargate, a $500 billion plan to build data centers in the U.S. to power the expected soaring use of AI in the coming years. Altman called Stargate, which will get an up-front investment of $100 billion from OpenAI, SoftBank, Oracle, and the Emirati AI investor MGX, the “most important project of this era.”
Whether or not you agree with him, Stargate is arguably the tech industry’s biggest gamble ever. After all, in addition to the eye-popping price tag and the astronomical energy needs (possibly rivaling the electricity demands of entire cities), the massive investment has zero guarantee of return. Given that today’s AI is a general-purpose technology in its infancy, no one knows how to make money from it at such an enormous scale. Further, while OpenAI may believe that Stargate is “critical” to developing artificial general intelligence (AGI) that will “benefit all of humanity,” the truth is there is not even an agreed-upon definition of AGI (the most common framing is AI that matches or exceeds human performance at most economically valuable tasks). And even if there were consensus, Ethan Mollick, a professor of management at the University of Pennsylvania’s Wharton School, pointed out on X that there is “still no articulated vision of what a world with AGI looks like for most people.” For those who believe AGI is coming soon, he wrote, “what does daily life look like 5-10 years later?”
Other high-stakes tech bets over the years have not been as costly, nor as wholly uncertain: The Manhattan Project, for developing an atomic bomb during World War II, changed history. However, it was the government, not private business, that backed that project, which also had the advantage of being based on well-understood science. AI innovators, on the other hand, are gambling on an outcome that no one fully understands.
Another example is the tens of billions of dollars that tech companies have spent on cloud computing infrastructure. Unlike AI, the push into cloud had a clear business case, and the money was invested over more than a decade. Meanwhile, Meta’s obsession with the metaverse, or virtual worlds, was a $50 billion flop. But hey, that strategy was just CEO Mark Zuckerberg’s expensive distraction.
And, of course, there was the dot-com boom, which had a mix of successes and failures. But it was an industry-wide bet that did not have the concentrated risk of Stargate.
Of course, the tech companies making this latest giant gamble on AI can certainly afford it. Their trillion-dollar valuations and what are practically blank checks from investors, not to mention financial incentives and subsidies from state, local, and federal government, make rolling the dice a bit easier. And their business mission, after all, is going after the latest and the greatest in tech.
Still, the stakes with Stargate are exceptionally high, as both Altman and Trump frame it not just as a technological leap, but as a national imperative. They present it as a project that will solidify U.S. leadership over China in AI, promising 100,000 new jobs and a major economic boost. Trump has even called it the dawn of a “golden age” for America, while Oracle executive chairman Larry Ellison claims it could lead to breakthroughs in treating cancer.
But not everyone is buying the hype. Critics like Gary Marcus argue that AI’s transformative potential is vastly overstated, warning that the U.S. economy will be left holding the bag after a massive overinvestment. In fact, when plans for Stargate first surfaced last April, Marcus said it was “the second worst AI investment in history”—after the billions of dollars plowed into self-driving cars over the past decade with little to show for it. Others, like pioneering AI researcher Yoshua Bengio, take an even darker view, believing that far from ushering in prosperity, AI could reshape the world so profoundly that it threatens humanity itself.
Avijit Ghosh, a policy researcher at open source AI platform Hugging Face, emphasizes a different angle—the fact that unrestricted funding like that going towards Stargate concentrates power in the hands of the wealthiest, while excluding the public and independent researchers. In addition, he said, all the attention on building infrastructure in pursuit of AGI harms people who are not “building AGI, whatever that means.” “We are pouring resources into this ‘thing’ that is nebulously defined at best, at the expense of real crises that can be solved with technology at the very present.”
With those criticisms in mind, Stargate can be seen as a moonshot, make-or-break experiment: one with significant fallout if it fails, and potentially severe consequences if it actually succeeds. While companies like OpenAI, Google, and Meta can afford to make these power moves, the risks may not be in the public’s best interest.
Or maybe the risks of Stargate are worth it, if you consider the U.S. rivalry with China. The country with the best AI has an enormous advantage when it comes to economic power and national defense. If China ends up with the most advanced AI systems, the U.S. could find itself at a serious economic and military disadvantage.
Just two days ago, a Chinese startup, DeepSeek, set off alarm bells by releasing a new open-source AI model that has Silicon Valley buzzing. The company claims its new model beats OpenAI’s most sophisticated o1 model on several math, coding, and reasoning benchmarks.
The release is a “real shot across the bow” to OpenAI and the rest of the AI industry, said Dion Hinchcliffe, an analyst with the Futurum Group, a consulting firm. China’s ability to develop a frontier-level model that competes with the best from OpenAI, he said, is “concerning.” “There’s a real international competition,” Hinchcliffe explained.
Of course, that competition could quickly turn into high-risk one-upmanship. “Stargate + related efforts could help the US stay ahead of China, but China will still have their own superintelligence(s) no more than a year later than the US, absent e.g. a war,” wrote former OpenAI policy researcher Miles Brundage on X yesterday. “So unless you want (literal) war, you need to have a vision for navigating multipolar AI outcomes.”
Within hours of taking office on Monday, President Trump dismantled the Biden Administration’s approach to AI regulation, revoking Biden’s 2023 executive order on AI. Trump’s plan is to remove as many barriers as possible to developing AI, thereby speeding up AI innovation in a business-friendly environment.
But it’s important to at least recognize the high-stakes game at play here. Stargate, combined with reduced regulation, is a gambit that could deliver huge wins for OpenAI, Big Tech, and possibly Trump. It may also be remembered as a necessary play in an era when America’s rivals are escalating the stakes. But critics warn that all of us, many of whom both marvel at ChatGPT and fear a Terminator-style future, may be woefully unprepared for what’s about to unfold.
“I do worry that a lot of focus is going into building agentic AI, or giving some level of autonomy to AI model-powered systems,” said Hugging Face’s Ghosh. “That brings forth a lot of unknown risks.”
The public is unprepared for any of those risks. Brundage pointed out on X today that “AI companies have little interest in preparing society, at the speed/scale that's needed, since they are busy trying to beat each other and navigate a complex political environment.” Journalists, academics, and civil society, he said, “need to fill the gap.”
We can look at Stargate and other massive AI projects as Big Tech’s biggest gamble, but it’s a bet that all of us are all-in on—whether we like it or not. Maybe it’s time to make sure we really understand the stakes.