
- Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks are warning that treating the global AI arms race like the Manhattan Project could backfire. Instead of reckless acceleration, they propose a strategy of deterrence, transparency, and international cooperation—before superhuman AI spirals out of control.
Former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks are sounding the alarm about the global race to build superintelligent AI.
In a new paper titled "Superintelligence Strategy," Schmidt and his co-authors argue that the U.S. should not pursue the development of artificial general intelligence (AGI) through a government-backed, Manhattan Project-style push.
The fear is that a high-stakes race to build superintelligent AI could trigger dangerous conflicts between global superpowers, much as the nuclear arms race did.
"The Manhattan Project assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the co-authors wrote. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."
Trump's AI ambitions
The paper comes as U.S. policymakers consider a large-scale, state-funded AI project to compete with China's AI efforts.
Last year, a U.S. congressional commission proposed a “Manhattan Project-style” effort to fund the development of AI systems with superhuman intelligence, modeled after America’s atomic bomb program in the 1940s.
Since then, the Trump administration has announced a $500 billion investment in AI infrastructure, called the "Stargate Project," and rolled back AI regulations brought in by the previous administration.
Earlier this month, U.S. Secretary of Energy Chris Wright also appeared to promote the idea by saying the country was "at the start of a new Manhattan Project" and that, with President Trump’s leadership, "the United States will win the global AI race."
High-stakes global AI race
The authors argue that AI development should be handled with extreme caution rather than treated as a race to out-compete global rivals, and the paper lays out the risks of framing it as an all-or-nothing battle for dominance.
Schmidt and his co-authors argue that instead of a high-stakes race, AI should be developed through broadly distributed research with collaboration across governments, private companies, and academia. They emphasize that transparency and international cooperation are critical to ensuring that AI benefits humanity rather than becoming an uncontrollable force.
Schmidt has addressed the threats posed by a global AI race before. In a January Washington Post op-ed, he called for the U.S. to invest in open-source AI efforts to counter China's DeepSeek.
The concept of Mutual Assured AI Malfunction
The authors propose a new concept, Mutual Assured AI Malfunction (MAIM), modeled on the nuclear-era doctrine of Mutually Assured Destruction (MAD).
"Just as nations once developed nuclear strategies to secure their survival, we now need a coherent superintelligence strategy to navigate a new period of transformative change," the authors wrote.
"We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) where any state’s aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals," they said.
The paper also suggests countries engage in nonproliferation and deterrence, much like they do with nuclear weapons.
"Taken together, the three-part framework of deterrence, nonproliferation, and competitiveness outlines a robust strategy to superintelligence in the years ahead," they said.