Fortune
Jeremy Kahn

The Paris AI Action Summit was a fork in the road—but are we on a path to prosperity or disaster?

Emmanuel Macron and Brigitte Macron stand alongside JD Vance and Usha Vance (Credit: Chesnot—Getty Images)

Bonjour! Greetings from Paris, where the French government is currently hosting government officials from dozens of nations for what it is calling the AI Action Summit. The Summit is the successor to two prior international gatherings, the first convened by the U.K. government and held at Bletchley Park, in England, in November 2023, and the second held by the South Korean government in Seoul in May 2024.

But it would be hard to overstate the difference in vibe between those previous two meetings and this one. The Bletchley Summit was a decidedly sober affair, with just 29 governments represented, along with top executives from the handful of AI labs at the cutting edge of the technology, such as OpenAI, Google DeepMind, and Anthropic. The conversation was dominated by what some would call AI “doomerism”—or how to head off the most catastrophic risks from powerful AI. It led to a commitment by the countries present to identify AI risks and work together to head them off. Then in Seoul, 16 leading AI companies agreed to publish frameworks for how they would seek to identify and mitigate AI safety risks, and under what circumstances they might decide not to develop models.

An extreme vibe shift

For this Summit, France has taken, shall we say, a different approach. Matt Clifford, a tech investor turned U.K. government advisor who helped plan the Bletchley Summit, said on a panel the Tony Blair Institute hosted here on Sunday that it “was exciting to see what [the French summit] team have done, in blowing it up.”

He positioned the remark as a compliment: France has widened the aperture of the summit to look at AI’s other potential risks—around bias, inequality, and job displacement—but most importantly to highlight AI’s economic opportunities. France transformed a summit originally centered on safety into what could best be described as an AI festival, complete with glitzy corporate side events and even a late-night dance party held amid the opulent tapestries and neo-baroque gilded moldings of the French foreign ministry at the Quai d’Orsay. That rumbling you can barely make out beneath the thumping bass line? That would be the cognitive dissonance between the party atmosphere in Paris, along with French President Emmanuel Macron’s repeated exhortations to move “faster and faster” on AI deployment, and the fact that executives at leading AI companies are predicting human-level intelligence may arrive in two to five years—with far-ranging, disruptive consequences for society and workers everywhere.

Blowing it up

For those who care about AI’s potential catastrophic risks, an alternate meaning of Clifford’s “blowing it up” comes to mind. Once the main focus of the summit, AI safety was relegated to a small subset of discussions within a broader “Trust in AI” pillar, which itself was just one of five separate summit tracks. The word “safety” was banished from the Summit’s name in favor of the term “Action”—and Anne Bouverot, Macron’s special envoy for the Summit, dismissed concerns about AI’s potential existential risks as “science fiction” in her opening address. (This despite mounting empirical evidence that today’s AI models, if used as agents that carry out actions on a user’s behalf, can indeed pose a risk of loss of control—with models pursuing human-assigned goals in ways the human user never intended.) Safety didn’t make an appearance in the Summit’s final communiqué either. Nor did the final declaration include any clear path forward for future international meetings to work specifically on AI risks. (India, which co-hosted the Paris Summit, said it would host the next summit in its own country, but made no promises about what it would focus on.)

The Paris Summit bitterly disappointed many who work on AI safety. Max Tegmark, the MIT physicist who is the founder and president of the Future of Life Institute, called the Summit “a tremendous missed opportunity” and the declaration’s omission of any safety steps “a recipe for disaster.” In an earlier interview with Fortune, Tegmark said he still held out hope that world leaders would come to recognize that uncontrollable human-level AI would pose a risk to their own power, and that once they did, they would move to regulate it. Some AI safety experts think the effort to create international agreements to address AI’s risks will have to shift to a different forum. (There are other efforts underway at the United Nations, the OECD, and the G7.) More than one AI safety expert told me at the Summit that it may now take some sort of “freak out moment”—when increasingly powerful AI agents cause some sort of harm, or perhaps just demonstrate how easily they could—to produce real progress on international AI governance. Some predicted that such a moment could come in the next year, as more and more companies roll out AI agents and AI model capabilities continue to advance.

The DEI declaration

While not mentioning “safety,” the Summit’s final declaration did include some vague language about the need to ensure AI’s “diversity,” and lots of talk about “inclusive” and “sustainable” AI. The use of these trigger terms all but guaranteed that the Trump Administration—which sent Vice President J.D. Vance as the official U.S. representative to the Summit—wouldn’t sign the meeting’s final declaration. This may not have been Macron’s intention, but it did allow him to credibly claim that France was leading “a third way” on AI between the two opposing camps that have dominated the technology’s development, the U.S. and China. (China did sign the statement.)

And largely because the U.S. wouldn’t sign, the U.K. also decided against signing—apparently to avoid any risk of antagonizing the Trump Administration—although 61 other countries did sign. (Top execs from Google, OpenAI, and Anthropic were all present, but only one company, Hugging Face, the AI model repository and open-source AI champion, signed.) Anthropic released a statement from its CEO Dario Amodei in which he hinted at disappointment that the Summit hadn’t done more to address the looming risks of human-level artificial general intelligence. “Greater focus and urgency is needed,” Amodei said, “given the pace at which the technology is progressing. The need for democracies to keep the lead, the risks of AI, and the economic transitions that are fast approaching—these should all be central features of the next summit.”

The Summit did create a new foundation with a $400 million endowment (and a target of $2.5 billion within five years), devoted to funding projects aimed at creating datasets and small AI models designed to serve the public interest. It also created a Coalition on Sustainable AI that includes Nvidia, IBM, and SAP, as well as French energy giant EDF, but without any clear targets or road map for what the organization will do going forward, leaving climate campaigners disappointed. Union leaders also decried the lack of concrete steps to make sure workers have a clear seat at the table for discussions of AI policy. And the creation of these new organizations was eclipsed by big announcements on AI investment: Macron’s own reveal of a 109 billion euro plan for AI investments in France and the European Union’s unveiling of a 200 billion euro plan to speed AI adoption in European industry. 

Vance makes Trump’s AI policy clear

Elon Musk’s close ties to U.S. President Donald Trump, and Trump’s occasional comments about AI’s potential dangers, had left some in doubt about exactly where the Trump Administration would come down on AI regulation. Vance laid those doubts to rest, giving a red-meat speech that said U.S. AI policy would be built on four pillars: the maintenance of U.S. AI technology as “the gold standard”; a belief that excessive regulation could kill innovation and that “pro-growth” AI policies are required; an insistence that AI must “remain free from ideological bias, and that American AI will not be co-opted into a tool for authoritarian censorship”; and a promise that workers will be consulted on AI policy and that the Trump Administration will “maintain a pro-worker growth path for AI,” in the belief that AI will create more jobs than it displaces. With Google CEO Sundar Pichai sitting uncomfortably on stage behind him, and OpenAI CEO Sam Altman and Anthropic’s Amodei in the audience, Vance also warned that companies calling for AI regulation were attempting to engage in regulatory capture, enshrining rules that would lock in their advantage to the detriment of competitors.

At a time when many companies have been rushing to deploy Chinese startup DeepSeek’s R1 reasoning model, Vance also used his speech to caution the countries present against partnering with Chinese companies—although he did not mention China by name. “From CCTV to 5G equipment, we’re all familiar with cheap tech in the marketplace that’s been heavily subsidized and exported by authoritarian regimes,” he said. “As some of us in this room have learned from experience, partnering with them means chaining your nation to an authoritarian master that seeks to infiltrate, dig in and seize your information infrastructure.”

Chinese researchers present at the conference, meanwhile, bemoaned the emerging new cold war between Washington and Beijing, saying that it made the whole world less safe. “It’s difficult to hold a very optimistic view about cooperation between China and the U.S. on AI safety in the future,” Xiao Qian, vice dean of the AI International Governance Institute at Tsinghua University, told the audience at a side event on AI safety in Paris, my Fortune colleague Vivienne Walt reported.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
