Fortune
David Meyer

Big AI's 'reverse acqui-hire' deals get more scrutiny in the U.K. and U.S.

Satya Nadella, chief executive officer of Microsoft Corp., speaks during the company event on AI technologies in Jakarta, Indonesia, on Tuesday, April 30, 2024. (Credit: Dimas Ardian—Bloomberg/Getty Images)

When Microsoft dropped its observer seat on OpenAI’s board last week—and Apple abandoned reported plans to gain such a seat—the uptick in regulatory scrutiny around the AI sector was a clear culprit. And the scope of that scrutiny just widened further.

The U.K.’s antitrust regulator, the Competition and Markets Authority, yesterday gave notice that it’s in the early stages of probing Microsoft’s March hiring of key staff from AI startup Inflection, which came with $650 million in licensing fees for Inflection.

The hired crew included Inflection cofounder Mustafa Suleyman, who now runs Microsoft's in-house AI efforts. This seemed to be a case of Microsoft trying to avoid becoming over-reliant on OpenAI, into which it has invested $13 billion for a profit share. But authorities also want to check that the nature of the deal (a so-called reverse acqui-hire, as a more traditional acqui-hire would involve buying the company outright) wasn't a tactic to sidestep antitrust rules that a straightforward acquisition would more clearly trigger.

The CMA’s initial probe formally begins today, and the watchdog will decide by Sept. 11 whether to press on with a proper merger investigation.

Meanwhile, Reuters reports that the U.S. Federal Trade Commission—which has already been nosing around the Microsoft-Inflection deal since early June—is also asking questions about Amazon’s deal with AI startup Adept late last month.

This was a very similar arrangement, with CEO David Luan and other key Adept players joining Amazon and Amazon paying Adept to license its technology. One might also see a parallel between Microsoft’s earlier OpenAI investment and Amazon’s earlier investment of $4 billion into OpenAI rival Anthropic.

Apart from the difference in dynamics stemming from the fact that OpenAI is so far a much bigger name than Anthropic, there does seem to be a playbook here, and it's no surprise that U.S. and U.K. (and EU) regulators would like to know if rules are being skirted. U.S. lawmakers are certainly upset about the trend. “A few companies control a major portion of the market, and just concentrate—rather than on innovation—trying to buy out everybody else’s talent,” complained Sen. Ron Wyden (D-Ore.) last week.

Incidentally, there are a couple more interesting news tidbits about AI regulation.

First, King Charles III announced today that the U.K.’s new Labour government will “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”—a big shift from the previous Conservative government’s hands-off approach.

Second, with the EU now having officially published its own AI Act—its rules on model-makers will begin to apply from February next year—European privacy regulators said yesterday that they would be the right ones to enforce the new law in many cases. “I strongly believe that [data protection authorities] are suitable for this role because of their full independence and deep understanding of the risks of AI for fundamental rights, based on their existing experience,” said Irene Loizidou Nicolaidou, deputy chair of the watchdogs’ umbrella body, the European Data Protection Board, in a statement.

Of course, AI has been a big theme at Fortune’s Brainstorm Tech conference in Park City, Utah, this week, as my colleague Jeremy Kahn wrote yesterday. I was particularly intrigued by Jeremy’s chat with Google chief scientist Jeff Dean, who warned against people overplaying AI’s role in his company’s rising carbon emissions.

“There’s been a lot of focus on the increasing energy usage of AI, and from a very small base that usage is definitely increasing,” Dean said. “But I think people often conflate that with overall data center usage—of which AI is a very small portion right now but growing fast—and then attribute the growth rate of AI-based computing to the overall data center usage.”

A fair point, for now. But I think the criticisms of AI’s massive hunger for energy will remain valid until companies like Google and Microsoft can prove that rolling it out doesn’t mean deviating from their emissions-reduction goals.

A few more articles based on yesterday’s Brainstorm Tech action:

How trust and safety leaders at top tech companies are approaching the security threat of AI: ‘Trust but verify’

Why Grindr’s CEO believes ‘synthetic employees’ are about to unleash a brutal talent war for tech startups

Salesforce’s AI chief says the company uses its Einstein products internally: ‘We like to drink our own martinis’

How VCs from Alphabet’s CapitalG to Norwest are coping with a dead IPO landscape: ‘We’re not here to time the market’ 

Sequoia’s Roelof Botha says Silicon Valley’s legendary VC firm will not take a political point of view on the election

Rent the Runway cofounder Jennifer Fleiss on why cofounder relationships are critical for mental wellness in the startup game

Tech talent and killer powder: The recipe that startups say is fueling the rise of Utah’s Silicon Slopes

More news below.

David Meyer

Want to send thoughts or suggestions to Data Sheet? Drop a line here.
