In the last couple of days, both OpenAI and Salesforce have announced significant investments in the U.K.—the ChatGPT maker said yesterday that it would be opening its first non-U.S. office in London, while Salesforce said today that it would be investing a whopping $4 billion in its British operations.
It’s not too hard to figure out why the U.K. is so popular with Big A.I. Under Prime Minister Rishi Sunak, the country has adopted a light-touch approach to the regulation of A.I., to help it “seize the opportunities” of the A.I. boom. (See also: Sunak’s welcoming approach to crypto, which led a16z to open its first international office in London.) Instead of proposing new laws, the U.K. is leaving oversight of A.I.’s rise to existing regulators such as the British health and safety agency and its antitrust watchdog.
Salesforce in particular is being very clear that this friendly stance spurred its mega-investment, which dwarfs the $2.5 billion it put into the U.K. over the past five years.
“A clear pro-innovation regulatory framework that compels safe and responsible use of A.I. is vital, and Salesforce is fully focused on bringing secure, trusted, enterprise-ready generative A.I. to U.K. businesses,” said Zahra Bahrololoumi, CEO of Salesforce U.K. and Ireland, in a statement this morning.
Salesforce’s investment is very good news for the otherwise beleaguered Sunak, who described it as “a ringing endorsement of our economy.” (Meanwhile, OpenAI’s statement on its U.K. expansion focused more on London, which VP of people Diane Yoon described as “a city globally renowned for its rich culture and exceptional talent pool.”)
But this week’s celebrations also carry an implicit threat for those considering serious A.I. regulation, in particular the European Union, which the U.K. left a few years ago.
The EU’s A.I. Act is currently in its final legislative stage—the behind-closed-doors “trilogue” negotiations between the European Commission, the European Parliament, and the governments of EU countries. As it stands, the bill would be catastrophic for A.I. vendors because, as recent research has shown, every single foundation model out there would fall foul of its provisions. In particular, the likes of OpenAI will struggle to disclose which copyrighted material their models were trained on, as the law would require.
Just over a month ago, OpenAI CEO Sam Altman made a heavy-handed attempt to threaten the EU over its approach, saying his company could leave the world’s second-largest consumer market if it “overregulated” A.I. European lawmakers do not take kindly to such threats, and angrily retorted that they wouldn’t be swayed. Altman quickly backtracked, trilling: “We are excited to continue to operate [in the EU] and of course have no plans to leave.”
So what’s more effective? Issuing threats you don’t intend to back up, or showing the governments of EU countries that their neighbor, which isn’t rolling out red tape, is getting a bunch of investment from the kings of the next big thing? “All this could be yours if you water down the A.I. Act in its final stage,” is the underlying message that I’m reading here.
But Big A.I. should be cautious about its future in the U.K. Sunak may be the ideal host, but he may not be in charge for much longer. The country will have to hold a general election sometime within the next 19 months, and polling points to a resounding victory for the opposition Labour Party.
“We are nowhere near where we need to be on the question of [A.I.] regulation,” said Labour Leader Keir Starmer earlier this month, promising an “overarching regulatory framework” to tackle the technology’s risks. OpenAI and Salesforce can’t say they weren’t warned.
More news below.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer
Data Sheet’s daily news section was written and curated by Andrea Guzman.