Fortune
Jeremy Kahn

Sam Altman's European vacation

Sam Altman. (Photo: Drew Angerer—Getty Images)

OpenAI CEO Sam Altman loves Italy, but the affection may not be mutual—at least not when it comes to OpenAI's flagship product, ChatGPT.

Italy temporarily banned ChatGPT last week on the grounds that it violates Europe’s strict data privacy law, GDPR. OpenAI immediately complied with the ban, saying it would work with Italian regulators to “educate them” on how OpenAI’s A.I. software is trained and operates.

“We of course defer to the Italian government and have ceased offering ChatGPT in Italy (though we think we are following all privacy laws),” Altman tweeted, adding that “Italy is one of my favorite countries and I look forward to visiting again soon!”

The comments drew plenty of snark from other Twitter users for their slightly tone-deaf, ugly-American vibes. Meanwhile, Italy’s deputy prime minister took the country’s data regulator to task, saying the ban seemed excessive. But Rome’s decision may be just the start of generative A.I.’s problems in Europe. As this newsletter was preparing to go to press, there were reports that Germany was also considering a ban.

Meanwhile, here in the U.K., where I’m based, the data protection regulator, the Information Commissioner’s Office, followed Italy’s ban with a warning that companies could very well run afoul of Britain’s data protection laws too if they weren’t careful in how they developed and used generative A.I. The office issued a checklist to help companies ensure they are in compliance with existing laws.

Complying with that checklist may be easier said than done. A number of European legal experts are actively debating whether any of the large foundation models at the core of today’s generative A.I. boom—all of which are trained on vast amounts of data scraped from the internet, including in some cases personal information—comply with GDPR.

Elizabeth Renieris, a senior researcher at the Institute for Ethics in AI at the University of Oxford who has written extensively about the challenges of applying existing laws to newly emerging technology such as A.I. and blockchain, wrote on Twitter that she suspected GDPR actions against companies making generative A.I. “will be impossible to enforce because data supply chains are now so complex and disjointed that it’s hard to maintain neat delineations between a ‘data subject, controller, and processor’ (@OpenAI might try to leverage this).” Under GDPR, privacy and data protection obligations differ significantly based on whether an organization is considered a controller of certain data or merely a processor of it.

Lilian Edwards, chair of technology law at the University of Newcastle, wrote in reply to Renieris, “These distinctions chafed when the cloud arrived, frayed at the edges with machine learning and have now ripped apart with large models. No-one wants to reopen GDPR fundamentals but I am not clear [the Court of Justice of the European Union] can finesse it this time.”

Edwards is right that there’s no appetite among EU lawmakers to revisit GDPR’s basic definitions. What’s more, the bloc is struggling to figure out what to do about large general-purpose models in the Artificial Intelligence Act it is currently trying to finalize, with the hope of having key European Parliament committees vote on a consensus version on April 26. (Even then, the act won’t really be finalized. The whole Parliament will get to make amendments and vote in early May, and there will be further negotiation among the Parliament, the European Commission, which is the bloc’s executive arm, and the Council of the European Union, which represents the bloc’s national governments.) Taken together, these developments could spell real problems in Europe for generative A.I. based on large foundation models.

At an extreme, many companies may have to follow OpenAI’s lead and simply discontinue offering these services to EU citizens. It is doubtful European politicians and regulators would want that outcome—and if it starts to happen, they will probably seek some sort of compromise on enforcement. But that alone may not be enough. As has been the case with GDPR and trans-Atlantic data sharing, European courts have been quite open to citizens’ groups bringing cases and obtaining judgments based on strict interpretations of the law that force national data privacy regulators to act.

At a minimum, uncertainty over the legal status of large foundation models may make companies, especially in Europe, much more hesitant to deploy them, particularly in cases where they have not trained the model from scratch themselves. The same might hold for U.S. companies with international operations—GDPR applies not just to customer data but also to employee data, after all.

With that, here’s the rest of this week’s news in A.I.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
