Fortune
David Meyer

Mark Zuckerberg says Europe needs more consistent AI regulation—and even his privacy nemesis agrees

Mark Zuckerberg, chief executive officer of Meta Platforms Inc., during the Acquired LIVE event at the Chase Center in San Francisco, California, US, on Tuesday, Sept. 10, 2024. (Credit: David Paul Morris—Bloomberg/Getty Images)

Leading technology executives, including Meta's Mark Zuckerberg, Spotify's Daniel Ek, and Stripe's Patrick Collison, have written an open letter slamming Europe for its "fragmented and inconsistent" regulation of data privacy and artificial intelligence.

The bloc will “miss out” on the benefits of AI unless it regulates the sector more consistently, a wide-ranging group of CEOs and researchers wrote in the letter published Thursday.

It called for decisions from data privacy regulators across the EU to be faster, clearer, and more harmonized “to enable European data to be used in AI training for the benefit of Europeans.”

Leaders of European industrial titans like the German steel producer Thyssenkrupp and Italian tire-maker Pirelli also signed the letter.

The letter highlights the plight of companies developing open-source AI models, along with the data-processing restrictions that Meta says are preventing it from rolling out AI applications in the region. It echoes arguments that Zuckerberg and Ek made in an op-ed in the Economist last month.

The latest missive is likely to fuel an ongoing debate about Europe’s sluggish economic growth and poor productivity, which has reached fever pitch in the past few weeks following the publication of a major report on European competitiveness by Mario Draghi, the former European Central Bank president. Draghi said the AI revolution gave Europe a chance to “redress its failings in innovation and productivity and to restore its manufacturing potential.”

Draghi recommended that EU companies pool their data for open-source model training, but warned that limitations on data storage and processing stand in the way.

Stumbling blocks

Meta’s difficulty in rolling out its AI assistant in Europe makes for a useful case study here.

Like many companies these days, Meta has been exploiting its users’ publicly posted data (as opposed to private posts and messages) to train its AI models. It paused that practice in the EU and U.K. in June, following complaints privacy activists made to European data protection authorities.

Those activists say that, under the EU’s General Data Protection Regulation (GDPR), Meta doesn’t have a valid legal basis for using Europeans’ data to train AI models. (The U.K. still uses an almost identical copy of the GDPR, even after Brexit.) They also claim that Meta is flouting the GDPR’s “purpose limitation” principle—essentially, that it can’t repurpose Facebook and Instagram users’ data for AI training because that’s not what those users signed up for years ago.

Meta last week resumed its U.K. AI rollout after assuring that country’s regulator that it would make it easier for users to opt out of their data being used as training fodder, but it still hasn’t received full U.K. approval for its training; the watchdog is continuing to monitor the rollout. And in Europe, the pause seems set to continue indefinitely.

Meta isn’t the only company that’s had to stop training its models on Europeans’ personal data to comply with data-protection law. Elon Musk’s X has done the same, and Microsoft’s LinkedIn isn’t even trying to train its models on European users’ data, though both companies have decided to keep offering their AI in Europe nonetheless. Meta’s stance is that, if its models can’t be trained to understand Europe-specific idioms, knowledge, and culture, the deployment isn’t worth it.

"They are absolutely right"

Meta says the inconsistency it is facing comes in two forms. First, the company says that different European data protection authorities have issued conflicting guidance about the use of personal data for AI training. On this, even Meta’s arch privacy nemesis agrees.

“They are absolutely right that there are conflicting views from the DPAs,” said Max Schrems, the Austrian activist lawyer whose anti-Meta privacy crusade has given the company numerous nightmares over the last decade, and whose NGO issued the complaints that led to Meta’s European AI pause. “There are also guidelines by the DPAs that just structurally don’t understand how LLMs work...so the criticism is somewhat legitimate.”

But Meta also claims that it’s been picked on unfairly, as many other companies have also been using public internet data to train their models. “AI training is not unique to our services, and we’re more transparent than many of our industry counterparts,” Meta griped in June, when its rollout plans hit the buffers.

For some of the other signatories of the open letter, the issue seems to be less about risks to their specific operations than about perceived threats to Europe’s overall competitiveness.

For example, one of the signing CEOs was Ericsson’s Börje Ekholm. The Swedish networking equipment manufacturer declined to comment to Fortune on any problems Ericsson itself is experiencing as a result of the way AI is currently regulated.

Instead, Ericsson pointed to a LinkedIn post by Ekholm, warning that “the U.S., China and India are sprinting ahead to embrace AI innovation while Europe lags behind in the dust, barely at crawling pace.”
