European Union negotiators have clinched a deal on the world's first comprehensive artificial intelligence rules, paving the way for legal oversight of AI technology that has promised to transform everyday life and spurred warnings of existential dangers to humanity.
Negotiators from the European Parliament and the bloc's 27 member states overcame big differences on controversial points including generative AI and police use of face recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act.
“Deal!” tweeted European Commissioner Thierry Breton just before midnight. “The EU becomes the very first continent to set clear rules for the use of AI.”
The result came after marathon closed-door talks this week, with the initial session lasting 22 hours before a second round kicked off Friday morning.
Officials were under the gun to secure a political victory for the flagship legislation.
Civil society groups, however, gave it a cool reception as they wait for technical details that will need to be ironed out in the coming weeks.
They said the deal didn't go far enough in protecting people from harm caused by AI systems.
“Today’s political deal marks the beginning of important and necessary technical work on crucial details of the AI Act, which are still missing,” said Daniel Friedlaender, head of the European office of the Computer and Communications Industry Association, a tech industry lobby group.
The race to regulate AI
The EU took an early lead in the global race to draw up AI guardrails when it unveiled the first draft of its rulebook in 2021.
The recent boom in generative AI, however, sent European officials scrambling to update a proposal poised to serve as a blueprint for the world.
The European Parliament will still need to vote on the act early next year, but with the deal done that’s a formality.
Generative AI systems like OpenAI’s ChatGPT have exploded into the world's consciousness, dazzling users with their ability to produce human-like text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy, copyright protection and even human life itself.
Now, the US, UK, China and global coalitions like the G7 major democracies have jumped in with their own proposals to regulate AI, though they’re still catching up to Europe.
The AI Act was originally designed to mitigate the dangers from specific AI functions based on their level of risk, from low to unacceptable.
But lawmakers pushed to expand it to foundation models, the advanced systems that underpin general purpose AI services like ChatGPT and Google’s Bard chatbot.
Foundation models looked set to be one of the biggest sticking points for Europe.
However, negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help homegrown European generative AI companies competing with big US rivals, including OpenAI's backer Microsoft.
AI monopolies and personal freedom
Researchers have warned that powerful foundation models, built by a handful of big tech companies, could be used to supercharge online disinformation and manipulation, cyberattacks or creation of bioweapons.
Rights groups also caution that the lack of transparency about data used to train the models poses risks to daily life because they act as basic structures for software developers building AI-powered services.
The thorniest topic turned out to be AI-powered facial recognition surveillance, and negotiators found a compromise only after intensive bargaining.
European lawmakers wanted a full ban on public use of face scanning and other “remote biometric identification” systems because of privacy concerns.
But governments of member countries succeeded in negotiating exemptions so law enforcement could use them to tackle serious crimes like child sexual exploitation or terrorist attacks.