We have a deal! On Saturday, negotiators from the European Union’s big institutions said they had finally reached a political agreement on the new AI Act, which will be the world’s first—and potentially most influential—comprehensive set of rules for the AI sector.
Hard details still need to emerge, and the final text still needs to be formally approved by the European Parliament and the bloc’s national governments, but we now know roughly what rules to expect when the act goes into effect in a couple of years’ time. (A voluntary “AI Pact” will bridge the gap between now and then.)
Foundation models (like GPT-4) and the general-purpose systems built on top of them (like ChatGPT) will have to come with technical documentation and detailed summaries of the content on which they are trained. Those deemed to present systemic risks will have to submit to “adversarial testing” and adhere to rules around cybersecurity and energy efficiency. The deal calls for state-sponsored “regulatory sandboxes,” so startups and smaller enterprises can develop and train new AI systems, and test them in real-world conditions, before releasing them—Parliament claimed this will protect smaller players from “undue pressure from industry giants controlling the value chain.” Crucially, fully open-sourced models will be more lightly regulated than proprietary models.
High-risk AI systems in fields like insurance and banking, as well as those that could affect voter behavior, will have to undergo impact assessments, and Europeans will be able to demand explanations of how decisions affecting them were made. Some things will be outright banned, like social scoring, emotion-recognition systems in the workplace, AI systems that exploit the vulnerable, and biometric categorization systems that use characteristics such as race and sexual orientation.
Obviously, some people are happy with the outcome, not least the negotiators, who managed to get this thing over the finish line just in time—had it been pushed through to next year, it could have fallen victim to the chaos before looming European elections. “It was long and intense, but the effort was worth it,” exhaled Brando Benifei, one of Parliament’s coordinators in the negotiations, in a statement. “The European Union has made impressive contributions to the world; the AI Act is another one that will significantly impact our digital future,” agreed Dragos Tudorache, another key lawmaker.
But tech lobbyists and rights activists alike are pretty dissatisfied. Let’s give the first word here to Daniel Castro, the vice president of the Microsoft/Amazon/Meta/Google/Apple-funded Information Technology & Innovation Foundation (ITIF): “Given how rapidly AI is developing, EU lawmakers should have hit pause on any legislation until they better understand what exactly it is they are regulating…Acting quickly may give the illusion of progress, but it does not guarantee success.”
The Computer & Communications Industry Association (CCIA), whose members also include Google, Meta, Amazon, and Apple, also claimed that “future-proof AI legislation was sacrificed for a quick deal.” The CCIA welcomed the fact that, unlike the original draft from the European Commission, the final compromise allows developers to “demonstrate that a system does not pose a high risk,” but it claimed the Act’s “stringent obligations” would “slow down innovation in Europe” and lead to an exodus of European AI companies and talent seeking growth elsewhere. (Counterpoint: Jeannette Gorzala, vice president of the European AI Forum, which represents local AI entrepreneurs, told Sifted that the provisions for smaller players will “fortify and empower the European AI startup ecosystem.”)
Over to the rights activists now. The European Consumer Organisation (BEUC) fretted that the Act will allow for AI-powered emotion recognition systems outside the workplace, and claimed it would leave things like virtual assistants “basically unregulated.” “Consumers rightly worry about the power and reach of artificial intelligence and how it can lead to manipulation and discrimination, but the AI Act doesn’t sufficiently address these concerns,” said deputy director general Ursula Pachl.
And European Digital Rights (EDRi) predicted that “the devil will be in the detail” of the technical drafts that will appear in the coming weeks. EDRi is concerned that the agreement marks the first time the EU is legalizing live facial recognition in public spaces—albeit limited to a handful of applications such as the search for suspects and the prevention of terror attacks—and contains “exemptions to the rules for when law enforcement, migration, and national security authorities deploy ‘high-risk’ AI.”
“Our fight against biometric mass surveillance is set to continue,” said EDRi senior policy advisor Ella Jakubowska in a statement. More news below.
David Meyer
Want to send thoughts or suggestions to Data Sheet? Drop a line here.