Last week, the European Union (EU) made history by passing the AI Act, the world's first major set of rules governing the use of artificial intelligence (AI). This legislation imposes significant obligations on businesses and aims to reduce risks, create opportunities, combat discrimination, and enhance transparency.
The AI Act classifies AI systems by risk level, from 'unacceptable' down through high, medium, and low risk. It bans certain practices outright, including biometric categorization systems based on sensitive characteristics, untargeted scraping of facial images to build facial recognition databases, emotion recognition in workplaces and schools, social scoring, predictive policing based solely on profiling, and AI that manipulates human behavior or exploits vulnerabilities.
While the ban carves out exceptions for law enforcement, critics have raised concerns that these carve-outs could open the door to biometric mass surveillance. High-risk AI systems will face additional obligations, including risk assessment, transparency, accuracy, and human oversight requirements. Some argue, however, that the legislation falls short in certain areas, such as the scope of fundamental rights impact assessments.
Under the new rules, general-purpose AI systems must meet transparency requirements, comply with EU copyright law, and publish detailed summaries of the content used for training. The AI Act will take effect in stages, and companies are urged to prepare promptly. The law is expected to have a global impact, influencing other nations' approaches to AI regulation.
By contrast, AI regulation in the US and UK is likely to be guided by industry self-regulation and best practices, with an emphasis on flexibility, innovation, and compliance with existing laws on data protection, consumer protection, product safety, and equality.