Europe on Friday reached a provisional deal on landmark European Union rules governing the use of artificial intelligence, including governments' use of AI in biometric surveillance and how to regulate AI systems such as ChatGPT. With the political agreement, the EU moves toward becoming the first major world power to enact comprehensive laws governing artificial intelligence.
The accord came after nearly 15 hours of negotiations, which themselves followed an almost 24-hour debate. EU countries and European Parliament members are expected to hammer out the details in the coming days, which could still shape the final form of the legislation.
Europe has positioned itself as a trailblazer, recognizing the importance of its role as a global standard setter. “This is, yes, I believe, a historical day,” European Commissioner Thierry Breton told a press conference.
The agreement requires foundation models such as ChatGPT and general-purpose AI systems (GPAI) to comply with transparency obligations before they are put on the market. These include drawing up technical documentation, complying with EU copyright law, and publishing detailed summaries of the data used for training.
Foundation models posing significant systemic risk must undergo model evaluations, assess and mitigate those risks, conduct adversarial testing, report serious incidents to the European Commission, ensure cybersecurity, and report on their energy efficiency.
GPAIs carrying systemic risk may rely on codes of practice to comply with the new regulation.
Governments may use real-time biometric surveillance in public spaces only in limited cases: searching for victims of certain crimes, preventing tangible threats such as terrorist attacks, and locating people suspected of the most serious crimes.
The agreement prohibits cognitive behavioral manipulation, the untargeted scraping of facial images from the internet or CCTV footage, social scoring, and biometric categorization systems that infer political, religious, or philosophical beliefs, sexual orientation, or race.
Consumers would have the right to file complaints and receive meaningful explanations. Fines for violations would range from 7.5 million euros ($8.1 million) or 1.5% of turnover to 35 million euros or 7% of global turnover.
Business group DigitalEurope criticized the rules, viewing them as an additional burden for companies. Director General Cecilia Bonefeld-Dahl stated, “We have a deal, but at what cost?” and expressed concern about the last-minute regulation of foundation models.
Privacy rights group European Digital Rights also criticized the law, particularly its acceptance of live public facial recognition across the EU. Senior Policy Advisor Ella Jakubowska noted, “Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm.”
The legislation is expected to come into force early next year after formal ratification by both sides and should be applicable two years thereafter.
Governments around the world are grappling with how to balance the advantages of AI, which is capable of human-like conversation and of writing computer code, against the need for safeguards. Europe's ambitious AI regulations may serve as a model for other governments, offering an alternative to the United States' light-touch approach and China's interim rules.