According to a press release issued by the European Parliament in December 2023, the provisional agreement between the European Parliament and the Council on the Artificial Intelligence Act marks a major step in shaping AI development. The Act, designed to safeguard fundamental rights, democracy, and environmental sustainability, emphasizes responsible AI implementation while fostering innovation in Europe. A cornerstone of the agreement is that the rules impose obligations on AI systems according to their level of potential risk.
The plan proposed by EU policymakers requires developers of AI models to maintain information on training processes, summarise the copyrighted material used, and label AI-generated content. Systems that pose a "systemic risk" must adhere to an industry code of conduct and monitor and report cyber incidents. Central to the agreement are provisions banning certain AI applications deemed threatening to individual rights and democratic principles. These include biometric categorization systems based on sensitive characteristics, facial image scraping for recognition databases, emotion recognition in workplaces and schools, social scoring, manipulation of human behaviour, and exploitation of people's vulnerabilities.
Recognizing the potential risks posed by AI in law enforcement, negotiators introduced stringent safeguards and narrow exceptions for the use of biometric identification systems in public spaces. These measures, subject to judicial authorization, limit real-time biometric identification strictly to specific crime-prevention or victim-search scenarios: targeted searches for victims, prevention of specific terrorist threats, and the localisation or identification of persons suspected of committing one of the crimes listed in the regulation. The Act classifies high-risk AI systems based on their potential harm and imposes mandatory obligations, such as fundamental rights impact assessments, across various sectors including insurance and banking. Notably, citizens retain the right to lodge complaints about decisions made by high-risk AI systems that affect their rights, ensuring transparency and accountability.
Moreover, the regulations outline requirements for general-purpose AI (GPAI) systems, mandating transparency, technical documentation, and adherence to copyright law. Stricter measures apply to high-impact GPAI models with systemic risks, requiring model evaluations, risk assessments, cybersecurity protocols, and energy-efficiency reporting. To support innovation, especially among small and medium-sized enterprises (SMEs), the agreement promotes regulatory sandboxes and real-world testing, allowing businesses to develop and train innovative AI solutions without undue pressure from industry giants. Non-compliance with the Act can lead to substantial fines, ranging from EUR 35 million or 7% of global turnover down to EUR 7.5 million or 1.5% of turnover, depending on the infringement and the size of the company.
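To make the tiered penalty structure concrete, the following is a minimal sketch of how a fine cap could be computed. It assumes the "fixed amount or percentage of global annual turnover, whichever is higher" rule described in the press release; the function name and example turnover figures are illustrative, not drawn from the legal text.

```python
def fine_cap(fixed_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Maximum possible fine for a tier: the fixed amount or the given
    percentage of global annual turnover, whichever is higher (assumed rule)."""
    return max(fixed_eur, turnover_pct * global_turnover_eur)

# Top tier (most serious breaches): EUR 35 million or 7% of turnover.
# For a hypothetical company with EUR 1 billion turnover, 7% (= EUR 70m)
# exceeds the fixed EUR 35m, so the percentage governs.
top_tier_cap = fine_cap(35_000_000, 0.07, 1_000_000_000)

# Lowest tier: EUR 7.5 million or 1.5% of turnover. For a hypothetical
# company with EUR 100 million turnover, 1.5% (= EUR 1.5m) is below the
# fixed EUR 7.5m, so the fixed amount governs.
low_tier_cap = fine_cap(7_500_000, 0.015, 100_000_000)
```

Note that the agreement also foresees proportionate caps for SMEs; this sketch covers only the headline tiers quoted above.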
The Act's formal adoption into EU law requires further approval from both Parliament and Council, with committees set to vote on the agreement soon. This marks a milestone in shaping the ethical, safe, and innovative adoption of AI in Europe, setting a global precedent for responsible AI regulation.