The European Union is considering a new legal framework that aims to significantly bolster regulations on the development and use of artificial intelligence. EU lawmakers in the leading European Parliament committees are set to vote on a political agreement on the AI Act on 26 April. The AI Act is a landmark EU proposal to regulate artificial intelligence based on its potential to cause harm.
The proposed legislation focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in sectors ranging from healthcare and education to finance and energy. The cornerstone of the AI Act is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The framework includes four risk tiers: unacceptable, high, limited and minimal.
AI systems posing limited or minimal risk – like spam filters or video games – may be used subject to few requirements beyond transparency obligations. Systems deemed to pose an unacceptable risk – like government social scoring and real-time biometric identification systems in public spaces – are prohibited, with few exceptions.
High-risk AI systems are permitted, but developers and users must adhere to regulations that require rigorous testing, proper documentation of data quality and an accountability framework that details human oversight. AI systems deemed high risk include autonomous vehicles, medical devices and critical infrastructure machinery, to name a few.
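The tiered structure described above can be sketched as a simple lookup. This is purely an illustration of the classification logic: the tier names, example systems and obligations follow this article's summary, not the legal text itself.

```python
# Illustrative sketch of the AI Act's four risk tiers and the
# obligations this article associates with each. Not legal advice;
# tier assignments mirror the examples given in the text above.
RISK_TIERS = {
    "minimal": {"permitted": True, "obligations": ["transparency"]},
    "limited": {"permitted": True, "obligations": ["transparency"]},
    "high": {
        "permitted": True,
        "obligations": [
            "rigorous testing",
            "data-quality documentation",
            "human-oversight accountability framework",
        ],
    },
    "unacceptable": {"permitted": False, "obligations": []},
}

# Example systems mentioned in the article, mapped to their tiers.
EXAMPLE_SYSTEMS = {
    "spam filter": "minimal",
    "video game": "limited",
    "medical device": "high",
    "autonomous vehicle": "high",
    "government social scoring": "unacceptable",
}

def check_system(name: str) -> str:
    """Return a one-line summary of a system's status under the sketch."""
    tier = EXAMPLE_SYSTEMS[name]
    status = "permitted" if RISK_TIERS[tier]["permitted"] else "prohibited"
    return f"{name}: {tier} risk, {status}"
```

Calling `check_system("medical device")` returns `"medical device: high risk, permitted"`, while a prohibited practice like `"government social scoring"` reports as prohibited.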
The Artificial Intelligence Act proposes steep non-compliance penalties. For companies, fines can reach up to €30 million or 6% of annual global turnover, whichever is higher. Submitting false or misleading documentation to regulators can result in fines, too. The proposed law also aims to establish a European Artificial Intelligence Board, which would oversee the implementation of the regulation and ensure uniform application across the EU. The body would be tasked with releasing opinions and recommendations on issues that arise, as well as providing guidance to national authorities.
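To see how the penalty cap scales with company size: the proposal sets the ceiling at the higher of the fixed amount and the turnover-based percentage, so the percentage dominates for large firms. The function below sketches only that arithmetic, not any actual penalty calculation.

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Upper bound on an AI Act fine under the proposal: the higher of
    EUR 30 million or 6% of annual global turnover. A sketch of the
    cap's arithmetic only, not a real penalty computation."""
    return max(30_000_000, 0.06 * annual_global_turnover_eur)
```

For a company with €200 million in turnover, 6% is €12 million, so the fixed €30 million cap applies; at €1 billion in turnover, 6% is €60 million and the percentage-based cap takes over.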
The Artificial Intelligence Act was originally proposed by the European Commission in April 2021. A so-called general approach position on the legislation was adopted by the Council of the EU in late 2022, but European lawmakers are still debating the details. Technological developments like the stellar rise of ChatGPT disrupted the AI Act's discussions, as EU lawmakers scrambled to decide how to deal with a technology that is moving at breakneck speed and is not covered in the original proposal.
Once the European Parliament adopts its own position on the legislation, EU interinstitutional negotiations – a process known as trilogues – will begin to finalize the law.