
The European Union has banned AI systems with "unacceptable risk"

In 2024, the European Union adopted the AI Act, the first comprehensive law regulating artificial intelligence. It aims to limit the use of AI in certain areas and to define where actors must pay particular attention to safety and fundamental rights when using this technology. The AI Act's proponents argue that while AI improves competitiveness, efficiency, and user experience, it also brings additional risks that need to be addressed systematically.

The AI Act classifies AI systems into four categories based on risk: minimal risk, specific transparency risk, high risk, and unacceptable risk. The last category includes, for example, systems that exploit the vulnerabilities of a social group, or that are used for social scoring, individual predictive policing, or emotion recognition in the workplace or in educational institutions. From 2 February 2025, artificial intelligence that falls into the "unacceptable risk" category can no longer be used in the European Union. This is the first compliance date of the AI Act.

The AI Act sets very high fines, up to a maximum of €35 million or 7% of total worldwide annual turnover, whichever is higher. Market players do not expect fines straight away, as the EU itself is running late: the more detailed guidance it promised has not yet been issued. Feedback on the AI Act has not been uniform. Some market participants welcomed its introduction and pledged to bring their systems into line with the new rules before they came into force; others are waiting for the rules to be clarified, or are simply not introducing their AI developments onto the EU market at all. Apple and Meta, among others, have taken the latter approach.

The next deadline is 2 August 2025, when the governance rules and obligations for general-purpose AI will come into force.