The European Union (EU) is taking a proactive stance on regulating the development and use of artificial intelligence (AI) with the Artificial Intelligence Act (AI Act), which introduces new transparency and risk-management rules for AI systems.
After months of negotiations, and two years after draft rules were first proposed, Members of the European Parliament (MEPs) reached an agreement and passed a draft of the AI Act on 27 April 2023, which would be the first comprehensive set of laws regulating AI. Subsequently, on 11 May, the responsible committees also approved the draft.
To make sense of the regulation, it is worth clarifying some basic concepts that have been circulating recently (not necessarily as defined in the legislation):
Artificial Intelligence (AI) has been around for decades in some form and includes machine learning (ML).
Generative AI (GAI) is a type of ML capable of generating new content such as text, images, video, code, etc. in response to prompts.
Large language models (LLMs) – such as GPT – are a type of GAI that generates new combinations of text. Foundation models (also called base models) are large AI models trained on vast quantities of data at scale (often by self-supervised or semi-supervised learning), resulting in models that can be adapted to a wide range of tasks.
ChatGPT is a generative AI (GAI) interface, a chatbot that has been trained to interact with humans in a conversational way.
EU AI Act
The AI Act categorizes AI applications into four risk levels: unacceptable risk, high risk, limited risk and minimal or no risk.
The rules follow a risk-based approach and establish obligations for providers and users depending on the level of risk the AI can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socioeconomic status and personal characteristics). MEPs substantially amended the list to include bans on intrusive and discriminatory uses of AI systems.
High-risk AI: AI systems with an adverse impact on people's safety or fundamental rights are considered high-risk (as listed in the Annex of the draft), and mandatory requirements (e.g. a conformity assessment) would apply to them. Limited-risk and no-risk AI applications would face limited or no additional administrative obligations, respectively.
Generative AI – safeguards
As a first step, generative foundation models – to use the wording of the draft – would be required to disclose that their content is generated by an AI system, not by humans.
In addition, as a result of the MEPs' amendments, foundation models would need to be registered in an EU database, and their providers would have to carry out risk assessment and mitigation and protect the basic values of the EU (fundamental rights, health and safety, the environment, democracy and the rule of law).
The current draft proposal is for a joint regulation of the European Parliament and of the Council. Before negotiations with the Council on the final form of the law can begin, the draft needs to be endorsed by the Parliament, with the vote expected in June 2023.