New approaches to training very large context-aware systems have enabled generative AI (GAI), especially large language models (LLMs), which can produce content that is, in many cases, indistinguishable from the products of the human mind. ChatGPT, an LLM-based application, has proven to be one of the fastest-growing consumer applications, and such popularity makes the question of how to govern and regulate AI even more pressing.
The European Commission’s initial proposal and the Council’s and European Parliament’s proposed amendments differ substantially in their treatment of GAI. The Commission’s draft does not explicitly address this category of AI. Its main regulatory approach is risk-based, distinguishing between AI systems that pose (i) unacceptable risks, (ii) high risks, (iii) limited risks, and (iv) low risks.
The Council has addressed the rise of GAI, such as large language models, by introducing a new category of AI systems in its position: general-purpose AI systems. The problem identified by the Council relates to the general-purpose nature of many AI systems and the concern over how to deal with systems that are open to a variety of uses. How can a technology that can simultaneously enable care robots and lethal autonomous weapons systems be regulated? The Council answers this and similar questions by providing that the rules on high-risk systems apply to general-purpose AI that may be used in such contexts, unless such uses are explicitly excluded (Article 4b, Council position). It also gives the Commission the authority to “specify and adapt” those requirements, which extends the Commission’s competence to alter the requirements for high-risk systems.
The Parliament attempts to address the issue of GAI by focusing on foundation models, effectively introducing a separate risk category. Neither the Council nor the Parliament position radically changes the general framework established by the Commission; both sets of amendments, however, reduce the burden on providers of GAI. By creating sensible and forward-looking regulations, EU lawmakers can redefine the narrative around AI: they can provide a blueprint for harnessing its potential while protecting humanity from its risks, fostering a future in which AI technology is used responsibly and, in some cases, in support of the objectives laid out in the EU Treaties and the Charter.