The EU AI Act: A Primer
Summary:
The Proposal for a Regulation laying down harmonised rules on artificial intelligence, better known as the EU AI Act, will be finalized by the end of the year. Pending final EU procedures (the trilogue), the act will likely be adopted in early 2024, before the June 2024 European Parliament elections. Its enactment will be followed by a transition period of at least 18 months before the regulation becomes fully enforceable. This blog post offers a high-level introduction to the AI Act for those interested in AI regulation, and a refresher for readers who have lost track of the deliberations over the last two years. It provides an overview of the core concepts and approaches of the AI Act, focusing on the commonalities and small differences between the proposals of the Council of the EU, the European Parliament, and the European Commission that will be hammered out during the trilogue process over the coming months. Substantive disagreements remain, however, especially on technical and detailed provisions such as definitions and enforcement. Dissecting those warrants a blog post of its own and is not the focus of this article.
What is the EU AI Act?
The AI Act is a legal framework governing the sale and use of artificial intelligence in the EU. Its official purpose is to ensure the proper functioning of the EU single market by setting consistent standards for AI systems across EU member states. In practice, it is the first comprehensive regulation to address the risks of artificial intelligence through a set of obligations and requirements intended to safeguard the health, safety, and fundamental rights of EU citizens and beyond, and it is expected to have an outsized impact on AI governance worldwide. The AI Act is part of a wider emerging EU digital rulebook, alongside laws that regulate other aspects of the digital economy such as the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act. As such, the AI Act does not itself address data protection, online platforms, or content moderation. While the interplay between the AI Act and existing EU legislation poses its own challenges, building on existing laws enables the EU to avoid a “one law fixes all” approach to this emerging technology.
The AI Act covers AI systems that are “placed on the market, put into service or used in the EU.” This means that in addition to developers and deployers in the EU, it also applies to global vendors selling or otherwise making their system or its output available to users in the EU.
There are three exceptions:
- AI systems exclusively developed or used for military purposes, and possibly defense and national security purposes more broadly, pending negotiations;
- AI developed and used for scientific research; and,
- Free and open source AI systems and components (a term not yet clearly defined), with the exception of foundation models, which are discussed below.
The Risk-Based Approach
At the heart of the proposal stands its risk categorization system, whereby AI systems are regulated based on the level of risk they pose to the health, safety, and fundamental rights of a person. There are four categories of risk: unacceptable, high, limited, and minimal/none. The greatest oversight and regulation envisioned by the AI Act falls on the unacceptable and high-risk categories, so those are the focus of much of the discussion below. Exactly where different types of AI systems fall remains to be determined and is expected to be fiercely debated during the trilogue. In practice, an AI system might also fall into several categories.
Unacceptable Risk Systems Will Be Prohibited
AI systems belonging to the unacceptable risk category are prohibited outright. Based on consensus between the three proposals, unacceptable risk systems include those that have a significant potential for manipulation, either through subconscious messaging and stimuli or by exploiting vulnerabilities like socioeconomic status, disability, or age. AI systems for social scoring, a term that describes the evaluation and treatment of people based on their social behavior, are also banned. The European Parliament further intends to prohibit real-time remote biometric identification in public spaces, like live facial recognition.