AI Act deal: Key safeguards and dangerous loopholes

newsletter via Feeds on Inoreader 2023-12-16

Summary:

Compared to the original draft by the European Commission, which dates back to spring 2021, EU lawmakers have introduced crucial safeguards to protect fundamental rights in the context of AI. Thanks to the intense advocacy efforts of civil society organizations, the Act now foresees a mandatory fundamental rights impact assessment and public transparency duties for deployments of high-risk AI systems – key demands that AlgorithmWatch has been fighting for over the last three years. People affected will also have the right to an explanation when their rights are affected by a decision based on a high-risk AI system, and will be able to launch complaints about such systems.

At the same time, these big wins are weakened by major loopholes, such as the fact that AI developers themselves have a say in whether their systems count as high-risk. Also, there are various exceptions for high-risk systems used in the contexts of national security, law enforcement, and migration, where authorities can often avoid the reach of the Act’s core provisions.

What we can already say now: The AI Act alone will not do the trick. It is just one puzzle piece among many that we will need in order to protect people and societies from the most fundamental implications that AI systems can have on our rights, our democracy, and on the societal distribution of power.

Angela Müller, Head of Policy & Advocacy AlgorithmWatch

Among the most contested issues in the 36 hours of final negotiations were the prohibitions of certain AI systems. While the EU Parliament had taken a clear stance on systems that are not compatible with one of the main purposes of the AI Act – the protection of fundamental rights –, Member States pursued a different agenda. The final list of bans is considerably longer than in the Commission’s original proposal, now also containing a partial ban on predictive policing systems, a ban on systems that categorize people based on sensitive data (such as their political opinion or sexual orientation), and a ban on emotion recognition systems used in workplaces and in education. All these safeguards are important for protecting people from the most misguided uses of AI.

That said, EU lawmakers still decided to introduce major loopholes that again allow for such misguided uses through the backdoor. AI systems used to ‘recognize’ the emotions of asylum seekers, or AI used to identify people’s faces in real time in public spaces in order to search for a crime suspect, are legalized through the loopholes and exceptions that the list of bans apparently foresees. Thus, the level of protection these bans actually provide can only be assessed once we see the final text.

The lawmakers’ deal also includes provisions regulating so-called general purpose AI systems (GPAI) and the models they are based on. Through a two-tiered approach, these obligations mostly target high-impact systems. Providers will have to assess and mitigate the systemic risks that come with them, evaluate and test their models, and report serious incidents as well as their energy efficiency.

We welcome the fact that large general purpose AI systems are not merely left to self-regulation, but that developers will have to mitigate systemic risks and make transparent how much energy they consume – even though we strongly advocated for much further-reaching obligations, including on protecting the rights of click workers, strengthening the rights of individuals affected, and ensuring accountability across the value chain. That said, it remains to be seen how these provisions will be implemented in practice, and to what extent Big Tech will be able to work its way around them. Also, given the attention this topic has gotten over the last week, and judging by the dynamics of the last negotiation phase, it may be the case that these provisions are now portrayed as a ‘compromise’, whereas they are clearly rather minimal.

Matthias Spielkamp, Executive Director of AlgorithmWatch

While a deal on the AI Act has now been announced, the more technical drafts that will be written over the next weeks will be decisive. Not only will key provisions be clarified, but a lot of important details may still have to be agreed upon in the coming weeks. The deal that has been announced is mostly a deal at the political level, taken under high pressure, which suggests that some issues will still have to be resolved at what the EU calls the «technical level», meaning in consultations among experts of the Commission.

Link:

https://algorithmwatch.org/en/ai-act-deal-key-safeguards-and-dangerous-loopholes/

From feeds:

Everything Online Malign Influence Newsletter » Newsletter

Authors:

Angela Müller and Matthias Spielkamp

Date published:

12/16/2023, 15:05