Navigating the nexus of Policy, Digital Technologies, and Futures (S1/E11)
S1/E11: The European Union's Artificial Intelligence Act: Let the fight, I mean, the negotiations begin
Welcome to Part 3 of this blog series' coverage of the European Union’s Artificial Intelligence (AI) Act!
In this episode I’ll give you hints about the official positions taken by the Council of the European Union (EU), or the EU Council, on one side, and by the European Parliament (EP), on the other side, with respect to the European Commission’s (EC) proposal for the AI Act.
In the middle, the European Commission, the initiator and the executor of EU regulation
In case you’ve missed it, in EU governance the right to propose legislation is generally reserved to the EC. Upon an EC proposal, like the AI Act, the Council and the Parliament separately agree on their respective positions, which are then negotiated between them in what are called trilogues, supported by the Commission.
Concerning the AI Act, in the past episode we explored the EC proposal in some detail. However, as hinted before, several important factors were missing from it. Notably, generative AI was totally absent, because it only became a prominent topic after the proposal was published, and notions of national security were only mildly dealt with.
The Commission proposal enshrines in EU law a technology-neutral definition of AI systems and suggests different sets of rules based on a risk-based approach with four levels of risk. The co-legislators reacted as follows.
In one corner, the EU Council, fighting for the EU governments
The Council adopted its position, representing the consensus of the 27 Member States, in December 2022. The EU Member States agreed on the following points with respect to the tabled proposal. They want to (emphases are mine):
- narrow down the AI definition to systems developed through machine learning approaches and logic- and knowledge-based approaches;
- extend to private actors the prohibition on using AI for social scoring;
- add a horizontal layer on top of the high-risk classification to ensure that AI systems that are not likely to cause serious fundamental rights violations or other significant risks are not captured;
- add new provisions to account for situations where AI systems can be used for many different purposes (general-purpose AI);
- clarify the scope of the AI Act (e.g. the explicit exclusion of national security, defence, and military purposes from its scope) and the provisions relating to law enforcement authorities;
- simplify the compliance framework;
- add new provisions to increase transparency and allow users' complaints;
- substantially modify the EC’s proposal’s provisions concerning measures in support of innovation (e.g. AI regulatory sandboxes).
In the other corner, the European Parliament, fighting for the European citizens
In Parliament, the discussions were led by the Committee on Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE), under a joint committee procedure. The EP position was adopted by the plenary in June 2023 and also substantially amends the Commission’s proposal, as follows (again, emphases are mine).
- MEPs amended the definition of AI systems to align it with the definition agreed by the Organisation for Economic Co-operation and Development (OECD).
- MEPs substantially amended the list of AI systems prohibited in the EU. Parliament wants to ban the use of biometric identification systems in the EU for both real-time and ex-post use (except in cases of severe crime and pre-judicial authorisation for ex-post use) and not only for real time use as proposed by the Commission. Furthermore, Parliament wants to ban all biometric categorisation systems using sensitive characteristics, predictive policing systems, emotion recognition systems and AI systems using indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.
- While the Commission proposed to automatically categorise as high-risk all systems falling in certain areas or use cases, the EP wants to add the additional requirement that a system must pose a 'significant risk' to qualify as high-risk. Furthermore, the EP would require those deploying a high-risk system in the EU to carry out a fundamental rights impact assessment, including a consultation with the competent authority and relevant stakeholders.
- The EP position takes a layered approach to regulating general-purpose AI systems. It wants to impose an obligation on providers of foundation models to ensure robust protection of fundamental rights, health, safety, the environment, democracy, and the rule of law. Furthermore, generative foundation models that use large language models (such as ChatGPT) to generate art, music, and other content would be subject to stringent transparency obligations. Finally, all foundation models should provide the information necessary for downstream providers to comply with their obligations under the legislation.
- National authorities’ competences have been strengthened. In addition, the EP proposes to establish a new EU body, called the AI Office, to support the harmonised application of the AI Act, provide guidance, and coordinate joint cross-border investigations.
- In order to support innovation, research activities and the development of free and open-source AI components would be largely exempted from compliance with the AI Act rules.
The trilogues are expected to start under the Spanish rotating presidency of the EU Council, which runs from July to December 2023.
And the winner is!
Judging by the above, there’ll be trouble for the Spaniards. In May 2023 I saw one of their representatives state in public that the national security exemption from the scope is a “red line” that the Council will defend at all costs. The EP, in opposition, sees this law as one about protecting EU citizens’ fundamental rights, and these include a fortiori protection from mass surveillance carried out by, well, governments. The EP position is one that guarantees some kind of logic in the recent conversations about “us and them”, or “like-minded countries”. After all, the EU sells itself as the region where fundamental rights are enshrined in law and are always protected. If an act of this importance is approved without provisions protecting EU citizens from their own governments, then it’ll be difficult to trace lines between “us” and “them”, unless they’re drawn in the sand.
One last point. While conducting research for this episode, I came across a new EC proposal in the making, this time for a European Media Freedom Act, which would ensure media diversity and independence in the EU. I’ve also read that the French are lobbying for the introduction into that proposal of a total exemption on grounds of, guess what, again, national security. And this only weeks after the French Senate adopted a very controversial law that allows the French government to remotely activate the microphones and cameras of electronic devices to conduct surveillance of people, including, it’s decried, journalists. As I wrote before, the contemporary reality of our digital lives presents many points of resemblance with George Orwell’s dystopian visions. It’s probably only a matter of time before one or several elected EU governments take the opportunity to join the existing dots.
This concludes our exploration of the AI Act per se, which we started in Episode 9. I hope to have time to share with you two smaller complementary pieces of legislation, namely the review of the Product Liability Directive (of 1985) and the new AI Liability Directive, so that you get the complete picture around AI regulation. Stay tuned!
[This blog series is inspired by research work that is or was partially supported by the European research projects CyberSec4Europe (H2020 GA 830929), LeADS (H2020 GA 956562), and DUCA (Horizon Europe GA 101086308), and the CNRS International Research Network EU-CHECK.]
CNRS - France
Digital Skippers Europe (DS-Europe)