After three days of ‘marathon’ talks, negotiators from the Council presidency and the European Parliament reached a provisional agreement on Saturday, December 9, on the proposal for harmonised rules governing artificial intelligence (AI) — the Artificial Intelligence Act.
The draft regulation aims to ensure that AI systems sold and used in the EU are safe and respect fundamental rights and EU values.
The Council says the proposal also aims to stimulate investment and innovation in AI in Europe.
Promoting the development and adoption of safe AI
The AI Act is a legislative initiative that aims to promote the development and adoption of safe AI among public and private entities in the EU’s single market.
The ‘risk-based’ approach proposes regulating AI based on its potential to cause harm to society, with stricter rules for higher-risk scenarios.
As the first legislative proposal of its kind in the world, it can set a global standard for AI regulation in other jurisdictions, just as the GDPR has done, thus promoting the European approach to tech regulation on the world stage.
“This is a historical achievement and a huge milestone towards the future! Today’s agreement effectively addresses a global challenge in a fast-evolving technological environment on a key area for the future of our societies and economies. And in this endeavour, we managed to keep an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens,” says Carme Artigas, Spanish secretary of state for digitalisation and artificial intelligence.
The main elements of the provisional agreement
Compared to the initial Commission proposal, the main new elements of the provisional agreement can be summarised as follows:
- rules on high-impact general-purpose AI models that can cause systemic risk in the future, as well as on high-risk AI systems
- a revised system of governance with some enforcement powers at the EU level
- extension of the list of prohibitions but with the possibility to use remote biometric identification by law enforcement authorities in public spaces, subject to safeguards
- better protection of rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment before putting an AI system into use.
Distinguishing AI from simpler software
To ensure clear criteria for distinguishing AI from simpler software systems, the agreement aligns the definition of an AI system with the approach proposed by the OECD.
It also emphasises that the regulation does not apply to areas outside the scope of EU law or member states’ competencies in national security or defence.
Additionally, the AI Act does not apply to AI systems used exclusively for military, defence, or non-professional purposes.
Classification of high-risk AI systems and prohibited practices
The provisional agreement introduces a classification system for AI systems based on their level of risk.
High-risk AI systems, as well as high-impact general-purpose AI models that can cause systemic risk in the future, will be subject to stricter requirements and obligations.
On the other hand, AI systems presenting limited risk will have lighter transparency obligations.
The agreement also prohibits certain AI practices, such as cognitive behavioural manipulation, untargeted scraping of facial images, emotion recognition in the workplace and educational institutions, social scoring, biometric categorisation, and some cases of predictive policing for individuals.
Exceptions for law enforcement authorities
Considering the particularities of law enforcement authorities and the need to preserve their ability to use AI in their vital work, several changes to the Commission proposal relating to the use of AI systems for law enforcement purposes were agreed upon.
While ensuring appropriate safeguards, these provisions allow for the deployment of high-risk AI tools by law enforcement agencies in urgent situations.
However, mechanisms are introduced to protect fundamental rights against potential misuse of AI systems.
The use of real-time remote biometric identification systems in publicly accessible spaces is also subject to strict safeguards and limited to specific law enforcement purposes.
General-purpose AI systems & foundation models
According to the European Council, new provisions have been added to address situations where AI systems can be used for many different purposes (general-purpose AI), and where general-purpose AI technology is subsequently integrated into another high-risk system.
The provisional agreement also addresses the specific cases of general-purpose AI (GPAI) systems.
Foundation models, capable of performing a wide range of tasks (such as generating video, text, images, conversing in natural language, computing, or generating computer code), must comply with transparency obligations before entering the market.
Stricter rules apply to high-impact foundation models whose advanced complexity and capabilities can disseminate systemic risks.
AI office, scientific panel, and more
The need for enforcement at the EU level becomes apparent with the introduction of new rules on general-purpose AI models.
An AI Office will be established within the Commission to oversee these advanced AI models. This office will contribute to developing standards, testing practices, and enforcing common rules.
A scientific panel of independent experts will advise the AI Office on various aspects of AI models. The AI Board, comprising member states’ representatives, will serve as a coordination platform and advisory body.
Additionally, an advisory forum will be established to provide technical expertise from industry representatives, SMEs, civil society, and academia.
Penalties and compliance
The agreement sets out fines for violations of the AI Act based on a percentage of the offending company’s global annual turnover or a predetermined amount.
It would be €35 million or 7 per cent for violations of the banned AI applications, €15 million or 3 per cent for violations of the AI Act’s obligations, and €7.5 million or 1.5 per cent for the supply of incorrect information.
However, the provisional agreement provides for more proportionate caps on administrative fines for SMEs and start-ups in case of infringements of the provisions of the AI Act.
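As an illustration only (not part of the agreement’s text), the tiered penalty figures above can be sketched in code. The sketch assumes the higher of the fixed amount and the turnover percentage applies, which is a simplifying assumption rather than the final legal rule:

```python
def ai_act_fine(tier: str, global_turnover_eur: float) -> float:
    """Illustrative sketch of the AI Act's tiered fines.

    Figures come from the provisional agreement; the max() rule is an
    assumption for illustration, not the final legal text.
    """
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),    # banned AI applications
        "other_obligations": (15_000_000, 0.03),       # other AI Act obligations
        "incorrect_information": (7_500_000, 0.015),   # supplying incorrect info
    }
    fixed_amount, turnover_pct = tiers[tier]
    return max(fixed_amount, turnover_pct * global_turnover_eur)

# Example: a company with €2 billion in global annual turnover
print(ai_act_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```

For a large company, the turnover-based figure quickly dominates the fixed amount, which is why the proportionate caps for SMEs and start-ups mentioned above matter.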
Transparency and fundamental rights
The provisional agreement provides for a fundamental rights impact assessment before a high-risk AI system is put on the market by its deployers.
The provisional agreement also provides for increased transparency regarding high-risk AI systems.
Notably, some provisions of the Commission proposal have been amended to indicate that certain users of high-risk AI systems that are public entities will also be obliged to register in the EU database for high-risk AI systems.
Moreover, newly added provisions emphasise the obligation for users of an emotion recognition system to inform natural persons when they are exposed to such a system.
Establishing regulatory sandboxes
To promote an innovation-friendly environment, the agreement includes measures to support innovation in AI.
Regulatory sandboxes will be established to create a controlled environment for developing and testing innovative AI systems.
Testing of AI systems in real-world conditions will also be allowed under specific conditions and safeguards.
The agreement recognises the administrative burden on smaller companies and provides a list of supportive actions and limited derogations to alleviate it.
Entry into force
The provisional agreement provides that the AI Act should apply two years after it enters into force, with some exceptions for specific provisions.
Following the provisional agreement, further technical work will be conducted to finalise the details of the regulation.
The compromise text will be submitted to member states’ representatives for endorsement.
The agreement will then undergo legal-linguistic revision before formal adoption by the co-legislators.
IBM on EU’s AI act
Regarding the landmark EU AI Act, tech giant IBM says, “IBM applauds EU negotiators for reaching a provisional agreement on the world’s first comprehensive AI legislation. We have long urged the EU to take a carefully balanced approach, focused on regulating high-risk applications of AI while promoting transparency, explainability, and safety among all AI models. As lawmakers work through the remaining technical details, we encourage EU policymakers to retain this focus on risk and accountability, rather than algorithms.”
“We share the goals of enabling AI’s safe and trustworthy development and creating an open, pro-innovation AI ecosystem, and recognise that both government and industry have roles to play,” says the company.
Recently, the company announced watsonx.governance to provide organisations with the toolkit they need to manage risk, embrace transparency, and anticipate compliance with AI-focused regulation like this.