In a move towards regulating artificial intelligence (AI) within the European Union, the Members of the European Parliament (MEPs) have endorsed a provisional agreement on the Artificial Intelligence Act at the committee level.
The agreement, which aims to ensure safety and compliance with fundamental rights, received overwhelming support in Tuesday’s joint vote by the Internal Market and Civil Liberties Committees, passing with 71 votes in favour, 8 against, and 7 abstentions.
“This regulation aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI. At the same time, it aims to boost innovation and establish Europe as a leader in the AI field. The rules put in place obligations for AI based on its potential risks and level of impact,” says the European Parliament.
The announcement comes three months after reaching a provisional agreement on the proposal for harmonised rules governing artificial intelligence (AI) — the Artificial Intelligence Act.
Key Provisions of the Artificial Intelligence (AI) Act
1. Banned Applications:
The agreement bans certain AI applications that threaten citizens’ rights. It includes:
- Biometric categorisation systems based on sensitive characteristics
- Untargeted scraping of facial images from the internet or CCTV footage for facial recognition databases
- Emotion recognition in the workplace and schools
- Social scoring
- Predictive policing based solely on profiling a person or assessing their characteristics
- AI that manipulates human behaviour or exploits people’s vulnerabilities
2. Law Enforcement Exemptions
The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations.
“Real-time” RBI can be deployed only under strict safeguards, e.g. limited in time and geographic scope, with prior judicial or administrative authorisation.
Post-remote RBI use, considered high-risk, also requires judicial authorisation and must be linked to a criminal offence.
3. Obligations for High-Risk Systems
The legislation imposes clear obligations on high-risk AI systems that could significantly impact health, safety, fundamental rights, environment, democracy, and the rule of law. These obligations extend to critical infrastructure, education, employment, essential services, law enforcement, migration and border management, justice, and democratic processes. Citizens are granted the right to launch complaints regarding AI systems affecting their rights.
4. Transparency Requirements
General-purpose AI (GPAI) systems and their underlying models must meet transparency requirements and comply with EU copyright law during training. More powerful models posing systemic risks will face additional evaluation, risk assessment, and reporting obligations. Moreover, artificial or manipulated video content (“deepfakes”) must be clearly labelled as such.
5. Measures to Support Innovation and SMEs
Regulatory sandboxes and real-world testing initiatives will be established at the national level, offering SMEs and startups opportunities to develop and train innovative AI solutions before market placement.
Even though the provisional agreement has received endorsement at the committee level, it awaits formal adoption in an upcoming plenary session of the European Parliament and final endorsement by the Council.
Once fully adopted, the Artificial Intelligence Act will become applicable 24 months after its entry into force. However, certain provisions follow different timelines:
- Bans on prohibited practices will apply 6 months after entry into force
- Codes of practice will apply 9 months after entry into force
- General-purpose AI rules, including governance, will apply 12 months after entry into force
- Obligations for high-risk systems will apply 36 months after entry into force