The European Union, as part of its digital strategy, is moving to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology.
In April 2021, the European Commission proposed the first regulatory framework for AI in the EU. The aim is to turn the EU into a global hub for trustworthy AI.
The proposed regulations would classify AI systems according to the risk they pose to users, with different levels of regulation for different risk levels.
On June 14, 2023, MEPs adopted Parliament’s negotiating position on the AI Act. The aim is to reach an agreement by the end of this year.
European Parliament’s view on AI
The European Parliament’s priority is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.
The Parliament wants AI systems to be overseen by people rather than by automation, to prevent harmful outcomes. It also wants to establish a technology-neutral, uniform definition of AI that could be applied to future AI systems.
The new rules establish obligations for providers and users depending on the level of risk the AI system poses.
Unacceptable risk
The European Parliament will ban AI systems that pose unacceptable risks. These include:
- Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
- Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
- Real-time and remote biometric identification systems, such as facial recognition
Parliament allows certain exceptions, however. Remote biometric identification systems that follow a “post” approach, where identification is carried out only after a significant delay, may be authorised to prosecute serious crimes, and only with court approval.
High-risk AI
AI systems that negatively impact safety or fundamental rights will be classified as high risk.
These high-risk AI systems will be categorised into two groups.
The first group includes AI systems used in products that fall under EU product safety legislation, such as toys, aviation, cars, medical devices, and lifts.
The second group comprises eight specific areas that will require registration in an EU database. These areas include:
- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Assistance with legal interpretation and application of the law
In both groups, high-risk AI systems will be assessed before being put on the market.
Generative AI, like ChatGPT, would have to comply with transparency requirements.