EU AI Act: European Parliament prioritises safety, transparency, and human oversight in AI systems


The European Union, as part of its digital strategy, is taking steps to regulate artificial intelligence (AI) to promote the development and use of this innovative technology. 

In April 2021, the European Commission proposed the first regulatory framework for AI in the EU. The aim is to turn the EU into a global hub for trustworthy AI.

The proposed regulations would classify AI systems according to the risk they pose to users, with different levels of regulation for different risk levels. 

On June 14, 2023, MEPs adopted Parliament’s negotiating position on the AI Act. The aim is to reach an agreement by the end of this year.

European Parliament’s view on AI

The European Parliament’s priority is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. 

The Parliament wants AI systems to be overseen by people rather than by machines to prevent harmful outcomes. It also wants to establish a technology-neutral, uniform definition of AI that could be applied to future AI systems.

The new rules establish obligations for providers and users depending on the level of artificial intelligence risk. 

Unacceptable risk

The European Parliament will ban AI systems that pose unacceptable risks. These include: 

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
  • Real-time and remote biometric identification systems, such as facial recognition

Parliament allows certain exceptions, however. 

For instance, “post” remote biometric identification systems, where identification is carried out after a significant delay, may be authorised to prosecute serious crimes, but only with court approval.

High-risk AI

AI systems that negatively impact safety or fundamental rights will be classified as high risk. 

These high-risk AI systems will be categorised into two groups. 

The first group includes AI systems used in products that fall under EU product safety legislation, such as toys, aviation, cars, medical devices, and lifts. 

The second group comprises eight specific areas that will require registration in an EU database. These areas include:  

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance with legal interpretation and application of the law

All high-risk AI systems will be assessed before being put on the market. 

Generative AI, like ChatGPT, would have to comply with transparency requirements, such as disclosing that content was generated by AI. 


Vigneshwar Ravichandran

Vigneshwar has been a News Reporter at Silicon Canals since 2018. A seasoned technology journalist with almost a decade of experience, he covers the European startup ecosystem, from AI and Web3 to clean energy and health tech. Previously, he was a content producer and consumer product reviewer for leading Indian digital media, including NDTV, GizBot, and FoneArena. He graduated with a Bachelor's degree in Electronics and Instrumentation in Chennai and a Diploma in Broadcasting Journalism in New Delhi.
