The European Commission recently launched a stakeholder consultation on the implementation of the AI Act’s rules on the classification of artificial intelligence (AI) systems as high risk. The purpose of the consultation is to collect contributions on:
The AI Act entered into force on 1 August 2024 and establishes harmonised rules for the use of coherent, trustworthy and human-centric artificial intelligence systems.
The classification of AI systems follows a risk-based approach [2], especially with regard to the high-risk systems provided for in Chapter III of the AI Act.
The figure below illustrates the various levels of risk classification, including AI systems whose use is not permitted because they are considered prohibited AI practices, pursuant to Article 5 of the AI Act [3].
[Figure: the levels of risk classification under the AI Act. Data source: European Commission]
The AI Act states that high-risk systems can only be used if they do not pose an unacceptable risk to people’s health, safety or fundamental rights. These systems are divided into: (i) AI systems that are safety components of products, or are themselves products, covered by the EU harmonisation legislation listed in Annex I [4]; and (ii) AI systems used in one of the areas listed in Annex III [5]. An Annex III system is nevertheless not considered high risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons [6].
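Although the AI Act is a legal text, the classification logic just described has a clear decision-tree structure. The sketch below is a purely illustrative simplification of that structure; the names and boolean flags are our own shorthand, and each one stands in for a detailed, fact-dependent legal assessment under Articles 5 and 6 and Annexes I and III.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk - prohibited practice (Article 5)"
    HIGH = "high risk (Article 6)"
    OTHER = "not high risk - lighter or no specific obligations"

def classify(prohibited_practice: bool,
             annex_i_safety_component: bool,
             annex_iii_use_case: bool,
             art_6_3_derogation_applies: bool) -> RiskTier:
    """Illustrative decision tree for the AI Act's risk-based approach.

    Each flag is a simplification: none of these legal tests reduces
    to a simple boolean in practice.
    """
    if prohibited_practice:           # Article 5: use is banned outright
        return RiskTier.PROHIBITED
    if annex_i_safety_component:      # Article 6(1): Annex I products/components
        return RiskTier.HIGH
    if annex_iii_use_case and not art_6_3_derogation_applies:
        return RiskTier.HIGH          # Article 6(2), subject to Article 6(3)
    return RiskTier.OTHER

# An Annex III use case covered by the Article 6(3) derogation
# (e.g. a narrow procedural task) is not classified as high risk.
print(classify(False, False, True, True))  # RiskTier.OTHER
```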
According to the AI Act, the European Commission should publish guidelines on the practical application of the Act by February 2026, in particular those relating to the classification of AI systems as high risk [7].
In addition, the Commission should draw up guidelines on the legal requirements and responsibilities of operators throughout the value chain [8]. The Guidelines aim to:
The consultation promoted by the European Commission closes on 18 July 2025, with the Guidelines to be published by 2 February 2026. The rules on the classification of systems as high risk, as set out in Article 6(1), will come into effect on 2 August 2027.
[2] See Recital 26 of the AI Act.
[3] According to Recital 28 of the AI Act, the use of AI systems for manipulative, exploitative or social control practices is prohibited under EU law, as these practices conflict with EU values. There are, however, exceptions, such as the use of biometric identification systems in law enforcement situations, which is subject to exhaustive and restrictive regulation.
[4] Article 6(1) of the AI Act.
[5] Article 6(2) of the AI Act.
[6] Article 6(3) of the AI Act. This article applies to systems that perform narrow procedural tasks; improve the outcome of previous human activities; detect decision-making patterns without significantly replacing or influencing human assessment; and perform preparatory tasks in the context of relevant assessments. Examples of AI systems that are exempt from being considered high risk include: (i) interactive platforms and virtual tutors, provided they do not manipulate behaviour or create harmful dependency; (ii) support robots for the elderly and people with disabilities that assist with daily activities without replacing human assessment; (iii) tools that help individuals integrate into new communities or the labour market without significantly influencing critical decisions.
[7] Article 6(5) of the AI Act.
[8] Article 96(1)(a) of the AI Act.