
AI Act Risk Classification: Where Does Your System Fall?

Learn how the EU AI Act risk classification works across four tiers, from banned practices to minimal risk, and determine where your AI system falls.

The EU AI Act (Regulation (EU) 2024/1689) introduces a four-tier risk pyramid that determines the legal obligations for every AI system placed on the EU market. Getting the AI Act risk classification right is the first compliance question any organisation must answer, because the entire regulatory burden flows from it. A system classified as high-risk faces dozens of mandatory requirements; a system classified as minimal risk faces none.

According to the European Commission’s impact assessment, roughly 15% of AI systems deployed in the EU are expected to qualify as high-risk under Annex III. The remaining 85% fall into the lower tiers. Yet misclassification carries severe consequences: penalties for non-compliance can reach EUR 35 million or 7% of global annual turnover (whichever is higher) for prohibited practices, and EUR 15 million or 3% for high-risk violations.
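
To illustrate how these caps combine – the higher of the fixed amount and the turnover percentage applies – the sketch below computes the maximum exposure for a hypothetical company. The function, tier names, and turnover figure are illustrative assumptions, not a legal calculation.

```python
def max_fine(turnover_eur: float, tier: str) -> float:
    """Illustrative only: the AI Act caps fines at the higher of a fixed
    amount and a percentage of worldwide annual turnover."""
    caps = {
        "prohibited_practice": (35_000_000, 0.07),  # Article 99(3)
        "high_risk_violation": (15_000_000, 0.03),  # Article 99(4)
    }
    fixed, pct = caps[tier]
    return max(fixed, turnover_eur * pct)

# A company with EUR 600m global turnover: 7% = EUR 42m, so the
# turnover-based cap applies for a prohibited-practice breach.
print(max_fine(600_000_000, "prohibited_practice"))  # 42000000.0
```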

This guide walks through each tier in detail, explains what falls where, and provides a decision-tree approach to determine your own AI Act risk classification.

How Does the Four-Tier Risk Pyramid Work?

The AI Act structures its regulatory framework around four levels of risk, each carrying progressively heavier obligations as the potential for harm increases. The logic mirrors the proportionality principle familiar from GDPR requirements – the greater the risk to fundamental rights, the stricter the rules.

The four tiers from top to bottom are:

  1. Unacceptable risk – outright prohibited
  2. High risk – subject to comprehensive obligations before and after market placement
  3. Limited risk – transparency obligations only
  4. Minimal risk – no specific legal obligations

A 2025 survey by the Centre for Data Innovation found that 42% of European companies deploying AI systems were uncertain about which risk tier applied to them. That uncertainty is itself a compliance risk. The classification is not optional or self-selecting in the way some organisations treat it – it is a legal determination with audit consequences.

Which AI Practices Are Banned Under Unacceptable Risk?

Article 5 of the AI Act defines a category of AI practices so harmful that they are prohibited entirely. These bans became enforceable on 2 February 2025. There is no compliance pathway for these systems – they must be shut down.

Social scoring by public authorities

AI systems used by public authorities (or on their behalf) to evaluate or classify individuals based on their social behaviour or personal characteristics, leading to detrimental treatment that is unjustified or disproportionate, are banned. This covers government-operated scoring systems that aggregate behavioural data to assign trustworthiness ratings affecting access to services.

Real-time biometric identification in public spaces

The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is prohibited, with three narrow exceptions: searching for specific victims of abduction or trafficking, preventing a genuine and imminent threat to life or a foreseeable terrorist attack, and locating or identifying a suspect of a serious criminal offence. Even these exceptions require prior authorisation from a judicial authority or an independent administrative authority and are subject to strict necessity and proportionality tests.

Manipulation of vulnerable groups

AI systems that deploy subliminal techniques beyond a person’s consciousness, or that exploit vulnerabilities related to age, disability, or social or economic situation, to materially distort behaviour in a way that causes significant harm, are prohibited.

Emotion recognition in workplace and education settings

The AI Act bans the use of emotion recognition systems in workplaces and educational institutions, except where the system is put in place for medical or safety reasons. The ban covers AI tools that attempt to infer emotional states from biometric data such as facial expressions, voice patterns, or body movements in these specific contexts. The prohibition recognises the inherent power imbalance in employer-employee and teacher-student relationships.

Other banned practices include biometric categorisation systems inferring sensitive attributes (race, political opinions, sexual orientation) and untargeted scraping of facial images from the internet or CCTV footage to build recognition databases.

What Qualifies as High-Risk Under Annex III?

High-risk is where the bulk of the AI Act’s compliance architecture applies. A system qualifies as high-risk through two pathways: it is either listed as a safety component of a product covered by EU harmonisation legislation (Annex I), or it falls within one of the eight domains defined in Annex III. The Annex III categories cover the most consequential areas of automated decision-making.

Biometric identification and categorisation

Remote biometric identification systems (other than those banned outright) fall here. This includes both real-time systems used outside the prohibited law-enforcement context and post-remote biometric identification. Biometric categorisation systems that fall outside the Article 5 prohibition also belong to this tier.

Critical infrastructure management

AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, or electricity. A 2024 ENISA report identified over 300 AI-enabled systems currently managing critical infrastructure assets across EU Member States.

Education and vocational training

Systems that determine access to educational institutions, evaluate learning outcomes, assess the appropriate level of education for an individual, or monitor prohibited behaviour during tests. Automated grading systems and AI-driven admissions tools are prime examples.

Employment and worker management

AI used in recruitment (screening CVs, ranking candidates), making decisions affecting terms of employment, promotions, or terminations, and allocating tasks based on individual behaviour or personal traits. GDPR already imposes data protection obligations on automated employment decisions (see the GDPR compliance checklist); the AI Act adds a parallel set of system-level requirements.

Essential private and public services

AI systems used to evaluate creditworthiness, set insurance premiums (life and health), or assess eligibility for public assistance benefits, services, or emergency dispatch. According to the European Banking Authority, an estimated 60% of EU credit institutions now use some form of AI in credit scoring.

Law enforcement

Except where prohibited, AI systems used to assess the risk of a person offending or reoffending, as polygraphs, to evaluate the reliability of evidence, to predict the occurrence of criminal offences (predictive policing), or for profiling during criminal investigations.

Migration, asylum, and border control

AI systems used as polygraphs or to assess risks posed by individuals entering the EU, to assist in the examination of asylum applications, or for monitoring and surveillance at borders.

Administration of justice and democratic processes

AI systems intended to assist judicial authorities in researching and interpreting facts and the law, and in applying the law to a concrete set of facts. This category extends to systems intended to influence the outcome of elections or referendums.

What Are the Obligations for High-Risk AI Systems?

Providers of high-risk AI systems face a set of mandatory requirements under Articles 8-15 of the AI Act. These obligations apply before market placement and continue through the system’s lifecycle. Failure to meet them blocks lawful market access.

Risk management system

Article 9 requires a documented risk management system that identifies, analyses, estimates, and evaluates foreseeable risks. This is not a one-time assessment. The system must be continuously updated throughout the AI system’s lifecycle. Organisations already conducting a Data Protection Impact Assessment (DPIA) under GDPR will recognise the logic, but the AI Act risk management framework is broader in scope and more prescriptive in methodology.

Data governance

Training, validation, and testing datasets must meet quality criteria relating to relevance, representativeness, accuracy, and completeness. Article 10 mandates that providers examine data for possible biases and take appropriate measures to address them.
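
As a rough sketch of the kind of representativeness check Article 10 points toward, the snippet below flags under-represented groups in a training set. The field name, threshold, and sample data are assumptions chosen for illustration; the Act prescribes no specific method or metric.

```python
from collections import Counter

def representation_report(records, attribute, threshold=0.05):
    """Flag groups whose share of the training data falls below a chosen
    threshold -- a simple proxy for a representativeness review, not a
    complete bias audit."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: {"share": n / total, "flag": n / total < threshold}
            for group, n in counts.items()}

sample = ([{"age_band": "18-25"}] * 40
          + [{"age_band": "26-60"}] * 955
          + [{"age_band": "60+"}] * 5)
print(representation_report(sample, "age_band"))
```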

Technical documentation and record-keeping

Before placing a high-risk system on the market, providers must draw up technical documentation demonstrating compliance. Automatic logging capabilities must be built in to enable traceability and post-market monitoring.
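
A minimal sketch of the kind of automatic event logging the record-keeping requirement has in mind follows. The event fields and identifiers are assumptions chosen to support traceability, not a format mandated by the regulation.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system_audit")
logging.basicConfig(level=logging.INFO)

def log_inference_event(system_id: str, input_ref: str,
                        output: str, model_version: str) -> None:
    """Record each inference with a timestamp and model version so that
    individual outputs can be traced during post-market monitoring."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_ref": input_ref,  # reference to stored input, not raw personal data
        "output": output,
    }
    logger.info(json.dumps(event))

log_inference_event("credit-scoring-v2", "application/48213", "score=0.71", "2026.03.1")
```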

Transparency and information to deployers

Deployers (the organisations using the system) must receive sufficiently clear instructions to understand the system’s capabilities and limitations, including its intended purpose, level of accuracy, and known risks. This obligation complements the transparency requirements already examined in the AI Act vs GDPR comparison.

Human oversight

High-risk systems must be designed to allow effective human oversight during their period of use. This means the system must include tools enabling the human overseeing it to correctly interpret outputs, decide not to use the system in a particular situation, and override or reverse its output.
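
As a sketch of what "override or reverse" can look like in practice, the snippet below wraps a recommendation in a human review step. The confidence threshold, escalation label, and data structure are illustrative assumptions, not requirements spelled out in the Act.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    recommendation: str
    confidence: float
    final: str | None = None

def apply_with_oversight(decision: Decision, reviewer_override: str | None) -> Decision:
    """If the human reviewer supplies an override it always wins; otherwise
    high-confidence recommendations stand and low-confidence ones are
    escalated rather than applied automatically."""
    if reviewer_override is not None:
        decision.final = reviewer_override
    elif decision.confidence >= 0.9:
        decision.final = decision.recommendation
    else:
        decision.final = "escalated_for_human_review"
    return decision

print(apply_with_oversight(Decision("reject_application", 0.64), None))
```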

Accuracy, robustness, and cybersecurity

The system must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout its lifecycle. These levels must be declared in the technical documentation. A 2025 Stanford HAI study found that only 38% of AI systems deployed in high-risk domains met the accuracy benchmarks their own documentation claimed.

How Do Limited Risk Obligations Work?

Limited risk systems face only transparency obligations under Article 50. The logic is straightforward: people interacting with AI deserve to know they are doing so, and people viewing AI-generated content deserve to know it was not produced by a human.

The main requirements are:

  • Chatbots and conversational AI must clearly disclose to users that they are interacting with an AI system, unless this is obvious from the circumstances.
  • Deepfakes and synthetic media – AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, or events – must be labelled as artificially generated or manipulated.
  • Emotion recognition and biometric categorisation systems operating outside the prohibited contexts must inform users of their operation.

These obligations apply from 2 August 2026, alongside the bulk of the AI Act. Non-compliance falls under the general penalty tier of up to EUR 15 million or 3% of global annual turnover.
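
As a minimal sketch of the chatbot disclosure duty described above: the wrapper function, wording, and stubbed model call are assumptions for illustration; Article 50 does not prescribe specific text or placement.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def reply_with_disclosure(user_message: str, first_turn: bool, generate_reply) -> str:
    """Prepend an AI disclosure on the first turn of a conversation.
    `generate_reply` stands in for whatever model call produces the answer."""
    answer = generate_reply(user_message)
    return f"{AI_DISCLOSURE}\n\n{answer}" if first_turn else answer

# Example with a stubbed model call:
print(reply_with_disclosure("What are your opening hours?", True, lambda m: "We open at 9am."))
```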

What Falls Under Minimal Risk?

The vast majority of AI systems deployed today – spam filters, AI-powered search engines, recommendation algorithms, inventory management tools, video game AI – fall into the minimal risk category. These systems face no specific obligations under the AI Act.

The European Commission encourages providers of minimal-risk systems to voluntarily adhere to codes of conduct, but this carries no legal force. The only practical consideration is ensuring the system does not migrate into a higher risk tier as its use case evolves. An AI recommendation engine used for product suggestions is minimal risk; the same engine repurposed to assess insurance eligibility is high-risk.

How to Determine Your AI Act Risk Classification

Classifying your system requires a structured decision-tree approach rather than intuition. The following sequence reflects the regulation’s own logic:

Step 1: Check Article 5 prohibitions. Does your system engage in any of the banned practices? If yes, it cannot be lawfully deployed regardless of any other analysis. Shut it down or redesign it.

Step 2: Check Annex I. Is your AI system a safety component of a product already regulated under EU harmonisation legislation (medical devices, machinery, toys, lifts, equipment for explosive atmospheres, radio equipment, in-vitro diagnostics, civil aviation, vehicles, marine equipment)? If yes, it is high-risk.

Step 3: Check Annex III. Does your system fall within one of the eight domains listed above? If yes, it is high-risk, unless it meets the narrow exception in Article 6(3) – the system does not pose a significant risk of harm to health, safety, or fundamental rights because it does not materially influence the outcome of a decision, is intended for a narrow procedural task, or improves the result of a previously completed human activity. The exception never applies where the system performs profiling of natural persons.

Step 4: Check Article 50 transparency triggers. Does the system interact directly with individuals, generate synthetic content, or perform emotion recognition or biometric categorisation? If yes, it falls under limited risk.

Step 5: Default to minimal risk. If none of the above apply, the system carries no specific obligations.
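
The same sequence can be expressed as a simple decision function, sketched below. The boolean flags and their names are assumptions made for illustration; a real assessment turns on legal analysis of the system's intended purpose, not on pre-answered inputs.

```python
def classify_ai_system(
    uses_prohibited_practice: bool,      # Step 1: Article 5
    is_annex_i_safety_component: bool,   # Step 2: Annex I
    falls_under_annex_iii: bool,         # Step 3: Annex III
    meets_article_6_3_exception: bool,   # narrow procedural/preparatory task, no profiling
    triggers_article_50: bool,           # Step 4: chatbot, synthetic content, emotion recognition
) -> str:
    """Mirror of the five-step sequence above. Illustrative only."""
    if uses_prohibited_practice:
        return "unacceptable risk - prohibited"
    if is_annex_i_safety_component:
        return "high risk (Annex I)"
    if falls_under_annex_iii and not meets_article_6_3_exception:
        return "high risk (Annex III)"
    if triggers_article_50:
        return "limited risk - transparency obligations"
    return "minimal risk"

# A CV-screening tool: Annex III employment domain, no exception applies.
print(classify_ai_system(False, False, True, False, False))  # high risk (Annex III)
```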

For organisations managing multiple AI systems, Legiscope can automate compliance mapping across portfolios, including the risk classification analysis that determines which systems require full documentation packages. For a broader view of how AI Act requirements interact with existing data protection obligations, the EU AI Act compliance guide provides a detailed walkthrough.

Frequently Asked Questions

Can an AI system’s risk classification change over time?

Yes. Classification depends on the system’s intended purpose and deployment context. If either changes – for example, an AI tool originally used for product recommendations is repurposed for credit scoring – the classification changes with it, potentially triggering new obligations.

Does the AI Act apply to AI systems developed outside the EU?

Yes. The regulation applies to providers placing AI systems on the EU market or putting them into service in the EU, regardless of where the provider is established. It also applies to deployers located within the EU and to providers or deployers outside the EU whose system’s output is used within the EU.

How does AI Act risk classification interact with GDPR?

The two frameworks operate in parallel. An AI system processing personal data must comply with GDPR regardless of its AI Act classification. High-risk AI systems processing personal data will typically require both an AI Act risk management system and a DPIA under GDPR. The AI Act vs GDPR analysis covers the overlaps and gaps in detail.

What happens if I misclassify my AI system?

Misclassification that results in a failure to meet high-risk obligations exposes the provider to enforcement action by national market surveillance authorities. Fines reach EUR 15 million or 3% of global annual turnover. Deployers who fail to use a high-risk system in accordance with instructions also face penalties.

Are general-purpose AI models (like large language models) subject to risk classification?

General-purpose AI models are governed by a dedicated chapter of the AI Act (Chapter V) and are not directly classified under the four-tier risk pyramid. However, when a GPAI model is integrated into a specific AI system, that system is classified based on its intended purpose and deployment context. A large language model powering a chatbot faces limited-risk transparency obligations; the same model integrated into an employment screening tool faces high-risk requirements.

When do the high-risk obligations become enforceable?

The main high-risk obligations under Articles 8-15 apply from 2 August 2026 for systems falling under Annex III. For high-risk systems that are safety components of products regulated under Annex I harmonisation legislation, the deadline is 2 August 2027.

Automate your GDPR compliance

Save 340+ hours per year on compliance work. Legiscope provides AI-powered GDPR management trusted by compliance professionals.

Discover Legiscope
Written by
Dr. Thiébaut Devergranne
Founder of Legiscope and GDPR expert

Doctor of Law from Université Panthéon-Assas (Paris II), with 23 years of experience in digital law and GDPR compliance. Former adviser to the French Prime Minister's administration on GDPR implementation. Thiébaut is the founder of Legiscope, an AI-powered GDPR compliance automation platform.