EU AI Act: Practical Compliance Guide for 2026

A practical guide to EU AI Act compliance in 2026 covering risk categories, high-risk obligations, GPAI rules, timelines, and GDPR intersections.

The EU AI Act, formally Regulation (EU) 2024/1689, is the world’s first comprehensive legal framework for artificial intelligence. It entered into force on 1 August 2024, with obligations phasing in through August 2027. As of early 2026, organisations deploying or developing AI systems in the European Union are already subject to the first wave of binding requirements, and the most consequential obligations – those governing high-risk AI systems – take effect this August.

This guide covers the full structure of the regulation, the obligations it creates, and what organisations need to do to achieve EU AI Act compliance before each enforcement deadline.

What Is the EU AI Act?

The AI Act establishes harmonised rules for the placing on the market, putting into service, and use of AI systems within the European Union. Unlike sector-specific AI guidance issued in other jurisdictions, it applies horizontally across all industries and use cases. The regulation was adopted by the European Parliament on 13 March 2024, published in the Official Journal on 12 July 2024, and entered into force twenty days later.

According to the European AI Office, the regulation covers an estimated 6,000 to 8,000 high-risk AI systems already operating across EU Member States. The total number of AI systems in scope – including limited-risk and general-purpose models – is substantially larger.

The legislative approach is risk-based: the greater the potential harm, the stricter the compliance burden. This mirrors the proportionality logic familiar from GDPR, but applies it to the design, deployment, and lifecycle management of AI systems rather than to personal data processing alone.

How Does the Risk-Based Classification Work?

The AI Act sorts AI systems into four tiers of risk. Each tier carries a different set of obligations – or in the case of the lowest tier, no specific obligations at all.

Unacceptable risk (prohibited practices)

Article 5 prohibits AI practices that pose a clear threat to fundamental rights. These include:

  • social scoring systems operated by or on behalf of public authorities;
  • real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions);
  • emotion recognition in workplaces and educational institutions;
  • biometric categorisation systems that infer sensitive attributes such as race or political opinions;
  • AI systems that deploy subliminal techniques or exploit vulnerabilities to distort behaviour;
  • untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.

These prohibitions became enforceable on 2 February 2025. Any organisation still operating a system that falls within these categories faces immediate liability.

High-risk AI systems

High-risk systems form the core of the regulation’s compliance architecture. A system qualifies as high-risk if it falls within one of the categories listed in Annex III, or if it is a safety component of a product covered by the EU harmonisation legislation listed in Annex I (such as machinery, medical devices, or toys).

Limited risk (transparency obligations)

Systems that interact directly with individuals – chatbots, emotion recognition systems, generators of synthetic content – must meet specific transparency requirements. Users must be informed that they are interacting with an AI system. Content that is artificially generated or manipulated (including deepfakes) must be labelled as such.

Minimal risk

AI systems that do not fall into the above categories are not subject to specific obligations under the regulation, though the Commission encourages voluntary codes of conduct.

What Are the High-Risk Categories?

Annex III defines eight domains where AI systems are classified as high-risk. These domains cover some of the most sensitive areas of public and private decision-making:

  1. Biometric identification and categorisation – remote biometric identification systems (excluding those prohibited outright), emotion recognition, and biometric categorisation.
  2. Critical infrastructure – AI used as safety components in the management and operation of road traffic, water, gas, heating, and electricity supply.
  3. Education and vocational training – systems that determine access to education, evaluate learning outcomes, or monitor prohibited behaviour during assessments.
  4. Employment and worker management – AI used in recruitment, candidate screening, performance evaluation, and task allocation.
  5. Access to essential services – systems used to evaluate creditworthiness, set insurance premiums, or determine eligibility for public assistance benefits.
  6. Law enforcement – AI used for individual risk assessment, polygraphs, evidence evaluation, and crime prediction.
  7. Migration, asylum, and border control – systems used in visa processing, asylum application assessment, and border surveillance.
  8. Administration of justice – AI applied to case research, legal interpretation, and dispute resolution.

An estimated 85% of the AI Act’s compliance obligations fall on providers and deployers of high-risk systems. Organisations operating in any of these eight domains should be conducting gap assessments now.

What Obligations Apply to High-Risk AI Systems?

Providers of high-risk AI systems – the entities that develop or place them on the market – bear the heaviest obligations. Deployers (organisations that use the systems in their operations) also carry specific duties.

Risk management system

Article 9 requires providers to establish, implement, document, and maintain a risk management system throughout the entire lifecycle of the high-risk AI system. This must include identification of known and foreseeable risks, estimation and evaluation of risks that may emerge from intended use and reasonably foreseeable misuse, and adoption of risk mitigation measures. The system must be updated continuously, not treated as a one-time exercise. Organisations already maintaining data protection impact assessments will recognise the structural parallels.
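
To make the structure concrete, here is a minimal sketch of how an Article 9 risk register might be modelled in code. The field names, the 1-to-5 scales, and the residual-score formula are illustrative assumptions, not terms taken from the regulation:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskSource(Enum):
    INTENDED_USE = "intended use"
    FORESEEABLE_MISUSE = "reasonably foreseeable misuse"


@dataclass
class Risk:
    description: str
    source: RiskSource
    severity: int      # 1 (negligible) to 5 (critical) - illustrative scale
    likelihood: int    # 1 (rare) to 5 (frequent) - illustrative scale
    mitigations: list[str] = field(default_factory=list)

    @property
    def residual_score(self) -> int:
        # Naive residual estimate: raw score halved once per mitigation measure.
        return max(1, (self.severity * self.likelihood) >> len(self.mitigations))


@dataclass
class RiskRegister:
    system_name: str
    last_reviewed: date
    risks: list[Risk] = field(default_factory=list)

    def open_risks(self, threshold: int = 6) -> list[Risk]:
        """Risks whose residual score still exceeds the acceptance threshold."""
        return [r for r in self.risks if r.residual_score > threshold]
```

The point of the structure is the lifecycle: the register is reviewed and updated continuously, not produced once at launch.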

Data governance

Training, validation, and testing datasets must meet quality criteria covering relevance, representativeness, completeness, and statistical properties appropriate to the system’s intended purpose. This obligation has direct implications for organisations that process personal data in AI training sets, as it must be reconciled with GDPR data minimisation and purpose limitation principles.
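
As a hedged sketch of what such quality checks might look like in practice – the required fields, the grouping column, and the 5% under-representation threshold are assumptions chosen for illustration, not values prescribed by the Act:

```python
from collections import Counter


def dataset_quality_report(records: list[dict], required_fields: list[str],
                           group_field: str) -> dict:
    """Illustrative completeness and representativeness checks for a
    training dataset; thresholds and field names are assumptions."""
    n = len(records)
    if n == 0:
        raise ValueError("empty dataset")
    # Completeness: share of records with a non-empty value per required field.
    completeness = {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / n
        for f in required_fields
    }
    # Representativeness: flag groups below 5% of the dataset (assumed threshold).
    groups = Counter(r.get(group_field) for r in records)
    under_represented = [g for g, c in groups.items() if c / n < 0.05]
    return {"completeness": completeness, "under_represented": under_represented}
```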

Technical documentation and record-keeping

Providers must prepare detailed technical documentation before a system is placed on the market. Article 11 specifies that this documentation must be sufficient to allow authorities to assess compliance. High-risk systems must also be designed to automatically log events relevant to identifying risks and facilitating post-market monitoring (Article 12).
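
As a sketch of the automatic record-keeping idea, an append-only JSON-lines event log might look like this; the field names and event types are our own assumptions:

```python
import json
import time
import uuid


def log_event(logfile, *, system_id: str, event_type: str, payload: dict) -> None:
    """Append one structured event to an append-only JSON-lines log,
    the kind of automatic record-keeping Article 12 calls for."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "inference", "override", "anomaly"
        "payload": payload,
    }
    logfile.write(json.dumps(record) + "\n")
    logfile.flush()


# Usage: with open("events.jsonl", "a") as f:
#            log_event(f, system_id="cv-screener-01",
#                      event_type="inference", payload={"score": 0.82})
```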

Transparency and information to deployers

Providers must supply deployers with clear instructions for use, covering the system’s intended purpose, known limitations, performance metrics, and the level of human oversight required. Deployers in turn must inform natural persons that they are subject to a high-risk AI system, unless this is obvious from the circumstances.

Human oversight

Article 14 requires that high-risk AI systems be designed to allow effective human oversight. Depending on the system, this means either human-in-the-loop (a human authorises each decision), human-on-the-loop (a human can intervene during operation), or human-in-command (a human can override or disable the system). The deployer must ensure that persons assigned to oversight have the competence, authority, and resources to carry out that function.
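
The three oversight modes can be sketched as routing policies. This is a simplified illustration; the confidence threshold is an assumed design parameter, not a figure from Article 14:

```python
from typing import Callable, Optional

Decision = str
Reviewer = Callable[[dict, Decision], Decision]  # human confirms or overrides


def human_in_the_loop(case: dict, proposal: Decision, reviewer: Reviewer) -> Decision:
    # A human authorises every decision before it takes effect.
    return reviewer(case, proposal)


def human_on_the_loop(case: dict, proposal: Decision, confidence: float,
                      reviewer: Reviewer, threshold: float = 0.9) -> Decision:
    # The system acts autonomously but escalates low-confidence cases
    # to a human who can intervene during operation.
    return reviewer(case, proposal) if confidence < threshold else proposal


def human_in_command(system_enabled: bool, proposal: Decision) -> Optional[Decision]:
    # A human can override or disable the system entirely; no output when off.
    return proposal if system_enabled else None
```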

Accuracy, robustness, and cybersecurity

High-risk AI systems must achieve appropriate levels of accuracy for their intended purpose, be resilient to errors and inconsistencies in input data, and withstand attempts by unauthorised parties to exploit vulnerabilities. These requirements connect to the broader cybersecurity obligations that many organisations already manage under DORA or NIS2.

What Are the Rules for General-Purpose AI?

The AI Act introduces a dedicated regime for general-purpose AI (GPAI) models – foundation models and large language models that can be adapted to a wide range of downstream tasks. Chapter V applies to any GPAI model placed on the EU market, regardless of whether it is later integrated into a high-risk system.

All GPAI providers must prepare and keep up-to-date technical documentation, provide information and documentation to downstream providers integrating the model into their systems, establish a policy for compliance with EU copyright law, and publish a sufficiently detailed summary of the training data used.

GPAI models that pose systemic risk – defined as models trained with a total computing power exceeding 10^25 FLOPs, or designated as such by the Commission – face additional requirements: adversarial testing (red-teaming), model evaluation for systemic risks, incident monitoring and reporting to the AI Office, and adequate cybersecurity protections.
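
The 10^25 FLOPs threshold can be checked against a rough training-compute estimate. The sketch below uses the common ~6ND heuristic from the scaling-law literature (N parameters, D training tokens); the heuristic is an approximation, not a method prescribed by the Act:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # the Act's threshold for presumed systemic risk


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the ~6 * N * D heuristic."""
    return 6 * n_params * n_tokens


# Example: a hypothetical 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e}")                # 6.30e+24
print(flops >= SYSTEMIC_RISK_FLOPS)  # False - below the 10^25 threshold
```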

As of March 2026, six GPAI models have been classified as posing systemic risk. GPAI rules became enforceable on 2 August 2025, meaning all GPAI providers should already be in compliance.

What Is the EU AI Act Compliance Timeline?

The phased enforcement schedule distributes obligations across four deadlines:

  • 2 February 2025 – prohibited AI practices (Article 5) and AI literacy obligations (Article 4)
  • 2 August 2025 – GPAI model obligations (Chapter V), governance provisions, and the penalties framework
  • 2 August 2026 – high-risk obligations for systems in the Annex III categories, plus conformity assessment procedures
  • 2 August 2027 – high-risk obligations for AI systems embedded in products covered by Annex I EU harmonisation legislation

For most organisations, August 2026 is the critical deadline. The Annex III high-risk categories cover the vast majority of AI systems used in employment, financial services, education, and public administration. Compliance requires a functioning risk management system, data governance framework, technical documentation, and human oversight mechanisms – all in place and demonstrable before the system is deployed.

How Does the AI Act Intersect with GDPR?

Any AI system that processes personal data must comply with GDPR in addition to the AI Act. The two regulations are complementary, not alternative. Key areas of overlap include:

  • Legal basis for processing. AI training on personal data requires a valid GDPR legal basis. Organisations relying on legitimate interest must document the balancing test following GDPR legitimate interest requirements.
  • Data protection impact assessments. Article 35 of GDPR requires a DPIA for processing that is likely to result in a high risk to individuals. Deploying a high-risk AI system that processes personal data will almost always trigger this requirement. Organisations already performing DPIAs should extend these assessments to cover AI-specific risks.
  • Automated decision-making. Article 22 of GDPR gives individuals the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects. High-risk AI systems used in employment screening, credit scoring, or public service eligibility sit squarely within this provision.
  • Data subject rights. Individuals retain full GDPR rights – access, rectification, erasure, objection – in relation to personal data processed by AI systems. Compliance programmes built around GDPR requirements must account for AI-specific processing activities.

Organisations that have already invested in GDPR compliance infrastructure – record-keeping, DPIAs, data processing agreements, and breach notification procedures – have a material head start. Platforms such as Legiscope that automate GDPR documentation and risk assessment can be extended to cover the overlapping obligations introduced by the AI Act, reducing duplicated effort across both regimes.

What Are the Penalties?

The AI Act establishes a three-tier penalty structure, among the highest in EU regulatory law:

  • Prohibited AI practices – EUR 35 million or 7% of total worldwide annual turnover, whichever is higher
  • High-risk system obligations – EUR 15 million or 3% of total worldwide annual turnover, whichever is higher
  • Supplying incorrect information to authorities – EUR 7.5 million or 1.5% of total worldwide annual turnover, whichever is higher

For SMEs and startups, fines are capped at the lower of the two amounts. National market surveillance authorities are responsible for enforcement, with the EU AI Office coordinating cross-border cases and overseeing GPAI providers directly.
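
The ceiling logic – the higher of the fixed amount and the turnover percentage, reversed to the lower of the two for SMEs – reduces to a one-line comparison. A minimal sketch:

```python
def max_fine(fixed_eur: float, pct: float, worldwide_turnover_eur: float,
             is_sme: bool = False) -> float:
    """Fine ceiling: the higher of the fixed amount and the turnover
    percentage, or the lower of the two for SMEs and startups."""
    pct_based = pct * worldwide_turnover_eur
    return min(fixed_eur, pct_based) if is_sme else max(fixed_eur, pct_based)


# Prohibited-practice ceiling for EUR 800m worldwide annual turnover:
print(max_fine(35e6, 0.07, 800e6))               # 56,000,000 (7% exceeds EUR 35m)
print(max_fine(35e6, 0.07, 800e6, is_sme=True))  # 35,000,000 (lower of the two)
```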

The penalty regime draws explicit comparison with GDPR, where 2,086 fines totalling over EUR 4.5 billion were imposed between May 2018 and December 2025. Regulators have signalled that AI Act enforcement will follow a similarly active trajectory.

Frequently Asked Questions

Who does the EU AI Act apply to?

The regulation applies to providers (developers), deployers (users), importers, and distributors of AI systems placed on the EU market or whose output is used within the EU. It applies regardless of whether the entity is established inside or outside the Union – extraterritorial scope mirrors GDPR’s approach.

Do I need to comply if I only use open-source AI models?

Partially. Open-source GPAI models are exempt from some documentation and information-sharing requirements, but this exemption does not apply if the model poses systemic risk or is integrated into a high-risk AI system. Deployers of open-source high-risk systems bear the same obligations as deployers of proprietary systems.

How does the AI Act affect my existing GDPR compliance programme?

The two frameworks are additive. Your GDPR compliance programme – including data protection impact assessments, records of processing, and data subject rights procedures – remains fully applicable. The AI Act adds obligations specific to AI system design, testing, documentation, and human oversight that must be layered on top of existing data protection controls.

When should I start preparing for high-risk AI compliance?

Now. The August 2026 deadline for Annex III high-risk systems is approximately five months away. Implementing a compliant risk management system, conducting data governance reviews, preparing technical documentation, and training human oversight personnel cannot be done in weeks. Organisations that have not yet begun gap analysis are behind schedule.

Can I use the same risk assessment for GDPR and the AI Act?

The methodologies overlap but are not identical. A GDPR DPIA evaluates risks to individuals arising from data processing. The AI Act risk management system evaluates risks arising from the AI system’s design, intended use, and foreseeable misuse across its lifecycle. In practice, organisations should integrate both into a single assessment workflow to avoid duplication while ensuring each regulation’s specific requirements are fully addressed.

Automate your GDPR compliance

Save 340+ hours per year on compliance work. Legiscope provides AI-powered GDPR management trusted by compliance professionals.

Discover Legiscope
Written by
Dr. Thiébaut Devergranne
Founder of Legiscope and GDPR expert

PhD in law from Panthéon-Assas University (Paris II), with 23 years of experience in digital law and GDPR compliance. Former adviser to the Prime Minister's administration on GDPR implementation. Thiébaut is the founder of Legiscope, an AI-powered GDPR compliance automation platform.