
EU AI Act Timeline: Key Dates and Deadlines

Complete EU AI Act timeline from prohibited practices (Feb 2025) to full application (Aug 2027). Key compliance deadlines, GPAI obligations, and GDPR intersections.

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024, but its obligations phase in over a three-year period through August 2027. This staggered timeline creates a compliance landscape where some obligations are already enforceable, others take effect in months, and the most consequential requirements – governing high-risk AI systems – arrive in August 2026. Organisations deploying or developing AI systems in the EU cannot afford to treat this as a future problem.

This article provides the complete EU AI Act timeline with every enforcement date, what each deadline means in practice, and what organisations should be doing now.

Key Takeaways

  • Prohibited AI practices have been enforceable since 2 February 2025 – violations already carry penalties up to EUR 35 million or 7% of global turnover.
  • GPAI model obligations took effect 2 August 2025; providers of models like GPT-4, Claude, and Gemini must comply now.
  • High-risk AI system obligations apply from 2 August 2026 – the most operationally demanding deadline.
  • The AI Act does not replace GDPR. Both apply concurrently to AI systems processing personal data.

The Complete EU AI Act Timeline

Phase 0: Entry into Force – 1 August 2024

The AI Act was published in the Official Journal of the European Union on 12 July 2024 and entered into force twenty days later on 1 August 2024. This date starts all compliance clocks. No substantive obligations apply yet, but the regulation is binding law.

The European AI Office, established within the European Commission’s DG CONNECT, became operational in February 2024 and serves as the primary enforcement body for GPAI model obligations at the EU level.

Phase 1: Prohibited Practices – 2 February 2025 (ENFORCEABLE NOW)

Six months after entry into force. The following AI practices became prohibited under Art. 5:

  • Social scoring – AI systems that evaluate or classify natural persons or groups over time based on social behaviour or personal characteristics, where the resulting score leads to detrimental or unfavourable treatment (the final Act covers both public and private actors, not only public authorities)
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions for specific serious crimes, under strict judicial authorisation)
  • Emotion recognition in the workplace and in educational institutions (except where intended for medical or safety reasons)
  • Biometric categorisation systems that infer sensitive attributes (race, political opinions, religious beliefs, sexual orientation) from biometric data
  • Subliminal manipulation – AI systems deploying techniques beyond a person’s consciousness to materially distort behaviour
  • Exploitation of vulnerabilities – AI systems targeting specific groups (children, disabled persons, economically vulnerable) to distort their behaviour
  • Untargeted scraping of facial images from the internet or CCTV footage to build or expand facial recognition databases

These prohibitions are already enforceable. Under Art. 99(3), violations carry penalties of up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher – the highest tier of AI Act penalties. Any organisation still operating a system that falls within these categories faces immediate liability.

The AI literacy obligation under Art. 4 also became applicable on this date. Providers and deployers must ensure their staff have a sufficient level of AI literacy, taking into account their technical knowledge, experience, education, and the context in which the AI systems are used.

Phase 2: GPAI Model Obligations – 2 August 2025 (ENFORCEABLE NOW)

Twelve months after entry into force. General-Purpose AI (GPAI) model providers must comply with:

All GPAI models (Art. 53):

  • Maintain and make available technical documentation including training methodology, data sources, and computational resources used
  • Provide information and documentation to downstream providers integrating the GPAI model into their AI systems
  • Establish a policy to respect EU copyright law, including the text and data mining opt-out under Art. 4(3) of Directive (EU) 2019/790
  • Publish a sufficiently detailed summary of the content used for training, following a template provided by the AI Office

GPAI models with systemic risk (Art. 55): Models classified as systemic risk (currently those trained with cumulative compute exceeding 10^25 FLOPs, or by AI Office designation) must additionally:

  • Perform model evaluations including adversarial testing
  • Assess and mitigate systemic risks
  • Track, document, and report serious incidents to the AI Office
  • Ensure adequate cybersecurity protections for the model and its physical infrastructure
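For a back-of-the-envelope sense of the 10^25 FLOPs threshold, the common 6 × parameters × training-tokens approximation for dense transformer training compute can be used. This heuristic is an assumption of ours, not a method prescribed by the Act:

```python
# Rough check against the systemic-risk compute threshold (10^25 FLOPs).
# The 6 * N * D estimate is a widely used approximation for dense
# transformer training compute, not an AI Act-prescribed method.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold in the Act

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate cumulative training compute via the 6 * N * D heuristic."""
    return 6 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOPs threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 15 trillion tokens
flops = estimated_training_flops(70e9, 15e12)   # 6.3e24 FLOPs -> below threshold
print(f"{flops:.2e}", presumed_systemic_risk(70e9, 15e12))
```

Note that the AI Office can also designate a model as systemic risk regardless of compute, so a below-threshold estimate is not a safe harbour.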

The AI Office has facilitated a General-Purpose AI Code of Practice, which serves as the primary compliance reference. Providers that adhere to the Code can rely on it to demonstrate compliance with their Art. 53 and 55 obligations until harmonised standards are available.

Phase 3: Governance and Conformity Assessment Infrastructure – 2 August 2025

Also at the twelve-month mark:

  • Member States must designate national competent authorities and market surveillance authorities
  • Rules on notified bodies (Art. 28-39) apply – these are the organisations authorised to conduct conformity assessments for high-risk AI systems
  • The Advisory Forum and scientific panel provisions become operational

Phase 4: High-Risk AI System Obligations – 2 August 2026 (4 MONTHS AWAY)

Twenty-four months after entry into force. This is the most consequential deadline in the EU AI Act timeline. The full set of obligations for high-risk AI systems takes effect:

Who is affected: Providers and deployers of AI systems classified as high-risk under Annex III or as safety components of products under Annex I. Annex III covers:

  • Biometric identification and categorisation (beyond those already prohibited)
  • Critical infrastructure management and operation
  • Education and vocational training (access, admission, assessment)
  • Employment, workers management, and access to self-employment (recruitment, promotion, monitoring)
  • Access to essential private and public services (credit scoring, insurance pricing, emergency services dispatch)
  • Law enforcement (individual risk assessment, lie detectors, evidence evaluation)
  • Migration, asylum, and border control
  • Administration of justice and democratic processes

What providers must do (Art. 8-15):

  • Risk management system (Art. 9): Establish, implement, document, and maintain a risk management system throughout the AI system’s lifecycle
  • Data governance (Art. 10): Training, validation, and testing datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. Bias detection and mitigation measures are mandatory.
  • Technical documentation (Art. 11): Detailed documentation enabling authorities to assess compliance
  • Record-keeping (Art. 12): Automatic logging of events during system operation
  • Transparency (Art. 13): Instructions for use that enable deployers to understand the system’s capabilities and limitations
  • Human oversight (Art. 14): Designed to be effectively overseen by natural persons during use
  • Accuracy, robustness, cybersecurity (Art. 15): Appropriate levels throughout the lifecycle
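Art. 12 requires logging capabilities but does not prescribe a log format. As a minimal sketch, an append-only JSON-lines event log might capture enough context per decision to support post-hoc review; the field names below are illustrative assumptions, not requirements from the Act:

```python
# Illustrative sketch of Art. 12-style automatic event logging.
# Schema and field names are hypothetical; the AI Act mandates logging
# capabilities, not a specific format.
import json
from datetime import datetime, timezone

def log_decision_event(system_id: str, input_ref: str, output: str,
                       confidence: float, human_override: bool) -> str:
    """Serialise one decision event as a JSON line for an append-only log."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,           # identifies the high-risk AI system
        "input_ref": input_ref,           # reference to input data, not the data itself
        "output": output,                 # the system's decision or score
        "confidence": confidence,
        "human_override": human_override, # supports Art. 14 oversight review
    }
    return json.dumps(event)

line = log_decision_event("credit-scoring-v2", "app-10432", "declined", 0.81, False)
print(line)
```

Logging references to inputs rather than the inputs themselves also limits the personal data duplicated into logs, which matters for the GDPR overlap discussed below.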

What deployers must do (Art. 26):

  • Use the system in accordance with instructions for use
  • Ensure human oversight by appropriately trained individuals
  • Monitor the system for risks, report incidents to providers
  • Conduct a fundamental rights impact assessment for certain high-risk uses (Art. 27)
  • Inform natural persons that they are subject to high-risk AI system decisions

Conformity assessment: Before placing a high-risk AI system on the market, providers must complete a conformity assessment (Art. 43). For most Annex III systems, this can be done through internal control procedures. For biometric systems, third-party assessment by a notified body is required where harmonised standards have not been applied in full; systems that are safety components of Annex I products follow the conformity assessment procedures of the relevant sectoral legislation.

Penalties for non-compliance with high-risk obligations: up to EUR 15 million or 3% of global turnover under Art. 99(4).
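Art. 99 expresses these caps as the higher of a fixed amount and a share of worldwide annual turnover (for SMEs and startups, Art. 99(6) takes whichever is lower). A minimal sketch of how the applicable maximum scales with turnover:

```python
# Maximum fines under Art. 99: the higher of a fixed cap and a turnover
# share (for SMEs and startups, Art. 99(6) takes whichever is *lower*).

TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Art. 99(3)
    "high_risk_obligations": (15_000_000, 0.03),  # Art. 99(4)
    "incorrect_information": (7_500_000, 0.01),   # Art. 99(5)
}

def max_fine(violation: str, annual_turnover_eur: float, sme: bool = False) -> float:
    """Return the applicable maximum fine for a violation tier."""
    fixed, pct = TIERS[violation]
    turnover_based = pct * annual_turnover_eur
    return min(fixed, turnover_based) if sme else max(fixed, turnover_based)

# A company with EUR 2 billion turnover: 3% (EUR 60m) exceeds the EUR 15m cap
print(max_fine("high_risk_obligations", 2_000_000_000))
```

The practical point: for large groups the percentage cap dominates, so exposure scales with group turnover, not with the size of the AI project.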

Phase 5: Extended Deadline for Certain Products – 2 August 2027

Thirty-six months after entry into force. High-risk AI systems that are safety components of products regulated under the EU harmonisation legislation listed in Annex I (including machinery, medical devices, in vitro diagnostics, civil aviation, motor vehicles, and marine equipment) receive an additional year. The obligation to integrate AI Act requirements into existing product conformity assessments applies from this date.

Phase 6: Full Application – 2 August 2027

All remaining provisions of the AI Act become applicable. The regulation is fully operational.

What the Timeline Means for Your Organisation

If you already missed a deadline

Prohibited practices (Feb 2025): If you operate any AI system that could fall within Art. 5 categories, conduct an immediate assessment. Do not assume narrow interpretation – the AI Office has signalled willingness to interpret prohibitions broadly. Penalties are already applicable.

GPAI obligations (Aug 2025): If you provide a general-purpose AI model, the compliance deadline has passed. Technical documentation, copyright compliance, and (if applicable) systemic risk assessments are required now.

What to do before August 2026

The high-risk deadline is four months away. For organisations deploying or developing AI systems that may be classified as high-risk:

1. Classify your AI systems. Map every AI system you develop or deploy against Annex III categories and Annex I product categories. The AI Act risk classification framework determines your obligations.

2. Establish your risk management system. Art. 9 requires a continuous, iterative risk management process – not a one-time assessment. Start now if you have not already.

3. Audit your training data. Art. 10 data governance requirements demand that training datasets be relevant, representative, and as free of errors as possible. Assess your datasets against these criteria and document any limitations.

4. Prepare technical documentation. Art. 11 documentation requirements are extensive. Generating this documentation retrospectively for existing systems is significantly harder than building it into the development process.

5. Design for human oversight. Art. 14 requires that high-risk systems are designed to be effectively overseen by humans. If your system architecture does not currently support meaningful human intervention, redesign is needed before August.

6. Plan your conformity assessment. Determine whether your system requires internal assessment or third-party notified body assessment. For notified body assessments, account for lead times.
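Step 1 above can be sketched as a first-pass triage of an AI-system inventory against the Annex III areas. The area keywords and matching logic below are purely illustrative assumptions; actual classification requires legal analysis of Annex III's precise wording:

```python
# Illustrative first-pass triage of an AI-system inventory against
# Annex III. Area names are paraphrased; this flags candidates for
# legal review, it does not determine high-risk status.

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def triage(inventory: list[dict]) -> list[str]:
    """Return names of systems whose declared use area falls under Annex III."""
    return [s["name"] for s in inventory if s["area"] in ANNEX_III_AREAS]

systems = [
    {"name": "cv-screening", "area": "employment"},            # Annex III candidate
    {"name": "warehouse-routing", "area": "logistics"},        # not listed
    {"name": "credit-scoring", "area": "essential_services"},  # Annex III candidate
]
print(triage(systems))  # ['cv-screening', 'credit-scoring']
```

Every system the triage flags then needs the full Annex III analysis, including the Art. 6(3) derogation for systems that do not pose a significant risk.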

The GDPR Intersection: Both Apply

The AI Act explicitly does not replace GDPR. Recital 10 states that the regulation is “without prejudice” to the GDPR, the Law Enforcement Directive, and the ePrivacy Directive. Art. 2(7) confirms that the AI Act does not affect the application of Union law on the protection of personal data.

In practice, this means:

  • AI systems processing personal data must comply with both the AI Act and GDPR simultaneously
  • A valid legal basis under Art. 6 GDPR is required for any personal data processing in AI training, deployment, or monitoring – the AI Act does not create a new legal basis
  • Data protection impact assessments under Art. 35 GDPR may be required alongside AI Act fundamental rights impact assessments under Art. 27 – the two assessments address different but overlapping risks
  • Automated decision-making protections under Art. 22 GDPR apply to AI systems making decisions with legal or similarly significant effects, independently of AI Act transparency requirements
  • DPAs retain full jurisdiction over personal data processing aspects of AI systems, alongside the market surveillance authorities enforcing the AI Act

Organisations should integrate their GDPR and AI Act compliance programs rather than treating them as separate workstreams. The intersection between the AI Act and GDPR creates compound obligations that require coordinated governance.

See how Legiscope helps organisations manage the overlapping compliance requirements of GDPR, the AI Act, and other EU regulations.

FAQ

When does the EU AI Act fully apply?

The AI Act phases in between August 2024 and August 2027. Prohibited practices have applied since 2 February 2025 and GPAI obligations since 2 August 2025. High-risk system obligations apply from 2 August 2026, and full application, including obligations for AI systems embedded in regulated products, follows on 2 August 2027.

What are the penalties for missing EU AI Act deadlines?

Penalties depend on the type of violation. Prohibited practices: up to EUR 35 million or 7% of global turnover. High-risk system non-compliance: up to EUR 15 million or 3% of turnover. Supplying incorrect information to authorities: up to EUR 7.5 million or 1% of turnover. SMEs and startups benefit from proportionate penalty caps. These penalties apply per infringement.

Does the EU AI Act apply outside the EU?

Yes. Like GDPR, the AI Act has extraterritorial reach. Art. 2 applies the regulation to providers placing AI systems on the EU market or putting them into service in the EU, regardless of where the provider is established. It also applies to deployers located within the EU and to providers/deployers in third countries whose AI system output is used in the EU.

How does the AI Act interact with GDPR?

Both apply simultaneously to AI systems processing personal data. The AI Act does not replace or override GDPR. Organisations must maintain a valid GDPR legal basis for personal data processing in AI systems, conduct DPIAs where required under Art. 35 GDPR (in addition to AI Act fundamental rights assessments), and respect data subject rights including Art. 22 GDPR protections against purely automated decision-making. Integrated compliance programs covering both frameworks are strongly recommended.

Legiscope automates this for you

Stop doing compliance manually. Legiscope's AI handles ROPA creation, DPA audits, and gap analysis — in minutes, not weeks.

Start free trial
Written by
Dr. Thiébaut Devergranne
Founder of Legiscope and GDPR expert

PhD in law from Université Panthéon-Assas (Paris II), with 23 years of experience in digital law and GDPR compliance. Former adviser to the Prime Minister's administration on GDPR implementation. Thiébaut is the founder of Legiscope, an AI-automated GDPR compliance platform.