Data Privacy

GDPR Article 22: Automated Decision-Making and Profiling

GDPR Article 22 prohibits decisions based solely on automated processing that produce legal or similarly significant effects, with three narrow exceptions.

In one sentence. GDPR Article 22 grants data subjects the right not to be subject to a decision based solely on automated processing, including profiling, that produces legal effects concerning them or similarly significantly affects them — with three narrow exceptions: (a) necessary for contract performance, (b) authorized by law, (c) based on explicit consent. Even when an exception applies, the data subject retains the right to human intervention, to express their view, and to contest the decision.

Article 22 is one of the GDPR’s most consequential provisions for AI/ML deployments, and increasingly litigated as automated decision-making becomes ubiquitous. The CJEU confirmed in SCHUFA (Case C-634/21, December 2023) that even credit scoring algorithms fall under Article 22 when they significantly influence downstream human decisions. As the EU AI Act’s obligations phase in through 2026-2027, Article 22 sits at the intersection of GDPR and AI Act compliance.

For related rights, see the right of access (Article 15) and the right to object (Article 21). For the DPIA obligation triggered by automated decision-making, see Article 35 RGPD AIPD.

Key takeaways

  • Article 22(1): right not to be subject to decisions based solely on automated processing with legal or similarly significant effects.
  • Three exceptions in Article 22(2): contract performance, law, explicit consent.
  • Even with exceptions, suitable measures required: right to human intervention, to express view, to contest the decision.
  • Article 22(4): exceptions don’t apply to special category data (Article 9) unless explicit consent OR substantial public interest.
  • The CJEU SCHUFA judgment (2023) broadened “based solely on” to include cases where a human formally signs off but in practice rubber-stamps the algorithm.

1. Article 22(1) — the prohibition

Three cumulative conditions trigger the prohibition:

  1. The decision is based solely on automated processing (including profiling)
  2. It produces legal effects concerning the data subject OR similarly significantly affects them
  3. No Article 22(2) exception applies (contract necessity, legal authorization, or explicit consent)

“Based solely on” — the SCHUFA clarification

The CJEU in SCHUFA (December 2023) clarified that “solely” doesn’t require zero human involvement. If a human formally signs off but in practice defers entirely to the algorithm, the decision is still “solely automated.”

Practical test:

  • Does the human make an independent assessment?
  • Does the human have the authority to overturn the automated outcome?
  • Does the human actually overturn the automated outcome in non-trivial cases?

If any answer is no, the decision is “solely automated” for Article 22 purposes.
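This three-question test can be sketched as a simple screening helper. The following is a hypothetical illustration (all names are invented; this is not legal advice), assuming you record these three facts about each decision pipeline:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HumanReview:
    """Facts about the human involvement in one decision pipeline."""
    independent_assessment: bool  # does the reviewer weigh the evidence themselves?
    authority_to_overturn: bool   # can the reviewer change the automated outcome?
    overturns_in_practice: bool   # is that authority exercised in non-trivial cases?

def is_solely_automated(review: Optional[HumanReview]) -> bool:
    """Post-SCHUFA practical test: if any answer is 'no', the decision
    counts as solely automated for Article 22 purposes."""
    if review is None:  # no human in the loop at all
        return True
    return not (review.independent_assessment
                and review.authority_to_overturn
                and review.overturns_in_practice)

# A reviewer who signs off but never makes an independent assessment
# is a rubber stamp, so the decision remains solely automated:
rubber_stamp = HumanReview(independent_assessment=False,
                           authority_to_overturn=True,
                           overturns_in_practice=False)
print(is_solely_automated(rubber_stamp))  # True
```

The point of encoding the test is that the answers are facts about the deployed process, not about the org chart: a documented reviewer role with no recorded overturns still fails the third question.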

Examples — IN scope:

  • Loan approval/denial
  • Insurance premium calculation
  • Job application screening
  • Visa/immigration decisions
  • Welfare benefit eligibility
  • Employment performance scoring
  • Credit scoring (SCHUFA)
  • University admission

Examples — OUT of scope:

  • Personalized product recommendations
  • Search result ordering
  • Movie suggestions
  • Newsletter content selection
  • A/B test variant assignment
  • UI customization
  • Generic ad targeting (debated)

The threshold is significant impact on the data subject’s circumstances, behavior, or choices.

2. The three exceptions (Article 22(2))

(a) Contract performance

The decision is necessary for entering or performing a contract between the data subject and the controller.

Necessity is strict — the Article 29 Working Party Guidelines on automated decision-making (WP251, endorsed by the EDPB) require:

  • The processing must be necessary, not merely useful or efficient
  • Less invasive alternatives must have been considered

Example accepted: automated credit check necessary to underwrite a loan application. Example rejected: automated screening of job applicants where manual review is feasible.

(b) Authorized by EU/Member State law

Specific law authorizes the automated decision and provides safeguards. Examples: tax authority automated assessments, social security automated decisions.

(c) Explicit consent

The data subject has given explicit consent. Consent must satisfy the Article 7 conditions and be freely given, specific, informed, and unambiguous. This basis is highly disputed in employment and consumer contexts where a power imbalance exists.

3. Required safeguards (Article 22(3))

Even when an exception applies, the controller must implement suitable measures including at least:

  • Right to obtain human intervention (a real human, with authority to overturn)
  • Right to express their point of view
  • Right to contest the decision

These rights must be easy to exercise — not buried in legalese, not behind paywalls, not requiring postal mail.

4. Special category data (Article 22(4))

Automated decisions involving special category data (Article 9: health, biometrics, etc.) are permitted only if:

  • Explicit consent (Article 9(2)(a)), OR
  • Substantial public interest under EU/Member State law (Article 9(2)(g))

AND suitable measures to safeguard the data subject’s rights and freedoms are in place.

This excludes most commercial uses of special category data in automated decisions.

5. Information obligations interaction

Articles 13(2)(f) and 14(2)(g) require the controller to inform the data subject of:

  • The existence of automated decision-making (including profiling)
  • Meaningful information about the logic involved
  • The significance and envisaged consequences of such processing for the data subject

The CJEU in SCHUFA confirmed this includes revealing the algorithm’s logic to the data subject upon request — not just a generic “we use AI” disclosure.
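These three disclosure items can be tracked as a structured record next to each processing activity. Below is a minimal sketch; the field names and the length heuristic for flagging a generic "we use AI" stub are invented for illustration:

```python
REQUIRED_DISCLOSURES = ("existence", "logic", "significance_and_consequences")

notice_entry = {
    "existence": "Loan applications are decided by an automated scoring model.",
    "logic": ("The score weighs repayment history, income stability and "
              "current debt ratio; thresholds are reviewed quarterly."),
    "significance_and_consequences": ("A score below the threshold leads to "
                                      "refusal of the loan at the advertised rate."),
}

def disclosure_gaps(entry: dict) -> list:
    """Return the required items that are missing, or so short they look
    like a stub rather than 'meaningful information about the logic'."""
    return [key for key in REQUIRED_DISCLOSURES
            if len(entry.get(key, "").strip()) < 40]

print(disclosure_gaps(notice_entry))  # []
print(disclosure_gaps({"existence": "We use AI."}))  # flags all three items
```

A length check is obviously no substitute for legal review; the value of the structure is that a privacy notice generator can refuse to publish an entry with open gaps.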

6. AI Act intersection

The EU AI Act (Regulation 2024/1689) classifies many automated decision systems as “high-risk AI” requiring conformity assessment, technical documentation, and human oversight. The GDPR Article 22 obligations are cumulative with AI Act obligations:

  • Article 22 protects individual data subjects
  • AI Act regulates the AI system itself

For high-risk AI systems making decisions about individuals, both apply. See EU AI Act compliance guide and AI Act vs GDPR.

7. DPIA obligation

Article 35(3)(a) makes DPIA mandatory for “systematic and extensive evaluation of personal aspects relating to natural persons based on automated processing, including profiling, and on which decisions are based that produce legal effects.”

Translation: any Article 22-scope processing requires a DPIA. See Article 35 RGPD AIPD.

8. Enforcement

  • 2023 — SCHUFA (CJEU), reference for preliminary ruling: credit scoring qualifies as an Article 22 decision
  • 2024 — Hertz France (CNIL), €40K: automated screening without the required information
  • 2024 — Multiple AI vendors (CNIL), €50K-€500K: automated decisions without a DPIA, no human-intervention mechanism

Article 83(5)(b) places Article 22 violations in the top tier of fines — up to €20M or 4% of worldwide annual turnover, whichever is higher. The AI Act adds another layer: up to €35M or 7% of worldwide annual turnover for prohibited AI practices.
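Both ceilings are "the higher of" a fixed amount and a share of worldwide annual turnover, which is easy to get wrong in risk models. A small illustration with the figures from Article 83(5) GDPR and the AI Act's top tier (the turnover figure is hypothetical):

```python
def gdpr_fine_cap(turnover_eur: float) -> float:
    """Article 83(5) ceiling: the higher of EUR 20M or 4% of worldwide
    annual turnover of the preceding financial year."""
    return max(20_000_000, 0.04 * turnover_eur)

def ai_act_fine_cap(turnover_eur: float) -> float:
    """AI Act ceiling for prohibited practices: the higher of EUR 35M or 7%."""
    return max(35_000_000, 0.07 * turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2bn group turnover
print(gdpr_fine_cap(turnover))    # 80000000.0
print(ai_act_fine_cap(turnover))  # 140000000.0
```

Note that for large groups the percentage branch dominates, so the exposure scales with turnover rather than stopping at the headline €20M/€35M figures.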

9. Implementation checklist

For any automated decision system:

  • ☐ DPIA conducted (Article 35) — mandatory
  • ☐ Lawful basis documented (Article 6) — typically (b) contract or (a) consent
  • ☐ For special category data: Article 9 exception identified
  • ☐ Article 22 exception applicable: documented
  • ☐ Human intervention mechanism implemented and tested (real human, real authority)
  • ☐ Mechanism for data subject to express their view documented
  • ☐ Mechanism to contest decisions documented (with response SLA)
  • ☐ Privacy notice (Articles 13/14) discloses: existence, logic, significance, consequences
  • ☐ Algorithm explainability documented (for response to access requests)
  • ☐ AI Act conformity assessment if high-risk

10. Tooling

Legiscope maps Article 22 obligations onto your AI/ML deployments via the ROPA, generates the required DPIA, and provides templates for the human-intervention workflow.

For related deep-dives: Article 35 RGPD AIPD, right to object, right of access, EU AI Act compliance guide, AI Act vs GDPR.

Conclusion

Article 22 is the GDPR’s response to algorithmic decision-making. As AI scales, it becomes the most-frequently triggered provision. The SCHUFA judgment removed the easy escape (“we have a human in the loop”) — meaningful human review is required, not formal sign-off. Build the human-intervention workflow into the AI deployment pipeline, not as an afterthought.

FAQ

When does GDPR Article 22 apply?

When a decision is based solely on automated processing (including profiling) and produces legal effects or similarly significantly affects the data subject. Loans, insurance, employment screening, credit scoring all qualify. Personalized recommendations and ad targeting generally don’t (though debated).

What does “based solely on automated processing” mean after SCHUFA?

The CJEU clarified in 2023 that “solely” includes cases where a human formally signs off but rubber-stamps the algorithm. Real human review with authority to overturn is required for the processing to be considered partially automated rather than solely automated.

Can I use automated decisions for hiring?

Only if (a) necessary for contract performance (rare — manual review is usually feasible), (b) authorized by law, or (c) explicit consent (rare in employment due to power imbalance). Even then, human intervention mechanisms must be in place. Most HR automated screening fails these conditions and triggers Article 22.

What information must I provide about the algorithm?

Article 13(2)(f) and 14(2)(g) require “meaningful information about the logic involved” plus the significance and envisaged consequences. The CJEU SCHUFA ruling confirms this means more than “we use AI” — actual algorithm logic must be disclosed upon request.

Does the AI Act replace Article 22?

No — both apply cumulatively. Article 22 protects individual data subjects via GDPR; the AI Act regulates the AI system itself with conformity assessment, documentation, and oversight requirements. High-risk AI systems making individual decisions are subject to both.

Legiscope automates this for you

Stop doing compliance manually. Legiscope's AI handles ROPA creation, DPA audits, and gap analysis — in minutes, not weeks.

Start free trial
Written by
Founder of Legiscope and GDPR expert

Doctor of Law from Panthéon-Assas University (Paris II), with 23 years of experience in digital law and GDPR compliance. Former adviser to the Prime Minister's administration on GDPR implementation. Thiébaut is the founder of Legiscope, an AI-automated GDPR compliance platform.
