The EU AI Act Classification: A Decision Framework

The EU AI Act, Regulation (EU) 2024/1689, applies in full to most high-risk AI systems from 2 August 2026. For compliance managers, that deadline is no longer distant. The regulation does not simply require you to be compliant; it requires you to demonstrate, document, and justify exactly where each AI system in your organisation sits within the regulatory framework.

Getting the classification wrong carries real consequences: misclassifying a high-risk system as minimal-risk exposes your organisation to fines of up to €15 million or 3% of global annual turnover, whichever is higher. This article provides a structured classification methodology, built around Article 6 and Annex III of the Regulation, so you can work through your AI portfolio with confidence.

Why AI Act Classification Is More Complex Than It Looks

The four-tier risk structure of the EU AI Act looks straightforward at first glance. It is not. The existing EU AI Act Risk Pyramid article on this site gives a solid overview of what each tier means. The classification process is a different exercise entirely, and this is where organisations consistently run into difficulty.

Several factors drive that complexity. The regulation relies heavily on the concept of “intended purpose,” which means the same underlying model can attract different classifications depending on how you deploy it. Classification also depends on your role in the supply chain: a provider building a system and a deployer using a third-party system face different obligations and different exposure. Finally, Article 6(3) introduces a set of narrow exemptions that allow some Annex III systems to escape the high-risk tier, but only if the conditions are met precisely and the documentation is in place before the system goes to market.

Note: The European Commission committed to publishing practical guidelines on Article 6 classification no later than 2 February 2026. Those guidelines had not been published at the time of writing. Until they are available, the classification methodology below follows the text of the Regulation directly.

Step 1: Check for Prohibited Practices Under Article 5

Before any classification exercise, screen every system against the list of outright bans in Article 5. Systems falling under these bans are not regulated; they are prohibited. No conformity assessment, no exemption, no transition period covers them.

The prohibitions that became effective on 2 February 2025 include the following practices:

  • AI systems using subliminal or manipulative techniques to distort behaviour and cause harm
  • Social scoring of natural persons, by public or private actors, based on social behaviour or personal characteristics, where it leads to detrimental or unfavourable treatment
  • Biometric categorisation to infer sensitive attributes such as race, political opinion, or sexual orientation
  • Real-time remote biometric identification in publicly accessible spaces by law enforcement, subject to very narrow exceptions for serious crimes
  • Emotion recognition in workplaces and educational institutions
  • Predictive policing based on profiling or personality traits without objective indicators
  • Untargeted scraping of facial images to build recognition databases

If your system falls into any of these categories, classification stops here. The system cannot be placed on the EU market or put into service.

Tip: Context matters significantly at this step. Emotion recognition is prohibited in workplaces and schools but may be permitted in specific medical contexts. Always tie the prohibition check to the specific intended use case, not just the technical capability.
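
To make that context dependence concrete, here is a minimal screening sketch in Python. The enum labels, context strings, and function name are our own shorthand for illustration, not official terminology from the Regulation, and the logic is deliberately simplified:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class ProhibitedPractice(Enum):
    # Illustrative labels for the Article 5 categories; not official terms.
    SUBLIMINAL_MANIPULATION = auto()
    SOCIAL_SCORING = auto()
    SENSITIVE_BIOMETRIC_CATEGORISATION = auto()
    REALTIME_RBI_LAW_ENFORCEMENT = auto()
    EMOTION_RECOGNITION = auto()
    PREDICTIVE_POLICING = auto()
    UNTARGETED_FACIAL_SCRAPING = auto()

@dataclass
class UseCase:
    capability: Optional[ProhibitedPractice]
    context: str  # e.g. "workplace", "education", "medical"

def article_5_screen(use_case: UseCase) -> bool:
    """Return True if the intended use is prohibited outright.

    Simplified: emotion recognition is banned only in workplace and
    education contexts; the other Article 5 practices are treated as
    banned regardless of context (real-time RBI has its own narrow
    law-enforcement carve-outs not modelled here).
    """
    if use_case.capability is None:
        return False
    if use_case.capability is ProhibitedPractice.EMOTION_RECOGNITION:
        return use_case.context in {"workplace", "education"}
    return True

# Same technical capability, opposite outcomes in two contexts:
assert article_5_screen(UseCase(ProhibitedPractice.EMOTION_RECOGNITION, "workplace"))
assert not article_5_screen(UseCase(ProhibitedPractice.EMOTION_RECOGNITION, "medical"))
```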

Step 2: Apply Article 6(1): Safety Components in Regulated Products

As a reminder of the Act’s foundational mechanism, every AI system in scope falls into one of four tiers: unacceptable-risk systems are banned outright, high-risk systems face the full weight of the regulation, limited-risk systems carry transparency obligations only, and minimal-risk systems face no new requirements beyond existing law. Steps 2 to 4 determine whether your system lands in the high-risk tier.

The first route into the high-risk tier runs through Article 6(1). A system is high-risk under this provision if two cumulative conditions apply:

  1. The AI system is a safety component of a product, or is itself a product, covered by EU harmonisation legislation listed in Annex I of the AI Act.
  2. That product is required to undergo a third-party conformity assessment under the relevant Annex I legislation.

Annex I legislation includes the Medical Device Regulation (EU) 2017/745, the Machinery Regulation (EU) 2023/1230, the Radio Equipment Directive 2014/53/EU, and several others; the full list appears in Annex I of the Regulation as published on EUR-Lex.

If your AI system is embedded in a medical device that already requires notified body assessment, it is high-risk under Article 6(1). The obligation follows automatically from the product’s regulatory status.
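
In code terms, the Article 6(1) test is a simple conjunction of the two conditions. The field and function names in this Python sketch are assumptions for illustration, not terms from the Regulation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProductContext:
    # Illustrative fields; map them onto your own product records.
    annex_i_legislation: Optional[str]  # e.g. "MDR (EU) 2017/745", or None
    third_party_assessment_required: bool

def is_high_risk_under_article_6_1(ai_is_safety_component_or_product: bool,
                                   product: ProductContext) -> bool:
    """Article 6(1): both conditions must hold cumulatively."""
    covered_by_annex_i = product.annex_i_legislation is not None
    return (ai_is_safety_component_or_product
            and covered_by_annex_i
            and product.third_party_assessment_required)

# An AI module embedded in a notified-body-assessed medical device:
mdr_device = ProductContext("MDR (EU) 2017/745",
                            third_party_assessment_required=True)
assert is_high_risk_under_article_6_1(True, mdr_device)
```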

Note: Article 6(1) systems have an extended transition period. The obligations for AI safety components in regulated products do not apply until 2 August 2027, one year later than Annex III systems.

Step 3: Check Article 6(2): Is the System Listed in Annex III?

If Article 6(1) does not apply, the next question is whether the system falls within one of the eight use-case areas listed in Annex III. These are the stand-alone high-risk categories, independent of any product regulation.

The eight Annex III areas are as follows:

  1. Biometrics — remote biometric identification systems, biometric categorisation based on sensitive attributes, and emotion recognition systems (in permitted contexts)
  2. Critical infrastructure — AI used as safety components in digital infrastructure, road traffic, or utility supply systems
  3. Education and vocational training — systems determining access to educational institutions or evaluating learning outcomes
  4. Employment, workers’ management and access to self-employment — recruitment, performance evaluation, promotion decisions, and task allocation tools
  5. Essential private and public services — credit scoring, insurance risk assessment, eligibility for public benefits, and emergency service dispatch
  6. Law enforcement — polygraph-equivalent tools, evidence reliability assessment, and victim risk assessment tools
  7. Migration, asylum, and border control — visa and residence permit examination systems, and individual security risk assessment
  8. Administration of justice and democratic processes — tools assisting in researching or applying law, and systems influencing elections

Each area contains specific sub-cases. Compliance managers should map the system’s intended purpose to the exact sub-case in Annex III, not just the broader area.

After mapping, all Annex III systems are treated as high-risk by default. One critical override: if the system performs profiling of natural persons, it is always high-risk and cannot benefit from the exemption described in Step 4. Profiling is defined by reference to Article 4(4) of the GDPR: automated processing of personal data to evaluate aspects of a natural person’s life, including work performance, economic situation, behaviour, or location.
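
A first-pass triage helper might map stated purposes to Annex III sub-case references. The sub-case descriptions and keyword lists in this sketch are heavily abridged and purely illustrative; the real mapping requires legal review against the full Annex III text:

```python
from typing import Optional

# Illustrative subset of Annex III sub-cases, keyed by point number.
# Always record the exact sub-case reference, not just the area.
ANNEX_III_SUBCASES = {
    "4(a)": "Recruitment: targeting ads, filtering applications, evaluating candidates",
    "5(b)": "Creditworthiness evaluation and credit scoring of natural persons",
    "8(a)": "Assisting judicial authorities in researching and applying the law",
}

def map_to_annex_iii(intended_purpose: str) -> Optional[str]:
    """Return a candidate Annex III sub-case reference, if any.

    Keyword matching is only a triage aid; every hit (and every miss)
    still needs legal review.
    """
    keywords = {
        "4(a)": ("recruitment", "cv screening", "candidate"),
        "5(b)": ("credit scoring", "creditworthiness"),
        "8(a)": ("judicial", "case law research"),
    }
    purpose = intended_purpose.lower()
    for ref, terms in keywords.items():
        if any(term in purpose for term in terms):
            return ref
    return None

assert map_to_annex_iii("CV screening tool for candidate shortlisting") == "4(a)"
```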

Step 4: Apply the Article 6(3) Exemption Test

This is the most technically demanding step and the one most likely to generate disputes with market surveillance authorities if handled incorrectly.

Article 6(3) creates a narrow derogation. Even if a system appears in Annex III, it is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making. That condition is met only when one of four specific sub-conditions applies:

  • The system performs a narrow procedural task
  • The system improves the result of a previously completed human activity
  • The system detects decision-making patterns or deviations from prior decisions and does not replace or influence a completed human assessment without proper human review
  • The system performs a preparatory task for an assessment relevant to an Annex III use case

The profiling exception overrides all four. A system that profiles natural persons cannot claim the Article 6(3) exemption regardless of which sub-condition it might otherwise meet.
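
The exemption logic, including the profiling override, can be expressed compactly. The field names below are our own labels for the four sub-conditions, not statutory language:

```python
from dataclasses import dataclass

@dataclass
class DerogationAssessment:
    # The four Article 6(3) sub-conditions, plus the profiling override.
    narrow_procedural_task: bool = False
    improves_completed_human_activity: bool = False
    detects_patterns_without_replacing_review: bool = False
    preparatory_task_only: bool = False
    performs_profiling: bool = False

def article_6_3_exempt(a: DerogationAssessment) -> bool:
    """Return True if the Annex III system escapes the high-risk tier.

    Profiling of natural persons overrides every sub-condition:
    such a system is always high-risk.
    """
    if a.performs_profiling:
        return False
    return any([
        a.narrow_procedural_task,
        a.improves_completed_human_activity,
        a.detects_patterns_without_replacing_review,
        a.preparatory_task_only,
    ])

# A document-sorting pre-processor with no profiling may qualify...
assert article_6_3_exempt(DerogationAssessment(preparatory_task_only=True))
# ...but the same task combined with profiling never does.
assert not article_6_3_exempt(
    DerogationAssessment(preparatory_task_only=True, performs_profiling=True))
```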

Providers relying on this exemption must document the assessment before placing the system on the market. That documentation must be available to national competent authorities on request. Article 49(2) also requires registration in the EU database for high-risk AI systems even for providers claiming the exemption.

Tip: The burden of proof sits with the provider, not the regulator. If you intend to rely on Article 6(3), treat the documentation as if you expect to defend it in front of a market surveillance authority.

Step 5: Determine Transparency Obligations for Limited-Risk Systems

Systems that clear the four prior checks may still carry obligations under Article 50 if they interact directly with users or generate synthetic content. These transparency obligations apply regardless of the risk tier assessment.

Three categories of system are covered:

  • Chatbots and conversational AI — users must be informed they are interacting with an AI system, unless the context makes it obvious
  • Emotion recognition and biometric categorisation systems (in permitted contexts outside the prohibited categories) — the subjects must be informed
  • Deepfakes and AI-generated content — synthetic content depicting real persons, places, or events must be disclosed as artificially generated, and outputs must carry a machine-readable marking

These systems are “limited-risk” in the common shorthand of the risk pyramid. They do not require a quality management system or technical documentation under the high-risk regime, but the transparency obligation is mandatory and enforceable from 2 August 2026.
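
Here is a minimal sketch of how the three Article 50 categories translate into duties; the flags and duty strings are illustrative assumptions, not wording from the Act:

```python
def article_50_obligations(is_conversational: bool,
                           is_emotion_or_biometric_cat: bool,
                           generates_synthetic_content: bool,
                           ai_nature_obvious_from_context: bool = False) -> list[str]:
    """Collect the transparency duties that attach under Article 50.

    These stack on top of, not instead of, the risk-tier outcome.
    """
    duties = []
    if is_conversational and not ai_nature_obvious_from_context:
        duties.append("inform users they are interacting with an AI system")
    if is_emotion_or_biometric_cat:
        duties.append("inform the natural persons exposed to the system")
    if generates_synthetic_content:
        duties.append("apply a machine-readable artificial-content marking")
    return duties

assert article_50_obligations(True, False, True) == [
    "inform users they are interacting with an AI system",
    "apply a machine-readable artificial-content marking",
]
```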

What Compliance Managers Must Document for Each Classification Decision

Classification is not a one-time internal exercise. The Act treats it as a documented legal position. For every AI system in your portfolio, the compliance record should capture the following information:

  • The intended purpose, stated precisely and tied to the specific use case
  • The role of your organisation: provider, deployer, importer, or distributor
  • The result of the Article 5 prohibited practices check, with reasoning
  • Whether Article 6(1) applies, including the specific Annex I legislation and whether third-party conformity assessment is required
  • Whether the system appears in a specific Annex III sub-case, with the exact reference
  • If Article 6(3) is claimed, the specific sub-condition invoked and the supporting rationale
  • Whether profiling of natural persons occurs

The classification should be reviewed whenever the system undergoes a substantial modification, changes its intended purpose, or is deployed in a new context. A system classified as minimal-risk today can become high-risk tomorrow if its deployment context changes.
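
One way to keep these records consistent across a portfolio is a fixed schema. The dataclass below is a hypothetical starting point; the field names and review triggers are our own suggestions, not requirements from the Act:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ClassificationRecord:
    """One record per AI system; all field names are illustrative."""
    system_name: str
    intended_purpose: str
    operator_role: str                       # provider / deployer / importer / distributor
    article_5_prohibited: bool
    article_5_reasoning: str
    annex_i_legislation: Optional[str]       # Article 6(1) route, if any
    annex_iii_subcase: Optional[str]         # exact reference, e.g. "4(a)"
    article_6_3_subcondition: Optional[str]  # only if the exemption is claimed
    performs_profiling: bool
    risk_tier: str                           # prohibited / high / limited / minimal
    assessed_on: date = field(default_factory=date.today)
    review_triggers: tuple = ("substantial modification",
                              "change of intended purpose",
                              "new deployment context")

record = ClassificationRecord(
    system_name="CV screening assistant",
    intended_purpose="Shortlisting job applicants for interview",
    operator_role="provider",
    article_5_prohibited=False,
    article_5_reasoning="No Article 5 practice implicated by the intended use",
    annex_i_legislation=None,
    annex_iii_subcase="4(a)",
    article_6_3_subcondition=None,
    performs_profiling=True,   # profiling: the Article 6(3) exemption is unavailable
    risk_tier="high",
)
```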

The AI Act’s risk-based approach mirrors the logic already familiar from product safety regulation. If you have experience working with the EU risk analysis methodology in physical products, the same principle applies: classification drives obligation, and obligation drives documentation.

For a broader look at how the AI Act relates to cybersecurity requirements, the AI Act vs Cyber Resilience Act comparison on this site covers the interaction between the two frameworks.

Frequently Asked Questions

What is the difference between Article 6(1) and Article 6(2) for AI Act classification? Article 6(1) applies when an AI system is a safety component in a regulated product already subject to EU harmonisation law requiring third-party assessment. Article 6(2) applies when the system falls within one of the specific use cases listed in Annex III, independently of any product regulation.

Can a system listed in Annex III avoid the high-risk classification? Yes, but only under the narrow conditions of Article 6(3): the system must not materially influence decision outcomes and must perform a narrow procedural or preparatory role. Systems that profile natural persons cannot claim this exemption. The provider must document the assessment before placing the system on the market.

Does the high-risk classification apply to deployers or only to providers? Both carry obligations. Providers building or placing the system on the market face the heaviest requirements, including the quality management system and technical documentation. Deployers using a third-party high-risk system have their own obligations under Article 26, including fundamental rights impact assessments in some cases and monitoring of system operation.

What happens if a system’s classification changes after deployment? The provider must reassess classification following any substantial modification or change in intended purpose. If the system becomes high-risk as a result, full compliance obligations apply from that point. For Annex III systems already on the market before 2 August 2026, the Act applies only when a substantial modification occurs.

Where can I find the official list of Annex III use cases? The full Annex III text is published on EUR-Lex as part of Regulation (EU) 2024/1689. The EU AI Act Service Desk also maintains an accessible version at ai-act-service-desk.ec.europa.eu.

Conclusion

Accurate classification is the foundation of every AI Act compliance programme. Without it, you cannot determine your obligations, scope your documentation, or identify which systems need conformity assessment before August 2026. The methodology above follows the legal sequence the Act itself prescribes: Article 5 first, then Article 6(1), then Annex III, then the Article 6(3) exemption test, and finally Article 50 transparency obligations.
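
Put together, the sequence collapses into a short decision function. The boolean inputs stand in for the step-by-step assessments described above, and the tier labels follow the pyramid shorthand:

```python
def classify(prohibited: bool,
             high_risk_art_6_1: bool,
             annex_iii_match: bool,
             art_6_3_exempt: bool,
             art_50_duties: bool) -> str:
    """Apply the Act's sequence: Article 5, 6(1), Annex III, 6(3), then 50.

    Note that Article 50 transparency duties can also stack on top of a
    high-risk outcome; this sketch only returns the headline tier.
    """
    if prohibited:
        return "prohibited"
    if high_risk_art_6_1:
        return "high-risk (Article 6(1))"
    if annex_iii_match and not art_6_3_exempt:
        return "high-risk (Annex III)"
    return "limited-risk (Article 50)" if art_50_duties else "minimal-risk"
```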

Three things to carry forward. First, intended purpose drives classification: the same model in two different deployment contexts may attract different risk tiers. Second, the Article 6(3) exemption is narrower than it appears, and the profiling override removes it entirely for a significant proportion of enterprise use cases. Third, classification is a documented legal position that must be maintained and revisited throughout the system’s lifecycle. August 2026 is the enforcement start date. The time to build and defend those classification records is now.
