Your Essential AI Act Compliance Guide
The European Union’s landmark AI Act (see the EU AI Act Official Text) is fundamentally changing the global regulatory landscape for Artificial Intelligence. Understanding the AI Act Risk Pyramid is the first step toward ensuring your organization remains compliant and can continue operating in the EU market.
It moves AI governance into a mandatory, risk-based legal framework. This means new categories of liability, extensive documentation requirements, and a shift in focus from innovation alone to verifiable trustworthiness. The core of the AI Act is its four-tiered approach, which assigns compliance obligations proportionate to the potential harm an AI system can cause.
The AI Act is also heavily interlinked with other cybersecurity requirements, such as those found in prEN 50742.
What is the EU AI Act Risk Pyramid?
The EU AI Act Risk Pyramid is the regulatory framework the European Commission uses to classify artificial intelligence systems according to their potential to harm health, safety, or fundamental rights. It comprises four distinct tiers: Unacceptable, High, Limited, and Minimal risk. These tiers dictate the severity of legal obligations, ranging from outright market bans to no new regulatory restrictions.
The AI Act’s Four Risk Categories
The regulation creates clear boundaries for AI systems, demanding the highest level of scrutiny for those that could significantly impact fundamental rights or safety:
- Unacceptable Risk: These systems are banned outright because they pose a clear threat to EU values (e.g., social scoring by governments, manipulative techniques).
- High-Risk: This is the most regulated category, encompassing systems used in critical areas like medical devices, employment screening, essential public services, and critical infrastructure. These systems carry the heaviest compliance burden.
- Limited Risk: Systems that require specific transparency obligations to inform users that they are interacting with AI (e.g., deepfakes or chatbots).
- Minimal Risk: The vast majority of AI applications that pose little to no risk (e.g., spam filters, simple inventory tools). These face minimal regulatory oversight.
The 4 Levels of Risk Explained (with Examples)

To ensure compliance, you must accurately map your AI system to one of these four levels.
Level 1: Unacceptable Risk (Prohibited AI)
At the peak of the pyramid are AI systems deemed a clear threat to EU values and fundamental rights. These systems are not subject to compliance procedures because they are banned entirely. You cannot put them on the EU market or put them into service.
This prohibition covers systems that manipulate behavior, exploit vulnerabilities, or conduct untargeted surveillance. If your project falls here, you must stop immediately.
The prohibited practices include:
- Cognitive Behavioral Manipulation: AI using subliminal techniques to distort behavior and cause physical or psychological harm (e.g., toys encouraging dangerous behavior in children).
- Social Scoring: Systems used by public authorities to evaluate trustworthiness based on social behavior or personality traits, leading to detrimental treatment.
- Biometric Categorization: Inferring sensitive data like race, political opinions, religious beliefs, or sexual orientation from biometric data.
- Real-time Remote Biometric Identification: The use of facial recognition in public spaces by law enforcement (subject to very narrow exceptions for serious crimes or immediate threats).
- Emotion Recognition in Sensitive Areas: Using AI to infer emotions in workplaces or educational institutions.
- Predictive Policing: Assessing the risk of an individual committing a crime based solely on profiling or personality traits.
Technician’s Tip: Context is everything. “Emotion recognition” is banned in schools and workplaces to protect employees and students, but the exact same technology might be permitted if used in a purely medical context, such as a therapeutic app for autistic children.
Level 2: High-Risk AI (The Compliance Heavyweight)
This tier is where the bulk of the regulatory work lies. High-risk AI systems are permitted, but they are subject to strict, burdensome obligations before they can enter the market. This section of the pyramid is critical for AI certification experts and quality managers.
High-risk systems generally fall into two main buckets: AI used as safety components in regulated products (like medical devices or machinery covered by Annex I), and specific stand-alone AI applications listed in Annex III.
Common examples of Annex III High-Risk AI include:
- Biometrics: Remote biometric identification systems (those not already prohibited under Level 1).
- Critical Infrastructure: AI intended to be used as safety components in the management and operation of road traffic, or the supply of water, gas, heating, and electricity.
- Education and Vocational Training: Systems used to determine access to education (assigning school spots) or for evaluating learning outcomes (automated exam grading).
- Employment and Workers Management: AI used for recruitment (CV-sorting software), making decisions on promotions, or monitoring employee performance.
- Essential Private and Public Services: Systems used by public authorities to evaluate eligibility for benefits, or private services like credit scoring and risk assessment for life and health insurance.
- Law Enforcement, Migration, and Border Control: Tools used for polygraphs, assessing reliability of evidence, or examining visa applications.
The Obligations for High-Risk AI: If your system is High-Risk, you must demonstrate conformity. You must implement a continuous Risk Management System, ensure high-quality data governance (training, validation, and testing sets), maintain comprehensive technical documentation, enable automatic record-keeping (logging), and ensure human oversight measures are built in.
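To make the record-keeping obligation concrete, below is a minimal sketch of what automatic event logging might look like in practice, assuming a hypothetical credit-scoring system. The field names, schema, and file format are illustrative, not prescribed by the Act; harmonized standards will add detail.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger: the AI Act requires automatic event
# logging for high-risk systems, but the exact schema is up to you.
# All fields below are illustrative examples.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.jsonl"))

def log_decision(model_version: str, input_ref: str, output: dict,
                 overridden_by_human: bool) -> None:
    """Append one timestamped record per AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,        # traceability to the exact model
        "input_ref": input_ref,                # a reference, not raw personal data
        "output": output,
        "human_override": overridden_by_human  # evidence of human oversight
    }
    audit_log.info(json.dumps(record))

# Example: a credit-scoring decision (an Annex III use case)
log_decision("scorer-v2.3.1", "application-8841",
             {"score": 612, "decision": "refer_to_human"},
             overridden_by_human=False)
```

A design note: logging a reference to the input rather than the input itself keeps the audit trail useful without duplicating personal data, which matters for GDPR alignment.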
Level 3: Limited Risk (Transparency Obligations)
Moving down the pyramid, we find systems that pose specific transparency risks. These are not inherently “dangerous,” but the regulation mandates that users have the right to know they are interacting with a machine.
For these systems, the primary obligation is transparency. You generally do not need a full quality management system, but you must ensure the user is informed.
This category specifically applies to:
- Chatbots and Customer Service AI: Users must be told they are talking to an AI unless it is obvious from the context.
- Emotion Recognition Systems (outside prohibited areas): Subjects must be informed the system is in use.
- Deepfakes and Generative AI: Content generated or manipulated by AI (audio, video, image) that resembles existing persons or places must be clearly labeled as artificially manipulated.
Technician’s Tip: For generative AI providers, the requirement is often technical. You must ensure your outputs are marked in a machine-readable format (e.g., watermarking metadata) so platforms can detect and label the content downstream.
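As a toy illustration of machine-readable marking, the sketch below embeds an "AI-generated" flag in a PNG's metadata using Pillow. The key names are made up for this example; production systems typically rely on robust provenance schemes (such as C2PA manifests or invisible watermarks) rather than plain text chunks, which are trivially stripped.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Illustrative only: embed a machine-readable "AI-generated" marker
# in PNG metadata. The key names are hypothetical.
img = Image.new("RGB", (512, 512))           # stand-in for a generated image
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")
img.save("output.png", pnginfo=meta)

# A downstream platform could then read the marker back:
reloaded = Image.open("output.png")
print(reloaded.text.get("ai_generated"))     # -> "true"
```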
Level 4: Minimal Risk (No New Restrictions)
The base of the pyramid represents the vast majority of AI systems currently in use. The EU Commission estimates that most AI applications fall into this category.
Examples include:
- AI-enabled video games.
- Spam filters in email clients.
- Inventory management tools aimed at optimizing logistics.
If your system falls here, the AI Act imposes no new legal obligations. You can continue to operate as usual, adhering to existing laws like GDPR. However, the EU encourages the voluntary adoption of Codes of Conduct to signal quality and trustworthiness to your customers.
Compliance for High-Risk Systems: The Technical Mandates
If an organization places a High-Risk AI system on the market, the AI Act requires rigorous adherence to specific requirements, including robust quality management systems, comprehensive documentation, and human oversight.
The challenge for compliance teams is translating these legal obligations into technical proof. This is precisely where technical standards become non-negotiable compliance tools. For a deeper analysis of these tools, refer to our guide on Technical Standards and Compliance.
| EU AI Act Requirement | Translation into Technical Standard | Primary Standard |
|---|---|---|
| Data Governance (Quality and relevance) | Mandate bias assessment and data integrity validation processes. | NIST AI RMF (Govern, Map, Measure phases) |
| Transparency & Logging (Audit trail) | Specify the retention period and required granularity for automatic event logging. | ISO/IEC 42001 (Control A.8.2) |
| Robustness & Accuracy (Resilience) | Define specific stress testing protocols for system security, drift, and intended performance. | ISO/IEC 42001 & NIST AI RMF |
Compliance teams adopt standards like ISO/IEC 42001 (AI Management System) and the NIST AI Risk Management Framework (AI RMF) to gain auditable, best-practice methodologies that directly demonstrate conformity with the AI Act’s stringent demands.
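One lightweight way teams operationalize this mapping is a traceability matrix kept in code or configuration, linking each AI Act requirement to the standard clause and internal evidence behind it. The sketch below mirrors the table above; the control descriptions and evidence paths are hypothetical.

```python
# Illustrative traceability matrix linking AI Act requirements to the
# standards and internal evidence that demonstrate conformity.
# Control descriptions and evidence paths are hypothetical examples.
TRACEABILITY = {
    "data_governance": {
        "standard": "NIST AI RMF (Govern, Map, Measure)",
        "control": "bias assessment + data integrity validation",
        "evidence": "reports/bias_assessment_q3.pdf",
    },
    "logging": {
        "standard": "ISO/IEC 42001 (Control A.8.2)",
        "control": "automatic event logging with defined retention",
        "evidence": "ai_audit.jsonl",
    },
    "robustness": {
        "standard": "ISO/IEC 42001 & NIST AI RMF",
        "control": "stress tests for security, drift, and performance",
        "evidence": "",  # still missing -> flagged below
    },
}

def missing_evidence(matrix: dict) -> list[str]:
    """List requirements that still lack an evidence artifact."""
    return [req for req, row in matrix.items() if not row.get("evidence")]

print(missing_evidence(TRACEABILITY))  # -> ['robustness']
```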
The “Hidden” Layer: General Purpose AI (GPAI)
The original concept of the risk pyramid struggled to categorize foundation models like GPT-4 or Gemini. The final text of the AI Act solves this by treating General Purpose AI (GPAI) models differently.
GPAI models sit outside the traditional vertical classification because they can be adapted for countless uses.
Providers of GPAI models have horizontal obligations regardless of the final application: primarily, maintaining technical documentation and respecting copyright law.
However, if a GPAI model is classified as having “systemic risk”, it faces a much stricter tier of rules. This classification is currently presumed when the computational power used for training exceeds a very high threshold, indicating high capability. The rules include adversarial testing (red-teaming), assessing systemic risks, and reporting serious incidents to the new AI Office.
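For orientation, the Act’s initial presumption of systemic risk attaches at 10^25 floating-point operations of cumulative training compute. The sketch below checks a model against that threshold using the common “6 × parameters × tokens” approximation; the model figures are invented.

```python
# Back-of-envelope check against the AI Act's systemic-risk presumption
# (cumulative training compute above 1e25 FLOPs). Uses the common
# approximation: training FLOPs ~= 6 * parameters * training tokens.
SYSTEMIC_RISK_FLOPS = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# Hypothetical model: 70B parameters trained on 2T tokens
flops = training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs")                                    # 8.40e+23
print("Systemic risk presumed:", flops > SYSTEMIC_RISK_FLOPS)  # False
```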
Step-by-Step: How to Classify According to the AI Act Risk Pyramid
For compliance officers and technicians, here is a practical workflow to determine where your system sits in the pyramid (a code sketch of the same decision logic follows the list).
- Check against Prohibited Practices (Article 5): Does your system do anything listed in Level 1 above (e.g., social scoring, untargeted scraping for facial recognition)? If yes, it is banned.
- Check if it is a Safety Component (Annex I): Is the AI a safety component of a product regulated under existing EU harmonization legislation (e.g., Medical Device Regulation, Machinery Regulation)? If yes, it is likely High-Risk.
- Check against Annex III (Stand-alone): Does your system fall into one of the specific critical areas listed in Level 2 above (e.g., HR tools, credit scoring, education)? If yes, it is High-Risk.
- Check Transparency Requirements: Is it a chatbot, a deepfake generator, or an emotion recognition system? If yes, it falls under Limited Risk and requires transparency labeling.
- Fallback: If none of the above apply, your system is likely Minimal Risk.
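Here is a minimal sketch of that screening logic as code. The boolean inputs are a deliberate simplification: in reality, each check is a legal assessment against Article 5, Annex I, and Annex III, not a flag.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "Unacceptable (banned)"
    HIGH = "High-Risk"
    LIMITED = "Limited Risk (transparency)"
    MINIMAL = "Minimal Risk"

def classify(prohibited_practice: bool,
             annex_i_safety_component: bool,
             annex_iii_use_case: bool,
             transparency_relevant: bool) -> RiskTier:
    """Mirror of the five-step screening above, in order of severity."""
    if prohibited_practice:                             # Step 1: Article 5
        return RiskTier.PROHIBITED
    if annex_i_safety_component or annex_iii_use_case:  # Steps 2-3
        return RiskTier.HIGH
    if transparency_relevant:                           # Step 4: chatbots, deepfakes
        return RiskTier.LIMITED
    return RiskTier.MINIMAL                             # Step 5: fallback

# Example: a CV-sorting tool falls under Annex III, hence High-Risk
print(classify(False, False, True, False))  # RiskTier.HIGH
```

Note that the checks run strictly top-down: a system that is both an Annex III use case and a chatbot is High-Risk, not merely Limited Risk.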
AI Act Risk Pyramid Compliance Timeline
The AI Act entered into force in August 2024, but its obligations are phased in over time, giving organizations time to prepare (a small code sketch after the list shows how to track these dates).
- February 2025 (6 Months): The rules on Prohibited Practices (Level 1) apply. These systems must be off the market.
- August 2025 (12 Months): Obligations for General Purpose AI (GPAI) models apply.
- August 2026 (24 Months): Most rules for High-Risk AI systems (Annex III) apply.
- August 2027 (36 Months): Rules for High-Risk AI systems that are safety components of regulated products (Annex I) apply.
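A small lookup keeps these milestones actionable in tooling, as sketched below, using the official application dates (2 February 2025; 2 August of 2025, 2026, and 2027) behind the phase-in schedule above.

```python
from datetime import date

# Phase-in dates from the AI Act timeline above.
DEADLINES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_obligations":     date(2025, 8, 2),
    "high_risk_annex_iii":  date(2026, 8, 2),
    "high_risk_annex_i":    date(2027, 8, 2),
}

def days_remaining(milestone: str, today: date | None = None) -> int:
    """Days until a given obligation applies (negative once in force)."""
    today = today or date.today()
    return (DEADLINES[milestone] - today).days

print(days_remaining("high_risk_annex_iii", date(2025, 8, 1)))  # 366
```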
Summary
The EU AI Act is complex, but the pyramid structure simplifies your initial assessment.
- Unacceptable: Don’t do it.
- High Risk: Prepare for heavy compliance (Risk Management, Data Governance, CE Marking).
- Limited Risk: Be transparent; tell the user it’s AI.
- Minimal Risk: Business as usual.
Identify your level early. If you are in the High-Risk category, start building your Quality Management System (QMS) and Technical Documentation now. The transition periods are ticking, and compliance cannot be built overnight.
Frequently Asked Questions (FAQ)
Is ChatGPT High-Risk under the AI Act?
ChatGPT itself is considered a General Purpose AI (GPAI) model, with its own set of obligations regarding documentation and copyright. However, if you integrate ChatGPT into a high-risk use case, for example to automatically filter job applications (Annex III), that specific application becomes a High-Risk system requiring full compliance.
What are the penalties for non-compliance?
Penalties are severe. Violating the prohibited practices (Level 1) can lead to fines of up to €35 million or 7% of global annual turnover, whichever is higher. Non-compliance with High-Risk obligations can result in fines of up to €15 million or 3% of turnover.
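Because both ceilings are “whichever is higher”, the real exposure for a large company scales with turnover. A quick sketch of the calculation:

```python
def max_fine(global_turnover_eur: float, tier: str) -> float:
    """AI Act fine ceiling: the higher of a fixed amount or a
    percentage of global annual turnover."""
    caps = {
        "prohibited": (35_000_000, 0.07),  # Article 5 violations
        "high_risk":  (15_000_000, 0.03),  # High-Risk obligations
    }
    fixed, pct = caps[tier]
    return max(fixed, pct * global_turnover_eur)

# A company with EUR 2 billion turnover: 7% = EUR 140M, above the EUR 35M floor
print(f"{max_fine(2e9, 'prohibited'):,.0f}")  # 140,000,000
```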
Are there harmonized standards for the AI Act yet?
The European standards organizations (CEN/CENELEC) are currently developing harmonized standards. Until those are published, the de facto international standard for managing AI risk is ISO/IEC 42001 (AI Management System), which provides an excellent framework for preparing for the AI Act.