What Engineers Should Know Before It Becomes Mandatory
AI Certification: What to Expect
Artificial Intelligence has moved from research labs to production lines, hospitals, and everyday devices. Yet, until now, its regulation has remained a grey zone. That’s about to change.
The EU AI Act (Regulation (EU) 2024/1689) officially brings AI under a structured regulatory framework modeled on product safety law, including classification, testing, documentation, and third-party assessment. In other words, AI certification will soon become a requirement.
This shift doesn’t only affect big tech. It will reshape how engineers, developers, and compliance teams approach the design of systems that use or integrate AI. Whether you work in embedded systems, safety-critical devices, or software design, you’ll need to understand how AI regulation connects with the same principles that drive electrical safety and EMC compliance.
From CE Marking to AI Conformity: The New Model
At first glance, the AI Act borrows heavily from the model used for CE marking of products. The logic is familiar to anyone working with machinery, electronics, or medical devices:
- Define risk classes
- Apply the appropriate conformity assessment
- Maintain technical documentation
- Demonstrate compliance with essential requirements
The regulation defines four AI risk categories:
- Minimal risk: Chatbots, games, or tools with negligible impact. No specific obligations under the Act.
- Limited risk: Systems that need transparency (e.g., AI that interacts with users or generates content).
- High risk: AI used in critical areas such as healthcare, automotive, employment, education, or safety-related control systems.
- Unacceptable risk: AI that manipulates behavior, exploits vulnerabilities, or otherwise violates fundamental rights; these practices are banned outright.
For engineers, this means a shift from “prove your product works safely” to “prove your AI behaves safely.”

High-Risk AI Systems: The New AI Certification Frontier
The high-risk category is where certification becomes mandatory. These systems will need a conformity assessment, often with third-party evaluation, before being placed on the market.
The assessment will require:
- A risk management process covering design, data, and human oversight.
- Data governance documentation, showing that training and validation data are representative and traceable.
- Technical documentation, similar to a safety file.
- A post-market monitoring plan, to track real-world performance and anomalies.
This structure mirrors product safety certification: in electrical safety, you test insulation and creepage distances; in AI, you test dataset integrity, bias, and predictability.
In both cases, documentation and traceability are not optional; they are your evidence of control.
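In practice, data governance and traceability come down to being able to show which data a model was trained and validated on. Below is a minimal Python sketch of one way to record that evidence: file hashes plus provenance notes collected into a versioned manifest. The directory names, file types, and manifest fields are illustrative assumptions, not requirements taken from the Act.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(dataset_dir: str, source: str, intended_use: str) -> dict:
    """Collect per-file hashes plus provenance notes into a single record."""
    files = sorted(Path(dataset_dir).rglob("*.csv"))  # illustrative: a CSV-only dataset
    return {
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "source": source,                # where the data came from
        "intended_use": intended_use,    # training vs. validation, and for which model
        "files": [{"path": str(p), "sha256": sha256_of(p)} for p in files],
    }

if __name__ == "__main__":
    manifest = build_manifest("data/train", source="internal sensor logs",
                              intended_use="training, model v1.2")
    Path("manifests").mkdir(exist_ok=True)
    Path("manifests/train_v1.2.json").write_text(json.dumps(manifest, indent=2))
```

A manifest like this can be committed alongside the model release, so every training run has a record of exactly which data it saw.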
Lessons from Electrical Safety and EMC
The new AI certification path may look overwhelming, but engineers with compliance experience have an advantage.
Think of how we manage electrical safety under IEC 62368-1 or EMC under IEC 61000-6-3: both require understanding the hazard, designing protective measures, and verifying compliance through testing.
AI is following the same philosophy, just in a different domain:
- Hazard: Unintended or biased decision.
- Safeguard: Explainability, human oversight, or performance thresholds.
- Verification: Simulation, test data, or model validation reports.
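To make the safeguard and verification steps concrete, here is a small Python sketch that checks a model’s validation metrics against pre-agreed thresholds and flags the release for human review when any check fails. The metric names and threshold values are assumptions chosen for the example, not figures from the regulation.

```python
# Minimal sketch: verify validation metrics against agreed safety thresholds.
# Metric names and limits below are illustrative assumptions, not regulatory values.
THRESHOLDS = {
    "accuracy": 0.95,               # minimum acceptable accuracy
    "false_negative_rate": 0.02,    # maximum acceptable miss rate
    "demographic_parity_gap": 0.05, # maximum acceptable gap between subgroups
}

def verify(metrics: dict) -> dict:
    """Compare measured metrics to thresholds and build a small validation report."""
    failures = []
    if metrics["accuracy"] < THRESHOLDS["accuracy"]:
        failures.append("accuracy below threshold")
    if metrics["false_negative_rate"] > THRESHOLDS["false_negative_rate"]:
        failures.append("false negative rate above threshold")
    if metrics["demographic_parity_gap"] > THRESHOLDS["demographic_parity_gap"]:
        failures.append("subgroup gap above threshold")
    return {
        "metrics": metrics,
        "failures": failures,
        "release_allowed": not failures,        # block release when any check fails
        "human_review_required": bool(failures),
    }

report = verify({"accuracy": 0.93, "false_negative_rate": 0.01,
                 "demographic_parity_gap": 0.07})
print(report["release_allowed"], report["failures"])
```

The report itself becomes part of the technical documentation, just as a test report does in electrical safety.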
In other words, AI regulation is safety regulation in disguise.
The goal is the same: keep the system predictable and controllable.
Compliance by Design: Start Before It’s Too Late
Waiting for enforcement is a dangerous strategy. The AI Act’s transition period leaves roughly two years before most high-risk obligations become mandatory, but building the required processes, data governance, and documentation will take time.
Here’s how to start preparing now:
- Map your AI use cases: Identify whether they fall under “high-risk” (a starting-point sketch follows this list).
- Build documentation habits: Start versioning datasets and model updates.
- Integrate risk thinking: Treat bias, robustness, and cybersecurity like safety hazards.
- Collaborate early: Compliance, R&D, and data teams should align from the first design phase.
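A simple way to start on the first two habits is a use-case register that screens each system against the broad high-risk areas and pins the dataset and model versions in use. In the Python sketch below, the domain keywords are a rough paraphrase for illustration only; the actual classification always depends on the full text of the Act and legal review.

```python
from dataclasses import dataclass, field

# Rough, illustrative keywords inspired by the Act's high-risk areas (Annex III);
# this is a screening aid, not a legal classification.
HIGH_RISK_HINTS = ["medical", "recruitment", "education", "credit",
                   "critical infrastructure", "safety component"]

@dataclass
class AIUseCase:
    name: str
    description: str
    dataset_version: str   # habit: always pin the dataset version in use
    model_version: str     # habit: always pin the model version in use
    flags: list = field(default_factory=list)

def screen(use_case: AIUseCase) -> AIUseCase:
    """Flag a use case for detailed review if its description touches a high-risk area."""
    text = use_case.description.lower()
    use_case.flags = [hint for hint in HIGH_RISK_HINTS if hint in text]
    return use_case

case = screen(AIUseCase(
    name="CV pre-screening",
    description="Ranks candidates during recruitment",
    dataset_version="cv-2025-03",
    model_version="ranker-1.4",
))
print(case.flags)  # ['recruitment'] -> route to the high-risk conformity track for review
```

Even a lightweight register like this gives compliance, R&D, and data teams a shared starting point for the conversations the Act will require.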
Just like safety compliance, AI conformity requires careful planning.
Useful Tip
If your organization already complies with CE marking or ISO standards, leverage those frameworks.
Many AI Act requirements (traceability, post-market surveillance, quality management) can be met by extending ISO 9001, ISO 27001, or even ISO/IEC 42001 (AI management systems).
AI Certification in Short
The era of AI certification has begun, and it’s bringing engineering discipline to the world of algorithms.
The EU AI Act doesn’t only regulate; it teaches. It pushes us to think about risk, safety, transparency, and accountability as integral parts of technology design.
For engineers, that’s not a threat. It’s an invitation to apply familiar safety logic to a new domain and to build systems that the market, regulators, and users can finally trust.


