Cyber Resilience Act Compliance Hub

For most of the past two decades, a connected product entering the EU market needed to satisfy two fundamental regulatory questions: does it shock or burn, and does it interfere with other devices? The Cyber Resilience Act changes that frame permanently. Regulation (EU) 2024/2847, which entered into force in December 2024, adds a third mandatory dimension, cybersecurity, to every product with digital elements sold in the EU. This includes hardware with embedded software, IoT devices, industrial sensors, consumer electronics, and networking equipment. If a product can connect to a network or to another device, the CRA almost certainly applies to it.

The obligations are staggered across three dates. Vulnerability reporting and incident notification requirements apply from 11 September 2026, less than five months from now. The full body of technical requirements, conformity assessment obligations, and market surveillance provisions applies from 11 December 2027. Manufacturers who treat December 2027 as their start date are already behind: the technical documentation, quality management processes, and software bill of materials that the CRA requires take months to build correctly, and the September 2026 vulnerability reporting obligation demands infrastructure that cannot be assembled in weeks.
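The staggered dates above can be kept machine-checkable in project planning. Below is a minimal sketch in Python: the dates are those stated in the text (with the December 2024 entry into force), while the milestone labels and the lead-time helper are illustrative, not terms from the regulation.

```python
from datetime import date

# Key milestones for Regulation (EU) 2024/2847, as described above.
# Label names are illustrative, not the regulation's own terminology.
CRA_MILESTONES = {
    "entry_into_force": date(2024, 12, 10),        # regulation in force
    "vulnerability_reporting": date(2026, 9, 11),  # reporting obligations apply
    "full_application": date(2027, 12, 11),        # full technical requirements
}

def months_until(milestone: str, today: date) -> int:
    """Calendar-month difference to a milestone (ignores day of month;
    negative once the milestone has passed)."""
    target = CRA_MILESTONES[milestone]
    return (target.year - today.year) * 12 + (target.month - today.month)

# Example: lead time from a hypothetical planning date in spring 2026
start = date(2026, 4, 15)
print(months_until("vulnerability_reporting", start))  # 5
print(months_until("full_application", start))         # 20
```

Encoding the dates once and deriving lead times from them avoids the "December 2027 is the start date" trap the paragraph warns about: the earlier reporting deadline surfaces automatically in any schedule built from the same table.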

This handbook maps the CRA’s obligations in the order engineers and compliance managers need to encounter them, starting with what the regulation actually requires products to do, moving through the technical implementation, and ending with the reporting and documentation obligations that begin before full application.

Understanding the AI Act: Risk Classification

The AI Act’s classification mechanism is its four-tier risk pyramid. Every AI system in scope falls into one of four categories: unacceptable risk, high risk, limited risk, or minimal risk, and the category determines the entire compliance burden. Unacceptable-risk systems are banned outright. High-risk systems face the full weight of the regulation. Limited-risk systems carry transparency obligations only. Minimal-risk systems face no new regulatory requirements beyond existing law.
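The tier test proceeds in order of severity: prohibited practices first, then the Annex III high-risk areas, then transparency triggers, with minimal risk as the default. A minimal sketch in Python follows; the three category sets are illustrative placeholders, not the Act's actual lists (the real prohibitions sit in Article 5 and the high-risk areas in Annex III).

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "full conformity assessment, QMS, technical file"
    LIMITED = "transparency obligations only"
    MINIMAL = "no new obligations beyond existing law"

# Illustrative placeholders only; consult Article 5 and Annex III for
# the regulation's actual prohibited practices and high-risk areas.
PROHIBITED_PRACTICES = {"social_scoring"}
ANNEX_III_AREAS = {"biometric_identification", "critical_infrastructure"}
TRANSPARENCY_TRIGGERS = {"chatbot", "deepfake_generation"}

def classify(use_case: str) -> RiskTier:
    """Apply the four tiers in order of severity; minimal risk is the default."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if use_case in ANNEX_III_AREAS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_TRIGGERS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

The ordering matters: a system that matches more than one set takes the most severe tier it hits, which mirrors how the pyramid is applied in practice.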

The EU AI Act risk pyramid maps this classification framework in full: the four tiers, what falls into each, how Annex III defines the high-risk application areas, and the classification logic for products that may sit near a boundary. For a manufacturer whose product incorporates AI for a functional purpose (object detection, predictive maintenance, user behavior analysis), correctly placing that system in the pyramid is the first and most consequential compliance decision.

EU AI Act certification: an engineer’s guide translates the regulatory framework into the engineering and compliance process. It covers what certification under the AI Act actually involves for a high-risk system, how the conformity assessment routes differ from CE marking processes, and what the technical file for an AI system must contain. This is the practical companion to the risk pyramid, the article to read once classification is confirmed.

Regulatory Framework and Standards

The AI Act is a framework regulation: it sets obligations but delegates the technical detail to harmonised standards that have not yet been fully developed. Understanding the current state of those standards, and how to demonstrate compliance in their absence, is one of the practical challenges manufacturers face now.

Navigating AI regulation: essential standards explained covers the standards landscape as it currently stands — the ISO/IEC standards that the AI Act references or is expected to reference, the work underway at CEN-CENELEC, and the practical approaches manufacturers are using to build technical documentation while the harmonized standards are still being finalized. For engineers who are used to consulting a harmonised standard list and finding clear answers, the current state of AI Act standardization requires a different approach, and this article explains what that looks like.

prEN 50742: the new machine compliance standard you need to know before 2027 covers the forthcoming harmonised standard under the EU Machinery Regulation 2023/1230 that explicitly connects functional safety and cybersecurity for connected machines. For manufacturers of machinery where AI components influence a safety function — a category that falls directly into the high-risk tier under Annex III of the AI Act — prEN 50742 defines what cybersecurity requirements must be met to ensure that digital tampering cannot compromise physical safety. The January 2027 application date of the Machinery Regulation means this standard is an immediate practical concern, not a future one, and it illustrates how the AI Act’s risk-based framework intersects with the specific technical requirements of sector standards.

Governance, Strategy and Organisational Obligations

The AI Act introduces a category of obligation that physical safety regulations rarely require: organisational governance. High-risk AI systems must be developed and maintained under a quality management system, must have post-market monitoring in place, and must be registered in the EU database before deployment. These are not documentation exercises — they require internal processes, assigned responsibilities, and ongoing commitment beyond the initial conformity assessment.
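The three organisational obligations named above are gating conditions rather than paperwork, so some teams track them as a deployment checklist. A hypothetical sketch follows; the field names are illustrative labels, not terminology from the Act.

```python
from dataclasses import dataclass

# Hypothetical checklist mirroring the three organisational obligations
# described above; field names are illustrative, not from the Act.
@dataclass
class HighRiskGovernanceStatus:
    qms_in_place: bool = False            # quality management system operating
    post_market_monitoring: bool = False  # monitoring plan in place
    eu_database_registered: bool = False  # registered before deployment

    def ready_to_deploy(self) -> bool:
        """Deployment is gated on all three obligations, not any subset."""
        return all((self.qms_in_place,
                    self.post_market_monitoring,
                    self.eu_database_registered))
```

The point of the `all(...)` gate is the one the paragraph makes: these are parallel, ongoing commitments, and satisfying two of the three is not a partial pass.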

What is AI governance and why should you care introduces the governance dimension — what it means at the organisational level, how it maps to the AI Act’s quality management system requirements, and why companies that already have ISO 9001 or similar frameworks in place have a structural advantage in meeting those requirements. It is the bridge between the technical compliance articles and the organisational implications that compliance managers need to address in parallel.

EU AI Act vs Cyber Resilience Act maps the interaction directly — where the regulations overlap, where they conflict, and how to structure a compliance programme that addresses both without duplicating effort. It is the natural starting point for any manufacturer whose product sits at the intersection of the two regulations, and it links to the Cyber Resilience Act compliance handbook for the full CRA treatment.

This resource is part of Compliance Handbooks, Regulatory Decoded’s in-depth technical series for product engineers.