AI Governance · ISO Certification · AI Act · December 2025

ISO 42001 Certification and EU AI Act Obligations

ISO 42001 provides a framework for managing AI risk through a documented management system. The EU AI Act imposes specific obligations on organisations that develop or deploy high-risk AI systems: conformity assessment before deployment, system-level transparency requirements and registration. These are product and deployment obligations.

For boards that have pursued ISO 42001 certification as their primary response to the EU AI Act, the gap between the two carries a specific governance implication. The certification audit verifies a management system. The Act will assess specific systems against specific requirements.

What ISO 42001 covers

ISO 42001 is an AI Management System standard, published in December 2023 and structured around the same Annex SL framework as ISO 27001. It covers governance, risk management, impact assessment and the responsible deployment of AI systems. Clause 5 places obligations on top management: AI policy, defined roles and responsibilities, resource allocation and management review.

The certification audit confirms that the management system conformed to the standard at the time of the audit. An organisation that holds ISO 42001 certification has demonstrated that it manages AI risk through a documented and audited system. The EU AI Act asks a different question of each specific AI system the organisation develops or deploys.

Where the EU AI Act goes further

Three areas define where the Act's obligations go beyond what a management system certification addresses.

Conformity assessment

The Act requires AI systems classified as high-risk to undergo conformity assessment before deployment. Classification is determined by the system's intended use against the Act's risk categories. ISO 42001 helps manage AI risk internally but does not determine classification and does not substitute for the conformity process.

System-level transparency

Article 13 requires high-risk AI systems to be designed and developed to be transparent to deployers and users. This is a technical requirement at system level, documented and verifiable. ISO 42001 addresses transparency as a governance principle within the management system.

Fundamental rights impact assessment

Deployers of certain high-risk AI systems must conduct a fundamental rights impact assessment before deployment. ISO 42001 includes impact assessment in its framework. The Act's FRIA has a defined scope and specific requirements that a management system audit does not address.

What the board needs to establish

The starting point is establishing which AI systems the organisation develops or deploys that fall within the Act's high-risk classification. Without that picture, neither the management system nor a compliance programme is calibrated to the right obligations.

ISO 42001 is the right governance foundation. The board's question is which systems carry specific Act obligations alongside it, and whether the organisation has the processes to satisfy them before those systems are deployed.

How this affects your organisation

For boards that have approved AI strategies and pursued ISO 42001 certification, the certificate demonstrates a governance framework for AI risk management. The AI Act will assess whether specific systems meet specific requirements before they are placed on the market or put into service.

Both questions need answers, and they come from different processes. The board is responsible for ensuring both are in place for the AI systems that carry high-risk classification obligations.

If your organisation is working through ISO 42001 certification alongside EU AI Act obligations, an advisory call is a useful starting point.