
Beyond the EU AI Act: How ISO/IEC 42001:2023 prepares organizations to comply with any AI regulation

Andrew Shurbutt

Principal, Global Assurance, Coalfire

November 8, 2024

The EU AI Act has captured the attention of organizations in ways similar to the EU’s GDPR. Just as the GDPR was followed by a wave of new and updated privacy regulations, the EU AI Act is expected to be the first of many AI regulations to come. Is your organization ready? Does your organization have a framework in place to identify and address AI regulatory compliance risks?

The ISO/IEC 42001:2023 standard guides organizations in designing, implementing, maintaining, and continuously improving an artificial intelligence management system (AIMS) built on responsible AI governance practices. One aspect of responsible AI governance is determining which AI regulation(s) apply to your organization. By implementing the ISO/IEC 42001 clauses and controls, an organization establishes a framework for determining which AI regulations apply, how they apply to the organization, and how to implement context-specific AIMS processes that keep the organization in compliance. The framework also includes practices for continuously monitoring for new AI regulations and for updates to existing regulations that apply to the organization.

An organization’s external context is the starting point for determining AI regulatory requirements. The external context considerations described in Clause 4.1 take into account applicable legal requirements, including prohibited uses of AI and the enforcement of legal requirements in the development and use of AI systems. In determining the organization’s AI system role, Clause 4.1 notes that roles can be informed by legal requirements specific to the AI system. Under Clause 4.2, applicable AI regulatory authorities should be included in the organization’s list of interested parties because they are relevant to the AIMS and have requirements that the AIMS addresses. The AI regulatory context in turn informs the determination of AI risks and opportunities described in Clause 6.1.1. Organizations should also consider the dynamic state of the AI regulatory environment and the risks associated with new and updated AI legal requirements.

Following the Annex B implementation guidance for the Annex A controls can also help an organization comply with AI regulations:

- B.2.2: The AI policy should be informed by legal requirements.
- B.2.4: Reviews of the AI policy should assess whether changes in legal conditions create opportunities to improve the organization’s policies and approach to managing AI systems.
- B.3.2: AIMS roles and responsibilities should be established so that legal requirements are consistently fulfilled.
- B.5.2: The AI system impact assessment process should consider the legal position of individuals and universal human rights.
- B.6.2.6: As organizations operate and monitor their AI systems, monitoring should confirm compliance with applicable legal requirements.
- B.8.5: Organizations should consider whether they have obligations to report information about their AI systems to interested parties, such as AI regulators.
- B.9.2: When defining and documenting processes for the responsible use of AI systems, organizations should weigh applicable legal requirements in deciding whether to use a particular AI system.
- B.9.4: Even when an organization has deployed an AI system according to its associated instructions, if the deployment raises concerns about the organization’s legal requirements, the organization should communicate those concerns to relevant personnel and to third-party suppliers of the AI system.

At a time when an increasing number of AI regulations are being proposed and signed into law, an organization must continually determine which new regulations apply to its AIMS. ISO/IEC 42001 certification provides independent verification that the organization has established a framework of processes to ensure its AI systems meet applicable regulatory requirements. An organization’s ability to prove that it can determine and implement existing AI regulatory requirements, and monitor for new ones, demonstrates responsible AI governance.

If you would like to learn more about leveraging ISO/IEC 42001 to establish responsible AI governance practices, or if you are interested in pursuing ISO/IEC 42001 certification, contact Coalfire Certification.