Cyber Risk Advisory

AI Risk and Governance Obligations: Informational Guide

Mandy Pote headshot jpg

Mandy Pote

Managing Principal, Coalfire

September 5, 2024
Screenshot 2024 09 18 at 6 40 03 PM

As artificial intelligence (AI) continues to evolve, organizations must navigate a complex landscape of risks and governance requirements. Coalfire closely follows leading regulations and frameworks that organizations can use to build a comprehensive AI Risk Governance Program.

This guide provides a high-level overview of the applicability and use of key frameworks and regulations - the EU AI Act, NIST AI Risk Management Framework (RMF), ISO 27001, and MITRE ATLAS - and how they can be implemented to inform AI risk management and governance.

EU AI Act

Overview of EU AI Act

The EU AI Act is the first comprehensive legal framework for AI, aiming to ensure AI systems are safe, respect fundamental rights, and align with ethical principles. The act classifies AI systems based on risk levels: unacceptable, high, limited, and minimal.

Applicability

The EU AI Act applies to organizations or individuals that develop AI systems in the EU or entities that utilize AI systems within the EU. The Act has extraterritorial reach, meaning it applies to providers and users outside the EU if their AI systems affect people within the EU.

Implementation Considerations

  1. Risk Classification: Identify the risk category of your AI systems.
    • High-Risk AI Systems: Developers and deployers must comply with stringent requirements, including risk management, data governance, and transparency.
    • General Purpose AI (GPAI): Providers must offer technical documentation and comply with copyright laws.
    • Limited Risk: These AI systems have lighter transparency obligations. Developers and deployers must ensure that users are aware they are interacting with AI.
    • Minimal Risk: These systems pose minimal or no risk and are largely unregulated.
  2. Compliance Requirements: Ensure adherence to specific obligations for high risk AI.
  3. Transparency: Maintain clear documentation and user awareness.

NIST AI Risk Management Framework (RMF)

Overview of the NIST AI RMF

The NIST AI RMF provides a voluntary, structured approach to managing AI risks, focusing on trustworthiness and ethical considerations.

Applicability

Useful for any organization designing, developing, deploying, or using AI systems. This framework can be followed by organizations looking to meet AI regulatory risk management requirements, including the EU AI Act.

Implementation Considerations

  1. Risk Management: Implement the core functions: Govern, Map, Measure, and Manage.
    • Govern Function: Implement policies and procedures to oversee AI risk management activities.
    • Map Function: Identify and document the context in which AI systems operate, including stakeholders, intended uses, and potential impacts.
    • Measure Function: Develop and use metrics to assess the performance, impact, and risk of AI systems.
    • Manage Function: Implement strategies to mitigate identified risks, including technical, organizational, and procedural controls.
  2. Trustworthiness: Ensure AI systems are valid, reliable, safe, secure, and fair.
  3. Continuous Improvement: Regularly update risk management practices.

ISO 42001

Overview of ISO 42001

ISO 42001 is an international standard that defines the requirements for an Artificial Intelligence Management System (AIMS). The standard provides guidance towards responsible development, deployment, and use of AI systems by emphasizing ethical considerations, transparency, and continuous improvement. Organizations who apply the standard can obtain a certification through a certified auditing body.

Applicability

ISO 42001 is a voluntary standard that could be applicable to organizations of all sizes and sectors that develop, deploy, or use AI systems. This certification may be most appealing to organizations that have previously aligned their Information Security Management System (ISMS) to the ISO 27001 standard.

Implementation Considerations

  1. Ethical AI Development:
    • Bias Mitigation: Implement processes to identify and reduce biases in AI systems.
    • Transparency: Ensure that AI decision-making processes are transparent and clearly explainable to stakeholders.
    • Accountability: Establish clear governance structures to oversee AI risk management activities.
  2. Compliance with Privacy Laws: Engage the Legal & Privacy teams to implement robust data protection measures to comply with applicable data protection laws and regulations (e.g., EU AI Act, GDPR).
  3. Continuous Improvement: Establish processes to regularly monitor and manage AI system performance against defined metrics and feedback.

MITRE ATLAS

Overview of MITRE ATLAS

MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a knowledge base of adversary tactics and techniques for AI systems, modeled after the MITRE ATT&CK framework.

Applicability

Security analysts, data scientists, or AI developers can use the MITRE ATLAS to stay informed on real-world threats to AI systems. Additionally, internal or external application security teams can leverage the threat intelligence as part of system security testing, red team exercises, threat modeling, and threat assessment activities.

Implementation Considerations

  1. Threat Techniques: The MITRE ATLAS has provided specific threat tactics (e.g., LLM Prompt Injection, LLM Data Leakage) that can be leveraged by Security Analysts to better under potential risks to AI systems.
  2. Mitigation Strategies: MITRE is continuously updating mitigation concepts (e.g., model hardening) to prevent or respond to threats and manage risks.
  3. CALDERA: MITRE provides an open source platform called CALDERA which is designed to automate adversary emulation, assist red-team exercises, leveraging the MITRE ATLAS knowledge base of adversary tactics.

Conclusion

Navigating AI risk and governance requires a comprehensive understanding of various frameworks and regulations. By aligning with the EU AI Act, NIST AI RMF, ISO 27001, and MITRE ATLAS, organizations can enhance their AI systems’ safety, security, and trustworthiness. This guide serves as a foundational tool to help customers meet their AI risk and governance obligations effectively.