
Coalfire Partners with Google Cloud to Assess AI Governance and Security Risks against NIST AI RMF and ISO/IEC 42001


Mandy Pote

Managing Principal, Strategy, Privacy, Risk


Andrew Shurbutt

Principal, Global Assurance, Coalfire


Additional authors: Al Mahdi Mifdal, Sr. Director, Coalfire Certification, and Michael Perleoni, Senior Manager, Coalfire Certification

As the use of AI systems continues to grow, the need for AI governance has caught the attention of many organizations' executive leadership teams, their customer base, and even the federal government.

In October 2023, President Biden issued an Executive Order (EO) establishing new standards for AI safety, security, and data privacy. The EO tasked NIST with establishing rigorous AI development standards and requires developers of AI systems to share test results and other critical information with the United States (US) government.

In addition to the standards and expectations of the public sector, private sector organizations are likewise expected to take security measures. Organizations leveraging AI systems for daily operations now expect additional safeguards to ensure their personal data is not misused and that they are not exposed to additional security threats. As a developer of AI products (e.g., Vertex AI), Google Cloud has recognized this need and wants to ensure its programs are designed to support both regulatory and customer obligations.

Why AI Risks Are Important

Distinct and emerging risks arise from the inherent characteristics of AI models, the datasets employed in their training, and the resulting data outputs. These risks encompass familiar concerns, such as data breaches, misconfigurations, and access control failures, alongside unique challenges like data bias or unfairness and data overcollection.

As the landscape of AI risks and threats evolves, organizations must refine their risk mitigation strategies to combat these challenges. AI risk management must be incorporated into the organization's overarching risk management framework. In this blog post, we'll delve into two AI risk management frameworks and examine how a company like Google has implemented AI risk management techniques throughout the AI system lifecycle.

Emerging AI Governance Frameworks

Two globally recognized standards organizations – NIST and ISO – are at the forefront of developing and influencing AI governance frameworks. To independently validate Google Cloud's AI security and privacy commitments, Coalfire leveraged their leading AI frameworks – the NIST AI RMF and ISO 42001 – both designed to assess and manage risks related to the development and use of AI.

The framework landscape is poised for further significant change with the introduction of HITRUST v11.3, the Cloud Security Alliance's (CSA) framework, and the EU AI Act. Coalfire is actively monitoring the impact of new regulations and frameworks to adapt best practice standards for compliance and ongoing governance.

NIST AI Risk Management Framework

The NIST AI RMF is a comprehensive guide designed for organizations seeking to effectively manage internal and external risks associated with AI technologies. Whether an organization develops AI, uses AI, or both, the framework enables businesses to safeguard operations and maintain a competitive edge in the market.

The NIST AI RMF is divided into four core functions:

  • Govern: Cultivate a culture of AI risk management through policies and oversight
  • Map: Establish context and identify risks across the AI lifecycle
  • Measure: Assess, analyze, and track identified AI risks
  • Manage: Prioritize and act on AI risks based on their projected impact
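
To make these functions concrete, below is a minimal Python sketch of how an organization might seed an internal gap-assessment checklist keyed to the four functions; the objective text, status values, and structure are illustrative assumptions, not part of the RMF itself.

    # Minimal sketch: a gap-assessment checklist keyed to the four
    # NIST AI RMF functions. Objective and status values here are
    # illustrative placeholders, not the official RMF taxonomy.
    AI_RMF_FUNCTIONS = {
        "Govern": "Policies, accountability, and oversight for AI risk",
        "Map": "Establish context and identify risks across the AI lifecycle",
        "Measure": "Assess, analyze, and track identified AI risks",
        "Manage": "Prioritize and act on AI risks and their impacts",
    }

    def new_checklist() -> dict[str, list[dict]]:
        """Return an empty checklist bucket for each RMF function."""
        return {fn: [] for fn in AI_RMF_FUNCTIONS}

    checklist = new_checklist()
    checklist["Govern"].append({
        "objective": "AI governance roles and responsibilities are documented",
        "status": "not_assessed",  # not_assessed | satisfied | gap
    })

    for function, items in checklist.items():
        print(f"{function}: {AI_RMF_FUNCTIONS[function]} ({len(items)} items)")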
     

ISO 42001 Artificial Intelligence Management System

The ISO 42001 standard defines a standardized approach for developing, implementing, and managing an Artificial Intelligence Management System (AIMS). It provides structured direction for establishing objectives, processes, and controls for AI and machine learning (ML) in accordance with the management system objectives.

The intent of the AIMS implementation is to efficiently and consistently:

  • Facilitate communication, commitment, and engagement with relevant stakeholders.
  • Implement and integrate processes and controls for responsible AI.
  • Measure how effectively the AIMS is balancing AI governance with innovation.
  • Continuously improve AI risk management by implementing appropriate controls.
     

The harmonized structure of ISO's management system standards (identical clause numbers, clause titles, and text) is present in ISO 42001. This consistency helps an organization integrate ISO 42001 into its existing management systems. In ISO 42001 Annex D, ISO briefly describes how ISO 42001 can be integrated with other management system standards, including ISO/IEC 27001, ISO/IEC 27701, and ISO 9001.
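
As a rough illustration of that integration point, the Python sketch below diffs a hypothetical existing ISO 27001 ISMS against the harmonized clause structure to flag which processes can be reused and which need AI-specific additions; the implementation inventory and extension list are assumptions for illustration only.

    # Minimal sketch: because ISO management system standards share the
    # harmonized structure (Clauses 4-10), an organization can diff its
    # existing ISO 27001 ISMS against ISO 42001 to plan the AIMS build.
    HARMONIZED_CLAUSES = {
        4: "Context of the organization",
        5: "Leadership",
        6: "Planning",
        7: "Support",
        8: "Operation",
        9: "Performance evaluation",
        10: "Improvement",
    }

    # Hypothetical inventory: clause-level processes the existing ISMS covers.
    isms_implemented = {n: True for n in HARMONIZED_CLAUSES}

    # Hypothetical AI-specific extensions still needed under a clause,
    # e.g., AI risk and impact assessments under Clause 6 (Planning).
    aims_extensions = {6: ["AI risk assessment", "AI impact (harms) assessment"]}

    for num, title in HARMONIZED_CLAUSES.items():
        base = "reuse ISMS process" if isms_implemented.get(num) else "build new"
        extras = aims_extensions.get(num, [])
        suffix = f"; add: {', '.join(extras)}" if extras else ""
        print(f"Clause {num} ({title}): {base}{suffix}")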

Coalfire’s Assessment of the Google Cloud Vertex AI Platform

In late 2023, Google worked with Coalfire to assess Google Cloud's Vertex AI platform against the NIST AI RMF and ISO/IEC 42001. The sections below describe Coalfire's methodologies and the high-level results of the assessment, along with key takeaways for any organization developing an AI governance program.

Assessment Methodologies

  1. The Coalfire methodology formally assesses the functions prescribed in the NIST AI RMF, namely Govern, Map, Measure, and Manage. Each function was broken down into categories and subcategories to derive seventy-two (72) assessment objectives. Each assessment objective was then analyzed in terms of testing, evaluation, verification, and validation (TEVV) throughout the AI lifecycle, as described in NIST AI 100-1; a sketch of this derivation step follows the list below.
  2. Coalfire Certification ISO 42001 readiness assessments mimic the process of an ISO 42001 certification audit. The assessment consists of a series of interviews during which implementation of the ISO 42001 requirements (Clauses 4-10) and Annex A controls are evaluated. The gaps identified during the assessment are presented as major and minor nonconformities, which can help prioritize remediation efforts before an ISO 42001 certification audit.  
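
To illustrate the derivation step in methodology 1, the Python sketch below expands a function/category/subcategory taxonomy into discrete assessment objectives, each tracked across the four TEVV dimensions; the subcategory identifiers shown are a hypothetical subset and do not reproduce Coalfire's actual assessment workbook.

    # Minimal sketch: expand function -> category -> subcategory into
    # discrete assessment objectives, each evaluated across TEVV.
    # Identifiers below are illustrative, not Coalfire's worksheet.
    TEVV = ("testing", "evaluation", "verification", "validation")

    # Hypothetical subset of the RMF taxonomy: (function, category, subcategory)
    subcategories = [
        ("Govern", 1, 1),
        ("Govern", 3, 2),
        ("Measure", 1, 1),
    ]

    objectives = [
        {
            "id": f"{function.upper()}-{cat}.{sub}",
            "tevv": {dim: "not_assessed" for dim in TEVV},
        }
        for (function, cat, sub) in subcategories
    ]

    print(len(objectives), "assessment objectives derived")
    print(objectives[0]["id"], objectives[0]["tevv"])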
     

Results and 3 Key Takeaways

  1. An existing information security management system (ISMS) provides a leg up on the AIMS build-out.

    If the organization is already ISO 27001 and ISO 9001 certified, certain management system processes may already be implemented, so the AIMS does not have to be created from the ground up. In the case of Google Cloud, its AI risk assessment already included well-developed risk scoring criteria, an assessment cadence, and a risk treatment process, which were updated to include AI-specific components such as an AI Harms Impact Analysis. For ISO 42001 Clause 6.2 (AI objectives and planning to achieve them), Google employs the same Objectives and Key Results process used to determine its information security objectives for ISO 27001.
     
  2. AI Roles and Responsibilities require a unique and complex skill set.

    One of the most important components of an AI risk management function and the development of an AIMS is the identification and assignment of responsible and accountable parties. Per NIST AI RMF Govern 3.2: "Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization."

    What that looks like differs for each organization, depending on the primary expertise of its existing roster of talent. Additionally, identifying a single resource for the role of AI governance may prove difficult given the nature and complexity of the business model; a cross-functional management team (e.g., security, compliance, risk, legal) can instead draw upon expertise in each area to address emerging risks. We reviewed Google Cloud's approach, which draws a deep bench of AI, security, privacy, compliance, risk, and ethics talent into its AIMS governance model.
     
  3. Metrics are essential for measuring unique AI risks.

    As stated above, the development and use of AI systems create unique risks, which in turn require unique risk identification and measurement techniques. Having specific targets, thresholds, and metrics against which models, systems, and outputs are measured reduces ambiguity when determining risk mitigation or nonconformity.

    Consider NIST AI RMF Measure 1: "Appropriate methods and metrics are identified and applied." Metrics become important when AI systems are evaluated for trustworthy characteristics. Establishing metrics and benchmarks ensures that the organization can determine whether residual risks exceed its risk tolerance and, if so, take action to mitigate the excessive risk; see the sketch at the end of this takeaway.

    We noted that Google Cloud has applied this concept in its monitoring of potentially non-conforming outputs to mitigate inaccurate or unfair results. The analysis combines automated evaluation, human evaluation, and red teaming to continuously identify potential impacts on output fairness.
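
    As a minimal illustration of such a threshold check, the Python sketch below compares observed evaluation metrics against assumed organizational tolerance values and flags residual risks that breach them; the metric names and threshold values are illustrative assumptions, not Google Cloud's or NIST's.

        # Minimal sketch of threshold-based risk measurement (NIST AI RMF
        # Measure 1): compare observed metrics against risk-tolerance
        # thresholds. Metric names and values are illustrative assumptions.
        THRESHOLDS = {
            "accuracy_min": 0.95,        # outputs below this are non-conforming
            "fairness_gap_max": 0.02,    # max allowed metric gap across groups
            "toxicity_rate_max": 0.001,  # max allowed rate of harmful outputs
        }

        def exceeds_tolerance(observed: dict[str, float]) -> list[str]:
            """Return the metrics whose observed values breach tolerance."""
            breaches = []
            if observed["accuracy"] < THRESHOLDS["accuracy_min"]:
                breaches.append("accuracy")
            if observed["fairness_gap"] > THRESHOLDS["fairness_gap_max"]:
                breaches.append("fairness_gap")
            if observed["toxicity_rate"] > THRESHOLDS["toxicity_rate_max"]:
                breaches.append("toxicity_rate")
            return breaches

        # Example run combining automated evaluation results.
        observed = {"accuracy": 0.97, "fairness_gap": 0.03, "toxicity_rate": 0.0004}
        flagged = exceeds_tolerance(observed)
        if flagged:
            print("Residual risk exceeds tolerance for:", flagged)  # escalate for review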
     

Conclusion

Achieving AI compliance does not always necessitate a comprehensive restructuring of existing risk management processes. Instead, AI requires a novel perspective on the identification, measurement, and management of risk. Along with security and privacy risks, new risk areas like safety, fairness, and transparency are crucial to assess for a comprehensive view of the potential harm an AI system may pose to individuals, groups, communities, organizations, and society as a whole.

Tom Galizia, President of Coalfire, states: “Known and anticipated AI benefits coupled with unprecedented adoption rates require business leaders to efficiently assess and effectively deploy dynamic governance and risk management programs. Google’s leadership in the field of AI in a proactive partnership with Coalfire’s AI regulatory domain expertise provides leading insights for executives on how to do both.”

Coalfire provides AI Advisory services, including AI Risk Assessments, AI Governance Program development, AI Compliance Readiness, AI Application Threat Modeling, and more. Connect with Coalfire representatives to start your AI risk and governance journey today.