Cyber Risk Advisory

Deploying Responsible AI in Uncertain Times

Joe Nelson

Senior Consultant, Coalfire

April 2, 2025

In the AI world, political winds have been shifting, more like a hurricane than a gentle breeze.  The growing body of AI compliance requirements across the US adds to the existing EU AI Act (among others).  However, politicians on both sides of the pond have stated that regulatory enforcement will take a more “innovation-friendly” approach in 2025 and beyond.  Most recently, the veto of Virginia’s High-Risk Artificial Intelligence Developer and Deployer Act reflects this shift.

Regardless of the political temperature, AI systems’ risks to humans and organizations persist.  AI systems continue to be deployed in healthcare, financial services, education, and media (among others), inevitably affecting lives and well-being.  Organizational risks like security vulnerabilities, cost, and brand/reputation damage aren’t going anywhere.  Moreover, demonstrating Responsible AI just makes good business sense: Cisco’s 2024 Consumer Privacy Survey revealed that customers who were more aware of privacy protections were more likely to feel their data was protected when using AI systems.

How are companies – both eager to present and adopt AI solutions but wary of these risks and of running afoul of regulatory authorities – supposed to move forward while controlling risk? 

Plant the Tree Today 

It’s said the best time to plant a tree is twenty years ago; the second-best time is now.  Building an AI system, app, platform, etc., is an expensive endeavor.  The cost increases significantly when developers have to go back and build in oversight mechanisms, disgorge noncompliant materials, retrain models, and otherwise retrofit for compliance’s sake.  Feeding trust and compliance into your sapling model today will deliver strong roots for your AI system tomorrow.

Existing AI laws and regulations may not directly affect your model today, but new ones are coming, and growth into foreign markets may bring new compliance obligations.  Having an AI governance program in place from the start will keep your system standing strong when the winds inevitably shift again.  Something is better than nothing.  Plant the tree today.  

Adopt a Prevailing Standard or Framework 

AI governance frameworks and regulations have a lot in common.  Most frameworks assess the system’s risks, potential for bias or discrimination, data quality/governance, and security protections.  Moving forward with one will inevitably put you in a much better place to comply with others down the line.  There are two strong starting places:  

The EU AI Act 

The EU AI Act, like the GDPR, remains the gold standard.  It mitigates risk first by categorizing AI systems as prohibited, high-risk, limited-risk, or minimal-risk.  High-risk systems must satisfy nine requirements; compliance with most of them (such as implementing robust data governance, regularly testing for security and drift, logging inputs/outputs, and maintaining public documentation about the system) would provide any AI system with a solid foundation for compliance with most regulatory schemes.  Limited-risk systems carry transparency requirements, while minimal-risk systems’ requirements are voluntary.

Assessing compliance with the EU AI Act allows organizations to identify gaps and critical areas to improve, either from a generalized framework perspective or to assess specific compliance with the Act, similar to a GDPR assessment.  While no assessor can guarantee legal compliance, an assessment ensures the necessary components are in place or provides a roadmap to get there.

NIST AI RMF  

The NIST AI RMF (AI Risk Management Framework) provides an industry-agnostic framework for organizations of any size, following a pattern similar to other NIST frameworks such as the Privacy Framework and the Cybersecurity Framework (CSF).  The framework encompasses four core functions – Govern, Map, Measure, and Manage – as well as profiles that may be tailored to particular use cases or maturity goals.

The core functions start with Govern, which addresses oversight of the other three functions.  Then, Map assesses contextual risks; Measure ensures risks are assessed, analyzed, and tracked; and Manage applies traditional risk prioritization and management within the AI context.  These functions are broad and may go beyond your AI deployment’s particular use case.  This is where the framework’s “profiles” can help.

“Profiles” provide a method to narrow the framework’s otherwise wide focus, applying use-case or maturity-goal subsets of the core functions’ categories and subcategories.  For example, the recent publication NIST AI 600-1 is a profile of the framework with subcategories specifically selected to assess generative-AI systems.  Profiles also allow for a “Current” or “Foundation” baseline, starting with a manageable initial assessment that can grow and mature over time.

Coalfire: AI Assessment & Compliance Made Simple  

Since 2001, Coalfire has established itself as a market leader by staying ahead of compliance trends, innovative technologies, impending regulations, and new frameworks.  Coalfire has a long-standing track record of delivering AI, data privacy, and cybersecurity expertise to enable organizations across a broad set of industry verticals to achieve their compliance objectives.

Coalfire’s experienced Cyber Risk Advisory team can meet your AI risk-assessment needs, advise on frameworks, and build the roadmap to more risk-aware systems.  Contact Coalfire today to stay ahead of the curve and meet your AI, data privacy, and cybersecurity compliance goals.