Cyber Risk Advisory
Build AI Governance on Strong Privacy Foundations


Artificial intelligence (AI) tools are increasingly embedded across enterprise environments, from automation in vendor platforms to customer-facing applications. As a result, enterprise risk is changing faster than many governance programs can adapt. Rather than starting AI governance from scratch, a more pragmatic approach is to scale what already works: your privacy program.
Emerging frameworks like the NIST AI Risk Management Framework and ISO/IEC 42001 provide guidance, but for many organizations, they may require levels of investment and training out of scale with actual deployment needs. For organizations that deploy AI tools rather than develop their own, this mismatch can feel particularly acute.
A strong privacy program can cover the gap. While AI systems introduce novel risks, many of those risks are similar to those already identified through privacy assessments, especially those grounded in frameworks like the NIST Privacy Framework or ISO/IEC 27701. It makes sense, then, that responsibility for AI governance often falls to the same leaders who oversee privacy programs. Best practices developed to mitigate privacy risks can often adapt to provide insight into AI systems without a ground-up re-design.
Why privacy principles make sense for understanding AI risk
Privacy and AI risks both emerge from the use of sensitive data, whether personal or proprietary. Mitigating them therefore rests on common ground: responsible use of data, analysis of potential harms, and appropriate organizational controls that protect the public’s trust.
In this context, privacy principles and AI governance values also reinforce each other. The table below identifies where they share a focus:
| Privacy Principle | AI Governance Principle | Shared Focus |
| --- | --- | --- |
| Transparency and Notice | Transparency and Explainability | Informing individuals about systems and decisions |
| Purpose Limitation | Proportionality and Contextual Use | Aligning data use with intended and disclosed purpose |
| Data Minimization | Data Quality and Relevance | Using only what is needed for legitimate outcomes |
| Individual Participation and Rights | Human-Centricity and Contestability | Enabling individuals to understand and challenge outcomes |
| Accountability | Governance and Risk Management | Documenting oversight, roles, and controls |
| Security Safeguards | Robustness and Safety | Ensuring integrity, reliability, and resilience |
Enhancing privacy workflows to address AI risks
Because the facets of governance align so strongly between privacy and AI, mapping the relevant components of the privacy program points to corresponding enhancements for governing AI workflows. Implementation details will differ. While privacy focuses on individual rights and data protection, AI governance extends into system behavior, fairness across groups, and broader societal impacts. Existing tools, templates, and training must evolve to address these distinctions effectively.
For example, courts have made clear that liability for AI-enabled discrimination rests with the organization using the tool, not just the developer. Vendor guarantees are not enough; organizations must be able to demonstrate that their processes are fair, regardless of the tools they use.
Privacy programs have long bridged compliance obligations and operational execution. Well-designed privacy programs do more than document policies. They translate regulatory expectations into controls, assessments, and internal accountability. They enable risk-informed decision-making.
When it comes to AI, these components provide an adaptable framework for extending existing governance. The table below illustrates how privacy-program components can grow to meet AI-governance requirements without rewriting the book:
| Privacy Program Component | AI Governance Enhancement | Insight for Adaptation |
| --- | --- | --- |
| Privacy notice | AI transparency disclosure | Notify users how you will be using their data and whether it will be used to train any models |
| Contractual privacy commitments for vendors | AI contract terms | Ensure all contracts clearly define allowable and prohibited AI activity, including model training |
| Privacy impact assessment | AI impact assessment | Expand the risks explored and mitigated to include automated decision-making, bias, explainability, and the potential for unintended consequences |
| Data inventory or records of processing activities (ROPA) | AI inventory | Document AI use cases, tools, and data provenance (a minimal record sketch follows this table) |
| Documented privacy roles and responsibilities | Documented AI roles and responsibilities | Define decision-making authority, oversight roles, and review processes for AI systems |
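To make the inventory enhancement concrete, the sketch below extends a ROPA-style record with AI-specific fields. It is a minimal illustration, not a prescribed schema; the field names (`tool`, `model_provider`, `training_data_use`, and so on) are assumptions about what a given program might choose to track.

```python
from dataclasses import dataclass, field
from enum import Enum


class TrainingDataUse(Enum):
    """Whether, per contract, organizational data may train vendor models."""
    PROHIBITED = "prohibited"
    PERMITTED_AGGREGATE = "permitted_aggregate_only"
    PERMITTED = "permitted"


@dataclass
class AIInventoryRecord:
    """A ROPA-style record extended with AI-specific fields (illustrative)."""
    # Familiar ROPA fields
    use_case: str                      # e.g., "resume screening"
    owner: str                         # accountable business owner
    data_categories: list[str] = field(default_factory=list)
    # AI-specific extensions
    tool: str = ""                     # vendor or internal system name
    model_provider: str = ""           # who develops or hosts the model
    data_provenance: str = ""          # where training/input data originates
    automated_decisions: bool = False  # influences consequential outcomes?
    training_data_use: TrainingDataUse = TrainingDataUse.PROHIBITED


# Example entry for a hypothetical vendor screening tool
record = AIInventoryRecord(
    use_case="resume screening",
    owner="HR Operations",
    data_categories=["applicant PII", "employment history"],
    tool="Acme ScreenAI",
    model_provider="Acme Corp",
    data_provenance="vendor-trained; applicant data not retained",
    automated_decisions=True,
)
```

Because each record is structured data, the same inventory can also feed the risk-based intake discussed in the next section.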
Proportional governance for varied risk
Privacy programs normalized the idea that governance should be proportional to risk. Not all data uses require the same level of review. The same is true for AI, both with regard to tool selection and scalable oversight.
Low-impact tools, such as AI-based email prioritization or internal search optimization, may not require extensive oversight. Higher-impact systems, such as those influencing access to employment or financial services, warrant deeper review and more rigorous controls.
A mature privacy program already includes mechanisms for scaling oversight to varying levels of risk. These practices can be adapted to AI use cases as well, through risk-based intake, tiered assessments, or differentiated contractual requirements, as sketched below.
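As an illustration, risk-based intake can start as a simple triage rule that routes each use case to a review tier. The function below is a hypothetical sketch; the attributes and tier definitions are assumptions a real program would tune to its own risk appetite.

```python
def review_tier(automated_decisions: bool,
                affects_access_to_services: bool,
                uses_personal_data: bool) -> str:
    """Route an AI use case to a review tier (illustrative criteria only)."""
    # Consequential, automated decisions get the deepest review.
    if automated_decisions and affects_access_to_services:
        return "tier 1: full AI impact assessment and legal review"
    # Automation or personal data alone still warrants a standard assessment.
    if automated_decisions or uses_personal_data:
        return "tier 2: standard assessment with privacy sign-off"
    # Everything else gets lightweight, self-attested review.
    return "tier 3: self-attestation and inventory entry only"


# A hiring tool that screens applicants lands in the top tier,
print(review_tier(True, True, True))
# while internal search optimization gets lightweight treatment.
print(review_tier(False, False, False))
```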
* * *
Evolving a privacy program will not, of course, address every AI risk. But as shown above, it offers a pragmatic, defensible, and economically sound starting point for organizations seeking to govern AI responsibly.