AI Impact Assessments are Mandatory: What Are They & When Are They Required?

As AI development, design, and deployment hurtle forward, regulators and assessment frameworks have tried to catch up. Unfortunately, news stories about the harms of AI seem to grow as fast as the models themselves; in just a few years, early reports of racial or political bias in AI have been replaced with stories of AI undermining the court system, progressively worsening hallucination rates, and well-intentioned chatbots encouraging children to kill their parents or themselves. While not every harm posed by AI is this severe, these harms affect human beings and can be extreme. It is critical to assess the potential impacts on human beings before models make it to the marketplace.
Regulators and frameworks often look to existing data-privacy oversight for guidance in the AI realm, including the growing role Privacy Impact Assessments play in both regulations (GDPR, CCPA) and standards (ISO 27701, SOC 2). It’s not surprising, then, that AI regulations and standards similarly require AI Impact Assessments. Already, AI Impact Assessments are required by ISO 42001 (AI Management Systems), by the EU AI Act (Fundamental Rights Impact Assessments (“FRIA”)), and by the CCPA (Automated Decision-Making Technology (“ADMT”) risk assessments).
Therefore, it’s important for organizations to know exactly what these assessments encompass and when they are required. I describe this below, but for the TL;DR folks: AI Impact Assessments provide unbiased evaluations of the risk of harm to human beings from AI systems, though different regulators/frameworks define the requirements differently.
The EU AI Act’s Requirements Are the Most Straightforward
The Act requires deployers of “high risk” AI systems to prepare FRIAs. “High risk” systems are those which pose a “significant risk of harm to the health, safety, or fundamental rights of natural persons,” and many involve personal information. Examples include models that profile humans or otherwise affect decisionmaking in employment, accessing essential services, or accessing credit, among other things.
The Act requires the FRIA to be structured into three sections:
- Descriptive criteria (the model’s intended purposes, the processes in which it will be used, the timeframes and frequency of use, and the individuals/groups it may affect),
- Assessment criteria (specific risks of harm likely to impact affected individuals, including those stemming from instructions for use), and
- Mitigation criteria (internal governance, human oversight, and complaint mechanisms).
The EU AI Office is tasked with preparing a FRIA template, though it is not available as of the date of this blog.
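
Until that template is released, deployers often sketch a working structure of their own. Purely as an illustration (the field names below are my assumptions, not the EU AI Office’s), a minimal FRIA record mirroring the three criteria above might look like this in Python:

```python
# Illustrative only: field names are assumptions, not the official EU AI Office template.
from dataclasses import dataclass


@dataclass
class FRIARecord:
    # Descriptive criteria
    intended_purpose: str
    deployment_processes: list[str]            # processes in which the system will be used
    usage_period_and_frequency: str
    affected_individuals_and_groups: list[str]

    # Assessment criteria
    specific_risks_of_harm: list[str]          # including risks stemming from the instructions for use

    # Mitigation criteria
    human_oversight_measures: list[str]
    internal_governance_measures: list[str]
    complaint_mechanisms: list[str]


# Hypothetical example: a hiring-screening deployment
fria = FRIARecord(
    intended_purpose="Rank inbound job applications for recruiter review",
    deployment_processes=["initial resume screening"],
    usage_period_and_frequency="Continuous; every inbound application",
    affected_individuals_and_groups=["job applicants"],
    specific_risks_of_harm=["disparate impact on protected groups"],
    human_oversight_measures=["recruiter reviews every ranked application"],
    internal_governance_measures=["annual bias audit of the ranking model"],
    complaint_mechanisms=["applicant appeal and re-review process"],
)
```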
Of course, models that process personal data must also comply with the GDPR, and the Act acknowledges this overlap. The Act permits deployers who have already performed a Data Protection Impact Assessment (“DPIA”, the EU’s version of a privacy impact assessment) to rely on it in lieu of a FRIA (though any required aspects of a FRIA not discussed in the DPIA should be included). This makes sense because processing personal data in “high risk” systems would also require a DPIA under the GDPR. Note, too, that the GDPR’s DPIA requirements carry significantly higher fines for noncompliance. Naturally, this overlap is moot where the high-risk AI system does not process personal data.
California’s Regulators Include AI Impact Assessments in Their Privacy Regulations
Draft regulations proposed under the CCPA (a privacy statute) included rules for AI systems. This received push-back from lawmakers and industry alike, including from those who sought a stand-alone AI statute. Nevertheless, CCPA regulations requiring risk assessments (including of the impact to individuals) for certain ADMT systems were adopted in September 2025.
ADMT is a broad concept; it's not limited to AI. It includes any technology that processes personal information and executes, replaces, or substantially facilitates human decisionmaking (excluding basic utility technologies like spellcheckers, calculators, and simple data storage tools). The risk-assessment requirement is triggered when ADMT is used to make a “significant decision” about a consumer. For example, a system which approves or denies housing based solely on vacancy or actual payment does not involve a significant decision, while profiling potential buyers/tenants using an algorithm would almost certainly be considered ADMT making a significant decision. Other examples of “significant decisions” include the provision or denial of lending/financial services, education, employment, or healthcare.
As noted, these risk assessments specifically address the impact to humans and society at large resulting from the processing of personal information. Like the GDPR, the CCPA’s reach is limited to ADMT that uses personal information. The risk-assessment process must include analysis of the purpose, categories of personal data processed, operational elements (including the logic of the ADMT and any assumptions or limitations), specific benefits, negative impacts/risks to consumers, and safeguards. The assessment must be reviewed and approved by an individual with authority to decide whether or not to move forward with the processing.
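
To make those required contents concrete, here is a minimal, hypothetical sketch of an ADMT risk-assessment record; the structure and field names are illustrative assumptions, not language from the regulations:

```python
# Hypothetical CCPA ADMT risk-assessment record; keys are illustrative, not regulatory text.
admt_risk_assessment = {
    "processing_purpose": "Screen rental applicants for likelihood of on-time payment",
    "personal_data_categories": ["credit history", "employment information"],
    "operational_elements": {
        "admt_logic": "Scoring model trained on historical tenant payment data",
        "assumptions_and_limitations": ["training data may under-represent thin-file applicants"],
    },
    "benefits": ["faster, more consistent application decisions"],
    "negative_impacts_and_risks": ["potential disparate impact through proxy variables"],
    "safeguards": ["human review of all denials", "periodic fairness testing"],
    "reviewed_and_approved_by": "Chief Privacy Officer (authority to approve or halt the processing)",
}
```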
California’s regulations are significant for several reasons, though primarily because:
- They have teeth (significant fines have been issued and individual responsibility attaches),
- The exemptions are narrow (e.g., GLBA exemptions don’t apply to employee data, such as the use of ADMT in hiring), and
- Regulators have demonstrated that they’re looking at all in-scope companies (not just cutting-edge tech companies).
They also implicitly recognize (like the EU AI Act) that AI using personal data is a massive privacy concern. (For completeness, California has passed other AI laws which do not require AI Impact Assessments.) For companies already subject to the CCPA, these ADMT regulations will apply as well.
Other states are developing their own requirements, and Colorado, when juxtaposed with California, provides a good counterpoint. Colorado’s legislation distinguishes between a traditional risk-management program and an AI Impact Assessment (similar to ISO 42001, described below). The latter must include, inter alia, the purpose and other details of the system, an analysis of whether the system poses any known or foreseeable risk of algorithmic discrimination, mitigation steps, a description of inputs/outputs, insights on monitoring and metrics, what oversight is in place, and steps taken to ensure transparency. It may be viewed as more prescriptive than California’s approach, though Colorado regulators have not [yet] issued similar fines. California’s AI Impact Assessments are also limited to consumer-type interactions that touch personal data; they would not, for example, assess how a chatbot reacts to suicidal ideation, while Colorado’s statute would. Both states’ statutes would address the potential for bias on proxy variables in lending situations, for example, just through different approaches.
Frameworks Like ISO 42001 Provide the Most Thorough Guidance
There are several non-regulatory frameworks that provide structure to build and assess AI systems, though the most thorough is likely ISO 42001. ISO 42001 defines an AI Impact Assessment as a “formal, documented process by which the impacts on individuals, groups … and societies are identified, evaluated and addressed by an organization developing, providing or using products or services utilizing artificial intelligence.” AI Impact Assessments play such a significant role in ISO 42001 that they have their own guidance, found in ISO 42005. This guidance outlines both how to perform an assessment and how to adequately document it.
The ISO 42001 AI Impact Assessment’s scope goes far beyond the preceding regulatory requirements. For instance, datasets must be assessed against twenty different data-quality dimensions; organizations must define the model’s origins, its deployment environment, interested parties, both actual and potential harms and benefits, and both system failures and accidental or intentional misuse/abuse.
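
As a rough sketch of how that broader scope might be captured in practice (the field names below are my assumptions, not ISO 42005 wording):

```python
# Illustrative ISO 42005-style impact assessment entry; field names are assumptions, not standard text.
from dataclasses import dataclass


@dataclass
class InterestedPartyImpact:
    party: str                                # individual, group, or societal stakeholder
    actual_and_potential_benefits: list[str]
    actual_and_potential_harms: list[str]


@dataclass
class AIImpactAssessment:
    system_origin: str                        # e.g., built in-house vs. adapted third-party model
    deployment_environment: str
    data_quality_findings: dict[str, str]     # data-quality dimension -> finding
    interested_parties: list[InterestedPartyImpact]
    failure_scenarios: list[str]
    misuse_and_abuse_scenarios: list[str]     # accidental and intentional
```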
As in Colorado, risk assessments are distinguished from AI Impact Assessments under ISO 42001. Risk assessments are more aligned with standards like ISO 27001 and focus on internal systems, while AI Impact Assessments focus on external entities, such as individuals, groups, and society.
Which Type of AI Impact Assessment Should My Company Consider?
It depends. ISO 42001 is among the most thorough, but it also requires a correspondingly heavy lift, both in terms of time and cost. The NIST AI Risk Management Framework (NIST AI RMF), not described here, provides a more flexible, risk-based framework. Importantly, while there is significant overlap among regulatory risk assessments, there are privacy considerations inherent to the CCPA and GDPR that aren’t covered in the ISO standard (though others, such as ISO 27701, address these). If your concern is limited to meeting regulatory requirements, an assessment specific to the CCPA, GDPR/EU AI Act, or other jurisdiction may present a more efficient solution.
Coalfire's AI, privacy, and cybersecurity professionals leverage our expertise across many different compliance frameworks, partnering with organizations to ensure their data protection and AI programs are efficient and compliant. Whether you are preparing for or assessing your organization against ISO 42001, NIST AI RMF, GDPR, or CCPA, or addressing individual components of these frameworks, such as AI Impact Assessments, get in touch and see how we can help today.