Cybersecurity

Understanding the EU AI Act and the Road to AI Risk & Compliance

Ryan Hartsfield

Senior Consultant, Coalfire

June 10, 2024

Given the breakneck speed at which AI is being developed and consumed, the European Union (EU) has established comprehensive regulations to ensure AI is used responsibly. 

Like the General Data Protection Regulation (GDPR) for data privacy, the newly introduced EU Artificial Intelligence (AI) Act is a landmark regulation that will no doubt have a ripple effect on companies across the globe that create and/or use AI systems. 

Even if your company is not headquartered in the EU, you may have new compliance requirements to address. Given how strongly GDPR has been enforced and adopted, it is reasonable to expect the EU AI Act to be enforced in a similar manner, and it is likely only a matter of time before other governments follow suit.

The purpose of this blog is to outline the key points of the EU AI Act and to explain how companies can leverage recently published AI standards to prepare for future AI regulations.

It is essential for companies that develop or consume AI technologies, whether based in the EU or not, to get ahead of the AI risk management curve by putting a formal AI risk management program in place and ensuring it operates effectively.

New industry frameworks and standards such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) and ISO/IEC 42001:2023 are not only helping pave the way to a safer and more responsible future regarding AI development, but they are also preparing companies for future regulatory requirements. 

What is the EU AI Act?

The European Parliament passed the EU AI Act on March 13, 2024, with overwhelming support.

The Act is the world’s first comprehensive law governing AI. According to Article 1, its goal is to improve the functioning of the EU internal market by promoting the uptake of artificial intelligence (AI) that is safe, respects fundamental rights, and protects health, safety, and the environment.

The law applies to anyone who makes, uses, imports, or distributes AI systems in the EU, regardless of where they are based. It also applies to AI systems used in the EU, even if they are made elsewhere. A full list of exceptions can be found in Article 2 of the Act.

The EU AI Act bans certain AI practices and establishes rules for how AI can be sold, used, and monitored in the EU. It requires transparency of certain AI systems and sets specific rules for high-risk AI systems and the people/companies that use them. 

The law also includes rules for selling general purpose AI models and measures to support innovation, particularly for small businesses and start-ups. 

“The AI arms race has accelerated rapidly in the last year. It appears this Act is trying to establish some guardrails that intend to eliminate or at least reduce the risk of harm to both businesses and private citizens,” states John Piotrowski, a principal consultant at Coalfire.

Classification of AI Systems

The EU AI Act classifies AI applications and systems based on their risk level and intended function. The classification system includes four categories: unacceptable risk, high-risk, limited risk, and minimal risk.

Unacceptable Risk 

AI systems that pose an unacceptable risk are prohibited under the Act. Examples include social scoring systems and manipulative AI designed to distort behavior or decision-making. 

High-Risk 

High-risk AI systems, which have significant implications for safety and fundamental rights, are the primary focus of the Act and include biometric identification systems, critical infrastructure management, and applications in education, employment, and law enforcement. These requirements are not limited to providers based in the EU: providers of high-risk AI systems located outside the EU face the same stringent obligations if the AI system’s output is used in the EU.

Limited Risk 

Limited risk AI systems are subject to lighter transparency obligations. Organizations that develop and deploy these systems must ensure that end users are aware they are interacting with AI. This category includes technologies such as chatbots and deep fakes. 

Minimal Risk 

Minimal risk AI systems, which encompass most AI applications currently available, such as AI-enabled video games and spam filters, are largely unregulated. However, this may change with advancements in generative AI technologies. “If the scope or the business purpose of an AI system changes, then it will be important to revisit its classification status and determine whether its level of risk has increased,” Piotrowski states.
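
For organizations keeping an internal inventory of their AI systems, these four tiers can be recorded directly against each system so that reclassification is straightforward when scope or purpose changes. The sketch below is a minimal, hypothetical Python illustration; the tier names come from the Act, while the data structure, example systems, and classification logic are assumptions rather than anything the Act prescribes.

```python
from enum import Enum
from dataclasses import dataclass

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited under the Act (e.g., social scoring)
    HIGH = "high"                  # heavily regulated (e.g., biometric identification)
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # largely unregulated (e.g., spam filters)

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    tier: RiskTier
    needs_reassessment: bool = False  # flag when scope or business purpose changes

# Hypothetical inventory entries for illustration only
inventory = [
    AISystemRecord("support-chatbot", "customer service assistant", RiskTier.LIMITED),
    AISystemRecord("resume-screener", "candidate shortlisting", RiskTier.HIGH),
    AISystemRecord("mail-spam-filter", "email filtering", RiskTier.MINIMAL),
]

# High-risk systems drive most of the compliance workload (documentation, oversight, etc.)
high_risk = [s.name for s in inventory if s.tier is RiskTier.HIGH]
print(high_risk)  # ['resume-screener']
```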

Obligations for Providers and Users

Providers, or developers, of high-risk AI systems must adhere to several key obligations that include:

  • Establishing a risk management system
  • Ensuring robust data governance
  • Creating detailed technical documentation 
  • Designing systems for record-keeping
  • Providing clear usage instructions
  • Enabling human oversight
  • Achieving appropriate levels of accuracy and robustness
  • Maintaining a quality management system

Users of high-risk AI systems also have obligations, though they are less stringent than those for providers. They must ensure that the AI systems they deploy are used responsibly and comply with relevant regulations. As mentioned previously, these obligations apply both to users within the EU and to users outside the EU when the AI system’s output is used in the EU.

General Purpose AI (GPAI)

General Purpose AI (GPAI) systems are defined as versatile AI technologies capable of performing a wide range of tasks. According to Article 53, providers of GPAI models must meet the following requirements:

  • Technical Documentation: Create comprehensive documentation
  • Information for Downstream Providers: Supply relevant information for compliance
  • Copyright Compliance: Respect the Copyright Directive
  • Training Data Summary: Publish detailed summaries of training data

GPAI models made available under free and open-source licenses are subject to lighter obligations unless they present systemic risks, in which case they must meet additional requirements such as adversarial testing, risk mitigation, incident reporting, and ensuring adequate cybersecurity protections.

A GPAI model will be presumed to have high-impact capabilities when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations (FLOPs).
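
Because the threshold refers to total training compute, a provider can make a rough comparison using the widely cited approximation that training a dense transformer model costs roughly 6 × parameters × training tokens floating-point operations. The sketch below is purely illustrative; the approximation and the example figures are assumptions, not part of the Act.

```python
# Rough check of cumulative training compute against the Act's 10^25 FLOPs presumption.
# Uses the common approximation FLOPs ~= 6 * parameters * training tokens for dense
# transformer models; both the formula and the example figures are illustrative assumptions.

GPAI_HIGH_IMPACT_THRESHOLD = 1e25  # floating-point operations, per the Act

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6 * parameters * training_tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")             # ~8.40e+23
print("Presumed high-impact:", flops > GPAI_HIGH_IMPACT_THRESHOLD)  # False
```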

High-Risk AI Systems 

High-risk AI systems are at the core of the EU AI Act due to their potential impact on safety and rights. High-risk AI systems are used as safety components in regulated products or in specific high-risk areas such as critical infrastructure, education, employment, and law enforcement. AI systems that profile individuals through the automated processing of personal data are always considered high-risk. A full listing of high-risk AI systems can be found in Annex III of the EU AI Act. 

High-risk systems must undergo a conformity assessment before they are sold or used. While the Act does not prescribe a single assessment methodology, it does establish requirements for providers of AI systems that must be implemented to comply with the law.

Requirements for Providers of High-Risk AI Systems

  • Risk Management System: Establish and maintain a risk management system
  • Data Governance: Ensure training, validation, and testing datasets are relevant, representative, and free of errors
  • Technical Documentation: Create detailed documentation demonstrating compliance
  • Record-Keeping: Automatically record events relevant to identifying risks and modifications (see the logging sketch after this list)
  • Instructions for Use: Provide clear instructions for downstream users
  • Human Oversight: Design systems to facilitate effective human oversight
  • Accuracy, Robustness, and Cybersecurity: Achieve high levels of accuracy and robustness, incorporating strong cybersecurity measures
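
The record-keeping and human-oversight requirements above generally translate into automatic event logging inside the system itself. Here is a minimal, hypothetical sketch of what such logging might look like; the event fields and schema are assumptions, not a format prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal, hypothetical event logger for a high-risk AI system. The field names and
# format are illustrative assumptions; the Act requires automatic recording of events
# relevant to identifying risks and substantial modifications, not this exact schema.
logger = logging.getLogger("ai_system_events")
logging.basicConfig(level=logging.INFO)

def log_inference_event(system_id: str, model_version: str,
                        input_reference: str, decision: str,
                        human_reviewer: str | None = None) -> None:
    """Record a single inference so that risks and modifications can be traced later."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_reference": input_reference,  # pointer to the input data, not the data itself
        "decision": decision,
        "human_reviewer": human_reviewer,    # supports the human-oversight requirement
    }
    logger.info(json.dumps(event))

# Hypothetical usage
log_inference_event("resume-screener", "2.3.1", "application-8841",
                    "shortlisted", human_reviewer="hr-analyst-07")
```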

Governance and Compliance

The EU AI Act establishes the AI Office within the European Commission to monitor compliance and evaluate AI systems.

The AI Office monitors AI providers, conducts evaluations, investigates systemic risks, and supports the development of voluntary codes of practice. Non-compliance with obligations related to operators or notified bodies, other than the prohibited practices laid down in Article 5, is subject to administrative fines of up to 15 million EUR (approximately $16.3 million) or 3% of total worldwide annual turnover, whichever is higher. Providing incorrect or misleading information can result in fines of up to 7.5 million EUR (approximately $8.2 million) or 1% of annual turnover, whichever is higher.
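
Because these fine tiers are expressed as the higher of a fixed amount or a percentage of worldwide annual turnover, the exposure scales with company size. A short, hypothetical illustration follows; the turnover figures are invented.

```python
# Illustrative "fixed cap or % of worldwide turnover, whichever is higher" calculation.
# Tier figures follow the text above; the turnover amounts are invented for illustration.

def max_fine_eur(fixed_cap: float, turnover_share: float, annual_turnover: float) -> float:
    return max(fixed_cap, turnover_share * annual_turnover)

print(max_fine_eur(15_000_000, 0.03, 200_000_000))    # 15000000 -> fixed cap dominates
print(max_fine_eur(15_000_000, 0.03, 2_000_000_000))  # 60000000.0 -> 3% of turnover dominates
```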

Timelines for Compliance

  • Prohibited AI Systems: Compliance within 6 months
  • Codes of Practice: Ready within 9 months
  • General Purpose AI (GPAI): Compliance within 12 months
  • High-Risk AI Systems (Annex III): Compliance within 24 months
  • High-Risk AI Systems used as safety components of products regulated under the legislation listed in Annex I: Compliance within 36 months

The Road to AI Risk & Compliance

The most significant lift for compliance is the development of technical documentation. The EU AI Act requires the documentation of the technical specifications of the AI system – hardware, system performance, data requirements, testing procedures, APIs, etc. 
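
One practical approach is to treat this documentation as a structured, version-controlled artifact rather than a free-form document. Below is a minimal, hypothetical skeleton illustrating the idea; the field names and values are assumptions for illustration and do not reproduce the Act’s documentation annex.

```python
import json

# Hypothetical skeleton for the technical documentation of an AI system, kept as a
# structured, version-controlled artifact. Field names and values are illustrative only.
technical_documentation = {
    "system": {
        "name": "resume-screener",
        "version": "2.3.1",
        "intended_purpose": "candidate shortlisting for recruiters",
        "hardware": "inference on 2x 24 GB GPUs",      # hardware specification
    },
    "performance": {
        "accuracy": 0.91,                              # measured on a held-out test set
        "robustness_tests": ["perturbed-input-suite-v1"],
    },
    "data": {
        "training_data_summary": "docs/data/training_summary.md",
        "governance_controls": ["bias review", "provenance checks"],
    },
    "testing": {
        "procedures": ["unit", "integration", "adversarial"],
        "last_run": "2024-05-28",
    },
    "interfaces": {
        "apis": ["/v1/score", "/v1/explain"],
    },
}

# Serialize so the documentation can be reviewed, diffed, and versioned alongside the system
print(json.dumps(technical_documentation, indent=2))
```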

Additionally, the Act calls for documentation of the risk management program: specifically, how use of the system can impact human health, safety, privacy, and security. This introduces the concept of an AI risk management program to identify, measure, and manage AI risks.

An AI risk management program should be built on established frameworks, which can significantly improve an organization's ability to align with the EU AI Act and future AI regulations. With the publication of the voluntary NIST AI RMF and the ISO/IEC 42001:2023 international standard, Coalfire has been able to help clients take their existing risk management programs and successfully integrate AI compliance.

Key Concepts

The NIST AI RMF provides a structured approach to navigating AI-related risks through its four main functions: Map, Measure, Manage, and Govern.  

  • In the "Map" function, organizations can establish the context by identifying and categorizing AI systems according to the EU AI Act’s risk levels, while engaging stakeholders to ensure regulatory alignment. 
  • The "Measure" function focuses on risk assessment and performance metrics, allowing organizations to evaluate AI system accuracy, robustness, and compliance with data governance standards. 
  • The "Manage" function requires organizations to implement risk mitigation measures, improve data quality, enhance cybersecurity, and ensure human oversight, alongside developing incident response plans. 
  • The "Govern" function involves establishing governance policies and procedures for continuous compliance, coupled with systems for ongoing monitoring and reporting, ensuring traceability and accountability.

ISO 42001 provides a standard for establishing, implementing, maintaining, and continually improving an AI management system (AIMS). The core components of the standard are:

  • Facilitate communication, commitment, and engagement
  • Implement processes and controls for responsible AI
  • Balance AI governance with innovation
  • Continually improve AI risk management

Applying cybersecurity measures and risk management concepts from both frameworks is only the beginning. Successful organizations understand that shifting from static risk assessments to a dynamic risk management program sets the foundation for responding and adapting to an ever-changing AI regulatory environment.

Ultimately, the efforts made by organizations that provide or deploy AI technologies to adopt AI compliance will be vital to fostering safe, ethical, and trustworthy AI systems.