Navigating the AI Security Landscape: From Executive Orders to Cyber Resilience
Key takeaways:
- The US Executive Order establishes standards for AI safety, emphasizing cybersecurity, privacy, and innovation.
- Coalfire adapts the NIST AI Risk Management Framework, ensuring comprehensive AI security assessments and compliance readiness.
- AI in healthcare demands robust protection of sensitive data, adherence to regulations like HIPAA and GDPR, and clear accountability frameworks.
- Balancing AI benefits with ethical considerations and mitigating risks through tailored cybersecurity measures are essential for a secure AI future.
Explore the implications of the US Executive Order, discover the challenges and solutions in AI development, and learn how Coalfire's tailored approach ensures robust AI risk management.
How can we ensure the brilliance of artificial intelligence (AI) does not inadvertently open the door to unprecedented security risks?
That question has clearly been on the mind of the U.S. government. On October 30, 2023, an Executive Order (EO) was issued from the desk of President Biden that "...establishes new standards for AI safety and security, protects Americans' privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more."
The White House issued a Fact Sheet that summarizes the actions required; some key highlights of the executive order include requirements to:
- Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
- Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.
- Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
- Advance the responsible use of AI in healthcare.
Why now?
Unregulated AI systems might have vulnerabilities that malicious actors can exploit. If these systems are integrated into critical infrastructure, such as energy grids, transportation, or military facilities, cybercriminals or hostile nations could manipulate or disable them, leading to widespread disruptions or physical damage.
Although AI has been around for a long time (think back to Alan Turing's work in the late 1940s and early 1950s, and the term "artificial intelligence," first coined in 1956), only recently have cheap data storage and high-speed data processing converged into a perfect storm: global spending on AI initiatives is approaching $100B, with some sources claiming the number will exceed $150B.
In response to the EO’s requirement for developers to share safety test results, Ryan Hartsfield, a senior consultant with Coalfire’s Strategy, Privacy, Risk team, said, “As a society, we failed the first test of reining in social media platforms from their unabated development and deployment strategy. By doing this, we allowed an arms race for human attention whereby the most powerful companies still will not share information regarding the development and safety testing of AI algorithms, which has had disastrous results on the population, especially our youth.”
He continues, “We cannot afford to continue allowing unbridled deployment of a technology that has the potential to surpass human capabilities in the pursuit of profit, and this Executive Order provides an excellent platform to implement controls while not stifling innovation.”
Impact of AI adoption on Coalfire's clients
Coalfire's clients are excited about the opportunity to increase efficiency and productivity, improve business intelligence through quicker and more accurate data analysis and insights, and create excellent customer experiences. If these opportunities can lead to cost savings and competitive advantage, it is easy to understand why some companies implement AI at a breakneck pace.
Of course, the speed of creating AI-developed or AI-enhanced products can sometimes outpace the existing cybersecurity controls, and implementing stringent cybersecurity standards can be costly, especially for smaller organizations or startups. Compliance might strain resources, potentially limiting the adoption of AI technologies among certain groups.
Even if controls are in place, there is still the danger of shadow AI: projects and applications developed without proper oversight, authorization, or documentation within an organization.
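As a rough illustration of how shadow AI might be surfaced, the sketch below scans outbound proxy logs for well-known AI service domains. The log format, the domain list, and the find_shadow_ai helper are assumptions for illustration, not a complete detection strategy:

```python
# Hypothetical sketch: flag possible "shadow AI" usage by scanning outbound
# proxy logs for well-known AI service domains. The "user domain" log format
# and the domain list are assumptions; real detection would also cover SDK
# dependencies, package manifests, and procurement records.
AI_SERVICE_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(log_lines: list[str]) -> set[tuple[str, str]]:
    """Return (user, domain) pairs that reached a known AI service."""
    hits = set()
    for line in log_lines:
        user, _, domain = line.partition(" ")  # assumed "user domain" format
        if domain in AI_SERVICE_DOMAINS:
            hits.add((user, domain))
    return hits

sample = ["alice api.openai.com", "bob intranet.example.com"]
print(find_shadow_ai(sample))  # {('alice', 'api.openai.com')}
```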
In general, there is a need for cybersecurity experts proficient in both AI and traditional cybersecurity practices. Developing comprehensive standards requires expertise in both domains, making it challenging to find qualified individuals capable of addressing the intersection of AI and cybersecurity effectively.
Case study: Healthcare
AI in healthcare holds immense promise, offering innovative solutions to improve patient outcomes, streamline medical processes, and enhance healthcare services. However, addressing the potential benefits and ethical considerations of AI in healthcare is paramount to ensuring responsible and equitable implementation.
"The regulatory landscape for AI can only be described as the wild, wild west," says Dee Cruit, healthcare cyber risk services director with Coalfire's Strategy, Privacy, Risk team.
"Organizations with regulatory requirements need assurances that software developers leveraging AI technology have covered all the bases," she continues. "The NIST AI Risk Management Framework is a step in the right direction for incorporating AI governance into an integrated cybersecurity program."
Dee also points to the Food and Drug Administration (FDA) initiative to create a Digital Health Advisory Committee.
"Development of digital health technologies (DHTs), such as artificial intelligence/machine learning (AI/ML), brings many wonderful advantages such as enhanced diagnostics, drug discovery and development, personalized treatment, and efficient healthcare operations."
But Dee also points out that with innovation and health benefits comes risk.
"Healthcare data is sensitive and must be protected. AI systems should adhere to robust data protection measures, ensuring patient privacy and compliance with regulations such as HIPAA in the United States or GDPR in Europe."
She concludes, “Clear guidelines must be established regarding the accountability and liability of AI systems in healthcare. Determining responsibility in case of AI-related errors or malfunctions is crucial to ensure patient safety and legal clarity.”
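To make Dee's data-protection point concrete, here is a minimal, hypothetical sketch of one safeguard: scrubbing obvious patient identifiers from free text before it leaves the organization, for example on its way to an external AI service. The patterns and the redact_phi helper are illustrative assumptions; genuine HIPAA or GDPR compliance requires far more than regex redaction (HIPAA's Safe Harbor method alone covers 18 identifier categories):

```python
import re

# Hypothetical illustration: scrub a few obvious identifiers from clinical
# notes before they leave the organization's boundary. Names and other
# free-text identifiers need NER-based detection; regexes alone miss them.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def redact_phi(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

note = "Patient seen 1/2/24. MRN: 483920, SSN 123-45-6789, call 555-867-5309."
print(redact_phi(note))
# Patient seen 1/2/24. [REDACTED-MRN], SSN [REDACTED-SSN], call [REDACTED-PHONE].
```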
AI Risk Management Framework
In January 2023, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework (NIST AI RMF 1.0).
The NIST AI RMF provides a basis for building trust among stakeholders, including businesses, consumers, and policymakers. When AI systems adhere to recognized cybersecurity standards, users can have confidence in their security, encouraging broader adoption of AI technologies.
Although the framework is currently voluntary, the Executive Order will surely hasten a decision to make it mandatory for all federal departments and federal contractors, and "highly encouraged" in the private sector.
The AI RMF provides cybersecurity professionals with a map to assess the risks associated with AI technologies and to understand the potential consequences of identified vulnerabilities.
By implementing controls for AI risk measurement and management, companies can make robust security measures and effective strategies for mitigating AI-related threats an integral part of their overall information security program.
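To make that concrete, here is a minimal sketch of a toy AI risk register organized around the framework's four core functions (Govern, Map, Measure, Manage). The data structure, the 1-to-5 scales, and the example risks are assumptions for illustration, not a NIST-prescribed format:

```python
from dataclasses import dataclass
from enum import Enum

class RmfFunction(Enum):
    """The four core functions of NIST AI RMF 1.0."""
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"

@dataclass
class AiRisk:
    description: str
    function: RmfFunction
    likelihood: int  # 1 (rare) to 5 (almost certain), illustrative scale
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    AiRisk("Training data contains unvetted PII", RmfFunction.MAP, 4, 4),
    AiRisk("No owner assigned for model decisions", RmfFunction.GOVERN, 3, 5),
    AiRisk("Prompt-injection paths untested", RmfFunction.MEASURE, 4, 3),
]

# Surface the highest-scoring risks first so mitigation effort lands
# where exposure is greatest.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.function.value:>7}] score={risk.score:2d} {risk.description}")
```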
How can Coalfire help with AI risk management?
AI applications are diverse, ranging from machine learning algorithms to deep learning networks. Developing standards applicable across this broad spectrum of technologies is complex, requiring a nuanced understanding of each system's unique cybersecurity requirements.
Coalfire has adapted the NIST AI RMF to produce an assessment test plan that will:
- Help companies understand the gaps related to AI in their information security programs.
- Identify vulnerabilities in the use or development of AI products.
- Demonstrate to third parties that their AI security program has been independently assessed.
- Create a maturity model to demonstrate progress and assurance to internal stakeholders (a simple sketch of this idea follows the list).
- Be "audit ready" when legislation passes and compliance becomes a regulatory requirement.
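As a rough, hypothetical sketch of the maturity-model idea, an organization could score each assessed area on a tier scale and roll the results up. The tiers and control areas below are invented for illustration and are not Coalfire's actual test plan:

```python
# Hypothetical maturity roll-up for an AI security assessment. The 0-4 tier
# scale loosely mirrors common maturity models; the control areas are
# invented examples, not Coalfire's actual assessment criteria.
MATURITY_TIERS = ["Absent", "Initial", "Defined", "Managed", "Optimized"]

assessment = {
    "AI inventory and shadow-AI discovery": 1,
    "Model supply-chain / third-party review": 2,
    "Safety and adversarial testing": 0,
    "Incident response for AI failures": 2,
}

average = sum(assessment.values()) / len(assessment)
print(f"Overall maturity: {average:.2f} / {len(MATURITY_TIERS) - 1}")
for area, tier in sorted(assessment.items(), key=lambda kv: kv[1]):
    print(f"  [{MATURITY_TIERS[tier]:>9}] {area}")
```

Tracking these scores across successive assessments gives internal stakeholders a simple, repeatable view of progress.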
Standards often form the basis for regulations. By implementing the NIST AI RMF, organizations can ensure compliance with existing and future regulations related to AI cybersecurity, avoiding legal challenges and reputational damage. The Executive Order lays a good foundation for those much-needed regulations; now, we must wait to see what evolves from these directives.
In conclusion, while challenges exist, developing AI cybersecurity standards presents significant opportunities for enhancing security, fostering innovation, promoting collaboration, and building user trust. Balancing these positive impacts against potential drawbacks requires ongoing efforts to update and adapt standards to the evolving landscape of AI technology and cybersecurity threats.