Cybersecurity

Impending Threats From AI: The Problem with Trust

Caleb Pfanstiel

Consultant, Advisory Services

September 20, 2024

Key Takeaways

  • Implementing AI into business processes comes with inherent risk.
  • The current attack surface for exploiting AI systems is significant and is only going to grow larger as AI becomes more advanced.
  • Proactively assessing internally and externally developed AI implementations can decrease your risk exposure.
  • Trust without verification is risky.

AI Integration Comes with Inherent Risk

The current trend in business development, optimization, and strategy is to integrate artificial intelligence (AI) across business functions, whether it's financial prediction, sales efforts, supply chain optimization, or even cybersecurity detection and prevention.

What companies may not realize is that implementing AI also increases the attack surface available to threat actors. While AI can help us understand and contextualize complex data sets, analyze patterns of behavior, and make accurate behavior-based predictions, are we asking the right questions before we choose to implement it? 

Questions like “How are these AI models trained? What guardrails constrain their behavior? And what stops a user from feeding sensitive data into the model, or extracting it back out?” often lack thorough consideration. And post-integration, are these models monitored for performance and activity? AI can solve issues you face in business, but it can also create more problems if not implemented correctly.

In this post, we will outline the current and emerging threats that enterprises face from adopting AI in business functions, along with future-proof mitigation strategies to proactively prepare for these advancements.

Threats from AI Integration

The current landscape of AI-based threats includes exploits for every stage of the cybersecurity kill chain, from reconnaissance to actions on objectives. MITRE is one entity monitoring these emerging threats closely, cataloging and tracking them in its Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) matrix.

ATLAS assigns a unique identifier to each technique, shown in parentheses below. The following are just a few of the threat vectors introduced into business environments when AI is integrated:

  • Training Data Poisoning (AML.T0020): This occurs when attackers deliberately insert erroneous data into the dataset used to train AI models. Such poisoning can lead to models that learn incorrect patterns or develop biased decision-making processes. For instance, in an AI-driven hiring system, poisoned data might cause the model to exhibit discriminatory behavior, resulting in unfair hiring practices and potential compliance and legal consequences. (A brief illustrative sketch of this effect follows this list.)
  • Injection Attacks (AML.T0051): Injection attacks involve manipulating the input data fed into an AI system to exploit vulnerabilities. For example, in natural language processing systems, attackers can craft input text to cause the model to produce harmful or erroneous outputs. An attacker might inject falsified data into a financial forecasting model, causing it to make inaccurate predictions that lead to poor investment decisions and substantial financial losses.
  • Compliance and Privacy Issues: Integrating AI becomes problematic when the surrounding environments store, process, or transmit sensitive data. Attackers or malicious insiders can exploit weaknesses in data protection practices to access or misuse that information, and non-compliance or mishandling of data can lead to regulatory penalties, legal exposure, and loss of customer trust.
  • Security of the AI Models (AML.T0041): Threat actors can target the physical hardware that AI models rely on to extract information or influence behavior and output. If attackers gain access to or tamper with AI models, they could insert malicious modifications. 
  • Reverse Engineering: Sophisticated threat actors can analyze AI models and reverse engineer them to extract sensitive information or understand their decision-making processes, exposing vulnerabilities or proprietary information. For instance, an attacker might reverse engineer a threat detection system to understand its algorithms, allowing them to craft evasion strategies that bypass the system and maintain persistence, ultimately reducing the company's security effectiveness.
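
To make the first of these threats concrete, the minimal sketch below (assuming Python with NumPy and scikit-learn installed) shows how flipping a fraction of training labels, one crude form of data poisoning, can degrade a simple classifier. Real poisoning attacks are typically far more targeted and subtle; this is purely illustrative.

```python
# Illustrative sketch of training data poisoning (AML.T0020): randomly flipping
# a large fraction of training labels typically lowers held-out accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Baseline model trained on clean data
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Poisoned" data: an attacker flips 40% of the training labels
y_poisoned = y_train.copy()
flip_idx = rng.choice(len(y_poisoned), size=int(0.4 * len(y_poisoned)), replace=False)
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```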

Cybersecurity-conscious businesses must ask themselves what the future may hold for threats that emerge from AI integration.

An Imminent Unseen Threat

The SolarWinds hack, disclosed in 2020, illustrates how attackers can exploit a trusted supply chain to gain and maintain unauthorized access. By embedding malicious code into SolarWinds’ software updates, the attackers leveraged the inherent trust in these updates to silently infiltrate and persist within numerous high-profile organizations. This method allowed them to maintain a hidden presence and execute their objectives over an extended period.

In a similar vein, consider a hypothetical scenario where a business implements a third-party developed AI model into its core processes to improve overall efficiency. However, once integrated into the company’s systems, this AI model could turn malicious. Rather than simply improving efficiency, the AI might use its position to initiate or perpetuate an attack. By blending its activities with normal operations, the AI could exploit its trusted status to execute malicious actions, such as extracting sensitive data or deploying exploits, while remaining undetected.

This scenario mirrors the SolarWinds attack because both exploit an initial layer of trust. While SolarWinds’ attackers took advantage of the confidence users placed in legitimate software updates, the supply chain AI threat would exploit the trust in an AI model implemented for beneficial purposes. In both cases, the threat actors rely on integrating their actions into trusted systems to avoid detection and achieve their objectives.

Overall, this scenario aims to demonstrate how advanced techniques can leverage trust within a system, whether through software updates or AI models, to maintain concealed and persistent access and execute potentially harmful actions.

Staying Ahead of the Curve

Whether you are building an AI model from the ground up to service a need in the market or bolster an internal process, or you are integrating a third-party AI system into a business process, it is important to assess and mitigate the inherent and residual risk that these ventures may pose to your enterprise. 

To stay ahead of emerging threats like this scenario, companies need to thoroughly understand what data and functions these integrations can access, monitor the activity of all users and processes within the environment, and perform advanced threat prevention using not only signature-based but also behavioral detection.
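
As one hedged illustration of what behavior-based detection might look like, the sketch below (assuming Python with NumPy and scikit-learn; the telemetry features are hypothetical stand-ins) trains an IsolationForest on baseline usage of an AI endpoint and flags sessions that deviate sharply from it.

```python
# Sketch of behavioral anomaly detection for an AI endpoint.
# The features (requests per minute, average prompt length, off-hours flag)
# are hypothetical examples of telemetry an environment might collect.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Baseline telemetry from typical sessions
normal = np.column_stack([
    rng.normal(12, 3, 500),    # requests per minute
    rng.normal(400, 80, 500),  # average prompt length (characters)
    rng.integers(0, 2, 500),   # off-hours flag (0 or 1)
])

detector = IsolationForest(contamination=0.01, random_state=7).fit(normal)

# Score new sessions: one ordinary, one hammering the endpoint off-hours
sessions = np.array([
    [11, 390, 0],    # resembles the baseline
    [300, 5000, 1],  # high volume, unusually long prompts, off-hours
])
print(detector.predict(sessions))  # 1 = normal, -1 = flagged as anomalous
```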

Defense-in-depth coupled with zero trust are essential concepts for security practitioners as adversaries leverage more sophisticated AI-enabled tactics.

Other steps that companies should take to remain proactive about mitigating known threats from AI include: 

  • End User Training: Train end users on the acceptable and unacceptable use of AI systems, emphasizing the consequences of misuse for both individuals and the organization. By educating users about the risks involved with AI models, businesses can reduce non-compliance with data handling policies, mitigating potential breaches.
  • Data Validation and Input Sanitization: Regularly validate and cleanse AI training and input data to maintain accuracy and integrity. You can employ automated tools to detect and flag anomalies, helping prevent data poisoning or injection attacks and preserving the trustworthiness of AI-driven decisions (a simple validation gate is sketched after this list).
  • Rate Limiting and Anomaly Detection: Introduce rate limiting to control the volume of data or requests processed by AI systems. This prevents system overloads, while anomaly detection can identify suspicious or harmful input patterns that may indicate overflow attacks or injection attempts (a basic rate limiter is sketched after this list).
  • Data Governance and Compliance: Develop comprehensive data governance policies to comply with regulations such as GDPR and CCPA. Organizations can draw on AI-specific frameworks and regulations, including the EU AI Act, the NIST AI Risk Management Framework (RMF), and ISO/IEC 42001, to inform AI risk management and governance. Regular compliance audits and privacy assessments will help organizations meet regulatory requirements, address privacy concerns, and mitigate legal risks.
  • Regular Security Risk Assessments: Conduct regular security risk assessments to identify and address AI risks to the enterprise. By leveraging frameworks, such as the NIST AI RMF, organizations can understand the full scope of risks posed by AI.
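
As an example of the data validation item above, the minimal sketch below (field names, bounds, and the record structure are hypothetical) rejects or flags records whose values fall outside expected ranges before they reach a model's training or inference pipeline. A production pipeline would typically use a dedicated schema-validation library and log rejected records for review.

```python
# Sketch of a simple input validation gate for data headed to an AI pipeline.
# Field names and bounds are hypothetical examples, not recommendations.
from dataclasses import dataclass

@dataclass
class Bounds:
    low: float
    high: float

# Expected ranges per feature, derived from historical clean data
SCHEMA = {
    "transaction_amount": Bounds(0.0, 50_000.0),
    "account_age_days": Bounds(0.0, 20_000.0),
}

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for field, bounds in SCHEMA.items():
        value = record.get(field)
        if value is None:
            problems.append(f"missing field: {field}")
        elif not bounds.low <= float(value) <= bounds.high:
            problems.append(f"{field}={value} outside [{bounds.low}, {bounds.high}]")
    return problems

print(validate_record({"transaction_amount": 120.5, "account_age_days": 730}))    # []
print(validate_record({"transaction_amount": 9_999_999, "account_age_days": 1}))  # flagged
```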

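Similarly, for the rate limiting item, the sketch below (a simplified token bucket in Python; capacity and refill rate are arbitrary example values) shows one common way to cap the volume of requests an AI endpoint will accept from a single client.

```python
# Sketch of a token-bucket rate limiter for an AI endpoint.
# Capacity and refill rate are arbitrary example values, not recommendations.
import time


class TokenBucket:
    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last_refill) * self.refill_per_second,
        )
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# Example: allow bursts of up to 5 requests, refilling one token per second
bucket = TokenBucket(capacity=5, refill_per_second=1.0)
for i in range(8):
    print(f"request {i}: {'allowed' if bucket.allow() else 'rejected'}")
```
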
Due Diligence is Key

The rapid advancement of AI technology presents both unprecedented opportunities and significant challenges for businesses and cybersecurity initiatives. As AI-driven threats evolve, organizations must remain vigilant and adapt their cybersecurity strategies accordingly. 

This adaptation must start with the human element of security awareness and training. Then, whether you build an AI model or implement one, you need to ask the right questions about how to govern, map, measure, and manage the functions of this AI. By technically assessing the model and reviewing the surrounding processes, you will gain an overall picture of the potential threats this system may pose to the business, allowing you to mitigate them accordingly.

To truly stay ahead of AI-related threats, regular and proactive independent security assessments, vulnerability testing, and third-party risk management assessments are essential. To learn how Coalfire can help you get ahead of emerging AI risks, visit Coalfire.com and explore Coalfire’s AI Advisory Services.