Cybersecurity

A rundown of the OWASP top 10 for large language model applications

Priya P headshot jpeg

Priyadharshini Parthasarathy

Senior Security Consultant, Application Security, Coalfire

Blog Images 2023 Coalfire Main Image Blog Rundown OWASP 800x420 FINAL

As part of the Open Worldwide Application Security Project (OWASP) AI Project, a community of international experts published a list of the top 10 critical vulnerabilities seen in Large Language Model (LLM) applications.

Key takeaways:

  • Prompt Injection ranks #1 in the Top 10 LLM vulnerabilities
  • Six new vulnerabilities were identified in the research, including Training Data Poisoning (LLM03), Supply Chain Vulnerabilities (LLM05), Insecure Plugin Design (LLM07), Excessive Agency (LLM08), Overreliance (LLM09), Model Theft (LLM10)
  • The remaining four are from existing OWASP Web/API top 10 vulnerabilities, including Prompt Injection (LLM01), Insecure Output Handling (LLM02), Model Denial of Service (LLM04), Sensitive Information Disclosure (LLM06)
  • The vulnerabilities listed are applicable to those adopting LLM, such as developers, data scientists, and security experts

The unprecedented market entry of ChatGPT and the ensuing explosion in generative AI development over the last year have simultaneously catalyzed powerful innovation and introduced dangerous vulnerabilities for malicious actors to exploit. Security professionals are left to navigate the uncharted waters of the evolving LLM landscape.

In response, an international collective of nearly 500 experts (with over 125 active contributors) banded together with OWASP to research, analyze, and propose a top 10 list of vulnerabilities facing LLM. I had the pleasure of being a part of it!

Together, we identified 43 distinct threats and narrowed the list down to 10. We found six new critical vulnerabilities in addition to four existing ones from the original OWASP Top Ten. Let’s dive in!

  1. Prompt Injection (LLM01)

    Yet again, number one on the list is a form of injection - Prompt Injection.

    Attackers can craft malicious input to manipulate LLMs into unknowingly executing unintended actions. There are two types of Prompt Injection – direct and indirect. An example of direct injection is an attacker sending a prompt injection to an LLM for interacting with insecure functions and exploiting backend systems. Indirect injection could consist of an LLM accepting input from an external source, such as a website or files controlled by a malicious user.

  2. Insecure Output Handling (LLM02)

    Similar to web application output handling, the LLM model output is parsed without any filtration. This can allow for XSS, SSRF, privilege escalation, or remote code execution.

  3. Training Data Poisoning (LLM03) *new*

    Any machine learning model requires training data (raw text) to service the user needs. Training data should contain a broad range of genres, domains, languages, and content. The manipulation of data or a lack of tuning data can result in data poisoning, which can compromise the model’s security, effectiveness, and accuracy of predictions. Some threat vectors include using data from an unverified source, inadequate sandboxing/training, or the creation of falsified documents impacting the model’s outputs.

  4. Model Denial of Service (LLM04)

    An attacker can consume a high volume of resources by interacting with an LLM to cause Denial of Service. Since we are in the early stages of understanding and implementing LLMs, developers need to understand the input and output capacity of the model.

  5. Supply Chain Vulnerabilities (LLM05) *new*

    This vulnerability exists in the current AppSec realm and extends to training data, vulnerable ML models, third-party packages, and deprecated models. Understanding the terms and conditions of specific models, mitigating vulnerable and outdated components, updating inventory per the Software Bill of Materials (SBOM), and implementing the patching policy will prevent these types of vulnerabilities.

  6. Sensitive Information Disclosure (LLM06)

    LLMs may reveal sensitive information (e.g., proprietary algorithms) if output is not properly sanitized. Those adopting LLM models should understand how each model works and avoid the risk of revealing sensitive information in the output via different streams.

  7. Insecure Plug-in Design (LLM07) *new*

    This occurs when a potential attacker constructs a malicious request to the plugin, resulting in a wide range of undesired behaviors (e.g., remote code execution). An example would include a plugin accepting configuration strings instead of parameters to override entire configuration settings.

  8. Excessive Agency (LLM08) *new*

    Excessive Agency arises when an LLM agent or plugin is provisioned with excessive permissions to read, write, or execute the required operation. An example is a plugin requiring access to modify the data used in the application. Although read permission is necessary, edit permission may not be required.

  9. Overreliance (LLM09) *new*

    Systems depending excessively on LLM models for decision-making may produce inaccurate information and misleading content. Attack scenarios include using AI models for news organizations and Codex (a general-purpose programming model) to develop code with security vulnerabilities. This will generate an output that is syntactically correct but may not be semantically correct.

  10. Model Theft (LLM10) *new*

    Last on the top ten list, Model Theft is about the propriety LLM model being compromised and extracted to another model. It compromises the confidentiality and integrity of the LLM and provides unauthorized access to any sensitive information contained within the model.

With the advent of AI into multi-cloud environments, supply chains, and global sales channels, security concerns relating to privacy and intellectual property are top-of-mind for development security operations teams. Understanding the hierarchy of vulnerabilities is now requisite for IT professionals along every step of the vulnerability management lifecycle.

References

https://owasp.org/www-project-top-10-for-large-language-model-applications/
https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-v1_0_1.pdf