OWASP’s LLM working group published its analysis of LLM security threats, based on its expertise and the work of nearly “500 security specialists, AI researchers, developers, industry leaders and academics”. The detailed version is here – it is a must-read, as LLMs require a secure by default posture.
Below is a summary of the OWASP top 10 LLM cybersecurity threats. We’ll then analyze how NetFoundry’s open source OpenZiti platform and CloudZiti SaaS services fit into a secure by default, defense-in-depth posture for LLM operators, helping to mitigate 5 of the worst threats, in a software-only deployment model which can be done in hours by an expert or days by a beginner.
The good news is that those 5 share a primary enemy – and it is the same threat which OpenZiti secures LLMs against – which is why a single OpenZiti implementation is so helpful for LLM security. The OWASP top 10 list:
LLM01: Prompt Injection
This manipulates a large language model (LLM) through crafty inputs, causing unintended actions by the LLM. Direct injections overwrite system prompts, while indirect ones manipulate inputs from external sources.
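One common mitigation is to fence untrusted input inside clear delimiters and screen for obvious override phrases before a prompt reaches the model. Below is a minimal, illustrative sketch – the system prompt, delimiter names and patterns are assumptions for demonstration, not a complete defense:

```python
import re

# Hypothetical system prompt for illustration only.
SYSTEM_PROMPT = "You are a support assistant. Never reveal internal data."

# Naive screen for common direct-injection phrases; real defenses need
# model-side guardrails and output checks as well.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard the system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def build_prompt(user_input: str) -> str:
    if looks_like_injection(user_input):
        raise ValueError("possible prompt injection detected")
    # Fence untrusted text inside delimiters so the model can better
    # distinguish instructions from data.
    return f"{SYSTEM_PROMPT}\n<user_data>\n{user_input}\n</user_data>"
```

Pattern matching alone is easily bypassed; treat it as one thin layer in a defense-in-depth stack.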
LLM02: Insecure Output Handling
This vulnerability occurs when an LLM output is accepted without scrutiny, exposing backend systems. Misuse may lead to severe consequences like XSS, CSRF, SSRF, privilege escalation, or remote code execution.
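The core mitigation is to treat model output as untrusted input. A minimal sketch of the idea, using Python’s stdlib `html.escape` before rendering a response into a page (the wrapper function is illustrative):

```python
import html

def render_llm_output(text: str) -> str:
    # Treat LLM output as untrusted: escape it before inserting into
    # HTML, so a manipulated response cannot inject script (XSS).
    return f"<p>{html.escape(text)}</p>"
```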
LLM03: Training Data Poisoning
This occurs when LLM training data is tampered with, introducing vulnerabilities or biases that compromise security, effectiveness, or ethical behavior. Sources include Common Crawl, WebText, OpenWebText, and books.
LLM04: Model Denial of Service
Attackers cause resource-heavy operations on LLMs, leading to service degradation or high costs. The vulnerability is magnified due to the resource-intensive nature of LLMs and unpredictability of user inputs.
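A standard first line of defense is per-client rate limiting in front of the model. Here is a minimal token-bucket sketch (parameters are illustrative; production systems would also bound prompt size and compute per request):

```python
import time

class TokenBucket:
    """Per-client token bucket to cap resource-heavy LLM calls."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, then spend one token if available.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```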
LLM05: Supply Chain Vulnerabilities
LLM application lifecycle can be compromised by vulnerable components or services, leading to security attacks. Using third-party datasets, pre-trained models, and plugins can add vulnerabilities.
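One simple supply-chain control is pinning third-party artifacts to a known digest, so a tampered download is rejected before use. A minimal sketch (the function name is illustrative):

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Pin third-party models/datasets/plugins to a published digest,
    so a tampered copy from the supply chain is rejected before use."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```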
LLM06: Sensitive Information Disclosure
LLMs may inadvertently reveal confidential data in their responses, leading to unauthorized data access, privacy violations, and security breaches. It’s crucial to implement data sanitization and strict user policies to mitigate this.
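The “data sanitization” step can include redacting well-known sensitive patterns from responses before they leave the service. A toy sketch with two illustrative patterns – real deployments would use dedicated DLP tooling:

```python
import re

# Illustrative patterns only; production redaction needs DLP tooling
# and far broader coverage than email addresses and US SSNs.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(response: str) -> str:
    # Replace sensitive matches before the response leaves the service.
    response = EMAIL.sub("[REDACTED_EMAIL]", response)
    return SSN.sub("[REDACTED_SSN]", response)
```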
LLM07: Insecure Plugin Design
LLM plugins can have insecure inputs and insufficient access control. This lack of application control makes them easier to exploit and can result in consequences like remote code execution.
LLM08: Excessive Agency
LLM-based systems may undertake actions leading to unintended consequences. The issue arises from excessive functionality, permissions, or autonomy granted to the LLM-based systems.
LLM09: Overreliance
Systems or people overly dependent on LLMs without oversight may face misinformation, miscommunication, legal issues, and security vulnerabilities due to incorrect or inappropriate content generated by LLMs.
LLM10: Model Theft
This involves unauthorized access, copying, or exfiltration of proprietary LLM models. The impact includes economic losses, compromised competitive advantage, and potential access to sensitive information.
How OpenZiti is part of a secure by default solution for LLMs
Not surprisingly, the attack vectors described by OWASP are almost always conducted from Internet-based endpoints. In that respect, LLMs have the same basic problem as most networked apps, data and APIs, and this is the problem which OpenZiti helps address. Let’s look at a before diagram and an after diagram:
Here’s what happens without OpenZiti’s LLM security model, as shown above:
- Current methods, such as firewall ACLs, VPNs or bastions, initially allow a layer 3 network connection. This is labelled as connect before authorize (rudimentary attempts at auth may be made, e.g. IP-address based or network-level VPN based).
- After the initial connection, the LLM and other software will reject the unauthorized connections (the attackers) – most of the time. Unfortunately, bugs (including zero days), misconfigurations and business logic gaps are a fact of life. As soon as an attacker finds a vulnerability which enables them to bypass the “day two authorization” (after the firewall has allowed the initial layer 3 connection), the race is on: every Internet-based attacker racing against IT, which is feverishly trying to identify and patch the bug.
Here’s what happens with OpenZiti’s LLM security model, as shown above:
- No unauthorized layer 3 connections are allowed. The firewall simply denies all inbound connections – even the ones from ‘good actors’ like their other AI systems or AI suppliers. There is no inbound network path to the LLM.
- In order to earn an overlay network connection to the LLM, strong identity, authentication and authorization are required – before you can get to the firewall, and before there is any network connection. This applies to both authorized LLMs and authorized external systems, so that all systems only open outbound connections, and never need to accept inbound connections. “Strong” includes mutual TLS (mTLS), integrated bidirectional certificate-based authentication, least privileged access, encryption and ephemeral overlay network connections.
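The mutual TLS part of “strong” can be sketched with Python’s stdlib `ssl` module. In a real deployment the OpenZiti SDKs and controller manage identities and the handshake for you; this is only an illustration of the idea, with placeholder file paths:

```python
import ssl

def build_mtls_server_context(cert: str, key: str, ca: str) -> ssl.SSLContext:
    """Server side of mutual TLS: the client must also present a
    certificate signed by our CA, or the handshake fails before any
    application data flows. File paths are placeholders."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert
    # In a real deployment these would load provisioned identities:
    # ctx.load_cert_chain(cert, key)
    # ctx.load_verify_locations(ca)
    return ctx
```

With `CERT_REQUIRED`, an unauthenticated client never reaches the application – the connection dies during the TLS handshake, which is the “authorize before connect” idea in miniature.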
To be clear, OpenZiti didn’t magically squash all bugs. Instead, OpenZiti ended the race before it started. Unauthorized attackers have no network access by which to exploit the bug.
Here are the OWASP top 10 LLM vulnerabilities which OpenZiti helps with:
- LLM01: Prompt Injection
- LLM03: Training Data Poisoning
- LLM04: Model Denial of Service
- LLM07: Insecure Plugin Design
- LLM10: Model Theft
The main way OpenZiti helps against those threats:
- Massively reduce the attack surface area. Cancel the race against the Internet. No inbound access. This enables other cybersecurity layers to be much more successful in mitigating attacks from authorized users and systems.
- Help prevent data exfiltration. With a least-privileged, microsegmented access model, data cannot be sent to destinations which the organization has not specifically allowed.
- Help with telemetry and visibility. OpenZiti eliminates the noise of all the unauthorized attempts and provides granular telemetry data, at a network level, for every authorized attempt.
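The exfiltration-prevention bullet above boils down to deny-by-default egress: an identity can dial only services it was explicitly granted. A minimal sketch – the service names, grants table and function are all illustrative, not OpenZiti’s actual policy API:

```python
# Illustrative least-privilege egress policy: only destinations an admin
# has explicitly allowed are reachable; everything else is denied.
ALLOWED_SERVICES = {"model-registry.internal", "vector-db.internal"}

def authorize_dial(identity: str, service: str,
                   grants: dict[str, set[str]]) -> bool:
    # Deny by default: an identity may dial only services it was
    # granted, and only services the organization allows at all.
    return service in grants.get(identity, set()) and service in ALLOWED_SERVICES

# Hypothetical grant: the training pipeline may reach the model registry only.
grants = {"training-pipeline": {"model-registry.internal"}}
```

In OpenZiti proper, this role is played by service policies evaluated by the controller before any overlay connection is established.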
Security or speed: which do you choose?
Actually, now you can get both, by simplifying. Enterprises use OpenZiti to:
- Accelerate. Enable IT to evaluate and implement AI solutions in hours instead of weeks.
- Radically strengthen security and controls. No open inbound ports, built-in identity, authentication and authorization, least privilege access and data exfiltration prevention, granular visibility and audit controls. A critical part of a secure by default approach to mitigate against LLM threats, such as the ones described so well by OWASP.