Large Language Models (LLMs) have rapidly become essential tools across sectors, from content generation to decision support. However, their growing utility and complexity raise significant security and ethical concerns. Because LLMs are built on vast data sources, they can inadvertently disclose sensitive details or perform unintended actions in response to ambiguous or malicious prompts. As organisations increasingly adopt LLMs into their workflows, understanding these risks and deploying measures to mitigate them is paramount. This overview covers the primary risks associated with LLMs and offers guidance on safeguarding their use.
LLM01: Prompt Injection
Description:
Prompt injection is a vulnerability where malicious actors craft inputs that manipulate LLM behaviour, causing it to deviate from its intended function. This can involve bypassing instructions, revealing sensitive information, or performing unauthorised actions.
Risks:
Unauthorised access to data, execution of harmful commands, manipulation of application logic, and dissemination of misinformation.
Vulnerabilities:
- Bypassing safety guidelines to generate inappropriate or harmful content.
- Tricking the LLM into revealing internal data or configurations.
- Injecting malicious code into generated outputs.
Mitigation Strategies:
- Implement robust input validation and sanitisation techniques.
- Use parameterised queries and avoid direct string concatenation.
- Employ strict access controls and least privilege principles.
- Incorporate human oversight for critical operations.
Attack Examples:
- Injecting prompts to bypass content filters and generate hate speech.
- Manipulating a chatbot to reveal a user’s personal information.
- Injecting code into a code generation tool to introduce vulnerabilities.
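To make the input-validation guidance above concrete, here is a minimal Python sketch that caps prompt length, screens for obvious override phrases, and keeps system instructions separate from user content. The call_llm helper is a hypothetical stand-in for whatever client library is in use, and the deny-list is illustrative only; pattern filtering alone will not stop a determined attacker.

```python
import re

# Hypothetical stand-in for whatever LLM client is in use.
def call_llm(system_prompt: str, user_input: str) -> str:
    raise NotImplementedError("wire up the real LLM client here")

# Crude, illustrative deny-list of phrases common in injection attempts.
SUSPICIOUS = re.compile(
    r"ignore (all|previous) instructions|system prompt|developer message",
    re.IGNORECASE,
)
MAX_INPUT_CHARS = 4000

def safe_query(user_input: str) -> str:
    # Basic validation: cap length and flag obvious override attempts.
    if len(user_input) > MAX_INPUT_CHARS:
        raise ValueError("input too long")
    if SUSPICIOUS.search(user_input):
        raise ValueError("possible prompt injection detected")
    # Keep system instructions separate from user content instead of
    # concatenating everything into a single string.
    system = "You are a billing support assistant. Answer billing questions only."
    return call_llm(system, user_input)
```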
LLM02: Insecure Output Handling
Description:
Insecure output handling arises when LLM-generated outputs are not properly sanitised or validated before being used in downstream systems or displayed to users. This can lead to various injection attacks, especially if the output contains executable code or markup.
Risks:
Cross-Site Scripting (XSS), injection attacks (SQL, command injection), data exposure, and manipulation of user interfaces.
Vulnerabilities:
- Directly displaying LLM-generated HTML without proper escaping.
- Using LLM outputs in database queries without parameterisation.
- Passing LLM-generated code directly to an interpreter.
Mitigation Strategies:
- Sanitise and encode all LLM-generated outputs based on context.
- Treat LLM output as untrusted data and subject it to input validation.
- Use templating engines or safe output encoding libraries.
Attack Examples:
- LLM-generated text containing malicious JavaScript being rendered on a webpage (XSS).
- LLM output used in a SQL query leading to a data breach.
- LLM-generated code injecting malicious commands into a system.
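A brief sketch of context-aware output handling using only the Python standard library: the model's output is HTML-escaped before rendering and bound as a parameter when written to a database. The answers table is assumed purely for illustration.

```python
import html
import sqlite3

def render_answer(llm_output: str) -> str:
    # Encode for the HTML context so markup in the model output is
    # displayed as text rather than executed (mitigates XSS).
    return f"<p>{html.escape(llm_output)}</p>"

def store_answer(conn: sqlite3.Connection, user_id: int, llm_output: str) -> None:
    # Parameterised query: the output is bound as data, never
    # concatenated into the SQL string.
    conn.execute(
        "INSERT INTO answers (user_id, body) VALUES (?, ?)",
        (user_id, llm_output),
    )
    conn.commit()
```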
LLM03: Training Data Poisoning
Description:
Training data poisoning occurs when attackers inject malicious or biased data into the training dataset of an LLM. This can compromise the model’s integrity, leading to biased outputs, backdoors, or other undesirable behaviours.
Risks:
Generation of biased or misleading content, backdoor functionality, and reduced model accuracy.
Vulnerabilities:
- Injecting biased examples into the training data to skew the model’s outputs.
- Inserting backdoor triggers that cause the model to behave maliciously under specific conditions.
- Adding noise or irrelevant data to degrade the model’s performance.
Mitigation Strategies:
- Validate and sanitise training data from untrusted sources.
- Use data provenance techniques to track the origin and integrity of training data.
- Employ differential privacy techniques to protect against data poisoning attacks.
Attack Examples:
- Poisoning a sentiment analysis model to classify negative reviews as positive.
- Creating a backdoor in a code generation model to insert vulnerabilities into generated code.
- Degrading a translation model’s accuracy by injecting incorrect translations.
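One lightweight way to apply the data-provenance advice above is to record a cryptographic digest for every training file and verify those digests before each training run. A minimal sketch, assuming the corpus is stored as JSONL files on disk:

```python
import hashlib
import json
from pathlib import Path

def record_provenance(data_dir: str, manifest_path: str) -> None:
    # Record a SHA-256 digest per training file so later runs can detect
    # silent modification of the corpus.
    manifest = {}
    for path in sorted(Path(data_dir).glob("**/*.jsonl")):
        manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_provenance(manifest_path: str) -> list[str]:
    # Return the paths whose current digest no longer matches the manifest.
    manifest = json.loads(Path(manifest_path).read_text())
    changed = []
    for path, expected in manifest.items():
        actual = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        if actual != expected:
            changed.append(path)
    return changed
```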
LLM04: Model Denial of Service
Description:
Attackers can overload LLMs with computationally expensive or complex prompts, leading to resource exhaustion and denial of service (DoS) for legitimate users.
Risks:
Service disruption, increased operational costs, and reduced availability of the LLM.
Vulnerabilities:
- Submitting extremely long or complex prompts.
- Requesting computationally intensive tasks repeatedly.
- Exploiting vulnerabilities in the LLM’s input processing to cause excessive resource consumption.
Mitigation Strategies:
- Implement rate limiting and input size restrictions.
- Optimise prompt processing and resource allocation.
- Use caching mechanisms to store responses for common prompts.
Attack Examples:
- Flooding an LLM-powered API with complex queries to exhaust resources.
- Submitting long, repetitive prompts to slow down response times.
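The rate-limiting and input-size mitigations above can be sketched in a few lines. The thresholds below are illustrative assumptions; real limits should be tuned to the deployment.

```python
import time
from collections import defaultdict, deque

MAX_PROMPT_CHARS = 8000   # reject oversized prompts outright
MAX_REQUESTS = 30         # per client...
WINDOW_SECONDS = 60       # ...per minute (illustrative values)

_requests: dict[str, deque] = defaultdict(deque)

def admit(client_id: str, prompt: str) -> bool:
    # Enforce an input size cap and a sliding-window rate limit before
    # the prompt ever reaches the model.
    if len(prompt) > MAX_PROMPT_CHARS:
        return False
    now = time.monotonic()
    window = _requests[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False
    window.append(now)
    return True
```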
LLM05: Supply Chain Vulnerabilities
Description:
LLMs are often integrated with third-party libraries, models, or data sources, introducing potential supply chain vulnerabilities. Compromised dependencies can expose sensitive information or allow for malicious code execution.
Risks:
Data breaches, unauthorised access, and execution of malicious code.
Vulnerabilities:
- Using a compromised pre-trained model.
- Relying on a vulnerable third-party library for data processing.
- Integrating with an untrusted external API.
Mitigation Strategies:
- Verify the integrity of pre-trained models and dependencies.
- Use reputable and secure third-party libraries.
- Implement strong access controls for external integrations.
Attack Examples:
- Attacker compromises a popular LLM library, injecting malicious code that is executed when the library is used.
- Using a pre-trained model containing a backdoor that allows unauthorised access.
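A simple integrity check in the spirit of the mitigations above is to pin the digest of a model artifact at review time and refuse to load anything that does not match. The expected digest here is a placeholder, not a real published value.

```python
import hashlib
from pathlib import Path

# Digest pinned at review time (placeholder value, not a real hash).
EXPECTED_SHA256 = "replace-with-the-digest-published-by-the-model-provider"

def verify_model(path: str) -> None:
    # Refuse to load a model artifact whose digest does not match the
    # pinned value.
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != EXPECTED_SHA256:
        raise RuntimeError(f"model artifact {path} failed integrity check")
```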
LLM06: Sensitive Information Disclosure
Description:
LLMs can inadvertently disclose sensitive information from their training data if not properly sanitised or if they overfit to the training data. This can include private data, intellectual property, or other confidential information.
Risks:
Data breaches, privacy violations, and reputational damage.
Vulnerabilities:
- LLM memorising and regurgitating sensitive information from the training data.
- LLM generating outputs that reveal patterns or insights about confidential data.
Mitigation Strategies:
- Carefully curate and sanitise training data to remove sensitive information.
- Employ differential privacy techniques to protect against data leakage.
- Implement output filtering mechanisms to detect and redact sensitive information.
Attack Examples:
- An LLM trained on customer data inadvertently reveals credit card numbers in its output.
- An LLM used for code generation discloses proprietary algorithms.
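As a rough illustration of output filtering, the following sketch redacts card-number-like and e-mail-like strings before a response is returned. The patterns are deliberately simple and assumed; production systems typically combine such rules with a dedicated PII- or secret-detection service.

```python
import re

# Illustrative patterns only; they will both over- and under-match.
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def redact(llm_output: str) -> str:
    # Replace anything that looks like a card number or e-mail address
    # before the response leaves the application.
    redacted = CARD.sub("[REDACTED NUMBER]", llm_output)
    redacted = EMAIL.sub("[REDACTED EMAIL]", redacted)
    return redacted
```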
LLM07: Insecure Plugin Design
Description:
If LLMs utilise plugins, insecure plugin design can introduce vulnerabilities, allowing unauthorised access to the LLM or connected systems.
Risks:
Unauthorised access, data breaches, and privilege escalation.
Vulnerabilities:
- Plugins with excessive permissions.
- Plugins with vulnerabilities that can be exploited to gain control of the LLM.
- Inadequate input validation in plugins.
Mitigation Strategies:
- Implement least privilege for plugins.
- Thoroughly test and vet plugin code for vulnerabilities.
- Implement strong input validation and sanitisation within plugins.
Attack Examples:
- An attacker exploits a vulnerable plugin to gain access to the LLM’s underlying system.
- A malicious plugin leaks sensitive data from connected systems.
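A minimal sketch of plugin-side input validation: arguments supplied by the LLM are checked against an allow-list and sensible bounds before anything runs. The report names and limits are hypothetical examples.

```python
# Reports this plugin is permitted to run (hypothetical names).
ALLOWED_REPORTS = {"daily_summary", "weekly_summary"}

def run_report_plugin(args: dict) -> str:
    # Validate everything the model supplies before acting on it.
    report = args.get("report")
    if report not in ALLOWED_REPORTS:
        raise ValueError(f"report {report!r} is not permitted")
    limit = args.get("limit", 10)
    if not isinstance(limit, int) or not 1 <= limit <= 100:
        raise ValueError("limit must be an integer between 1 and 100")
    # The plugin itself should run with a narrowly scoped, read-only
    # credential rather than inheriting the application's permissions.
    return f"running {report} with limit {limit}"
```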
LLM08: Excessive Agency
Description:
LLMs granted excessive agency can perform actions beyond their intended scope, potentially causing harm or disruption. This occurs when an LLM has too much autonomy or access to sensitive functionalities.
Risks:
Unintended or harmful actions, security breaches, and compromised system integrity.
Vulnerabilities:
- Excessive functionality: Access to functions not required for the LLM’s primary purpose.
- Excessive permissions: Plugins or integrations with broader permissions than necessary.
- Lack of confirmation for high-impact actions.
Mitigation Strategies:
- Restrict LLM capabilities to only necessary functions.
- Implement least privilege for all integrations and plugins.
- Require human approval for high-impact or sensitive operations.
Attack Examples:
- A personal assistant LLM with excessive email permissions sends spam or phishing messages.
- An LLM with control over system functions inadvertently deletes critical files.
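The "human approval for high-impact actions" mitigation can be expressed as a simple dispatch gate. The action names below are illustrative assumptions; the point is that the model can only trigger a fixed set of handlers, and sensitive ones wait for a named approver.

```python
# Actions deemed high impact; these require explicit human approval.
HIGH_IMPACT = {"send_email"}

def execute_action(action: str, params: dict, approved_by: str | None = None) -> str:
    # A fixed dispatch table restricts the model to known actions.
    handlers = {
        "lookup_order": lambda p: f"order {p.get('id')} looked up",
        "send_email": lambda p: f"email sent to {p.get('to')}",
    }
    if action not in handlers:
        raise ValueError(f"unknown or disallowed action: {action}")
    # High-impact actions are held until a named human approves them.
    if action in HIGH_IMPACT and approved_by is None:
        return f"PENDING: {action} queued for human review"
    return handlers[action](params)
```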
LLM09: Overreliance
Description:
Overreliance on LLMs without proper human oversight can lead to incorrect decisions, biased outcomes, or the propagation of misinformation.
Risks:
Poor decision-making, biased outcomes, and reputational damage.
Vulnerabilities:
- Blindly trusting LLM-generated content without verification.
- Using LLMs for critical decisions without human review.
Mitigation Strategies:
- Implement human-in-the-loop systems for critical tasks.
- Validate LLM outputs against trusted sources.
- Educate users about the limitations of LLMs.
Attack Examples:
- An organisation relying solely on LLM-generated financial advice makes poor investment decisions.
- A news outlet publishes LLM-generated news articles without fact-checking, leading to the spread of misinformation.
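A human-in-the-loop workflow can be as simple as routing model output for critical tasks into a review queue rather than acting on it directly. A minimal sketch, with hypothetical task names:

```python
from dataclasses import dataclass

@dataclass
class ReviewItem:
    task: str
    llm_output: str
    status: str = "pending"

review_queue: list[ReviewItem] = []

def submit_for_review(task: str, llm_output: str) -> ReviewItem:
    # Output for critical tasks is queued rather than acted on directly.
    item = ReviewItem(task, llm_output)
    review_queue.append(item)
    return item

def approve(item: ReviewItem, reviewer: str) -> None:
    # The output is released only after a named reviewer signs off.
    item.status = f"approved by {reviewer}"
```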
LLM10: Model Theft
Description:
Attackers may attempt to steal valuable LLM models, potentially to gain a competitive advantage or to use the models for malicious purposes.
Risks:
Loss of intellectual property, financial losses, and reputational damage.
Vulnerabilities:
- Unauthorised access to model files or parameters.
- Reverse engineering model architectures through API access.
Mitigation Strategies:
- Implement strict access controls to protect model files.
- Use encryption to secure model data.
- Monitor API usage for suspicious activity.
Attack Examples:
- A competitor steals a proprietary LLM model to develop a competing product.
- Attackers gain unauthorised access to a sensitive LLM and use it to generate disinformation.
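API monitoring for extraction attempts can start with something as simple as per-client query counting. The threshold below is an assumed placeholder; in practice it would be derived from normal usage patterns.

```python
from collections import Counter

# Illustrative threshold; derive the real value from typical usage.
EXTRACTION_THRESHOLD = 10_000

query_counts: Counter[str] = Counter()

def record_query(client_id: str) -> None:
    query_counts[client_id] += 1

def suspicious_clients() -> list[str]:
    # Flag clients whose query volume suggests systematic model extraction.
    return [c for c, n in query_counts.items() if n > EXTRACTION_THRESHOLD]
```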
References
OWASP Top 10
for LLM – https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-v1_0.pdf