Here’s a refined summary of the OWASP Top 10 for Large Language Models (LLMs) / Generative AI, based on the 2025 version of the project. OWASP+2OWASP Gen AI Security Project+2
Use this as a checklist for your AI/GenAI systems.
| # | Risk | Description |
|---|---|---|
| LLM01: Prompt Injection | Attackers craft inputs (prompts) to manipulate model behavior or bypass safety constraints. OWASP+1 | |
| LLM02: Insecure Output Handling | Failing to validate or sanitize the model’s output can result in leakage, injection, or downstream misuse. OWASP+1 | |
| LLM03: Training Data Poisoning | Training or fine-tune data is maliciously altered so the model behaves incorrectly, unethically or insecurely. OWASP+1 | |
| LLM04: Model Denial of Service | Over-use, resource exhaustion, or malicious input patterns degrade service availability or increase cost. OWASP | |
| LLM05: Supply Chain Vulnerabilities | External models, components, datasets or libraries within the AI chain are compromised, undermining model integrity. OWASP+1 | |
| LLM06: Sensitive Information Disclosure | The model inadvertently reveals private/proprietary/training data (e.g., PII, IP) via its outputs. OWASP Gen AI Security Project+1 | |
| LLM07: Insecure Plugin Design | Plugins/extensions tied to the AI system that handle untrusted inputs or lack access control may open attack surfaces. OWASP | |
| LLM08: Excessive Agency | Giving the AI model too much autonomous capability (actions, changes, workflows) without proper human-in-loop or governance. Kong Inc. | |
| LLM09: Over-reliance | Blind trust in model outputs without validation leads to flawed decisions, propagation of errors, or compliance gaps. OWASP+1 | |
| LLM10: Model Theft | The intellectual property or trained model itself is stolen, copied, or exposed—leading to loss of competitive advantage or unauthorized use. OWASP |
