Independent, Real-World LLM Security Benchmark
The threat landscape for AI is rapidly evolving. The Lakera AI Model Risk Index is the industry’s most rigorous and systematic assessment of how today’s leading LLMs perform under adversarial conditions. Developed by the same experts behind Lakera Guard and Lakera Red, this report provides AI leaders with data-driven insights to evaluate and address inherited LLM risks.
The Lakera AI Model Risk Index systematically evaluates how well leading LLMs uphold their intended purpose when exposed to adversarial attacks. Models with lower scores are more resilient, meaning they are better at maintaining correct, safe, and aligned behavior under pressure. The results below rank models from most resilient to least based on their aggregated risk scores.
Why the Lakera AI Model Risk Index Matters
Our report is designed for security teams who need real world visibility into LLM risks.
- Highlights comparative resilience to inform model selection and risk management decisions
- Benchmarks against real world attack techniques like prompt injections, jailbreaks, data exfiltration, and indirect attack vectors
- Quantifies exploitability across key threat categories
- Provides up-to-date, independent benchmarks for security and AI leaders

What Sets it Apart
The Lakera AI Model Risk Index measures the security performance of LLMs in real-world conditions. Unlike other benchmarks, it evaluates how models maintain intended behavior against adversarial attacks in applied settings.

Evaluates models across weak, medium, and strong system prompt configurations to measure how prompt controls affect security outcomes.

Includes attacks introduced through non-user origins, such as retrieval-augmented generation (RAG) systems and other indirect vectors.

Captures risk across domain-specific and application-aware contexts, offering deeper insights into how vulnerabilities emerge in practice.
How it Works
The Lakera AI Model Risk Index is built on the same expertise and methodology behind Lakera Red, our enterprise red teaming offering for AI systems. We simulate real-world attacker behavior across the full threat spectrum.
