10 min read • Research

RAG Under Attack: How the LLM Vulnerability Affects Real Systems

In part one, we showed how LLMs can be tricked into executing data. This time, we look at how that plays out in real-world RAG systems, where poisoned context can lead to phishing, data leaks, and guardrail bypasses, even in internal apps.
Peter Dienes
March 26, 2025