AI security blog
The Rise of the Internet of Agents: A New Era of Cybersecurity
As AI-powered agents go online, securing our digital infrastructure will require a fundamental shift in cybersecurity.

Claude Sonnet 4: A New Standard for Secure Enterprise LLMs?
What Claude Sonnet 4 gets right—and where even the most secure models still fall short.

The Security Company of the Future Will Look Like OpenAI
AI security isn’t just cybersecurity with a twist—it’s a whole new game.

How to Secure Your GenAI App When You Don’t Know Where to Start
A practical first-steps guide to securing your GenAI app, even if you're starting from zero.

How to Secure MCPs with Lakera Guard
This guide explains how to integrate Lakera Guard directly into a Model Context Protocol (MCP) server, giving you an easy way to add advanced threat detection to your MCP workflows.

From Regex to Reasoning: Why Your Data Leakage Prevention Doesn’t Speak the Language of GenAI
Why legacy data leakage prevention tools fall short in GenAI environments—and what modern DLP needs to catch.

RAG Under Attack: How the LLM Vulnerability Affects Real Systems
In part one, we showed how LLMs can be tricked into executing data. This time, we look at how that plays out in real-world RAG systems—where poisoned context can lead to phishing, data leaks, and guardrail bypasses, even in internal apps.

Activate
untouchable mode.
Get started for free.
Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.
Join our Slack Community.
Several people are typing about AI/ML security. Come join us and 1000+ others in a chat that’s thoroughly SFW.