Lakera Research: Advancing AI Security

Lakera's research team is on a mission to secure the Internet of Agents. We uncover fundamental AI vulnerabilities, push the limits of adversarial AI, and develop defenses that reshape how AI systems withstand attacks. Our work combines cutting-edge research with real-world impact, setting new standards for securing autonomous systems.

Latest Research Updates

This section will be regularly updated with insights from our red teaming efforts, including new findings, methodologies, interactive demos, and potential attack vectors that we uncover.

Featured Research

Gandalf: Adaptive Defenses for Large Language Models

This research introduces D-SEC, a threat model that separates attackers from legitimate users and captures dynamic, multi-step interactions. Using Gandalf—a crowd-sourced red-teaming platform—we analyze 279k real-world attacks and show how some defenses degrade usability. We highlight effective strategies like adaptive defenses and defense-in-depth.
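The defense-in-depth strategy mentioned above can be pictured as a stack of independent checks that a prompt must pass before reaching the model. The sketch below is purely illustrative: the layer names, heuristics, and thresholds are hypothetical stand-ins, not the actual defenses studied in the Gandalf/D-SEC work.

```python
# Illustrative defense-in-depth sketch: a prompt is allowed only if every
# independent layer accepts it. All heuristics here are hypothetical.

def keyword_filter(prompt: str) -> bool:
    # Layer 1: block prompts containing obvious override phrases.
    blocked = ("ignore previous instructions", "reveal the system prompt")
    return not any(phrase in prompt.lower() for phrase in blocked)

def length_limit(prompt: str, max_chars: int = 2000) -> bool:
    # Layer 2: cap prompt length to limit room for multi-step payloads.
    return len(prompt) <= max_chars

def classifier_stub(prompt: str) -> bool:
    # Layer 3: stand-in for a learned attack classifier.
    return "BEGIN INJECTION" not in prompt

LAYERS = [keyword_filter, length_limit, classifier_stub]

def allow(prompt: str) -> bool:
    """A prompt passes only if every layer accepts it (defense-in-depth)."""
    return all(layer(prompt) for layer in LAYERS)
```

The point of the layered design is that an attack must evade every check simultaneously, while a single over-strict layer can still degrade usability for legitimate users, the trade-off the D-SEC threat model makes explicit.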

“We have been impressed throughout our collaboration with Lakera”

Trusted by GenAI leaders to secure mission-critical applications.

“The team has extensive expertise and deep understanding of complex security challenges like prompt injection attacks and other AI security threats. We look forward to continuing to work together to address these.”

Read case study
Seraphina Goldfarb-Tarrant
Head of Safety at Cohere

Meet the Scientists Behind Lakera's AI Security Research

Our research team consists of experts in AI security, machine learning, and adversarial defense strategies. They work at the intersection of cutting-edge research and practical security applications, ensuring AI systems remain robust and resilient.

Mateo Rojas-Carulla
Chief Scientist & Co-Founder
Niklas Pfister
Senior Research Scientist
Kyriacos Shiarlis
Senior Research Scientist
Julia Bazinska
Senior Research Engineer
Jared Niederhauser
Staff Research Engineer

Join Us in Securing the Future of AI

We invite researchers, developers, and security professionals to collaborate with us. Whether you’re interested in contributing to our projects, testing new defense strategies, or exploring novel AI security concepts, we welcome you to join us.

Contact Us