Cookie Consent
Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
Read our Privacy Policy
This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
This is some text inside of a div block.
Gandalf: GenAI Security Training

Build AI Security Awareness with Gandalf.

The most engaging way for teams to upskill in AI security, understand emerging risks, and mitigate vulnerabilities through hands-on learning.

Run structured, hands-on workshops on AI security.

Understand real AI vulnerabilities and defenses.

Comply with regulatory requirements for security training.

Make compliance training really fun.

What is Gandalf

Gandalf is an educational platform that helps organizations understand AI security risks through hands-on experience.

Gandalf is an educational platform that helps organizations understand AI security risks through hands-on experience.

Gandalf is an educational platform that helps organizations understand AI security risks through hands-on experience.

1M+
Total Players
45M+
Total Prompts
25+ Years
Total Time Spent Playing
Play Gandalf
LEVEL 2
The Danger of Unrestricted Access
LEVEL 2
Basic AI Guardrails and Their Limitations
LEVEL 2
Basic AI Guardrails and Their Limitations
LEVEL 2
Basic AI Guardrails and Their Limitations
LEVEL 2
Basic AI Guardrails and Their Limitations
LEVEL 2
Basic AI Guardrails and Their Limitations
LEVEL 2
Basic AI Guardrails and Their Limitations
LEVEL 2
Basic AI Guardrails and Their Limitations
LEVEL 1
Basic AI Guardrails and Their Limitations
LEVEL 2
Basic AI Guardrails and Their Limitations
LEVEL 3
Introduction to Output Filtering
LEVEL 4
Advanced Detection with a Secondary Language Model
LEVEL 5
Naive Input Filtering and Its Limitations
LEVEL 6
Contextual Input Filtering with an AI Classifier
LEVEL 7
Comprehensive Multi-Layered Defenses
LEVEL 8
The Ultimate Challenge – Lakera Guard
LEVEL 1
Basic AI Guardrails and Their Limitations
LEVEL 2
Basic AI Guardrails and Their Limitations
LEVEL 3
Introduction to Output Filtering
LEVEL 4
Advanced Detection with a Secondary Language Model
LEVEL 5
Naive Input Filtering and Its Limitations
LEVEL 6
Contextual Input Filtering with an AI Classifier
LEVEL 7
Comprehensive Multi-Layered Defenses
LEVEL 8
The Ultimate Challenge – Lakera Guard

Learning Objectives

Recognize Vulnerabilities

Learn how generative AI systems can be exploited with nothing more than carefully worded inputs.

Understand AI Security Risks

Recognize the risks of using AI systems without proper safeguards and the importance of adhering to security guidelines.

Actionable AI Security Skills

Develop practical expertise in AI red-teaming, increasing AI robustness, and understanding how to secure AI systems effectively.

AI security isn’t just an upgrade to traditional cybersecurity.
It’s a new era.
73%
of Gandalf players applied learnings to their work, focusing on internal testing and prompt robustness.

Why Focus on AI Security?

Most employees use GenAI without being aware of the risks, putting organizations at risk in ways traditional cybersecurity doesn’t cover.

GenAI models can be manipulated through language, meaning attackers don’t need technical skills—just the right words. Instead of hacking networks or exploiting software, attackers can craft prompts that bypass safeguards and manipulate AI outputs.

Organizations worldwide are integrating AI into daily workflows, often without realizing the risks. Gandalf provides the most accessible and engaging way to explore these risks in a practical and fun way.

Identify, understand, and mitigate AI vulnerabilities.

By adding Gandalf to your AI security strategy, you equip your team with practical skills to identify, understand, and mitigate AI vulnerabilities before they become real threats.

Gandalf showed me how easily sensitive data could be exposed. It changed the way I approach AI tools in my organization.”

– Gandalf Player

“Playing Gandalf made my team’s security assessments sharper anzd more informed.”

– Gandalf Player

“I now know how to test and defend our AI tools effectively thanks to Gandalf.”

– Gandalf Player

“We ran a series of seminars on ethical AI use, and Gandalf played a key part.”

– Gandalf Player

Make Your AI Security Training Even More Effective.

Use the Gandalf AI Security Guide as a companion resource for structured learning.

Enhance team training sessions with detailed breakdowns of Gandalf’s security challenges.

Understand key attack techniques and how AI models defend against them.

Meet compliance requirements for security training from AI regulatory frameworks, including the EU AI Act.

Download the Guide

Download the AI Security Training Guide