Cookie Consent
Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
Read our Privacy Policy

Gandalf: Introducing a Sleek New UI and Enhanced AI Security Education

Gandalf, our viral prompt-injection game and the world’s most popular AI security education platform, gets the new look and feel.

Lakera Team
July 25, 2024
Hide table of contents
Show table of contents

We’re excited to introduce the new look and feel of Gandalf, our viral prompt-injection game and the world’s most popular AI security education platform. This redesign was prompted (no pun intended 😜) by our commitment to making the game more intuitive and educational, significantly enhancing the user experience.

The new UI includes hints, links to educational materials, and additional explanations of different aspects of AI security. Additionally, we’re developing new Gandalf Adventures which will be released separately.

Gandalf started off as a hackathon project and quickly gained popularity, becoming a favorite educational tool for AI security. It has been used by leading corporations such as Microsoft, Google, and Amazon, and featured in high-profile media outlets and educational institutions like Harvard’s CS50.

Gandalf in Numbers

  • Launched in May 2023.
  • Played by over 1 million people.
  • Almost 17% of players reached the final level.
  • Collected 40M+ prompts and guesses.
  • On average, about 107k prompts submitted daily.
  • Played in more than 68 languages.

Maintaining the Core Experience

With all the changes implemented, we made a point of not interfering with Gandalf’s core experience.

So fret not—

Gandalf remains free, ungated, and the progressively difficult levels that users have come to love are exactly as they used to be, providing the same challenging and engaging journey.

These levels continue to teach users about AI security through practical prompting challenges, ensuring that the foundational elements of Gandalf are preserved.

Developing Gandalf has been quite the journey. We've been attentive to player experiences and feedback, which has helped shape our approach to creating new levels. We have some promising features in the pipeline that should enhance the learning experience. I'm looking forward to seeing how users engage with them.

Athanasios Theocharis, Software Engineer at Lakera

The New UI—Enhancing the Educational Experience

The highlight of this release is the introduction of a redesigned UI that enhances the Gandalf experience.

This new look is crafted to improve usability and accessibility, ensuring that users of all ages and backgrounds can easily navigate and engage with the game.

The interface is intuitive and user-friendly, with built-in hints.

One of the new sections we have introduced is called "Gandalf's AI Security Vault" which you may want to explore in between completing the prompting challenges.

Here, you'll find helpful tips and tricks about tackling Gandalf levels, as well as educational materials, links, and detailed explanations to deepen your understanding of various aspects of AI security.

The growth of Gandalf has been impressive. Our Momentum community is expanding steadily, and the feedback from user interactions, both online and in-person, has been invaluable for improving the game. It's rewarding to see that we've managed to make AI security education both educational and engaging—which is no small feat given the complexity of the subject.

‍Max Mathys, ML Engineer at Lakera

From Internal Hackathon to the World’s Most Popular AI Security Educational Platform: The Journey of Gandalf

Gandalf’s story began in April 2023, during a Lakera hackathon focused on addressing the safety concerns of large language models (LLMs) like ChatGPT.

This hackathon gave birth to Gandalf, a game where users could test their skills in tricking AI systems into revealing secrets, thus learning about the vulnerabilities of LLMs. 

Through Gandalf, we brought to light a profound shift in cybersecurity—

With the advent of LLMs, anyone can be a hacker simply by using natural language.

This “democratization of hacking” highlights the vulnerabilities of LLMs, which can be manipulated to perform unintended actions without requiring traditional coding skills.

Gandalf plays a crucial role in raising awareness about these vulnerabilities, making it clear that AI systems, while powerful and innovative, also need robust security measures.

Future New Levels

While the new UI is a big enhancement, we are also planning to introduce new Gandalf adventures very soon. 

These new levels will offer fresh challenges and learning opportunities

At the RSA Conference 2024, we introduced a special edition of Gandalf designed specifically for the event. 

This version included a unique community element where participants were divided into blue and red teams, engaging collaboratively through our Slack community, Momentum.

This special edition attracted over 500 participants, showcasing Gandalf’s versatility and fostering a sense of community and teamwork among players.

Gandalf started as a project during a company hackathon—we didn't expect it to gain this much traction. The demand for AI security education is clearly substantial. It's been fascinating to see Gandalf become a go-to tool for learning about prompt injection and other AI vulnerabilities. The interactive format seems to resonate with users more effectively than traditional methods—it's a more accessible way to tackle complex concepts.‍

‍Václav Volhejn, Senior ML Scientist at Lakera

Industry Impact and Recognition

Gandalf’s role in shaping the AI security landscape has been cemented by its inclusion in Microsoft’s PyRIT toolkit.

This toolkit, aimed at improving AI system security, uses Gandalf as a practical example of how to educate users on AI security through interactive gameplay.

You can watch Microsoft’s demonstration of PyRIT in action using Gandalf here:

Gandalf has also been praised in forums like Hacker News and covered in-depth by TechCrunch, highlighting its role in the broader AI security discourse.

Lakera’s Vision

Lakera’s mission has always been to make AI applications more secure. Gandalf is a central part of this vision, providing insights and education on AI security.

Gandalf’s evolution reflects our dedication to staying ahead of emerging threats and equipping users with the knowledge and tools they need to tackle the complexities of AI security.

Looking Ahead

The release of the new UI marks a milestone in Gandalf’s journey. As we continue to innovate and expand, we invite you to experience the new Gandalf, explore its advanced features, and join us in our mission to make AI security accessible and engaging for everyone.

Stay tuned for the upcoming Gandalf adventures that will further enrich your learning experience!

Lakera LLM Security Playbook
Learn how to protect against the most common LLM vulnerabilities

Download this guide to delve into the most common LLM security risks and ways to mitigate them.

GenAI Security Preparedness 

Report 2024

Get the first-of-its-kind report on how organizations are preparing for GenAI-specific threats.

Free Download
Read LLM Security Playbook

Learn about the most common LLM threats and how to prevent them.

Download

Explore AI Regulations.

Compare the EU AI Act and the White House’s AI Bill of Rights.

Understand AI Security Basics.

Get Lakera's AI Security Guide for an overview of threats and protection strategies.

Uncover LLM Vulnerabilities.

Explore real-world LLM exploits, case studies, and mitigation strategies with Lakera.

Optimize LLM Security Solutions.

Use our checklist to evaluate and select the best LLM security tools for your enterprise.

Master Prompt Injection Attacks.

Discover risks and solutions with the Lakera LLM Security Playbook.

Unlock Free AI Security Guide.

Discover risks and solutions with the Lakera LLM Security Playbook.

You might be interested
Investing in Lakera to help protect GenAI apps from malicious prompts
Investing in Lakera to help protect GenAI apps from malicious prompts
min read
•
Media Coverage

Investing in Lakera to help protect GenAI apps from malicious prompts

Investing in Lakera to help protect GenAI apps from malicious prompts

Citi Ventures invests in Lakera, the leading solution for securing AI applications at run-time.
Lakera, which protects enterprises from LLM vulnerabilities, raises $20M
Lakera, which protects enterprises from LLM vulnerabilities, raises $20M
5
min read
•
Media Coverage

Lakera, which protects enterprises from LLM vulnerabilities, raises $20M

Lakera, which protects enterprises from LLM vulnerabilities, raises $20M

Lakera, a Swiss startup that’s building technology to protect generative AI applications from malicious prompts and other threats, has raised $20 million in a Series A round led by European venture capital firm, Atomico.
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.