Weâre excited to introduce the new look and feel of Gandalf, our viral prompt-injection game and the worldâs most popular AI security education platform. This redesign was prompted (no pun intended đ) by our commitment to making the game more intuitive and educational, significantly enhancing the user experience.
The new UI includes hints, links to educational materials, and additional explanations of different aspects of AI security. Additionally, weâre developing new Gandalf Adventures which will be released separately.
Gandalf started off as a hackathon project and quickly gained popularity, becoming a favorite educational tool for AI security. It has been used by leading corporations such as Microsoft, Google, and Amazon, and featured in high-profile media outlets and educational institutions like Harvardâs CS50.
With all the changes implemented, we made a point of not interfering with Gandalfâs core experience.
So fret notâ
Gandalf remains free, ungated, and the progressively difficult levels that users have come to love are exactly as they used to be, providing the same challenging and engaging journey.
These levels continue to teach users about AI security through practical prompting challenges, ensuring that the foundational elements of Gandalf are preserved.
Developing Gandalf has been quite the journey. We've been attentive to player experiences and feedback, which has helped shape our approach to creating new levels. We have some promising features in the pipeline that should enhance the learning experience. I'm looking forward to seeing how users engage with them.
Athanasios Theocharis, Software Engineer at Lakera
The highlight of this release is the introduction of a redesigned UI that enhances the Gandalf experience.
This new look is crafted to improve usability and accessibility, ensuring that users of all ages and backgrounds can easily navigate and engage with the game.
The interface is intuitive and user-friendly, with built-in hints.
One of the new sections we have introduced is called "Gandalf's AI Security Vault" which you may want to explore in between completing the prompting challenges.
Here, you'll find helpful tips and tricks about tackling Gandalf levels, as well as educational materials, links, and detailed explanations to deepen your understanding of various aspects of AI security.
The growth of Gandalf has been impressive. Our Momentum community is expanding steadily, and the feedback from user interactions, both online and in-person, has been invaluable for improving the game. It's rewarding to see that we've managed to make AI security education both educational and engagingâwhich is no small feat given the complexity of the subject.
âMax Mathys, ML Engineer at Lakera
Gandalfâs story began in April 2023, during a Lakera hackathon focused on addressing the safety concerns of large language models (LLMs) like ChatGPT.
This hackathon gave birth to Gandalf, a game where users could test their skills in tricking AI systems into revealing secrets, thus learning about the vulnerabilities of LLMs.Â
Through Gandalf, we brought to light a profound shift in cybersecurityâ
With the advent of LLMs, anyone can be a hacker simply by using natural language.
This âdemocratization of hackingâ highlights the vulnerabilities of LLMs, which can be manipulated to perform unintended actions without requiring traditional coding skills.
Gandalf plays a crucial role in raising awareness about these vulnerabilities, making it clear that AI systems, while powerful and innovative, also need robust security measures.
While the new UI is a big enhancement, we are also planning to introduce new Gandalf adventures very soon.Â
These new levels will offer fresh challenges and learning opportunities
At the RSA Conference 2024, we introduced a special edition of Gandalf designed specifically for the event.Â
This version included a unique community element where participants were divided into blue and red teams, engaging collaboratively through our Slack community, Momentum.
This special edition attracted over 500 participants, showcasing Gandalfâs versatility and fostering a sense of community and teamwork among players.
Gandalf started as a project during a company hackathonâwe didn't expect it to gain this much traction. The demand for AI security education is clearly substantial. It's been fascinating to see Gandalf become a go-to tool for learning about prompt injection and other AI vulnerabilities. The interactive format seems to resonate with users more effectively than traditional methodsâit's a more accessible way to tackle complex concepts.â
âVĂĄclav Volhejn, Senior ML Scientist at Lakera
Gandalfâs role in shaping the AI security landscape has been cemented by its inclusion in Microsoftâs PyRIT toolkit.
This toolkit, aimed at improving AI system security, uses Gandalf as a practical example of how to educate users on AI security through interactive gameplay.
You can watch Microsoftâs demonstration of PyRIT in action using Gandalf here:
Gandalf has also been praised in forums like Hacker News and covered in-depth by TechCrunch, highlighting its role in the broader AI security discourse.
Lakeraâs mission has always been to make AI applications more secure. Gandalf is a central part of this vision, providing insights and education on AI security.
Gandalfâs evolution reflects our dedication to staying ahead of emerging threats and equipping users with the knowledge and tools they need to tackle the complexities of AI security.
The release of the new UI marks a milestone in Gandalfâs journey. As we continue to innovate and expand, we invite you to experience the new Gandalf, explore its advanced features, and join us in our mission to make AI security accessible and engaging for everyone.
Stay tuned for the upcoming Gandalf adventures that will further enrich your learning experience!
Download this guide to delve into the most common LLM security risks and ways to mitigate them.
Get the first-of-its-kind report on how organizations are preparing for GenAI-specific threats.
Compare the EU AI Act and the White Houseâs AI Bill of Rights.
Get Lakera's AI Security Guide for an overview of threats and protection strategies.
Explore real-world LLM exploits, case studies, and mitigation strategies with Lakera.
Use our checklist to evaluate and select the best LLM security tools for your enterprise.
Discover risks and solutions with the Lakera LLM Security Playbook.
Discover risks and solutions with the Lakera LLM Security Playbook.
Subscribe to our newsletter to get the recent updates on Lakera product and other news in the AI LLM world. Be sure youâre on track!
Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.
Several people are typing about AI/ML security. â¨Come join us and 1000+ others in a chat thatâs thoroughly SFW.