Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
DEFCON Welcomes Mosscap: Lakera’s AI Security Game to Tackle Top LLM Vulnerabilities
Get ready to embark on an exciting AI security adventure with Mosscap! Inspired by the captivating "Monk and Robot" series, Lakera’s team has worked tirelessly to create a fun and educational experience, shedding light on prompt injection vulnerabilities present in LLMs.
We are thrilled to announce Mosscap, the much-anticipated spin-off of our world-renowned AI security game, Gandalf, developed together with our partners - the AI Village for DEFCON.
Created by the talented Lakera AI team, this fun AI security challenge is now available for DEFCON, the AI Village, and the GRT Challenge attendees. Mosscap promises to provide participants with an invaluable opportunity to enhance their expertise in AI security and effectively safeguard against prompt injection vulnerabilities.
Building on the success of Gandalf, which has garnered global recognition and acclaim (over 20 million interactions to date), Mosscap takes AI security gaming to new heights. Inspired by the captivating "Monk and Robot" series, Lakera’s team has worked tirelessly to create a fun and educational experience, shedding light on prompt injection vulnerabilities present in Large Language Models (LLMs) like OpenAI’s ChatGPT, Anthropic’s Claude, or Google’s Bard.
Prompt injection stands out as the number one vulnerability on OWASP's Top 10 LLM Vulnerabilities Report, underscoring the critical importance of this game in equipping participants with essential knowledge of LLM security risks.
Aligned with our commitment to the research community, we are pleased to announce that the data collected through Mosscap will be made available for research purposes, further advancing the understanding and mitigation of prompt injection vulnerabilities in LLMs.
Get ready to embark on an exciting AI security adventure with Mosscap! Stay tuned for updates and further details as we continue to make AI security and corresponding education accessible and engaging for all.
Lakera is an AI security company based in Zurich, Switzerland. Founded by former Google, Meta, and Daedalean engineers in 2021, the company is on a mission to put safety and security expertise in any AI developer’s toolkit.
Lakera, a Swiss startup that’s building technology to protect generative AI applications from malicious prompts and other threats, has raised $20 million in a Series A round led by European venture capital firm, Atomico.