Cookie Consent
Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
Read our Privacy Policy

DEFCON Welcomes Mosscap: Lakera’s AI Security Game to Tackle Top LLM Vulnerabilities

Get ready to embark on an exciting AI security adventure with Mosscap! Inspired by the captivating "Monk and Robot" series, Lakera’s team has worked tirelessly to create a fun and educational experience, shedding light on prompt injection vulnerabilities present in LLMs.

Lakera Team
November 14, 2023
Hide table of contents
Show table of contents

We are thrilled to announce Mosscap, the much-anticipated spin-off of our world-renowned AI security game, Gandalf, developed together with our partners - the AI Village for DEFCON.

Created by the talented Lakera AI team, this fun AI security challenge is now available for DEFCON, the AI Village, and the GRT Challenge attendees. Mosscap promises to provide participants with an invaluable opportunity to enhance their expertise in AI security and effectively safeguard against prompt injection vulnerabilities.

Mosscap AI security game

Building on the success of Gandalf, which has garnered global recognition and acclaim (over 20 million interactions to date), Mosscap takes AI security gaming to new heights. Inspired by the captivating "Monk and Robot" series, Lakera’s team has worked tirelessly to create a fun and educational experience, shedding light on prompt injection vulnerabilities present in Large Language Models (LLMs) like OpenAI’s ChatGPT, Anthropic’s Claude, or Google’s Bard.

Prompt injection stands out as the number one vulnerability on OWASP's Top 10 LLM Vulnerabilities Report, underscoring the critical importance of this game in equipping participants with essential knowledge of LLM security risks.

**Pro Tip: Read OWASP Top 10 for Large Language Model Applications Explained: A Practical Guide**

Aligned with our commitment to the research community, we are pleased to announce that the data collected through Mosscap will be made available for research purposes, further advancing the understanding and mitigation of prompt injection vulnerabilities in LLMs.

Get ready to embark on an exciting AI security adventure with Mosscap! Stay tuned for updates and further details as we continue to make AI security and corresponding education accessible and engaging for all.

For more information and sneak peeks, please visit Mosscap's official website and follow us on our Twitter/X and LinkedIn.

About Lakera

Lakera is an AI security company based in Zurich, Switzerland. Founded by former Google, Meta, and Daedalean engineers in 2021, the company is on a mission to put safety and security expertise in any AI developer’s toolkit.

For media inquiries, please contact: info@lakera.ai

Lakera LLM Security Playbook
Learn how to protect against the most common LLM vulnerabilities

Download this guide to delve into the most common LLM security risks and ways to mitigate them.

GenAI Security Preparedness 

Report 2024

Get the first-of-its-kind report on how organizations are preparing for GenAI-specific threats.

Free Download
Read LLM Security Playbook

Learn about the most common LLM threats and how to prevent them.

Download

Explore AI Regulations.

Compare the EU AI Act and the White House’s AI Bill of Rights.

Understand AI Security Basics.

Get Lakera's AI Security Guide for an overview of threats and protection strategies.

Uncover LLM Vulnerabilities.

Explore real-world LLM exploits, case studies, and mitigation strategies with Lakera.

Optimize LLM Security Solutions.

Use our checklist to evaluate and select the best LLM security tools for your enterprise.

Master Prompt Injection Attacks.

Discover risks and solutions with the Lakera LLM Security Playbook.

Unlock Free AI Security Guide.

Discover risks and solutions with the Lakera LLM Security Playbook.

You might be interested
Investing in Lakera to help protect GenAI apps from malicious prompts
Investing in Lakera to help protect GenAI apps from malicious prompts
min read
•
Media Coverage

Investing in Lakera to help protect GenAI apps from malicious prompts

Investing in Lakera to help protect GenAI apps from malicious prompts

Citi Ventures invests in Lakera, the leading solution for securing AI applications at run-time.
Lakera, which protects enterprises from LLM vulnerabilities, raises $20M
Lakera, which protects enterprises from LLM vulnerabilities, raises $20M
5
min read
•
Media Coverage

Lakera, which protects enterprises from LLM vulnerabilities, raises $20M

Lakera, which protects enterprises from LLM vulnerabilities, raises $20M

Lakera, a Swiss startup that’s building technology to protect generative AI applications from malicious prompts and other threats, has raised $20 million in a Series A round led by European venture capital firm, Atomico.
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.