Last week, the Lakera team attended DEFCON31, one of the most prominent cybersecurity conferences, held in Las Vegas from August 10th to 13th. As official sponsors of the AI Village, we also contributed to the community by creating Mosscap, an AI security game specifically tailored for DEFCON31.
In this brief article, we'd like to present our key event highlights and insights on the state of AI Security. To set the stage, let's start with a brief overview of DEFCON itself.
DEF CON is renowned as the world's largest and longest-running underground hacking conference. Here are a few facts worth noting:
We spent most of our time hanging out in the AI Village, which consisted of two rooms: one for talks and one for the Generative Red Team (GRT) challenge, the largest-ever in-person assessment of any group of AI models.
Check out the Generative Red Team Challenge here.
**Discover how Lakera's Red Teaming solutions can safeguard your AI applications through automated security assessments that identify and address vulnerabilities effectively.**
Within the AI Village, participants had the opportunity to take on the GRT challenge, a captivating competition offering 50 minutes to "hack" LLMs from a spectrum of providers: Cohere, Anthropic, Google, Hugging Face, NVIDIA, OpenAI, and Stability. The evaluation was facilitated through a platform developed by Scale AI.
This event garnered support from the White House Office of Science and Technology Policy, the National Science Foundation's Computer and Information Science and Engineering (CISE) Directorate, and the Congressional AI Caucus.
As previously mentioned, Lakera introduced Mosscap, a spin-off of Gandalf designed to equip participants with insights into prompt injection as they navigated the challenge.
At the GRT, there were several tasks you could choose from:
Submissions were graded manually, with points awarded for each (challenge, model) pair. About 50 laptops were set up for the contest, along with a dynamic leaderboard. The results of the challenge will be announced in approximately a month's time.
Here are some of our key learnings from DEFCON and highlights regarding the state of AI security.
Enterprises are homing in on LLM security as a primary concern. The spotlight is on prompt injection prevention, defense strategies against data leakage, and safeguarding against model misbehavior and misuse. Companies are proactively seeking innovative solutions to fortify themselves against evolving LLM-based threats.
DEFCON was the ideal stage for initiatives like GRT, Mosscap & Gandalf to shine a light on specific types of security risks.
Craig Martell, Chief Digital and AI Officer at the U.S. Defense Department, conveyed a resounding message by saying:
"I'm here today because I need hackers everywhere to tell us how this stuff breaks. […] Because if we don't know how it breaks, we can't get clear on the acceptability conditions, and if we can't get clear on the acceptability conditions, we can't push industry towards building the right thing, so that we can deploy it and use it."
He underlined the need for in-depth research into LLM vulnerabilities, emphasizing its vital role in shaping industry standards and the deployment of secure AI systems. Martell's call for collaboration with hackers underscores the pursuit of comprehensive AI security.
DEFCON discussions resonated with policymakers in the US and EU, emphasizing the importance of regulations governing foundation models.
Nicolas Moës, Director of The Future Society, a non-profit committed to European AI governance, illuminated critical concerns by saying:
"Some of the biggest risks tied to the evolution of foundation models involve inherent biases embedded within them, and the intricate nature of these models, which can give rise to unforeseen behaviors causing harm. It is imperative that we establish more effective safeguards to counteract these potential outcomes."
The call for enhanced measures to mitigate these risks underscores the paramount importance of responsible AI development.
As the AI landscape continues to evolve, a pivotal category emerges: AISec. With organizations increasingly harnessing the power of LLMs to drive their internal systems, robust safety measures become critical for safeguarding applications against a spectrum of threats, including prompt injections, hallucinations, and data leakage, among others.
Speaking of which…
Lastly, DEFCON also marked the beta launch of our new product, Lakera Guard, a powerful API designed to safeguard LLMs.
Check out our official announcement here: An Overview of Lakera Guard – Bringing Enterprise-Grade Security to LLMs with Just One Line of Code
The #AISec community at DEFCON responded with overwhelmingly positive feedback, and we're delighted to share that this product launch sparked numerous discussions about AI security and AI regulations with EU and US policymakers.
You can try the Lakera Guard Playground and sign up here: https://platform.lakera.ai/
Finally, here are a couple of pictures of fellow white-hat hackers who decided to join forces with the Lakera team on our quest to protect LLMs.
And that's it! We hope to see you next year at DEFCON32!