Lessons Learned from Crowdsourced LLM Threat Intelligence
2023 was a wild year for Large Language Models. It started off with Bard, Bing, and Llama, saw GPT-4 and multimodal models arrive, and ended with Mamba, Mixtral, and Phi-2. It was also a wild year for everyone’s favorite new security vulnerability: Prompt Injection.
Many teams gathered prompt injection and LLM vulnerability data throughout the year. In this panel, we'll dig into what we learned about prompt injections in 2023 and how we can use that knowledge to build more secure LLM-enabled applications in 2024.
We’ll discuss insights from:
- Lakera’s Gandalf prompt injection game
- The HackAPrompt competition from LearnPrompting.org
- The Tensor Trust prompt injection and prompt defense game
- The LVE Project’s community challenges and LLM vulnerability initiative
Join us, along with a representative from each of these projects, for a discussion panel and interactive Q&A session.
Don't want your question to get lost in the Q&A? Submit your questions for the panel today.
In this session, you will:
- Get an overview of what we have learned about prompt injections over the last year
- Understand what you can do to make your LLM-enabled applications more secure today
- Discover tools and datasets for your own prompt injection evaluations
- Discuss the importance of crowdsourcing attack and vulnerability data and contributing to our collective knowledge