Cookie Consent
Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
Read our Privacy Policy
Webinar
Virtual
Upcoming
On-demand
All

Masterclass in AI Threat Modeling: Addressing Prompt Injections

September
25
,
2024
9:00 am
PT
September 25, 2024
Mateo Rojas-Carulla
Chief Scientist & Co-Founder of Lakera
Elliot Ward
Senior Security Researcher @ Snyk
Nate Lee
CISO at Cloudsec.ai
September 25, 2024

Join Mateo Rojas Carulla (Chief Scientist at Lakera), Nate Lee (CISO at CloudSec), and Elliot Ward (Security Researcher at Snyk) for a live discussion on the intricacies of AI threat modeling and the pressing challenges in securing AI systems.

As AI systems become more sophisticated, the threats they face grow in complexity. One of the most pressing challenges today is effectively modeling and defending against AI-specific attacks, such as prompt injections.

This webinar will explore how to effectively model AI-specific threats, address emerging vulnerabilities, and establish a proactive security strategy. The session will also place a special emphasis on prompt injections—an emerging and particularly dangerous form of attack on Generative AI systems. Attendees will gain insights into the latest defense strategies and practical ways to secure AI-driven applications against these sophisticated threats.

Agenda

Join this session to:

  • Understand the unique security risks posed by AI and how to model them effectively.
  • Learn about the growing threat of prompt injections and how they exploit GenAI systems.
  • Explore cutting-edge research and real-world examples of LLM exploits.
  • Discover actionable techniques to defend your AI applications from emerging attack vectors.
Speakers
Mateo Rojas-Carulla
Chief Scientist & Co-Founder of Lakera

Dr. Mateo Rojas-Carulla is the Chief Scientist and Co-Founder of Lakera. With over 10 years of experience in artificial intelligence, Mateo has worked on building large language models in the industry and conducted leading AI research at Meta’s FAIR labs.

Read more
Elliot Ward
Senior Security Researcher @ Snyk

Elliot is a senior security researcher at Snyk and a project lead for the Large Language Model Security Verification Standard by OWASP.

Read more
Nate Lee
CISO at Cloudsec.ai

Nate Lee is the Chief Information Security Officer at Cloudsec.ai and the lead author on the recent Cloud Security Alliance paper - Securing LLM Backed Systems: Essential Authorization Practices.

Read more

https://lakera/event/masterclass-in-ai-threat-modeling-addressing-prompt-injections

Check out similar events
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.