min read • AI Security

LLM Vulnerability Series: Direct Prompt Injections and Jailbreaks

…of prompt injections that are currently in discussion. What are the specific ways attackers can use prompt injection attacks to gain access to credit card numbers, medical histories, and other forms of personally identifiable information?
Daniel Timbrell
November 13, 2024
min read • Large Language Models

OpenAI’s CLIP in production

We have released an implementation of OpenAI’s CLIP model that completely removes the need for PyTorch, enabling you to install this fantastic model quickly and seamlessly in production, and possibly even on edge devices.
Daniel Timbrell
November 13, 2024
6 min read • AI Security

The Beginner's Guide to Visual Prompt Injections: Invisibility Cloaks, Cannibalistic Adverts, and Robot Women

What is a visual prompt injection attack, and how can you recognize one? Read this short guide and check out our real-life examples of visual prompt injection attacks performed during Lakera's Hackathon.
Daniel Timbrell
November 13, 2024