AI security blog
The Rise of the Internet of Agents: A New Era of Cybersecurity
As AI-powered agents go online, securing our digital infrastructure will require a fundamental shift in cybersecurity.
You shall not pass: the spells behind Gandalf
In this first post of a longer series on Gandalf, we highlight some of its inner workings: what exactly is happening at each level, and how is Gandalf getting stronger?
Your validation set won’t tell you if a model generalizes. Here’s what will.
As we all know from machine learning 101, you should split your dataset into three parts: the training, validation, and test sets. You train your models on the training set. You choose your hyperparameters by selecting the best model on the validation set. Finally, you look at your accuracy (F1 score, ROC curve...) on the test set. And voilà, you’ve just achieved XYZ% accuracy.
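As a minimal sketch of that three-way split, here is what it might look like with scikit-learn's `train_test_split` (the synthetic dataset and the 60/20/20 ratios are purely illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Carve out a held-out test set first (20%)...
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# ...then split the remainder into training and validation sets.
# 0.25 of the remaining 80% yields a 60/20/20 overall split.
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0
)

# Train on the training set, choose hyperparameters on the
# validation set, and look at the test set exactly once.
```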
[Updated for YOLOv8] How robust are pre-trained object detection ML models like YOLO or DETR?
A deep dive into advanced comparison methods beyond standard performance metrics, helping you build computer vision models that perform consistently over the long term.
Continuous testing and model selection with Lakera and Voxel51
We are excited to announce the release of our first integration with FiftyOne by Voxel51. This integration lets you use FiftyOne's powerful visualization features to dig into the insights generated by Lakera's MLTest. Read on to learn how.
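As a rough sketch of the FiftyOne side of this workflow (the dataset directory and name below are hypothetical, and the MLTest step is only hinted at in a comment):

```python
import fiftyone as fo

# Hypothetical: load an image dataset from a local directory.
dataset = fo.Dataset.from_dir(
    dataset_dir="/path/to/images",
    dataset_type=fo.types.ImageDirectory,
    name="mltest-demo",
)

# In the integration, MLTest findings would be attached to each
# sample (e.g., as tags or label fields) before visualization.

# Launch the FiftyOne App to explore the samples interactively.
session = fo.launch_app(dataset)
session.wait()
```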
OpenAI’s CLIP in production
We have released an implementation of OpenAI’s CLIP model that completely removes the need for PyTorch, enabling you to quickly and seamlessly deploy this fantastic model in production, and even on edge devices.
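Lakera's actual implementation isn't shown here, but as a hedged sketch of the general idea, here is how a CLIP image encoder exported to ONNX could be run with ONNX Runtime instead of PyTorch (the file name, input shape, and preprocessing are all assumptions):

```python
import numpy as np
import onnxruntime as ort

# Assumption: a CLIP image encoder previously exported to ONNX.
session = ort.InferenceSession("clip_image_encoder.onnx")

# CLIP-style preprocessing is assumed to have produced this batch
# of normalized 224x224 RGB images (NCHW, float32); random here.
images = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Input/output names depend on how the model was exported.
input_name = session.get_inputs()[0].name
(embeddings,) = session.run(None, {input_name: images})

print(embeddings.shape)  # e.g., (1, 512) image embeddings
```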
Stress-test your models to avoid bad surprises.
Will my system work if image quality starts to drop significantly? If my system works at a given occlusion level, how much stronger can occlusion get before the system starts to underperform? I have faced such issues repeatedly in the past, all related to an overarching question: How robust is my model and when does it break?
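A minimal sketch of that kind of stress test: sweep a corruption over increasing severities (here, Gaussian blur as a stand-in for dropping image quality) and record accuracy at each level. The `model.predict` call and the data layout are hypothetical placeholders:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def accuracy_under_blur(model, images, labels, sigmas=(0, 1, 2, 4, 8)):
    """Sweep blur severity and record accuracy at each level."""
    results = {}
    for sigma in sigmas:
        # Blur spatial dimensions only (images: N x H x W x C).
        blurred = np.stack(
            [gaussian_filter(img, sigma=(sigma, sigma, 0)) for img in images]
        )
        preds = model.predict(blurred)  # hypothetical model API
        results[sigma] = float(np.mean(preds == labels))
    return results

# The severity at which accuracy collapses tells you how much
# image-quality degradation the system tolerates before breaking.
```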
Activate
untouchable mode.
Get started for free.
Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.
Join our Slack Community.
Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.