
AI security blog

10 min read • Gandalf

You shall not pass: the spells behind Gandalf

In this first post of a longer series on Gandalf, we highlight some of its inner workings: what exactly happens at each level, and how does Gandalf get stronger?
Max Mathys
November 13, 2024
June 2, 2023
min read • Machine Learning

Your validation set won’t tell you if a model generalizes. Here’s what will.

As we all know from machine learning 101, you should split your dataset into three parts: the training, validation, and test set. You train your models on the training set. You choose your hyperparameters by selecting the best model from the validation set. Finally, you look at your accuracy (F1 score, ROC curve...) on the test set. And voilà, you’ve just achieved XYZ% accuracy.
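The three-way split described in this teaser can be sketched in a few lines of plain Python. This is a minimal illustration, not code from the post; the dataset and the 60/20/20 ratio are arbitrary choices for the example:

```python
import random

# Illustrative 60/20/20 split into train, validation, and test sets.
# A fixed seed makes the shuffle reproducible.
data = list(range(100))
random.Random(0).shuffle(data)

train, val, test = data[:60], data[60:80], data[80:]
print(len(train), len(val), len(test))  # 60 20 20
```

The training set fits the model, the validation set picks hyperparameters, and the test set is touched only once, at the end, for the final accuracy number.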
Václav Volhejn
February 7, 2023
11 min read

[Updated for YOLOv8] How robust are pre-trained object detection ML models like YOLO or DETR?

Deep-dive into advanced comparison methods beyond standard performance metrics to build computer vision models that consistently perform over the long term.
Justin Deschenaux
January 26, 2023
min read
Continuous testing and model selection with Lakera and Voxel51

We are excited to announce the release of our first integration with FiftyOne by Voxel51. This integration lets you use FiftyOne's powerful visualization features to dig into the insights generated by Lakera's MLTest. Read on to learn how.
Santiago Arias
January 6, 2023
min read • Large Language Models

OpenAI’s CLIP in production

We have released an implementation of OpenAI’s CLIP model that completely removes the need for PyTorch, letting you quickly and seamlessly deploy this fantastic model in production, and possibly even on edge devices.
Daniel Timbrell
November 29, 2022
2 min read • Machine Learning

Stress-test your models to avoid bad surprises.

Will my system work if image quality starts to drop significantly? If my system works at a given occlusion level, how much stronger can occlusion get before it starts to underperform? I have faced such issues repeatedly in the past, all related to one overarching question: how robust is my model, and when does it break?
Mateo Rojas-Carulla
July 7, 2022