
The computer vision bias trilogy: Drift and monitoring.

Unforeseen data may be presented to the computer vision system during operation, despite careful dataset curation and shortcut mitigation.

Lakera Team
April 19, 2022

If the past three years have taught us anything, it is that the world around us can take unexpected turns. The same can be true for your computer vision models.

Despite careful dataset curation and shortcut mitigation, unforeseen data may still be presented to the computer vision model during operation. One such phenomenon is data drift.

A hospital may replace its X-ray machine but keep using the same computer vision model for diagnosis, even though the model was never trained on data from the new device. Similarly, an autonomous car built solely for European streets, notable for their twists and turns, may not perform as expected when deployed in an American city.

Fail, but fail gracefully.

ML models tend to fail silently: faced with unfamiliar input, they still make predictions, however erroneous. Operational bias can be reduced with the right safeguards: the wider ML system should detect during operation whether an image looks “suspicious” or “unknown”, and fail gracefully (for example, by asking the doctor for a closer look).
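To make this concrete, here is a minimal sketch of such a safeguard in Python. The `model` and `ood_score` callables and the threshold are illustrative assumptions rather than any specific API; the point is the control flow: predict when the input looks familiar, abstain and escalate to a human when it does not.

```python
# Hypothetical sketch: wrap a classifier so it abstains on suspicious inputs.
# `model` and `ood_score` are assumed callables, not a specific library API.
from dataclasses import dataclass
from typing import Callable, Optional

import numpy as np


@dataclass
class Decision:
    label: Optional[int]        # None when the system abstains
    needs_human_review: bool
    reason: str


def predict_or_defer(
    model: Callable[[np.ndarray], np.ndarray],   # image -> class logits
    ood_score: Callable[[np.ndarray], float],    # image -> "suspiciousness"
    image: np.ndarray,
    threshold: float = 0.8,                      # tuned on validation data
) -> Decision:
    """Predict when the input looks familiar; otherwise fail gracefully."""
    score = ood_score(image)
    if score > threshold:
        # Graceful failure: no silent guess, escalate to a human instead.
        return Decision(None, True, f"OOD score {score:.2f} > {threshold}")
    label = int(np.argmax(model(image)))
    return Decision(label, False, "input within expected distribution")
```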

Out-of-distribution detection.

The problem of identifying such problematic inputs is called out-of-distribution detection. It is a challenging problem, since it requires comparing distributions of high-dimensional objects. If you’re interested in learning more, the research in the area is extensive [1], [2], [3]. Note that out-of-distribution detection is a key part of many learning systems.

For example, Generative Adversarial Networks train a discriminator network whose sole task is to judge whether a generated image looks “suspicious” when compared against a reference dataset. Systems in production should be equipped with an out-of-distribution detector that catches problematic samples on the fly. If a problematic image is detected, the system should fail gracefully, reducing the risk of silent failures in your computer vision system.
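As an illustration (the post does not prescribe a particular detector, so treat this as one simple option), the maximum-softmax-probability baseline flags inputs on which the classifier’s most confident class still has low probability:

```python
# Illustrative maximum-softmax-probability (MSP) detector: inputs on which
# the classifier is unconfident are flagged as potentially out-of-distribution.
import numpy as np


def softmax(logits: np.ndarray) -> np.ndarray:
    shifted = logits - logits.max()          # subtract max for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()


def msp_ood_score(logits: np.ndarray) -> float:
    """Higher score = more suspicious (even the best class has low probability)."""
    return 1.0 - float(softmax(logits).max())


confident = np.array([9.0, 0.5, 0.3, 0.2])   # clear winner -> low score
uncertain = np.array([1.2, 1.1, 1.0, 0.9])   # near-uniform -> high score
print(msp_ood_score(confident), msp_ood_score(uncertain))
```

A score like this plugs directly into the `predict_or_defer` wrapper sketched above; more sophisticated detectors change only how the score is computed.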

It is essential to keep data drift in mind once your system is in production. Keeping the data and the model up to date is part of any AI system’s lifecycle. In the meantime, ensure that mitigation strategies are in place so that suspicious outcomes are detected and reviewed by humans in the loop.
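Monitoring for drift can start simple. In the hedged sketch below, the per-image brightness feature, sample sizes, and alert threshold are invented for illustration: a summary statistic of recent production inputs is periodically compared against a reference sample from the training set using a two-sample Kolmogorov–Smirnov test.

```python
# Sketch of periodic drift monitoring: compare a per-image statistic from
# production traffic against a reference sample drawn from the training set.
# The brightness feature and alert threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_brightness = rng.normal(0.50, 0.10, size=5_000)  # reference sample
prod_brightness = rng.normal(0.62, 0.10, size=1_000)   # drifted production data

statistic, p_value = ks_2samp(train_brightness, prod_brightness)
if p_value < 0.01:
    print(f"Possible data drift (KS statistic={statistic:.3f}); "
          "alerting the team and queueing samples for human review.")
```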

Get started with Lakera today.

Get in touch with mateo@lakera.ai to find out more about what Lakera can do for your team, or get started right away.
