Lakera releases robustness testing suite for digital pathology

Lakera now offers you the opportunity to easily test whether your algorithms are robust to histological artifacts and variations. It lets you stress test your computer vision models to gain confidence in their robustness properties prior to clinical validation and deployment.

Lakera Team
December 1, 2023

We are thrilled to announce the release of our robustness testing suite for state-of-the-art computer vision in digital pathology. With the explosion of this new field, the medical sector is experiencing a tremendous flurry of activity, particularly in the space of medical imaging.

**💡 Pro tip: Want to join other pathology companies in automating model validation with MLTest? You can get started in minutes.**

A major headache in pathology imaging, however, is ensuring model robustness during operation. Not only are histological slides highly heterogeneous, but the histological artifacts that commonly appear in operation can also cause severe underperformance: dust particles, oily spots, loose cells from tissue tearing, staining variations, and more. Each of these culprits can play havoc with your algorithms, leading to missed objects or debris masquerading as your target.

A few examples of Lakera’s robustness tests for pathology. From left to right: dark spots caused by e.g. dust, an oily spot caused by e.g. fingerprints, squamous epithelial cells, and synthetic threads causing local focus deterioration. These histological artifacts typically lead to severe model underperformance during operation.

Digital pathology teams are already using Lakera to speed up model validation, train better models, and expedite clinical validation and certification processes.

With this latest release, Lakera brings its machine learning testing capabilities to a new level. The test suite includes new types of robustness tests, such as:

  • Dark spots, often caused by dust particles or glass scratches.
  • Oily spots, often caused by residue from, for example, fingerprints.
  • Squamous epithelia, one of the most common artifacts resulting from contamination of biopsy specimens during tissue processing.
  • Synthetic threads, a common artifact that causes focus deterioration.
  • Image quality differences, often caused when data comes from multiple scanning devices.
  • Variations in focus, often caused by the presence of foreign objects.
  • Variations in lighting conditions (e.g. brightness and contrast) often caused by differences in tissue, processing, or scanning devices.
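To make the artifact categories above concrete, here is a minimal sketch of how two of them (dark spots and brightness shifts) can be simulated as image perturbations. This is purely illustrative; the function names and parameters are our own, not part of MLTest or Lakera's implementation:

```python
import numpy as np

def add_dark_spots(image, n_spots=5, radius=8, intensity=0.2, seed=0):
    """Darken a few circular regions, mimicking dust particles or glass scratches."""
    rng = np.random.default_rng(seed)
    out = image.astype(float).copy()
    h, w = out.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_spots):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        out[mask] *= intensity  # darken pixels inside the circular spot
    return out.clip(0, 255).astype(image.dtype)

def shift_brightness(image, delta):
    """Apply a global brightness offset, mimicking scanner or staining variation."""
    return (image.astype(float) + delta).clip(0, 255).astype(image.dtype)
```

A robustness test then amounts to checking that a model's predictions stay stable when such perturbations are applied to its inputs.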

Does your tumor detector still work with dust on the slides? Can something as simple as extra tissue cells dramatically increase your number of false positives? If questions like these are on your mind, then head on over to MLTest to learn why leading medical companies trust Lakera.
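Questions like these boil down to a simple measurement: how often do a model's predictions change once an artifact-like perturbation is applied? A minimal sketch of that check, using a stand-in classifier (the helper and toy model below are hypothetical, not MLTest's API):

```python
import numpy as np

def prediction_flip_rate(model, images, perturb):
    """Fraction of images whose predicted label changes under `perturb`."""
    flips = sum(model(img) != model(perturb(img)) for img in images)
    return flips / len(images)

# Stand-in "tumor detector": flags a patch when its mean intensity is high.
def toy_model(img):
    return int(img.mean() > 100)

# Perturbation mimicking a brightness difference between scanning devices.
def brighten(img):
    return np.clip(img.astype(float) + 30, 0, 255)

patches = [np.full((32, 32), 90.0), np.full((32, 32), 150.0)]
rate = prediction_flip_rate(toy_model, patches, brighten)  # 0.5: one patch flips
```

A flip rate well above zero on realistic artifact perturbations is exactly the kind of failure such a test suite is designed to surface before clinical deployment.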
