Free of bias? We need to change how we build ML systems.


Lakera Team
June 29, 2021

The topic of bias in ML systems has received significant attention recently. And rightly so. The recent documentary Coded Bias highlighted how algorithmic decision-making leads to biased results. At worst, these can affect whole sections of the population, for instance when it comes to teacher evaluations.

The core input to ML systems is data. And data is biased due to a variety of factors – such as societal, collection, and annotation biases. People training models on such data carry the burden to ensure that the systems do not discriminate or use bias to perpetuate an unfair status quo. Building a system free of bias is challenging. And in fact, the ML community has long struggled to define what a bias-free or fair system is.

Achieving a valid definition of fairness requires a wider discussion with legal professionals and regulatory bodies. In the meantime, changing the way we build ML systems, and putting testing at the core of development, can go a long way in reducing bias in our systems.

Creating fair systems is hard – and needs participation beyond data science...

One way to approach bias is through the lens of fairness. A recent push to find the right definition of algorithmic fairness has focused on establishing good metrics for measuring it, that is, on building systems with an explicitly encoded notion of fairness.

For example, consider a machine-learning system that predicts whether a person will pay back a bank loan. The bank cares about not discriminating between two demographics. One possible notion of fairness, “Demographic Parity”, requires that the system grant loans to both demographics at the same rate. This makes intuitive sense. Another notion, “Equality of Opportunity”, requires that, among the individuals who would actually repay the loan, the same proportion is granted a loan in each demographic.
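To make these two notions concrete, here is a minimal sketch, not taken from any particular fairness library, of how each could be measured for a toy loan-approval model. The variable names (y_true for actual repayment, y_pred for the loan decision, group for demographic membership) and the toy numbers are illustrative assumptions.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in loan-approval rates between the two demographics."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in approval rates among applicants who would actually repay."""
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr_0 - tpr_1)

# Toy data: 1 = would repay / loan granted, 0 = would default / loan denied.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 1])   # actual repayment
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])   # model's loan decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # demographic membership

print(demographic_parity_gap(y_pred, group))          # 0.00 -> parity holds here
print(equal_opportunity_gap(y_true, y_pred, group))   # ~0.17 -> opportunity gap
```

A gap of zero would mean the corresponding notion of fairness is satisfied exactly; in practice one would compare the gaps against an agreed tolerance.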

“Computer scientists were left to decide what a fair algorithm is, despite being ill-equipped to make such decisions.”

While these two metrics make sense, it was soon observed that they cannot be satisfied simultaneously: if the system satisfies one of the properties, the other cannot hold. Computer scientists were left to decide what a fair algorithm is, despite being ill-equipped to make such decisions. The question of fairness has received significant attention over the past century in legal and social science publications. Stakeholders from these fields should be included in the discussion around algorithmic bias. Input from legal experts and regulators is fundamental for establishing concrete guidance that helps companies build bias-free systems.
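Before moving on, a small worked example helps show where this tension comes from. For the sketch below we assume, purely for illustration, the stricter “equalized odds” condition (the same true-positive and false-positive rates in both groups); the arithmetic then shows that equal approval rates can only follow if the model is uninformative or both groups repay at the same base rate. None of the numbers come from the article.

```python
# Illustrative base rates and error rates (assumptions for this example only).
p_a, p_b = 0.6, 0.3   # fraction of each demographic that would repay
t, f = 0.8, 0.1       # shared true-positive and false-positive rates (equalized odds)

# Overall approval rate in each group: t * p + f * (1 - p)
approval_a = t * p_a + f * (1 - p_a)   # 0.52
approval_b = t * p_b + f * (1 - p_b)   # 0.31

# The two approval rates can only match (demographic parity) if t == f
# (a coin-flip-like model) or p_a == p_b (identical base rates).
print(approval_a, approval_b)
```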

… but a rigorous process is a good place to start.

In many applications, a rigorous testing process can go a long way in ensuring that systems are less discriminatory. This requires developing ML software as we have been developing safety-critical systems for decades. When building facial-recognition algorithms, establishing a clear definition of the scenarios in which the system is expected to work is key. Then, diligent testing around these scenarios (for example, testing on a wide range of demographics) is the first step to ensuring that the system will be fair. This process should be a core component of development, not a last-minute tweak.
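As a rough illustration of what “defining the scenarios and testing around them” could look like in practice, the sketch below declares the expected operating scenarios as explicit, reviewable test slices and reports one score per slice. The scenario names and the model/dataset interfaces are assumptions made up for this example, not an existing API.

```python
# Hypothetical scenario slices for a face-recognition evaluation set.
SCENARIOS = ["indoor_low_light", "outdoor_daylight", "age_over_60", "darker_skin_tones"]

def accuracy_on_slice(model, dataset, scenario):
    """Accuracy on the evaluation samples tagged with the given scenario."""
    subset = [s for s in dataset if s["scenario"] == scenario]
    correct = sum(model.predict(s["image"]) == s["label"] for s in subset)
    return correct / len(subset)

def scenario_report(model, dataset):
    # One number per declared scenario, instead of a single aggregate score
    # that can hide poor performance on under-represented groups.
    return {scenario: accuracy_on_slice(model, dataset, scenario) for scenario in SCENARIOS}
```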

“Standard testing methodologies for ML systems rely simply on validating on left-out data. This validation data may not be fully representative of the real world or of the unexpected scenarios that the system may face in production.”

Careful, in-depth testing is central to building traditional software systems. Yet, standard testing methodologies for ML systems rely simply on validating on left-out data. This validation data may not be fully representative of the real world or of the unexpected scenarios that the system may face in production. If we more carefully lay out and test the explicit requirements of our machine-learning systems (e.g., equal performance among demographics), we can take a fundamental step towards building systems with fewer unwanted and unexpected behaviors.
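Concretely, an explicit requirement such as “equal performance among demographics” can be written down as a test that runs alongside ordinary validation. The sketch below is one possible shape for such a check; the threshold and the data interfaces are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np

MAX_ACCURACY_GAP = 0.05  # requirement agreed on up front, not tuned after the fact

def per_group_accuracy(y_true, y_pred, group):
    """Accuracy computed separately for each demographic group."""
    return {
        g: float(np.mean(y_true[group == g] == y_pred[group == g]))
        for g in np.unique(group)
    }

def check_equal_performance(y_true, y_pred, group):
    accuracies = per_group_accuracy(y_true, y_pred, group)
    gap = max(accuracies.values()) - min(accuracies.values())
    assert gap <= MAX_ACCURACY_GAP, (
        f"Per-group accuracy gap {gap:.3f} exceeds {MAX_ACCURACY_GAP:.2f}: {accuracies}"
    )
```

Run as part of continuous integration, a check like this fails the build when the requirement is violated, making fairness testing part of everyday development rather than a last-minute tweak.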

The recent EU proposal to regulate AI systems is a big step in the right direction. The “high-risk” category proposed in the report should include all systems that can cause harm or perpetuate an unfair status quo. We welcome this step towards a clearer set of concrete and actionable guidelines on the matter, from both methodological and regulatory perspectives. Accountability from companies using ML is important, and we should aim to understand the processes that must be followed to minimize the chance of unintended consequences.

While we may not be able to build ML that is universally free of bias, we can better detect and control bias in individual systems. So, let’s make that a priority!

What do you think we could do better to avoid bias in our ML systems? We would love to chat further. Get in touch with us here or sign up for updates below!

