Cookie Consent
Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
Read our Privacy Policy

While GenAI Adoption Surges, Report Shows Security Preparedness Lags 

Ninety-five percent of cybersecurity experts express low confidence in GenAI security measures while red team data shows anyone can easily hack GenAI models

August 22, 2024
August 21, 2024
Hide table of contents
Show table of contents

‍San Francisco, USA and Zurich, Switzerland /  August 21, 2024 —Lakera, the world’s leading real-time Generative AI (GenAI) Security company, today released the 2024 GenAI Security Readiness Report. This report confirms GenAI adoption is surging but surfaces a concerning blind spot for hundreds of enterprises – GenAI security.  

Attack methods specific to GenAI, or prompt attacks, are easily used by anyone to manipulate the applications, gain unauthorized access, steal confidential data and take unauthorized actions. Realizing this, only five percent of the 1,000 cybersecurity experts surveyed have confidence in the security measures protecting their GenAI applications even as 90% are actively using or exploring GenAI.

“With just a few well-crafted words, even a novice can manipulate AI systems, leading to unintended actions and data breaches,” said David Haber, co-founder and CEO at Lakera. “As businesses increasingly rely on GenAI to accelerate innovation and manage sensitive tasks, they unknowingly expose themselves to new vulnerabilities that traditional cybersecurity measures don’t address. The combination of high adoption and low preparedness may not be that surprising in an emerging area, but the stakes have never been higher. ” 

With GenAI Everyone is a Potential Hacker

Gandalf, an AI educational game created by Lakera, has attracted more than one million players including cybersecurity experts attempting to breach its defenses. Remarkably, 200,000 of these players have successfully completed seven levels of the game, demonstrating their ability to manipulate GenAI models into taking unintended actions. This provides an incredibly valuable reference point for the magnitude of the problem. Entering commands using their native language and a bit of creativity allowed these players to trick Gandalf’s level seven in only 45 minutes on average. This stark example underscores a troubling truth: everyone is now a potential hacker and businesses require a new approach to security for GenAI.

“The race to adopt GenAI, fueled by C-suite demands, makes security preparedness more vital now than at any pivotal moment in technology’s evolution. GenAI is a once in a lifetime disruption,” said Joe Sullivan, ex-CSO of Cloudflare, Uber, and Meta (Facebook), and advisor to Lakera. “To harness its potential, though, businesses must consider its challenges and that, hands down, is the security risk. Being prepared and mitigating that risk is the #1 job at hand for those companies leading adoption.” 

Additional Key Findings 

  • LLM reliability and accuracy is the number 1 barrier to adoption: 35% of respondents are fearful of LLM reliability and accuracy, while 34% are concerned with data privacy and security. The lack of skilled personnel accounts for 28% of the concerns.
  • 45% of respondents are exploring GenAI use cases; 42% are actively using and implementing GenAI Just 9% have no current plans to adopt LLMs.
  • Only 22% of respondents have adopted ​​AI-specific threat modeling to prepare for GenAI specific threats.

The GenAI Security Readiness Report survey was conducted between May 15-22, 2024. It received 1,000 responses from individuals, 60 percent of whom have more than five years of cybersecurity experience. Lakera expects to execute the survey and produce the GenAI Security Readiness Report annually to track how preparedness changes as teams are more informed about the security risks of GenAI.  

For more information and to download the complete GenAI Readiness Report, please visit: http://aisecurity.report/

Lakera recently announced its $20 million Series A Funding round led by European VC Atomico, with participation from Citi Ventures, Dropbox Ventures, and existing investors including redalpine. As businesses worldwide scramble to harness the power of GenAI without exposing themselves to AI-specific risks, the demand for Lakera’s platform is expected to continue growing at a rapid clip. 

About Lakera

Lakera is the world’s leading real-time GenAI security company. Customers rely on Lakera for security that doesn’t slow down their AI applications. To accelerate secure adoption of AI, the company created Gandalf, an educational tool, where more than one million users have learned about AI security. Lakera uses AI to continuously evolve defenses, so customers can stay ahead of emerging threats. Lakera was founded by David Haber, Mateo Rojas-Carulla and Matthias Kraft in 2021.

###

Lakera LLM Security Playbook
Learn how to protect against the most common LLM vulnerabilities

Download this guide to delve into the most common LLM security risks and ways to mitigate them.

GenAI Security Preparedness 

Report 2024

Get the first-of-its-kind report on how organizations are preparing for GenAI-specific threats.

Free Download
Read LLM Security Playbook

Learn about the most common LLM threats and how to prevent them.

Download

Explore AI Regulations.

Compare the EU AI Act and the White House’s AI Bill of Rights.

Understand AI Security Basics.

Get Lakera's AI Security Guide for an overview of threats and protection strategies.

Uncover LLM Vulnerabilities.

Explore real-world LLM exploits, case studies, and mitigation strategies with Lakera.

Optimize LLM Security Solutions.

Use our checklist to evaluate and select the best LLM security tools for your enterprise.

Master Prompt Injection Attacks.

Discover risks and solutions with the Lakera LLM Security Playbook.

Unlock Free AI Security Guide.

Discover risks and solutions with the Lakera LLM Security Playbook.

You might be interested
Investing in Lakera to help protect GenAI apps from malicious prompts
Investing in Lakera to help protect GenAI apps from malicious prompts
min read
•
Media Coverage

Investing in Lakera to help protect GenAI apps from malicious prompts

Investing in Lakera to help protect GenAI apps from malicious prompts

Citi Ventures invests in Lakera, the leading solution for securing AI applications at run-time.
Lakera, which protects enterprises from LLM vulnerabilities, raises $20M
Lakera, which protects enterprises from LLM vulnerabilities, raises $20M
5
min read
•
Media Coverage

Lakera, which protects enterprises from LLM vulnerabilities, raises $20M

Lakera, which protects enterprises from LLM vulnerabilities, raises $20M

Lakera, a Swiss startup that’s building technology to protect generative AI applications from malicious prompts and other threats, has raised $20 million in a Series A round led by European venture capital firm, Atomico.
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.