Cookie Consent
Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
Read our Privacy Policy

Lakera Report: AI Adoption Surges, Security Preparedness Lags Behind

Our findings reveal a rapid adoption of GenAI technologies, with nearly 90% of organizations actively implementing or planning to explore LLM use cases. Despite this high adoption rate, only about 5% of organizations feel highly confident in their AI security preparedness.

David Haber
August 21, 2024
August 21, 2024
Hide table of contents
Show table of contents

We founded Lakera knowing that traditional security tools have become increasingly insufficient against the new GenAI threat landscape. A more adaptive, AI-driven approach to securing AI itself is necessary and businesses must protect themselves with AI that learns and evolves just as the threats against them do.

The Gandalf Phenomenon: A Game Changing Perspective

Many of you have successfully taken the challenge with Gandalf, our educational GenAI hacking platform. Remarkably, 200,000 players out of over 1 million have successfully completed the game’s seven core levels, demonstrating they could manipulate AI models into taking unintended actions.This underscores a troubling truth: with the barriers to entry so low, everyone is now a potential GenAI hacker. 

One major concern is that the speed of AI technology development has outpaced the security field. This gap exists not only in the tools available to security teams but also in the expertise needed to address AI-related threats.

We not only need an adaptive, AI-driven approach to securing GenAI, the community of AI developers and security professionals need a deep understanding of how to address these new threats. 

Understanding the Community’s Perspective

To understand our readiness to combat GenAI threats, we conducted a survey of AI and security professionals. We asked them about their backgrounds, levels of experience, industries they represent, and their organizations’ current and planned use of GenAI applications. We also inquired about any security issues they had encountered, their concerns, and how prepared they felt for this new era of cybersecurity.

We received responses from 1,070 individuals and included insights from security leaders at companies including Disney, GE Healthcare, and Citibank in our report. 

The diversity of roles, including executives, security analysts, and developers, ensures a comprehensive understanding of GenAI security from various organizational perspectives. Notably, over 60% of respondents have substantial experience in cybersecurity, lending credibility to their insights.

Key Takeaways from the Report

Our findings reveal a rapid adoption of GenAI technologies, with nearly 90% of organizations actively implementing or planning to explore LLM use cases. Despite this high adoption rate, only about 5% of organizations feel highly confident in their AI security preparedness.

Moreover, nearly 40% of organizations that lack standard AI security best practices are actively using GenAI. This juxtaposition of high adoption and low preparedness highlights the critical need for robust security strategies tailored to the unique challenges of GenAI.

‍

“In the upcoming year, my primary concern regards the increasing danger of prompt injection attacks, which can manipulate AI-generated content and compromise data integrity. Prompt injection attacks can result in the release of private information and generate damaging output. The jailbreaking of AI systems, a concept in which adversaries abuse how AI interprets input to get around security controls, is also concerning as it enables unapproved actions. The combination of prompt injection and jailbreaking makes AI systems highly susceptible to malicious manipulation and misuse.”

Ryan Wiliams, Cybersecurity Engineer at Waterstons, Australia

Respondents Know Cybersecurity 

GenAI Adoption is Rapid

Organizations are at various stages of GenAI/LLM adoption. Forty-two percent are actively using and implementing LLMs across various functions, indicating a significant commitment to leveraging AI capabilities for business innovation.

Another 45% are exploring use cases and integration possibilities, reflecting a high level of interest and recognition of GenAI’s potential value.

Only 9% have no current plans to adopt LLMs, suggesting strong industry-wide momentum towards AI adoption and highlighting the competitive risk for organizations not embracing these technologies.

‍

“One of the biggest obstacles to securing AI systems right now is lack of knowledge on the part of both engineers and security teams. A great number of people are building systems that utilize LLMs without an understanding of how these components actually work or the implications that LLM's non-determinism has on concepts such as authorization. This makes securing the systems a fundamentally different challenge than we've seen with traditional components.”

Nate Lee, CISO at Cloudsec.ai

Low Confidence in Security Measures

Confidence levels in current security measures are low, with only 5% of respondents rating their confidence at five out of five. There is uncertainty about the effectiveness of existing security approaches in protecting against sophisticated AI attacks, with 86% having moderate or low confidence levels. This cautious approach acknowledges the rapid evolution of threats and the need for AI-specific security frameworks that can learn and adapt as quickly as the threats themselves.

“I'm most concerned about the overconfidence of security professionals who believe that AI-related vulnerabilities can be discovered and remediated by traditional means.”

Debbie Taylor Moore, Executive Board Member at The Cyber AB & Consumer Technology Association

Conclusion

The rapid adoption of GenAI technologies, coupled with low preparedness for AI-related security threats, underscores the critical need for new, robust security strategies. Insights from cybersecurity professionals and Gandalf highlight diverse challenges and the urgent need for a paradigm shift in approaching AI security. 

‍

As we continue to leverage the transformative potential of GenAI, it is crucial to stay vigilant and proactive in addressing the unique threats it poses. The future of AI security depends on our ability to implement AI that learns and evolves, just as the threats against it do. We need AI to protect AI.

Lakera LLM Security Playbook
Learn how to protect against the most common LLM vulnerabilities

Download this guide to delve into the most common LLM security risks and ways to mitigate them.

GenAI Security Preparedness 

Report 2024

Get the first-of-its-kind report on how organizations are preparing for GenAI-specific threats.

Free Download
Read LLM Security Playbook

Learn about the most common LLM threats and how to prevent them.

Download

Explore AI Regulations.

Compare the EU AI Act and the White House’s AI Bill of Rights.

Understand AI Security Basics.

Get Lakera's AI Security Guide for an overview of threats and protection strategies.

Uncover LLM Vulnerabilities.

Explore real-world LLM exploits, case studies, and mitigation strategies with Lakera.

Optimize LLM Security Solutions.

Use our checklist to evaluate and select the best LLM security tools for your enterprise.

Master Prompt Injection Attacks.

Discover risks and solutions with the Lakera LLM Security Playbook.

Unlock Free AI Security Guide.

Discover risks and solutions with the Lakera LLM Security Playbook.

You might be interested
Investing in Lakera to help protect GenAI apps from malicious prompts
Investing in Lakera to help protect GenAI apps from malicious prompts
min read
•
Media Coverage

Investing in Lakera to help protect GenAI apps from malicious prompts

Investing in Lakera to help protect GenAI apps from malicious prompts

Citi Ventures invests in Lakera, the leading solution for securing AI applications at run-time.
Lakera, which protects enterprises from LLM vulnerabilities, raises $20M
Lakera, which protects enterprises from LLM vulnerabilities, raises $20M
5
min read
•
Media Coverage

Lakera, which protects enterprises from LLM vulnerabilities, raises $20M

Lakera, which protects enterprises from LLM vulnerabilities, raises $20M

Lakera, a Swiss startup that’s building technology to protect generative AI applications from malicious prompts and other threats, has raised $20 million in a Series A round led by European venture capital firm, Atomico.
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.