Cookie Consent
Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
Read our Privacy Policy

Lakera Report: AI Adoption Surges, Security Preparedness Lags Behind

Our findings reveal a rapid adoption of GenAI technologies, with nearly 90% of organizations actively implementing or planning to explore LLM use cases. Despite this high adoption rate, only about 5% of organizations feel highly confident in their AI security preparedness.

David Haber
August 21, 2024
August 21, 2024
Hide table of contents
Show table of contents

We founded Lakera knowing that traditional security tools have become increasingly insufficient against the new GenAI threat landscape. A more adaptive, AI-driven approach to securing AI itself is necessary and businesses must protect themselves with AI that learns and evolves just as the threats against them do.

The Gandalf Phenomenon: A Game Changing Perspective

Many of you have successfully taken the challenge with Gandalf, our educational GenAI hacking platform. Remarkably, 200,000 players out of over 1 million have successfully completed the gameā€™s seven core levels, demonstrating they could manipulate AI models into taking unintended actions.This underscores a troubling truth: with the barriers to entry so low, everyone is now a potential GenAI hacker.Ā 

One major concern is that the speed of AI technology development has outpaced the security field. This gap exists not only in the tools available to security teams but also in the expertise needed to address AI-related threats.

We not only need an adaptive, AI-driven approach to securing GenAI, the community of AI developers and security professionals need a deep understanding of how to address these new threats.Ā 

Understanding the Communityā€™s Perspective

To understand our readiness to combat GenAI threats, we conducted a survey of AI and security professionals. We asked them about their backgrounds, levels of experience, industries they represent, and their organizationsā€™ current and planned use of GenAI applications. We also inquired about any security issues they had encountered, their concerns, and how prepared they felt for this new era of cybersecurity.

We received responses from 1,070 individuals and included insights from security leaders at companies including Disney, GE Healthcare, and Citibank in our report.Ā 

The diversity of roles, including executives, security analysts, and developers, ensures a comprehensive understanding of GenAI security from various organizational perspectives. Notably, over 60% of respondents have substantial experience in cybersecurity, lending credibility to their insights.

Key Takeaways from the Report

Our findings reveal a rapid adoption of GenAI technologies, with nearly 90% of organizations actively implementing or planning to explore LLM use cases. Despite this high adoption rate, only about 5% of organizations feel highly confident in their AI security preparedness.

Moreover, nearly 40% of organizations that lack standard AI security best practices are actively using GenAI. This juxtaposition of high adoption and low preparedness highlights the critical need for robust security strategies tailored to the unique challenges of GenAI.

ā€

ā€œIn the upcoming year, my primary concern regards the increasing danger of prompt injection attacks, which can manipulate AI-generated content and compromise data integrity. Prompt injection attacks can result in the release of private information and generate damaging output. The jailbreaking of AI systems, a concept in which adversaries abuse how AI interprets input to get around security controls, is also concerning as it enables unapproved actions. The combination of prompt injection and jailbreaking makes AI systems highly susceptible to malicious manipulation and misuse.ā€

Ryan Wiliams, Cybersecurity Engineer at Waterstons, Australia

Respondents Know CybersecurityĀ 

GenAI Adoption is Rapid

Organizations are at various stages of GenAI/LLM adoption. Forty-two percent are actively using and implementing LLMs across various functions, indicating a significant commitment to leveraging AI capabilities for business innovation.

Another 45% are exploring use cases and integration possibilities, reflecting a high level of interest and recognition of GenAIā€™s potential value.

Only 9% have no current plans to adopt LLMs, suggesting strong industry-wide momentum towards AI adoption and highlighting the competitive risk for organizations not embracing these technologies.

ā€

ā€œOne of the biggest obstacles to securing AI systems right now is lack of knowledge on the part of both engineers and security teams. A great number of people are building systems that utilize LLMs without an understanding of how these components actually work or the implications that LLM's non-determinism has on concepts such as authorization. This makes securing the systems a fundamentally different challenge than we've seen with traditional components.ā€

Nate Lee, CISO at Cloudsec.ai

Low Confidence in Security Measures

Confidence levels in current security measures are low, with only 5% of respondents rating their confidence at five out of five. There is uncertainty about the effectiveness of existing security approaches in protecting against sophisticated AI attacks, with 86% having moderate or low confidence levels. This cautious approach acknowledges the rapid evolution of threats and the need for AI-specific security frameworks that can learn and adapt as quickly as the threats themselves.

ā€œI'm most concerned about the overconfidence of security professionals who believe that AI-related vulnerabilities can be discovered and remediated by traditional means.ā€

Debbie Taylor Moore, Executive Board Member at The Cyber AB & Consumer Technology Association

Conclusion

The rapid adoption of GenAI technologies, coupled with low preparedness for AI-related security threats, underscores the critical need for new, robust security strategies. Insights from cybersecurity professionals and Gandalf highlight diverse challenges and the urgent need for a paradigm shift in approaching AI security.Ā 

ā€

As we continue to leverage the transformative potential of GenAI, it is crucial to stay vigilant and proactive in addressing the unique threats it poses. The future of AI security depends on our ability to implement AI that learns and evolves, just as the threats against it do. We need AI to protect AI.

Lakera LLM Security Playbook
Learn how to protect against the most common LLM vulnerabilities

Download this guide to delve into the most common LLM security risks and ways to mitigate them.

GenAI Security Preparedness ā€Ø
Report 2024

Get the first-of-its-kind report on how organizations are preparing for GenAI-specific threats.

Free Download
Read LLM Security Playbook

Learn about the most common LLM threats and how to prevent them.

Download

Explore AI Regulations.

Compare the EU AI Act and the White Houseā€™s AI Bill of Rights.

Understand AI Security Basics.

Get Lakera's AI Security Guide for an overview of threats and protection strategies.

Uncover LLM Vulnerabilities.

Explore real-world LLM exploits, case studies, and mitigation strategies with Lakera.

Optimize LLM Security Solutions.

Use our checklist to evaluate and select the best LLM security tools for your enterprise.

Master Prompt Injection Attacks.

Discover risks and solutions with the Lakera LLM Security Playbook.

Unlock Free AI Security Guide.

Discover risks and solutions with the Lakera LLM Security Playbook.

You might be interested
While GenAI Adoption Surges, Report Shows Security Preparedness LagsĀ 
While GenAI Adoption Surges, Report Shows Security Preparedness LagsĀ 
5
min read
ā€¢
Press Release

While GenAI Adoption Surges, Report Shows Security Preparedness LagsĀ 

While GenAI Adoption Surges, Report Shows Security Preparedness LagsĀ 

Ninety-five percent of cybersecurity experts express low confidence in GenAI security measures while red team data shows anyone can easily hack GenAI models
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. ā€ØCome join us and 1000+ others in a chat thatā€™s thoroughly SFW.