
Lakera’s GenAI Security Readiness Report 2024

Discover insights shared by over 1,000 industry leaders. Gain a comprehensive view of GenAI security readiness from CISOs, security professionals, developers, and data scientists.

Download the report

Fill out the form below to receive your free copy of the "GenAI Security Readiness Report 2024."

Gain Insights Into

How organizations are adopting GenAI/LLMs, their confidence in existing security measures, the challenges they face, and their concerns about potential risks.

Organizations’ experiences with GenAI/LLM vulnerabilities, the nature and impact of these vulnerabilities, and their response strategies.

Security practices adopted by organizations, the presence of formal security policies, and how these organizations stay informed about the latest security threats.

The most significant risks perceived by organizations and their readiness to tackle these threats.

90%

of organizations are actively implementing or planning to explore LLM use cases.

5%

feel highly confident in their AI security preparedness.

22%

of organizations are doing AI-specific threat modeling.

Stephen Germain

Information Security & Risk Management Leader

"Carefully review the terms of service of your SaaS providers entrusted with key data assets. Ensure that your agreements clearly outline all security requirements, especially those related to AI. It's important to stay informed about new AI-enabled features before enabling them so you can properly assess, manage, and mitigate any risks to your data."

Dr. Christina Liaghati

MITRE ATLAS Lead

"If you are struggling with how or where to start in AI security, leverage our collaboratively developed public resources and community of industry, government, and academic AI security leaders. With over 100 diverse organizations involved in the ATLAS community, we are working together to share intel, characterize, and mitigate these rapidly evolving threats to AI-enabled systems."

Avinash Sinha

Sr. Staff Cyber Security

"Gen AI can be used to improve overall security posture of the organization such as for pro-active threat hunting and incident response based on real time threat intelligence data that is collected.

Private LLM's trained on right set of data offers most accurate results so if possible build from ground up. GenAI provides an opportunity to clear some of the most difficult audits, of course human review and domain expertise helps to build on top of that and simultaneously improves productivity and timelines."

David Campbell

AI Security Risk Lead & Generative Red Teaming at Scale AI

"I am particularly concerned about data exfiltration in the coming year as businesses increasingly adopt user-facing systems with Retrieval-Augmented Generation (RAG) backends. These systems can inadvertently expose sensitive data through seemingly innocuous queries, highlighting the need for robust data protection and monitoring strategies."

Elliot Ward

Senior Security Researcher at Snyk

"Our primary concern lies not with vulnerabilities specific to large language models (LLMs) but with traditional web vulnerabilities within AI and LLM tooling and frameworks. While significant attention is given to unique AI security threats, the secure design and implementation of the frameworks used to build AI-powered systems often receive less focus. Our research has identified multiple issues, such as Remote Code Execution (RCE), in leading LLM SDKs that can be triggered via standard prompt injections. It is crucial to not neglect the traditional security landscape to ensure robust and secure AI-powered applications."

Ads Dawson

Project Lead at OWASP Top 10 for LLM Applications

"My advice to organizations just starting to implement AI security measures would be to first, involve relevant cross-team stakeholders to abstract away the most simple degree on precisely what the newly implemented AI integration is adding to your environment. "

Marcel Winandy

Senior Expert Cyber Security Architect

"Building an understanding and awareness of AI/LLM specific security threats needs to come first. We do this by assessing each new AI application design through a threat modeling that looks at both “classical” security topics (such as authentication, authorization) as well as AI-specific threats (e.g. prompt injection). It is key that people understand what could potentially happen and go wrong and what we could do about it."

Download Report