The rapid adoption of Large Language Models (LLMs) has surfaced a host of security concerns that demand immediate attention. Prompt injection attacks, data leaks, phishing attempts, hallucinations, toxic language output, and more have emerged as serious threats, putting organizations that rely on LLMs at risk.
Here at Lakera, we have spent recent months in extensive discussions with hundreds of developers and security engineers who are leading the way in building LLM-powered systems. Among the primary challenges they face in deploying those systems at scale, security concerns take center stage.
There is no single magic solution to the growing number of threats we encounter, and we’ve learned that security is not a challenge limited to LLM providers: it extends to app builders and end users alike as these models become part of our daily lives.
We believe that it is imperative for the entire AI community to unite and collaborate in tackling these evolving challenges.
In the spirit of this collective effort, in July 2023 Lakera and Cohere came together with a shared goal: to define new LLM security standards and empower organizations to confidently deploy LLM-based systems at scale. Cohere focuses on enabling generative AI in enterprise environments and is at the forefront of establishing safety and security requirements for AI technology and LLMs.
This shared commitment to addressing the most prevalent LLM cybersecurity threats has resulted in the creation of two valuable resources: the LLM Security Playbook and the Prompt Injection Attacks Cheatsheet.
Mateo Rojas-Carulla, Co-Founder and CPO of Lakera, shared:
Collaborating with Cohere and red-teaming their model has provided us with unique insights into the intricate nature of LLMs. Exploring novel and imaginative methods to break the model was both challenging and… fun. Red teaming offers a valuable opportunity to step into the shoes of potential attackers who can now manipulate LLMs using natural language rather than coding, opening up numerous new possibilities for anyone to exploit these models, potentially leading to harmful actions.
In August, both teams also participated in DEFCON31's Generative Red Teaming AI Challenge, organized by AI Village, where participants were tasked with "hacking" Cohere's model (as well as other LLMs) that had previously undergone red-teaming by Lakera's team. DEFCON31 sparked numerous discussions about AI security and underscored the necessity for collaboration across the entire AI community to ensure the responsible use of LLMs.
Ads Dawson, Senior Security Engineer at Cohere and a founding core contributor to the OWASP Top 10 for LLM Applications project, added:
It’s essential for us to collaborate with companies like Lakera to refine our security practices continuously. Our red-teaming exercises allow us to uncover weak points in our security infrastructure and strengthen our defenses proactively. Also, our collaboration with other industry experts helps us stay informed about emerging threats and evolving security trends. By leading discussions on security challenges and solutions, we contribute to the collective effort to enhance the security posture of AI applications, making LLMs safer to use.
The technical expertise of both teams, coupled with our insights from launching Gandalf, the largest global red-teaming initiative, has helped us redefine our approach to LLM security and inspired us to seek innovative ways to ensure it.
This collaboration comes at a pivotal moment when organizations are seeking to harness the vast potential of LLMs and AI technology.
About Cohere
Cohere is the leading AI platform for enterprise, providing access to advanced Large Language Models and NLP tools through one easy-to-use API.
About Lakera
Lakera is the leading AI security company building developer-first solutions that empower developers to confidently build secure AI applications and deploy them at scale.