Announcements
Lakera Report: AI Adoption Surges, Security Preparedness Lags Behind
Our findings reveal a rapid adoption of GenAI technologies, with nearly 90% of organizations actively implementing or planning to explore LLM use cases. Despite this high adoption rate, only about 5% of organizations feel highly confident in their AI security preparedness.
Gandalf: Introducing a Sleek New UI and Enhanced AI Security Education
Gandalf, our viral prompt-injection game and the world’s most popular AI security education platform, gets a new look and feel.
Advancing AI Security With Insights From The World’s Largest AI Red Team
Watch David Haber’s RSA Conference 2024 talk on advancing AI security with insights from the world’s largest AI red team.
Lakera Recognized in Gartner's GenAI Security Risks Report
Gartner's report on GenAI security risks recognizes Lakera's solutions.
Lakera Featured in a NIST Report on AI Security
Lakera's technology has been recognized by NIST in their report on Adversarial Machine Learning.
David Haber, Lakera's CEO, and Elias Groll from CyberScoop Discuss AI Security in a Safe Mode Podcast Episode
Join our CEO, David Haber, and Elias Groll from CyberScoop in a discussion on AI security.
Help Net Security Names Lakera as One of 2024’s Cybersecurity Companies to Watch
Lakera recognized by Help Net Security as a leading cybersecurity startup for 2024.
Microsoft Features Gandalf in Their Latest AI Security Toolkit Announcement
Microsoft's PyRIT toolkit highlights Lakera's Gandalf game, showcasing advancements in AI system security.
Lakera Named as Europe’s Leader in AI Security by Sifted
Lakera makes the list of top startups to watch in 2024, and is named a leader in LLM security in a poll among investors conducted by Sifted.
AI Safety Unplugged: Key Takeaways and Highlights from the World Economic Forum
Read about key insights on AI safety straight from the World Economic Forum 2024.
Lakera CEO Joins Leaders from Meta, Cohere and MIT for AI Safety Session at AI House Davos
Fellow “AI Safety Unplugged” panelists include Yann LeCun, Chief AI Scientist at Meta; Max Tegmark, MIT Professor and President of the Future of Life Institute; and Seraphina Goldfarb-Tarrant, Head of Safety at Cohere.
Lakera Earns a Spot on the Financial Times' Tech Champions List for IT & Cyber Security
Financial Times lists Lakera in Tech Champions 2023 for our contributions to AI security.
Lakera Selected as a Swiss Startup to Keep an Eye on in 2024
Lakera named among Switzerland’s top startups for 2024, highlighting our focus on secure AI.
Life vs. ImageNet Webinar: Lessons Learnt From Bringing Computer Vision to the Real World
Lakera hosted its first webinar, Life vs. ImageNet, last week. We had exciting discussions about the main challenges of building machine learning (ML) for real-world applications.
Lakera's CEO Joins the Datadog Cloud Security Lounge Podcast to Talk about LLM security
Lakera’s co-founder and CEO, David, joined Jb Aviat (Staff Engineer at Datadog) and Izar Tarandach (Sr. Staff Engineer at Datadog) on the Datadog Cloud Security Lounge podcast to chat about LLMs, security, Gandalf, and everything in between.
Lakera and Cohere Set the Bar for New Enterprise LLM Security Standards
Lakera and Cohere come together with a shared goal—to define new LLM security standards and empower organizations to confidently deploy LLM-based systems at scale.
Announcing Lakera's SOC 2 Compliance
We are proud to announce that Lakera Guard has achieved SOC 2 Type I compliance in accordance with the American Institute of Certified Public Accountants (AICPA) standards for SOC for Service Organizations, also known as SSAE 18. The audit was performed by Prescient Assurance.
DEFCON Welcomes Mosscap: Lakera’s AI Security Game to Tackle Top LLM Vulnerabilities
Get ready to embark on an exciting AI security adventure with Mosscap! Inspired by the captivating “Monk and Robot” series, Lakera’s team has worked tirelessly to create a fun and educational experience that sheds light on prompt injection vulnerabilities in LLMs.
Lakera Co-publishes Article in a Nature Journal on Testing Medical Imaging Systems
The paper, now published in a Nature journal, summarizes the results and derives general recommendations for collecting test datasets in pathology and medical imaging.
Lakera Wins the "Startups" Category at the DEKRA Award 2021
Lakera wins the DEKRA Award 2021 in the "Startups" category. The company was selected for the final by the DEKRA jury and won the online vote. Lakera AI, based in Zurich, Switzerland, aims to use its validation platform to ensure that AI is transparent, safe, and trustworthy.