Cookie Consent
Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
Read our Privacy Policy

AI security blog

All topics
14
min read
•
Large Language Models

A Step-by-step Guide to Prompt Engineering: Best Practices, Challenges, and Examples

Explore the realm of prompt engineering and delve into essential techniques and tools for optimizing your prompts. Learn about various methods and techniques and gain insights into prompt engineering challenges.
Mikolaj Kowalczyk
November 13, 2024
September 4, 2023
9
min read
•
Large Language Models

Evaluating Large Language Models: Methods, Best Practices & Tools

Learn what is LLM evaluation and why is it important. Explore 7 effective methods, best practices, and evolving frameworks for assessing LLMs' performance and impact across industries.
Armin Norouzi
November 13, 2024
August 31, 2023
5
min read
•
Large Language Models

The Beginner’s Guide to Hallucinations in Large Language Models

As LLMs gain traction across domains, hallucinations—distortions in LLM output—pose risks of misinformation and exposure of confidential data. Delve into the causes of hallucinations and explore best practices for their mitigation.
Deval Shah
November 13, 2024
August 23, 2023
7
min read
•
Large Language Models

The List of 11 Most Popular Open Source LLMs of 2024

Discover the top 11 open-source Large Language Models (LLMs) that are shaping the landscape of AI. Explore their features, benefits, and challenges in this comprehensive guide to stay updated on the latest developments in the world of language technology.
Armin Norouzi
November 13, 2024
August 21, 2023
min read
•
AI Security

LLM Vulnerability Series: Direct Prompt Injections and Jailbreaks

of prompt injections that are currently in discussion. What are the specific ways that attackers can use prompt injection attacks to obtain access to credit card numbers, medical histories, and other forms of personally identifiable information?
Daniel Timbrell
November 13, 2024
July 18, 2023
min read
•
Computer Vision

Not All mAPs are Equal and How to Test Model Robustness

Model selection is a fundamental challenge for teams deploying to production: how do you choose the model that is most likely to generalize to an ever-changing world?
Mateo Rojas-Carulla
November 13, 2024
June 13, 2023
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.