LLMOps
LLMOps is a sub-category of MLOps that emphasizes the operational capabilities, tools, and infrastructure necessary to fine-tune and deploy Large Language Models (LLMs) as integrated components of a product. It acknowledges the distinct challenges and requirements of handling LLMs compared to traditional ML models.
LLMOps in practice
- Fine-tuning and Deployment: Foundation models are enormous and computationally intensive to train from scratch, so fine-tuning them for specific tasks is the practical path to specialization. LLMOps manages this fine-tuning process and the subsequent deployment (a minimal fine-tuning sketch follows this list).
- High Compute Infrastructure: Given the sheer size and complexity of LLMs, robust GPU setups capable of working in parallel are essential. LLMOps ensures the right infrastructure is in place (see the multi-GPU loading sketch below).
- Inference Management: Generating outputs (inferences) from LLMs often involves chains of models and additional checks to ensure the output meets the standard the end user expects (sketched below).
- LLM-as-a-Service: Some vendors, recognizing the challenges of running these models in-house, offer LLMs behind an API, effectively outsourcing the compute and operational burden (see the API-call sketch below).
- Prompt Engineering Tools: These facilitate in-context learning, allowing model behavior to be optimized through the prompt itself without necessarily fine-tuning the entire model (a few-shot example appears below).
- Prompt Logging, Testing, and Analytics: An emerging area of LLMOps focused on recording, testing, and analyzing the prompts used with LLMs to ensure they remain effective (a logging sketch closes the examples below).
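The sketches below illustrate some of these practices in Python. First, fine-tuning: a minimal sketch using the Hugging Face `transformers` Trainer API. The small model and the local training file are illustrative placeholders; a production LLMOps pipeline would wrap this in experiment tracking, evaluation, and checkpoint management.

```python
# Minimal fine-tuning sketch (assumes `transformers` and `datasets` are installed;
# the model name and training file below are illustrative placeholders).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # small stand-in for a large foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style models ship without one
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a local text corpus for causal language modeling.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```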
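For the infrastructure point, one common pattern is sharding a model that does not fit on a single GPU across several devices. Here is a sketch using `transformers` with `accelerate` installed; the checkpoint name is a placeholder.

```python
# Sketch: spreading a large model across available GPUs with device_map="auto"
# (requires `transformers` plus `accelerate`; the checkpoint name is a placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "some-org/large-llm"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    device_map="auto",          # shard layers across all visible GPUs
    torch_dtype=torch.float16,  # halve memory per parameter
)
```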
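Inference management often amounts to wrapping generation in validation. The sketch below shows the generic pattern rather than any particular library's API: `generate_fn` and `moderate_fn` are hypothetical stand-ins for whatever model calls and safety checks an application uses.

```python
# Sketch of an inference chain: generate, then run the output through a
# moderation check before returning it; retry a bounded number of times.
from typing import Callable

def guarded_inference(prompt: str,
                      generate_fn: Callable[[str], str],
                      moderate_fn: Callable[[str], bool],
                      max_retries: int = 2) -> str:
    """Return a generation that passes the moderation check, or a fallback."""
    for _ in range(max_retries + 1):
        candidate = generate_fn(prompt)
        if moderate_fn(candidate):  # True means the output is acceptable
            return candidate
    return "Sorry, I can't provide a reliable answer to that."
```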
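Consuming an LLM as a service typically reduces to a single HTTP call. The endpoint, payload shape, and response field below are hypothetical; each vendor defines its own API.

```python
# Sketch of calling a hosted LLM over HTTP. The URL, request body, and
# response format are hypothetical examples, not a real vendor's API.
import os
import requests

def complete(prompt: str) -> str:
    response = requests.post(
        "https://api.example-llm-vendor.com/v1/completions",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={"model": "vendor-model-1", "prompt": prompt, "max_tokens": 256},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]  # field name depends on the vendor
```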
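In-context learning can be as simple as assembling a few-shot prompt: the model is steered by examples embedded in the prompt instead of by weight updates. A sketch, with made-up sentiment examples:

```python
# Sketch of few-shot prompting (in-context learning). The examples are
# illustrative; real prompt engineering tools manage libraries of these.
FEW_SHOT_EXAMPLES = [
    ("The package arrived broken.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]

def build_prompt(text: str) -> str:
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in FEW_SHOT_EXAMPLES)
    return f"{shots}\nReview: {text}\nSentiment:"
```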
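Finally, prompt logging: a minimal sketch that records every prompt/response pair with latency to a JSONL file, the raw material for later testing and analytics. `llm_fn` is a hypothetical stand-in for the actual model call.

```python
# Sketch of prompt logging: wrap the model call so every prompt/response
# pair is appended to a JSONL log along with its latency.
import json
import time
import uuid

def logged_call(llm_fn, prompt: str, log_path: str = "prompt_log.jsonl") -> str:
    start = time.time()
    output = llm_fn(prompt)
    record = {
        "id": str(uuid.uuid4()),
        "prompt": prompt,
        "output": output,
        "latency_s": round(time.time() - start, 3),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return output
```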