From the course: MLOps and Data Pipeline Orchestration for AI Systems
LLM model development and evaluation
- [Presenter] Let's discuss some of the important steps in LLM model development and evaluation. The first is prompt engineering and fine-tuning. Prompt engineering shapes outputs using carefully crafted inputs, while fine-tuning updates model weights using domain-specific data. LLMOps supports both strategies for aligning models with specific use cases efficiently. Human feedback, often via reinforcement learning from human feedback, or RLHF, is used to align LLMs with user intent and safety guidelines. LLMOps incorporates pipelines to collect, evaluate, and apply this feedback continuously. LLM evaluation goes beyond accuracy, requiring human-in-the-loop assessments, scenario tests, and toxicity and bias checks. LLMOps frameworks enable automated and manual evaluation workflows to ensure consistent quality. It's possible for LLMs to memorize training data, raising privacy and compliance concerns. LLMOps enforces data versioning, anonymization, and governance to ensure ethical data use…
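To make the first point concrete, here is a minimal sketch contrasting the two alignment strategies. The system prompt, model id, and dataset path are hypothetical placeholders, not values from the course:

```python
# Prompt engineering: shape the output with a carefully crafted input.
SYSTEM_PROMPT = (
    "You are a claims assistant for an insurance company. "
    "Answer only from the provided policy excerpt."
)

def build_prompt(policy_excerpt: str, question: str) -> str:
    """Assemble a structured prompt; no model weights change."""
    return (
        f"{SYSTEM_PROMPT}\n\n"
        f"Policy excerpt:\n{policy_excerpt}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Fine-tuning, by contrast, updates model weights on domain data.
# An LLMOps pipeline would version a run config like this (all values
# hypothetical) alongside the resulting checkpoint:
FINE_TUNE_CONFIG = {
    "base_model": "example-org/base-7b",
    "train_data": "s3://example-bucket/claims.jsonl",
    "epochs": 3,
    "learning_rate": 2e-5,
}
```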
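For the feedback step, an RLHF pipeline typically starts by collecting human preference pairs that later train a reward model. A minimal sketch of that collection stage, assuming a simple JSONL store:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class FeedbackRecord:
    prompt: str
    response_a: str
    response_b: str
    preferred: str  # "a" or "b", chosen by a human rater

def log_feedback(record: FeedbackRecord, path: str = "feedback.jsonl") -> None:
    """Append one preference pair to the dataset used for reward-model training."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

Collecting feedback as append-only records like this is what lets LLMOps apply it continuously: each new batch can retrain the reward model without reprocessing old data.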
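The evaluation step can be sketched as a harness that combines scenario tests, an automated toxicity gate, and routing to human review. The blocklist and scenarios below are stand-ins; a real pipeline would call a dedicated toxicity classifier:

```python
from typing import Callable

# Stand-in for a real toxicity/bias classifier; keyword matching is
# only a placeholder for illustration.
BLOCKLIST = {"hate", "kill"}

def flagged_toxic(text: str) -> bool:
    return any(word in text.lower() for word in BLOCKLIST)

# Hypothetical scenario tests: each pairs a prompt with an expected fact.
SCENARIOS = [
    {"prompt": "What is our refund window?", "must_contain": "30 days"},
]

def evaluate(generate: Callable[[str], str]) -> list[dict]:
    """Run scenario tests plus a toxicity gate; flag outputs for humans."""
    results = []
    for s in SCENARIOS:
        out = generate(s["prompt"])
        toxic = flagged_toxic(out)
        results.append({
            "prompt": s["prompt"],
            "scenario_pass": s["must_contain"].lower() in out.lower(),
            "toxic": toxic,
            "needs_human_review": toxic,  # route to a manual review queue
        })
    return results
```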
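Finally, the governance step can be illustrated with two small utilities: a crude anonymization pass and a content hash that versions the exact dataset a training run saw. The regex below only catches emails and is an assumption for illustration; production pipelines use dedicated PII detectors:

```python
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(text: str) -> str:
    """Crude PII scrubbing (emails only); a placeholder for real detectors."""
    return EMAIL_RE.sub("[EMAIL]", text)

def dataset_version(records: list[str]) -> str:
    """Content hash so every training run can cite the exact data it used."""
    h = hashlib.sha256()
    for r in records:
        h.update(r.encode())
    return h.hexdigest()[:12]
```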