Overview
Learn the fundamentals of modern large language model (LLM) post-training in this 23-minute conference talk by Maxime Labonne, PhD, Head of Post-Training at Liquid AI. Discover how high-quality data generation forms the core of the post-training process, with emphasis on the accuracy, diversity, and complexity of training samples. Explore essential training techniques, including supervised fine-tuning and preference alignment, with concrete examples of methods used at various scales. Examine evaluation frameworks and their respective advantages and disadvantages for measuring model performance effectively. Gain insights into emerging trends in post-training methodologies and their implications for the future development of large language models. Master practical skills for generating post-training data, training LLMs with appropriate libraries and tools, and applying sound evaluation techniques to assess model performance.
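To give a flavor of the preference-alignment methods the talk covers, here is a minimal sketch of the per-pair Direct Preference Optimization (DPO) loss, one widely used alignment objective. This is an illustrative example, not material from the talk; the function name and numeric values are hypothetical.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are summed log-probabilities of the chosen and rejected
    responses under the policy being trained and a frozen reference model.
    """
    # Implicit reward margins: how much more the policy favors each
    # response relative to the reference model.
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    # Loss = -log sigmoid(beta * (chosen_margin - rejected_margin))
    logits = beta * (chosen_margin - rejected_margin)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# When the policy already prefers the chosen response, the loss is small.
low = dpo_loss(-5.0, -20.0, -10.0, -10.0)
# When it prefers the rejected response, the loss is larger.
high = dpo_loss(-20.0, -5.0, -10.0, -10.0)
print(low < high)  # True
```

In practice, libraries such as Hugging Face TRL implement this objective at scale; the sketch only shows the core computation that preference alignment optimizes.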
Syllabus
Introduction to LLM Post Training by Maxime Labonne, PhD
Taught by
Open Data Science