Streamline LLM Fine-Tuning on Kubernetes with Kubeflow LLM Trainer
CNCF (Cloud Native Computing Foundation) via YouTube
Overview
Learn how to simplify large language model fine-tuning on Kubernetes in this 21-minute conference talk by Kubeflow maintainer Shao Wang. The talk covers the challenges data scientists face when fine-tuning LLMs on Kubernetes, including complex configurations, diverse fine-tuning techniques, and distributed strategies such as data and model parallelism. It introduces Kubeflow LLM Trainer, a specialized tool that uses pre-configured blueprints and flexible configuration overrides to streamline the entire LLM fine-tuning lifecycle on Kubernetes infrastructure. Demonstrations show how the tool integrates with multiple fine-tuning techniques and distributed strategies while exposing a simple yet flexible Python API. The talk explains how this approach enables LLM fine-tuning on Kubernetes with a single line of code, hiding complex infrastructure configuration from users while allowing seamless transitions between different models, datasets, fine-tuning techniques, and distributed strategies.
Syllabus
Streamline LLM Fine-Tuning on Kubernetes with Kubeflow LLM Trainer - Shao Wang, Kubeflow Maintainer
Taught by
CNCF (Cloud Native Computing Foundation)