Streamline LLM Fine-tuning on Kubernetes With Kubeflow LLM Trainer
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Learn how to simplify large language model fine-tuning on Kubernetes in this 16-minute conference talk, which introduces Kubeflow LLM Trainer as a solution to complex infrastructure challenges. Discover how data scientists can overcome the difficulty of managing Kubernetes configurations, diverse fine-tuning techniques, and distributed strategies such as data and model parallelism when working with LLMs. Explore the tool's pre-configured blueprints and flexible configuration overrides, which streamline the entire LLM fine-tuning lifecycle on Kubernetes infrastructure. See demonstrations of how Kubeflow LLM Trainer integrates with multiple fine-tuning techniques and distributed strategies while providing a simple yet flexible Python API. Finally, see how the platform enables LLM fine-tuning on Kubernetes with a single line of code, hiding complex infrastructure configuration from users while allowing graceful transitions between fine-tuning approaches and distributed computing strategies.
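To make the "pre-configured blueprints with configuration overrides" idea concrete, here is a minimal conceptual sketch in plain Python. This is not the Kubeflow LLM Trainer SDK; the blueprint dictionary and the `fine_tune` function are hypothetical names invented for illustration. The pattern shown — defaults resolved from a blueprint, selectively overridden by the user, with the infrastructure spec assembled behind one call — is what the talk describes.

```python
# Conceptual sketch only: NOT the real Kubeflow LLM Trainer API.
# Illustrates the blueprint-plus-overrides pattern from the talk:
# a pre-configured set of fine-tuning defaults that a single call
# merges with user overrides, hiding the Kubernetes details.
# All names here (LORA_BLUEPRINT, fine_tune) are hypothetical.

LORA_BLUEPRINT = {
    "technique": "lora",          # parameter-efficient fine-tuning
    "strategy": "data-parallel",  # distributed training strategy
    "num_nodes": 2,
    "learning_rate": 1e-4,
}

def fine_tune(model: str, dataset: str, blueprint: dict, **overrides) -> dict:
    """Merge user overrides onto the blueprint's defaults and return the
    resolved job spec; a real trainer would submit this to Kubernetes."""
    unknown = set(overrides) - set(blueprint)
    if unknown:
        raise ValueError(f"unknown override(s): {sorted(unknown)}")
    return {"model": model, "dataset": dataset, **blueprint, **overrides}

# The "single line of code" experience: defaults apply,
# and only the distributed strategy is overridden.
job = fine_tune("llama-3-8b", "s3://my-bucket/data", LORA_BLUEPRINT,
                strategy="model-parallel")
```

The design choice this mirrors is that users state only what differs from the blueprint, so switching between fine-tuning techniques or distributed strategies is a one-argument change rather than a rewritten Kubernetes manifest.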
Syllabus
Streamline LLM Fine-tuning on Kubernetes With Kubeflow LLM Trainer - Shao Wang & Andrey Velichkevich
Taught by
CNCF [Cloud Native Computing Foundation]