Streamline LLM Fine-Tuning on Kubernetes with Kubeflow LLM Trainer
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Learn how to simplify large language model fine-tuning on Kubernetes through this 21-minute conference talk by Kubeflow maintainer Shao Wang. Discover the challenges data scientists face when fine-tuning LLMs on Kubernetes, including complex configurations, diverse fine-tuning techniques, and various distributed strategies like data and model parallelism. Explore Kubeflow LLM Trainer, a specialized tool that uses pre-configured blueprints and flexible configuration overrides to streamline the entire LLM fine-tuning lifecycle on Kubernetes infrastructure. See demonstrations of how this tool integrates with multiple fine-tuning techniques and distributed strategies while providing a simple yet flexible Python API. Understand how the solution enables LLM fine-tuning on Kubernetes with just a single line of code, effectively hiding complex infrastructure configurations from users while allowing seamless transitions between different models, datasets, fine-tuning techniques, and distributed strategies.
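The "single line of code" workflow described above can be sketched roughly as follows. This is an illustrative assumption, not the confirmed Kubeflow LLM Trainer API: the import path, class, method, blueprint name, and parameter names (`TrainerClient`, `train`, `runtime`, `peft`, etc.) are hypothetical stand-ins for whatever the SDK actually exposes, and running anything like it requires a Kubernetes cluster with Kubeflow Trainer installed.

```python
# Illustrative sketch only: names below are assumptions based on the talk's
# description, not the confirmed Kubeflow LLM Trainer API.
from kubeflow.trainer import TrainerClient  # hypothetical import path

client = TrainerClient()

# One call hides the infrastructure configuration: a pre-configured blueprint
# supplies the distributed strategy (data or model parallelism), while simple
# overrides swap the model, dataset, or fine-tuning technique.
job = client.train(
    runtime="torchtune-llama",        # hypothetical blueprint name
    model="meta-llama/Llama-3.1-8B",  # example Hugging Face model ID
    dataset="tatsu-lab/alpaca",       # example Hugging Face dataset ID
    peft="lora",                      # hypothetical fine-tuning override
    num_nodes=2,                      # scale out without editing YAML
)
```

The point of the sketch is the shape of the API the talk describes: infrastructure detail lives in the blueprint, so switching models, datasets, techniques, or distributed strategies changes only arguments, not cluster manifests.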
Syllabus
Streamline LLM Fine-Tuning on Kubernetes with Kubeflow LLM Trainer - Shao Wang, Kubeflow Maintainer
Taught by
CNCF [Cloud Native Computing Foundation]