Streamline LLM Fine-tuning on Kubernetes With Kubeflow LLM Trainer
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Learn how to simplify large language model fine-tuning on Kubernetes in this 16-minute conference talk, which introduces Kubeflow LLM Trainer as a solution to complex infrastructure challenges. Discover how data scientists can overcome the difficulty of managing Kubernetes configurations, diverse fine-tuning techniques, and distributed strategies such as data and model parallelism when working with LLMs. Explore the tool's pre-configured blueprints and flexible configuration overrides, which streamline the entire LLM fine-tuning lifecycle on Kubernetes infrastructure. See demonstrations of how Kubeflow LLM Trainer integrates multiple fine-tuning techniques and distributed strategies behind a simple yet flexible Python API, enabling LLM fine-tuning on Kubernetes with a single line of code, hiding complex infrastructure configuration from users, and allowing graceful transitions between different fine-tuning approaches and distributed computing strategies.
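The "pre-configured blueprints plus user overrides" pattern the talk describes can be sketched in plain Python. This is a hypothetical illustration only: the function name `fine_tune`, the `BLUEPRINTS` dict, and every parameter name below are invented for this sketch and are not the actual Kubeflow LLM Trainer SDK.

```python
from copy import deepcopy

# Hypothetical pre-configured blueprints: each bundles a fine-tuning
# technique with a distributed strategy so users start from sane defaults.
BLUEPRINTS = {
    "lora-dp": {"technique": "lora", "strategy": "data-parallel", "num_nodes": 2},
    "full-mp": {"technique": "full", "strategy": "model-parallel", "num_nodes": 8},
}

def fine_tune(model: str, blueprint: str, **overrides) -> dict:
    """Merge a named blueprint with user overrides into a job spec.

    Mirrors the 'single line of code' idea: the blueprint hides
    infrastructure detail; overrides adjust only what the user cares about.
    """
    config = deepcopy(BLUEPRINTS[blueprint])  # don't mutate shared defaults
    config.update(overrides)                  # user overrides win
    return {"model": model, **config}

# One call launches a run; switching distributed strategies is just a
# different blueprint name, with the same user-facing interface.
job = fine_tune("llama-3-8b", blueprint="lora-dp", num_nodes=4)
print(job)
```

Switching from data parallelism to model parallelism here is only a change of blueprint name, which is the kind of graceful transition between strategies the talk attributes to the real tool.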
Syllabus
Streamline LLM Fine-tuning on Kubernetes With Kubeflow LLM Trainer - Shao Wang & Andrey Velichkevich
Taught by
CNCF [Cloud Native Computing Foundation]