Streamline LLM Fine-Tuning on Kubernetes with Kubeflow LLM Trainer

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Learn how to simplify large language model fine-tuning on Kubernetes through this 21-minute conference talk by Kubeflow maintainer Shao Wang. Discover the challenges data scientists face when fine-tuning LLMs on Kubernetes, including complex configurations, diverse fine-tuning techniques, and various distributed strategies like data and model parallelism. Explore Kubeflow LLM Trainer, a specialized tool that uses pre-configured blueprints and flexible configuration overrides to streamline the entire LLM fine-tuning lifecycle on Kubernetes infrastructure. See demonstrations of how this tool integrates with multiple fine-tuning techniques and distributed strategies while providing a simple yet flexible Python API. Understand how the solution enables LLM fine-tuning on Kubernetes with just a single line of code, effectively hiding complex infrastructure configurations from users while allowing seamless transitions between different models, datasets, fine-tuning techniques, and distributed strategies.
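The overview describes fine-tuning driven by pre-configured blueprints plus user-supplied configuration overrides. As a rough illustration of that pattern only, here is a minimal Python sketch; the names (`fine_tune`, `BLUEPRINTS`) and all default values are hypothetical and are not the actual Kubeflow LLM Trainer API.

```python
# Hypothetical sketch of the "blueprint + override" pattern described in the
# talk. Names and values are illustrative, NOT the real Kubeflow API.

BLUEPRINTS = {
    # Defaults a blueprint might pre-configure (all values assumed).
    "llama-lora": {
        "model": "meta-llama/Llama-3.1-8B",
        "technique": "lora",
        "strategy": "data-parallel",
        "num_nodes": 2,
    },
}

def fine_tune(blueprint: str, **overrides) -> dict:
    """Merge a pre-configured blueprint with user overrides into a job spec."""
    spec = dict(BLUEPRINTS[blueprint])  # start from the blueprint's defaults
    spec.update(overrides)              # user overrides take precedence
    return spec

# The "single line of code" idea: defaults hide infrastructure configuration,
# while overrides allow switching datasets or distributed strategies.
job = fine_tune("llama-lora", dataset="my-dataset", strategy="fsdp")
```

Swapping models, datasets, or distributed strategies then amounts to changing one keyword argument rather than editing Kubernetes manifests, which is the workflow the talk demonstrates.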

Syllabus

Streamline LLM Fine-Tuning on Kubernetes with Kubeflow LLM Trainer - Shao Wang, Kubeflow Maintainer

Taught by

CNCF [Cloud Native Computing Foundation]
