Streamline LLM Fine-tuning on Kubernetes With Kubeflow LLM Trainer

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Learn how to simplify large language model fine-tuning on Kubernetes in this 16-minute conference talk introducing Kubeflow LLM Trainer as a solution to complex infrastructure challenges. Discover how data scientists can overcome the difficulty of managing Kubernetes configurations, diverse fine-tuning techniques, and distributed strategies such as data and model parallelism when working with LLMs. Explore the tool's pre-configured blueprints and flexible configuration overrides, which streamline the entire LLM fine-tuning lifecycle on Kubernetes infrastructure. See demonstrations of how Kubeflow LLM Trainer integrates multiple fine-tuning techniques and distributed strategies behind a simple yet flexible Python API. Witness how the platform enables LLM fine-tuning on Kubernetes with a single line of code, hiding complex infrastructure configuration from users while allowing graceful transitions between different fine-tuning approaches and distributed computing strategies.

Syllabus

Streamline LLM Fine-tuning on Kubernetes With Kubeflow LLM Trainer - Shao Wang & Andrey Velichkevich

Taught by

CNCF [Cloud Native Computing Foundation]

