

Optimizing Large-Scale Model Training with Ray Compiled Graphs

Anyscale via YouTube

Overview

Explore advanced techniques for training large-scale models in this Ray Summit 2024 conference talk. Discover how Ray Core's latest features improve training efficiency for LLMs and multimodal AI models. Learn about Ray's native GPU-to-GPU communication and pre-compiled execution paths, and how they support the complex data and control flows of distributed model training. Gain insights into implementing pipeline parallelism and training multimodal models on heterogeneous GPUs. Compare Ray-based implementations with direct NCCL and PyTorch approaches, with an emphasis on simplicity and maintainability, and examine throughput and GPU-utilization benchmarks that show the practical benefits of these optimizations. Ideal for ML researchers and engineers working on large-scale AI projects, this talk offers practical guidance on maximizing accelerator utilization and improving training efficiency in the era of increasingly large and complex models.
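The talk centers on Ray Compiled Graphs, but the pipeline-parallelism idea it covers can be illustrated without Ray at all: split a model into stages, stream micro-batches through them, and let later stages work on batch i while earlier stages process batch i+1. The sketch below is a minimal, hypothetical illustration using only Python threads and queues (the stage functions and worker names are invented for the example and are not Ray's API):

```python
# Conceptual sketch of pipeline parallelism: each "stage" runs in its own
# worker thread (standing in for a GPU), and micro-batches flow through
# FIFO queues so the stages overlap instead of idling.
import queue
import threading

def make_stage(fn, inbox, outbox):
    """Return a thread that applies fn to items from inbox until a None sentinel."""
    def run():
        while True:
            item = inbox.get()
            if item is None:        # sentinel: propagate shutdown downstream
                outbox.put(None)
                return
            outbox.put(fn(item))
    return threading.Thread(target=run)

# Two toy "model stages": stage 1 doubles, stage 2 adds one.
q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
stages = [make_stage(lambda x: x * 2, q0, q1),
          make_stage(lambda x: x + 1, q1, q2)]
for s in stages:
    s.start()

# Feed micro-batches; stage 2 can process batch i while stage 1 handles i+1.
for micro_batch in [1, 2, 3]:
    q0.put(micro_batch)
q0.put(None)

results = []
while (out := q2.get()) is not None:
    results.append(out)
for s in stages:
    s.join()
print(results)  # [3, 5, 7]
```

In Ray Compiled Graphs the wiring between stages is declared as a DAG and compiled ahead of time, which lets Ray pre-allocate the communication channels (including direct GPU-to-GPU transfers) instead of paying per-call scheduling overhead as a thread-and-queue approach would.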

Syllabus

Optimizing Large-Scale Model Training with Ray Compiled Graphs | Ray Summit 2024

Taught by

Anyscale

