Benchmarking Your Distributed ML Training on the K8s Platform
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
This lightning talk explores how to benchmark distributed machine learning training on Kubernetes platforms. Discover the challenges of running ML training workloads on Kubernetes, including dynamic resource scaling, GPU scheduling, and efficient inter-node communication. Learn about recent advancements like KubeRay, Kubeflow, and Slurm integration that have expanded Kubernetes' capabilities for handling complex, large-scale ML training tasks. Explore the design and implementation of a benchmarking platform that provides actionable insights to improve throughput, scalability, and efficiency of distributed ML training workloads on Kubernetes.
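As context for the GPU scheduling and distributed training topics the talk covers, here is a minimal sketch of a Kubeflow Training Operator `PyTorchJob` manifest that schedules a small distributed run with per-pod GPU requests. The job name, image, replica counts, and `train.py` entrypoint are illustrative assumptions, not details from the talk:

```yaml
# Hypothetical example (not from the talk): a master plus two workers,
# each scheduled onto a GPU via the NVIDIA device plugin resource.
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: bench-ddp-demo            # illustrative name
spec:
  pytorchReplicaSpecs:
    Master:
      replicas: 1
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch       # container name the operator expects
              image: pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime  # assumed image
              command: ["torchrun", "train.py"]                     # assumed entrypoint
              resources:
                limits:
                  nvidia.com/gpu: 1
    Worker:
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime
              command: ["torchrun", "train.py"]
              resources:
                limits:
                  nvidia.com/gpu: 1
```

Benchmarking throughput and scalability on Kubernetes, as the talk describes, typically involves varying replica counts and GPU allocations in a manifest like this and measuring training performance across configurations.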
Syllabus
Lightning Talk: Benchmarking Your Distributed ML Training on the K8s Platform - Liang Yan, CoreWeave
Taught by
CNCF [Cloud Native Computing Foundation]