Efficient Multi-Cluster GPU Workload Management with Karmada and Volcano
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Explore efficient multi-cluster GPU workload management using Karmada and Volcano in this conference talk. Discover solutions to critical challenges faced when running AI/ML workloads on large-scale, heterogeneous GPU environments spanning multiple Kubernetes clusters. Learn about intelligent GPU workload scheduling, cluster failover support for seamless workload migration, keeping the two-level (multi-cluster and in-cluster) scheduling model consistent and efficient, and balancing utilization and QoS when workloads with different priorities share resources. Gain insights into addressing resource fragmentation, operational costs, and cross-cluster resource scheduling in cloud native AI platforms spanning multiple data centers and diverse GPU types.
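To give a flavor of how the two projects fit together, here is a minimal sketch: Volcano gang-schedules a GPU job inside a single cluster, while a Karmada PropagationPolicy decides which member clusters receive that job. The cluster names, queue, image, and GPU counts below are illustrative assumptions, not details from the talk.

```yaml
# Volcano Job: gang-scheduled GPU workload inside one member cluster.
# (queue name, image, and GPU count are illustrative placeholders)
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: gpu-train
spec:
  schedulerName: volcano   # hand the pods to the Volcano scheduler
  minAvailable: 2          # gang scheduling: start only when all 2 pods fit
  queue: default
  tasks:
    - name: worker
      replicas: 2
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: trainer
              image: example.com/trainer:latest   # placeholder image
              resources:
                limits:
                  nvidia.com/gpu: 1
---
# Karmada PropagationPolicy: choose which member clusters run the Job.
# (cluster names are hypothetical)
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: gpu-train-policy
spec:
  resourceSelectors:
    - apiVersion: batch.volcano.sh/v1alpha1
      kind: Job
      name: gpu-train
  placement:
    clusterAffinity:
      clusterNames:
        - cluster-a100   # hypothetical clusters with different GPU types
        - cluster-v100
```

In this split, Karmada handles the cross-cluster placement and failover decisions, and Volcano handles queueing, gang scheduling, and GPU binding within each cluster the job lands on.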
Syllabus
Efficient Multi-Cluster GPU Workload Management with Karmada and Volcano - Kevin Wang, Huawei
Taught by
CNCF [Cloud Native Computing Foundation]