Mind the Topology - Smarter Scheduling for AI Workloads on Kubernetes
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Explore the critical challenges of topology-aware scheduling for AI workloads on Kubernetes in this lightning talk from KubeCon + CloudNativeCon. Learn how AI workloads such as distributed training, inference, and data preprocessing require high bandwidth, low latency, and efficient accelerator access, making topology-aware scheduling essential for optimal performance. Discover the complexities involved in balancing competing priorities like locality, fairness, and resource fragmentation across diverse workloads and dynamic cluster states. Understand why Kubernetes' practice of scheduling pods independently, combined with the lack of a standard way to handle topological information, makes it difficult to achieve locality, fairness, and high resource utilization at scale. Examine current solutions and their limitations before diving into how the KAI Scheduler addresses these gaps through design choices that enable scalable, topology-aware workload placement. Gain insights into considerations for zones, racks, and device proximity when placing AI workloads, and understand the technical approaches needed to overcome the inherent challenges of managing AI infrastructure at scale on Kubernetes.
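To make the locality idea concrete, here is a minimal sketch (not the KAI Scheduler's actual algorithm) of topology-aware node scoring: candidate nodes are scored higher when they share a rack or zone with the nodes where a job's other pods already landed. The node names and the `rack` level are hypothetical; the `zone` level mirrors the standard `topology.kubernetes.io/zone` label convention.

```python
from collections import Counter

def locality_score(candidate, placed_nodes, topology):
    """Score a candidate node by how many topology levels it shares
    with the majority of a job's already-placed pods.
    Rack proximity is weighted higher than zone proximity."""
    score = 0
    for level, weight in (("rack", 2), ("zone", 1)):
        placed = Counter(topology[n][level] for n in placed_nodes)
        if placed and topology[candidate][level] == placed.most_common(1)[0][0]:
            score += weight
    return score

# Hypothetical three-node cluster spanning two zones and three racks.
topology = {
    "node-a": {"zone": "z1", "rack": "r1"},
    "node-b": {"zone": "z1", "rack": "r2"},
    "node-c": {"zone": "z2", "rack": "r3"},
}

# Two workers of a distributed training job already landed on node-a.
placed = ["node-a", "node-a"]
best = max(topology, key=lambda n: locality_score(n, placed, topology))
print(best)  # node-a: same rack and zone as the rest of the gang
```

A pod-at-a-time scheduler that ignores this score can scatter a gang across zones, which is exactly the bandwidth and latency problem the talk describes; scoring the whole gang against shared topology levels keeps it co-located.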
Syllabus
Lightning Talk: Mind the Topology: Smarter Scheduling for AI Workloads on Kubernetes - Roman Baron
Taught by
CNCF [Cloud Native Computing Foundation]