Unleashing the Power of Dynamic Resource Allocation for Just-in-Time GPU Slicing
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Explore Dynamic Resource Allocation (DRA) for just-in-time GPU slicing in this 39-minute conference talk from the Cloud Native Computing Foundation (CNCF). Discover how GPUs and GPU slices can be allocated dynamically, based on workload demand, in Kubernetes clusters. Learn about the challenges of adopting DRA, including changes to Kubernetes scheduling mechanisms and the introduction of new resource classes and claims. Examine InstaSlice, a solution that enables just-in-time GPU slicing on large production Kubernetes clusters without requiring changes to queued workloads or to the Kubernetes scheduler. Gain insights into optimizing GPU utilization for training, fine-tuning, and serving large language models (LLMs) in cloud-native environments.
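To give a flavor of the "resource claims" the talk refers to: with DRA, a workload requests a device through a claim object rather than a fixed `nvidia.com/gpu` count. The sketch below is illustrative only, assuming a hypothetical `gpu.example.com` device class; the DRA API group and version have changed across Kubernetes releases, so check the documentation for your cluster version.

```yaml
# Hedged sketch of a DRA-style request (API version and class name are assumptions).
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: gpu-slice-claim
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.example.com   # hypothetical device class for a GPU slice
---
apiVersion: v1
kind: Pod
metadata:
  name: llm-serve
spec:
  resourceClaims:
  - name: gpu
    resourceClaimName: gpu-slice-claim   # bind the claim defined above
  containers:
  - name: app
    image: registry.example.com/llm-server:latest   # placeholder image
    resources:
      claims:
      - name: gpu   # container consumes the pod-level claim
```

The key idea is that the scheduler and a device driver cooperate to satisfy the claim at scheduling time, which is what makes just-in-time slicing possible.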
Syllabus
Unleashing the Power of DRA (Dynamic Resource Allocation) for Just-in-Time GPU Slicing
Taught by
CNCF [Cloud Native Computing Foundation]