Dynamic Resource Sharing and Diverse Compute Platforms for AI - The Slinky Solution
OpenInfra Foundation via YouTube
Overview
Learn about innovative approaches to high-performance computing (HPC) infrastructure flexibility in this 31-minute conference talk on the transformative impact of AI on HPC software workflows. Explore how finite budgets and the high cost of AI-capable infrastructure drive the need for maximum efficiency and utilization of high-value resources. Discover recent research into compute platform agility that combines batch-scheduled workflows with interactive platforms within a common shared infrastructure framework. Examine Slinky, an open source project from SchedMD that integrates the Slurm and Kubernetes compute platforms to create more flexible HPC services. Understand how Slinky can be used with preemption strategies to maximize GPU utilization by enabling sharing between interactive, cloud-native, and batch-scheduled workflows. Gain insights into the potential consequences and current limitations of this approach to dynamic resource sharing in AI and HPC environments.
Syllabus
Dynamic Resource Sharing and Diverse Compute Platforms for AI - The Slinky Solution
Taught by
OpenInfra Foundation