High-Performance AI Workloads in KubeVirt VMs With NVIDIA GPUs - Challenges and Real-World Solutions
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Explore how to run high-performance AI/ML workloads in KubeVirt virtual machines with NVIDIA GPUs while achieving near bare-metal performance in this 30-minute conference talk. Learn to leverage KubeVirt for running AI workloads inside VMs with NVIDIA GPUs and NVLink technology, enabling multi-tenancy, enhanced security, and efficient resource partitioning that benefits both service providers and customers. Discover how VM-based worker nodes create virtual Kubernetes clusters on shared infrastructure, supporting both full bare-metal nodes and partitioned node configurations. Examine the technical challenges involved, including integrating NVIDIA Fabric Manager with Kubernetes/KubeVirt workflows, optimizing NUMA and PCI topology configurations, and aligning Kubernetes scheduling mechanisms with VM-based GPU layouts. Gain insights from real-world customer use cases that demonstrate the practical need for isolated, high-performance AI environments using Kubernetes-native tooling, moving beyond traditional Pod-based deployments on bare-metal that lack strong isolation and flexibility.
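To make the GPU-passthrough and NUMA-alignment topics above concrete, here is a minimal sketch of a KubeVirt VirtualMachine that requests an NVIDIA GPU through the `devices.gpus` field while pinning CPUs and passing the guest NUMA mapping through. The GPU resource name shown is a placeholder assumption; the real name is whatever the NVIDIA device plugin / GPU Operator advertises on your cluster:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: gpu-worker
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          # Dedicated CPU placement plus NUMA passthrough keeps the
          # guest topology aligned with the host socket that owns the
          # GPU's PCI root, which matters for near bare-metal performance.
          dedicatedCpuPlacement: true
          numa:
            guestMappingPassthrough: {}
        memory:
          hugepages:
            pageSize: 1Gi
        resources:
          requests:
            memory: 8Gi
        devices:
          gpus:
            # Resource name advertised by the NVIDIA device plugin;
            # "nvidia.com/GA100_A100_PCIE_40GB" is an illustrative placeholder.
            - deviceName: nvidia.com/GA100_A100_PCIE_40GB
              name: gpu0
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
```

Note that `guestMappingPassthrough` requires both hugepages and dedicated CPU placement, as configured above; multi-GPU NVLink setups additionally depend on NVIDIA Fabric Manager running against the passed-through devices, which is one of the integration challenges the talk covers.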
Syllabus
High-Performance AI Workloads in KubeVirt VMs With NVIDIA GPUs: Challenges and Real-World Solutions - Ezra Silvera & Michael Hrivnak
Taught by
CNCF [Cloud Native Computing Foundation]