
Advanced Multi-GPU Scaling: Communication Libraries

NVIDIA via YouTube

Overview

This 42-minute conference talk from NVIDIA GTC 2025 explores scaling applications beyond the capabilities of a single GPU using multi-GPU communication libraries. Learn how to handle larger workloads or improve performance when no dedicated multi-GPU library exists for your specific needs. Discover practical applications of CUDA-aware MPI, NVSHMEM, and NCCL through real-world examples presented by Jiri Kraus, Principal Developer Technology at NVIDIA. The session (ID: S72578) focuses on accelerated computing libraries within the Models/Libraries/Frameworks topic area, covering NVIDIA technologies including CUDA, CUDA-X, NCCL, and NVLink/NVSwitch. It is aimed at developers looking to scale their applications across multiple GPUs and nodes for better performance and scalability.
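The core idea behind CUDA-aware MPI, one of the libraries the talk covers, is that GPU device pointers can be passed directly to MPI calls, with the library handling staging or GPUDirect RDMA transfers internally. The sketch below is not from the talk; it is a minimal illustration assuming an MPI build with CUDA support (e.g. Open MPI configured with `--with-cuda`) and one GPU per rank.

```cuda
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Typical single-node mapping: one GPU per MPI rank.
    cudaSetDevice(rank);

    const int n = 1 << 20;
    float *d_send, *d_recv;
    cudaMalloc(&d_send, n * sizeof(float));
    cudaMalloc(&d_recv, n * sizeof(float));
    cudaMemset(d_send, 0, n * sizeof(float));  // placeholder data for the sketch

    // With a CUDA-aware MPI, device pointers go straight into the call;
    // without CUDA awareness you would have to cudaMemcpy to host first.
    MPI_Allreduce(d_send, d_recv, n, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    cudaFree(d_send);
    cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}
```

Launched as, e.g., `mpirun -np 4 ./allreduce`, each rank contributes its device buffer to the reduction; NCCL and NVSHMEM offer alternative APIs for the same class of multi-GPU communication, tuned for NVLink/NVSwitch topologies.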

Syllabus

Advanced Multi-GPU Scaling: Communication Libraries | NVIDIA GTC 2025

Taught by

NVIDIA Developer

