Speed of Light Inference with NVIDIA and AMD GPUs Using the Modular Platform
Generative AI on AWS via YouTube
Overview
Explore how to achieve optimal AI inference performance across NVIDIA and AMD GPU architectures using the Modular platform in this technical presentation. Learn how the layers of the Modular stack fit together: Modular Cloud for cluster-level orchestration, the MAX framework and runtime, and the Mojo programming language. Discover how these integrated components deliver high performance while significantly reducing Total Cost of Ownership (TCO) for AI workloads. Gain insights into scaling AI applications across clusters and understand the technical approaches for maximizing inference speed on different GPU platforms through practical demonstrations and real-world examples.
Syllabus
Speed of Light Inference w/ NVIDIA + AMD GPUs and Modular by Abdul Dakkak, Head of Gen AI @ Modular
Taught by
Generative AI on AWS