YouTube

MangoBoost Full Stack AI Infrastructure Solutions - MLPerf Inference, Training, and Storage Case Study

Open Compute Project via YouTube

Overview

Explore MangoBoost's full-stack AI infrastructure solutions in this 15-minute conference talk, which addresses the challenges of deploying large-scale AI systems built from many GPUs, high-speed NICs, and storage devices. Learn how MangoBoost's holistic approach orchestrates data movement across GPUs, storage, and networks to keep the massive compute of modern GPUs fully utilized. Discover the LLMBoost software stack, which covers inference, training, RAG (Retrieval-Augmented Generation), and management capabilities. Examine the DPU hardware that accelerates GPU-to-GPU communication through GPUBoost and GPU-to-storage access via StorageBoost, operating at line rate over standard Ethernet protocols including RoCEv2, NVMe-oF, and UEC. Review MangoBoost's top-ranked results across the MLPerf Inference, Training, and Storage benchmarks on modern GPUs such as the AMD MI300X. Understand the MangoBoost DPU cards, which deliver 400 Gbps of network throughput, and explore the DPU-enabled JBOF (Just a Bunch of Flash) and JBOD (Just a Bunch of Disks) headless storage solutions designed for AI systems.
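To put the 400 Gbps figure in context, a quick back-of-envelope sketch can show what that line rate means for AI data movement. The protocol-efficiency factor and checkpoint size below are illustrative assumptions, not MangoBoost measurements:

```python
# Back-of-envelope: what a 400 Gbps NIC/DPU line rate means for moving AI data.
# The efficiency factor and payload size are assumed values for illustration only.

def gbps_to_gbytes_per_sec(line_rate_gbps: float, protocol_efficiency: float = 0.95) -> float:
    """Convert a line rate in gigabits/s to usable gigabytes/s.

    protocol_efficiency is an assumed allowance for framing and protocol
    overhead (e.g. Ethernet + RoCEv2 headers); real efficiency varies
    with MTU and workload.
    """
    return line_rate_gbps / 8 * protocol_efficiency


def transfer_time_seconds(payload_gbytes: float, line_rate_gbps: float) -> float:
    """Estimated time to move a payload (e.g. a model checkpoint) at that rate."""
    return payload_gbytes / gbps_to_gbytes_per_sec(line_rate_gbps)


if __name__ == "__main__":
    rate = 400.0   # Gbps, the DPU card line rate cited in the talk
    ckpt = 140.0   # GB, a hypothetical large-model checkpoint size
    print(f"Usable bandwidth: {gbps_to_gbytes_per_sec(rate):.1f} GB/s")
    print(f"Checkpoint transfer: {transfer_time_seconds(ckpt, rate):.1f} s")
```

At roughly 47.5 GB/s of usable bandwidth, even a large checkpoint moves in seconds, which is why the talk frames orchestration of GPUs, storage, and network as the bottleneck rather than raw link speed.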

Syllabus

MangoBoost Full Stack AI Infrastructure Solutions - MLPerf Inference, Training, and Storage Case Study

Taught by

Open Compute Project
