
Near GPU Storage Requirements for Accelerating Storage to Scale AI Workloads

Open Compute Project via YouTube

Overview

Learn about the storage infrastructure requirements for scaling AI workloads in this 19-minute conference presentation from the Open Compute Project. Explore how the evolution and scaling of AI workloads, and of the hardware beneath them, create an urgent need for high-performance, flexible storage positioned near GPUs, enabling system designs that scale across diverse AI use cases. Discover why performant, flexible storage building blocks are fundamental to scalable AI systems, and examine the focus areas and strategic direction for a new category of "near-GPU storage" intended to make storage an enabler, rather than a bottleneck, for AI system and workload scaling. Gain insights from Meta systems engineers and Facebook research scientists on the technical considerations and architectural approaches needed to address these emerging storage challenges in AI infrastructure.

Syllabus

Near GPU Storage Requirements for Accelerating Storage to Scale AI Workloads

Taught by

Open Compute Project

