Near GPU Storage Requirements for Accelerating Storage to Scale AI Workloads
Open Compute Project via YouTube
Overview
Learn about the critical storage infrastructure requirements needed to support scaling AI workloads in this 19-minute conference presentation from the Open Compute Project. Explore how the evolution and scaling of AI workloads and their underlying hardware create an urgent need for high-performance, flexible storage positioned near GPUs, enabling scalable system designs that accommodate diverse AI use cases. Discover why performant and flexible storage building blocks are fundamental to scalable AI systems, and examine the focus areas and strategic direction for a new category of "near-GPU storage" intended to make storage an enabler rather than a bottleneck for AI system and workload scaling. Gain insights from Meta systems engineers and Facebook research scientists on the technical considerations and architectural approaches needed to address these emerging storage challenges in AI infrastructure.
Syllabus
Near GPU Storage Requirements for Accelerating Storage to Scale AI Workloads
Taught by
Open Compute Project