Memory Expansion Requirements for AI Systems in Hyperscale Data Centers
Open Compute Project via YouTube
Overview
Learn about evolving memory requirements for AI systems in hyperscale data centers in this 33-minute technical presentation from Meta AI Systems Technologist Manoj Wadekar and Microsoft Principal Architect Samir Rajadnya. Explore how data center infrastructure is shifting from traditional CPU-centric platforms, optimized for scale-out stateless applications, to GPU- and accelerator-focused systems that support next-generation AI applications. Discover the innovations needed to address growing memory demands as hyperscale facilities adapt to increasingly complex artificial intelligence workloads and zettabyte-scale storage requirements.
Syllabus
General Compute and AI Needs for Memory Expansion for Hyperscalers
Taught by
Open Compute Project