Memory Expansion Requirements for AI Systems in Hyperscale Data Centers
Open Compute Project via YouTube
Overview
Learn about evolving memory requirements for AI systems in hyperscale data centers in this 33-minute technical presentation from Meta AI Systems Technologist Manoj Wadekar and Microsoft Principal Architect Samir Rajadnya. Explore how data center infrastructure is shifting from traditional CPU-centric platforms, optimized for scale-out stateless applications, to GPU/accelerator-focused systems built for next-generation AI applications. Discover the innovations needed to address growing memory demands as hyperscale facilities adapt to increasingly complex AI workloads and zettabyte-scale storage requirements.
Syllabus
General compute and AI Needs for Memory Expansion for Hyperscalers
Taught by
Open Compute Project