
Memory Technology Optimized for At-Scale AI Systems - Bandwidth, Capacity, and Connectivity

Open Compute Project via YouTube

Overview

Explore memory technology optimization for large-scale AI systems in this 24-minute conference talk examining bandwidth, capacity, and connectivity requirements. Learn how data-intensive applications and throughput-computing workloads demand efficient data movers, with high-speed interconnects like UCIe becoming standard for die-to-die connections and photonics playing a crucial role in high-speed chip-to-chip links. Discover how memory-optimized architectures balance capacity and bandwidth within at-scale systems, and why peer devices need efficient, unimpeded connectivity while remaining physically serviceable. Follow the exploration of memory-oriented, high-speed interconnect solutions for at-scale AI/ML systems, ranging from internal wide die-to-die interconnects to external serial interconnects such as photonics.

Syllabus

Memory Technology Optimized for At-Scale AI Systems - Bandwidth, Capacity, and Connectivity

Taught by

Open Compute Project

Reviews

Start your review of Memory Technology Optimized for At-Scale AI Systems - Bandwidth, Capacity, and Connectivity
