Memory Technology Optimized for At-Scale AI Systems - Bandwidth, Capacity, and Connectivity
Open Compute Project via YouTube
Overview
Explore memory technology optimization for large-scale AI systems in this 24-minute conference talk examining bandwidth, capacity, and connectivity requirements. Learn how data-intensive applications and throughput-computing workloads demand efficient data movers, with high-speed interconnects such as UCIe becoming standard for die-to-die connections and photonics playing a crucial role in high-speed chip-to-chip links. Discover how memory-optimized architectures balance capacity and bandwidth within at-scale systems, and understand the need for efficient, unimpeded connectivity between peer devices while maintaining physical serviceability. Follow the exploration of memory-oriented, high-speed interconnect solutions, ranging from internal wide die-to-die interconnects to external serial interconnects such as photonics, designed specifically for at-scale AI/ML systems.
Syllabus
Memory Technology Optimized for At-Scale AI Systems - Bandwidth, Capacity, and Connectivity
Taught by
Open Compute Project