Optical CXL for Disaggregated Compute Architectures in AI and LLM Processing
Open Compute Project via YouTube
Overview
Learn how optical CXL technology is reshaping datacenter architectures for AI and Large Language Model (LLM) processing in this 13-minute technical presentation from Ron Swartzentruber, Director of Engineering at Lightelligence. Explore how CXL-capable processors, accelerators, switches, and memories can be combined to build large-scale systems that connect compute arrays to extensive memory resources spanning multiple datacenter racks. The talk covers the critical role of memory bandwidth and latency in AI model training, and explains how CXL over optics addresses these challenges while enabling memory pooling for improved performance. It closes with real-world results on latency improvements, reach advantages over copper, and decode throughput, demonstrated through LLM inference applications.
Syllabus
Optical CXL for disaggregated compute architectures
Taught by
Open Compute Project