Enabling Composable Scalable Memory for AI Inference with CXL Switch
Open Compute Project via YouTube
Overview
Learn how CXL 2.0 switch technology enables composable, scalable memory systems for AI inference workloads in this technical presentation from Xconn Technologies and H3 Platform executives. Explore the architecture, configuration, and components of a real composable memory system designed to address the substantial memory demands of large language models (LLMs). Discover the working mechanisms behind CXL 2.0-based systems becoming available in 2024, examine their performance characteristics, and understand how these systems enhance AI inference performance through practical demonstrations and architectural insights.
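As background on how such pooled memory typically appears to software, CXL-attached memory expanders are generally exposed by Linux as CPU-less ("memory-only") NUMA nodes that an inference runtime can bind to explicitly. The presentation does not prescribe any particular tooling, so the sketch below is only an illustrative probe, assuming standard Linux sysfs paths, that lists NUMA nodes and flags memory-only ones which may correspond to CXL-attached capacity.

```python
# Illustrative sketch (not from the talk): enumerate NUMA nodes via sysfs and flag
# CPU-less nodes, which is how CXL memory expanders commonly surface on Linux.
# Node layout and whether a memory-only node is actually CXL-backed vary by platform.
import os
import re

NODE_DIR = "/sys/devices/system/node"

def list_numa_nodes():
    """Return (node_id, has_cpus, mem_total_kb) for each NUMA node."""
    nodes = []
    for entry in sorted(os.listdir(NODE_DIR)):
        m = re.fullmatch(r"node(\d+)", entry)
        if not m:
            continue
        node_path = os.path.join(NODE_DIR, entry)
        # A memory-only node (e.g., a CXL expander) reports an empty cpulist.
        with open(os.path.join(node_path, "cpulist")) as f:
            has_cpus = f.read().strip() != ""
        mem_total_kb = 0
        with open(os.path.join(node_path, "meminfo")) as f:
            for line in f:
                if "MemTotal" in line:
                    mem_total_kb = int(line.split()[-2])
                    break
        nodes.append((int(m.group(1)), has_cpus, mem_total_kb))
    return nodes

if __name__ == "__main__":
    for node_id, has_cpus, mem_kb in list_numa_nodes():
        kind = "cpu+mem" if has_cpus else "memory-only (possibly CXL)"
        print(f"node{node_id}: {kind}, {mem_kb // 1024} MiB")
```

A runtime could then place large, bandwidth-tolerant data such as model weights or KV cache on such a node, for example by launching the inference process under numactl with an explicit memory binding.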
Syllabus
Enabling Composable Scalable Memory for AI Inference with CXL Switch
Taught by
Open Compute Project