
Next Gen AI HPC Server Performance with CXL3.1 Tiered Memory and MRDIMM Solution

Open Compute Project via YouTube

Overview

Explore next-generation AI and HPC server performance optimization through CXL3.1 tiered memory and MRDIMM solutions in this 15-minute conference talk. Learn how high memory capacity directly impacts deep learning model performance and computational speed, with a detailed examination of CPU extended-memory (far memory) paging using CXL NUMA nodes. Discover market requirements for AI-HPC servers, analyze memory capacity and performance considerations of CXL NUMA node implementations, and examine MRDIMM solution architectures. Review real-world applications, including Meta's CacheLib and Alibaba's KV cache, to understand big-memory use cases for CXL extended memory solutions and their proven value propositions. Understand why AI and HPC servers require increased memory capacity to handle vast datasets and complex computations, particularly for memory-intensive applications on HPC servers (in-memory databases, DLRM) and AI servers (training and inference workloads).
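The talk's central idea is tiered memory: keep hot data in the small, fast near tier (local DRAM/MRDIMM) and demote cold pages to the large, slower far tier (a CXL memory expander, typically exposed to the OS as a CPU-less NUMA node). As a rough intuition aid only (this toy model is not from the talk, and real kernels use hardware-assisted hot-page tracking rather than a simple LRU), the placement policy can be sketched as:

```python
from collections import OrderedDict

class TieredMemory:
    """Toy model of two-tier memory placement.

    A small, fast 'near' tier backed by a large, slow 'far' tier
    (e.g. CXL extended memory seen as a far NUMA node). Accessed
    pages are promoted to the near tier; when it overflows, the
    least-recently-used page is demoted to the far tier.
    """

    def __init__(self, near_capacity):
        self.near = OrderedDict()   # page -> placeholder, kept in LRU order
        self.far = {}               # demoted (cold) pages
        self.near_capacity = near_capacity

    def access(self, page):
        """Touch a page; return which tier served it: 'near', 'far', or 'miss'."""
        if page in self.near:
            self.near.move_to_end(page)          # near-tier hit: refresh LRU position
            return "near"
        tier = "far" if page in self.far else "miss"
        self.far.pop(page, None)                 # promote out of the far tier
        self.near[page] = True                   # place (or allocate) in near tier
        if len(self.near) > self.near_capacity:
            cold, _ = self.near.popitem(last=False)  # demote the coldest page
            self.far[cold] = True
        return tier
```

For example, with `near_capacity=2`, touching pages A, B, then C demotes A to the far tier; a later touch of A is a slower far-tier hit that promotes it back. The hypothetical `access` return value stands in for the latency difference the talk attributes to near versus far (CXL) memory.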

Syllabus

Next Gen AI HPC Server Performance with CXL3.1 Tiered Memory and MRDIMM Solution

Taught by

Open Compute Project

