Heterogeneous Memory System Architecture for Next-Generation Agentic AI Server Platform

Open Compute Project via YouTube

Overview

Explore a collaborative R&D initiative addressing evolving memory challenges in AI server platforms through this 16-minute conference presentation from the Open Compute Project. Learn how Samsung Electronics, Montage Technology, and H3 Platform are tackling the growing demand for larger memory capacity in LLM training and inference as computational pressure extends beyond GPU systems. Discover the "Lake Tahoe" project, a next-generation AI-HPC server built on the AMD EPYC Venice platform that introduces new memory tiers including SOCAMM, MRDIMM, and CXL technologies. Understand the technical challenges of increased data traffic between CPUs and GPUs, greater memory capacity requirements, faster CPU memory access, and reduced energy consumption during data transfer. Gain insights into forward-looking memory architectures and technologies designed to deliver high-speed, high-capacity memory infrastructure for the evolving demands of agentic AI platforms, presented by industry experts including Jinin So of Samsung Electronics, Geof Findley of Montage Technology, and Brian Pan of H3 Platform.

Syllabus

Heterogeneous Memory System Architecture for Next-gen Agentic AI Server Platform (Samsung, Montage, H3 Platform)

Taught by

Open Compute Project

