This 16-minute talk from the Open Compute Project features Nilesh Shah (VP Business Development at Zeropoint Technologies), Angelos Arelakis (CTO at Zeropoint Technologies), and Andy Green (Numem UK) discussing memory solutions for large language model (LLM) inference. Explore how LLM inference is memory-bound, with roughly a 6:1 read-to-write ratio, while current HBM-based GPUs are designed for balanced read/write access patterns. Learn about compression-enabled MRAM memory chiplet subsystems as a potential solution to the specific memory requirements of LLM inference accelerators.
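To see why autoregressive decoding skews so heavily toward reads, here is a minimal back-of-envelope sketch in Python. All model dimensions below (7B parameters, 32 layers, 4096 hidden size, 2048-token context, FP16) are assumed illustrative values, not figures from the talk: per decoded token, every weight and the whole KV cache are read, but only one new K/V entry per layer is written.

```python
# Back-of-envelope read vs. write traffic for one decoded token in
# autoregressive LLM inference (batch size 1). Illustrative values only.

BYTES_PER_ELEM = 2   # FP16
N_PARAMS = 7e9       # assumed 7B-parameter model
N_LAYERS = 32        # assumed layer count
HIDDEN = 4096        # assumed hidden dimension
CONTEXT = 2048       # tokens already held in the KV cache

# At batch size 1, every weight is read once per generated token.
weight_reads = N_PARAMS * BYTES_PER_ELEM

# Attention reads the entire KV cache, but writes back only the
# K and V vectors for the single new token, per layer.
kv_entry = 2 * N_LAYERS * HIDDEN * BYTES_PER_ELEM  # one token's K and V
kv_reads = CONTEXT * kv_entry
kv_writes = kv_entry

reads = weight_reads + kv_reads
writes = kv_writes

print(f"reads:  {reads / 1e9:.1f} GB/token")
print(f"writes: {writes / 1e6:.2f} MB/token")
print(f"read:write ratio ~ {reads / writes:,.0f}:1")
```

Note that this naive single-request ratio comes out far higher than 6:1; the figure cited in the talk presumably reflects measured traffic on real serving workloads, where batching amortizes weight reads across many requests and the cache hierarchy absorbs some accesses. The qualitative point stands either way: decode-phase traffic is read-dominated, which is what the compression-enabled MRAM subsystem targets.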