
YouTube

Accelerator Trends and HBM Outlook Following Advances in Large-Scale AI Model Training and Inference Technology

SK AI SUMMIT 2024 via YouTube

Overview

Learn about accelerator trends and HBM prospects for large-scale AI model training and inference in this technical talk from SK AI SUMMIT 2024. Explore the challenges of massively parallel processing on GPUs for training large AI models, which require tens of thousands of high-performance GPUs to complete training within one to two months. Gain insight into commercial solutions to these challenges and understand the relationship between accelerators and HBM. Examine the cost-accuracy relationship of AI model inference, as exemplified by OpenAI's o1, and discover why inference costs are expected to rise rapidly. Delve into the cost challenges of large-scale AI model inference and the significance of HBM bandwidth and capacity, and learn about the latest server-side AI accelerator architectures. Benefit from the expertise of Professor Seung-Joo Yoo of Seoul National University, who brings extensive experience from TIMA Laboratory, Samsung Electronics, and Facebook. He currently leads research in computing and memory architecture while fostering talent for the global semiconductor and system industry.
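The overview's point about HBM bandwidth can be made concrete with a rough back-of-envelope estimate (a sketch with illustrative numbers, not figures from the talk): during autoregressive decoding, every generated token requires streaming all model weights from memory, so achievable tokens per second is capped by memory bandwidth divided by model size in bytes.

```python
# Rough upper bound on decode throughput for a memory-bandwidth-bound LLM.
# Illustrative numbers only (not from the talk): a 70B-parameter model in
# FP16, served from an H100-class aggregate HBM bandwidth.

def max_tokens_per_second(params_billions: float,
                          bytes_per_param: float,
                          bandwidth_gb_s: float) -> float:
    """Each decoded token must read all weights from memory once,
    so bandwidth / model-size caps the token rate."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# 70B params x 2 bytes (FP16) = 140 GB of weights;
# 3350 GB/s of HBM bandwidth -> at most ~24 tokens/s per sequence.
print(round(max_tokens_per_second(70, 2, 3350), 1))
```

This is why the talk ties inference cost directly to HBM: batching amortizes the weight reads, but per-user latency and cost remain dominated by how fast weights (and KV cache) can be pulled from memory.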

Syllabus

Accelerator Trends and HBM Outlook Following Advances in Large-Scale AI Model Training and Inference Technology | Seung-Joo Yoo, Seoul National University

Taught by

SK AI SUMMIT 2024

