

Optimizing AI Inference with ML Compilers and Hardware

Data Science Conference via YouTube

Overview

Explore the critical role of machine learning compilers and hardware innovations in optimizing AI inference in this 34-minute conference talk from DSC EUROPE 24. Milan Stankic delves into the growing complexity of large language models (LLMs) and addresses the increasing demand for efficient inference solutions. Learn how ML compilers translate high-level model descriptions into optimized hardware instructions to maximize performance and efficiency. Understand the challenges posed by LLMs, including high computational and memory requirements, and discover strategies to overcome these limitations through hardware advancements and compiler optimizations. The presentation highlights the latest trends in AI inference, covering specialized hardware like GPUs, NPUs, and custom accelerators that are shaping the future of AI deployment. This talk was delivered on November 20th at DSC EUROPE 24 in Belgrade.
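The talk's central idea, a compiler taking a high-level model description and lowering it to optimized, hardware-specific kernels, can be illustrated with a minimal sketch. This example assumes PyTorch 2.x and uses torch.compile with its default Inductor backend; the toy model and settings are illustrative and are not taken from the presentation itself.

```python
import torch
import torch.nn as nn

# A small stand-in for an LLM block: the compiler sees this high-level
# description and lowers it to fused, hardware-specific kernels.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 1024),
).eval()

# torch.compile traces the model and hands it to a backend that emits
# optimized code for the target hardware (CPU or GPU).
compiled = torch.compile(model, mode="reduce-overhead")

with torch.no_grad():
    x = torch.randn(8, 1024)
    # The first call triggers compilation; subsequent calls reuse the
    # compiled kernels, which is where inference-time savings come from.
    out = compiled(x)
    print(out.shape)
```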

Syllabus

Optimizing AI Inference with ML Compilers & Hardware | Milan Stankic | DSC EUROPE 24

Taught by

Data Science Conference

