
Zero to 50 ExaFLOPS in Under a Year - Lessons from the Trenches

Linux Foundation via YouTube

Overview

Explore the journey of scaling AI inference from zero to 50 exaFLOPS in under a year in this keynote by Hagay Lupesko, SVP of AI Inference at Cerebras Systems. Learn about the Cerebras Wafer Scale Engine, the world's largest chip, roughly 50 times larger than a GPU and capable of over 100 petaFLOPS of AI compute, and discover how clusters of these engines power Cerebras Inference, a high-speed AI inference service used by industry leaders including Meta, Mistral, and IBM. Gain insight into the critical lessons learned during this scaling effort: which systems break and when, why system design matters more than model optimizations, and how to build the observability, security frameworks, and organizational culture needed to operate at exascale. Receive unvarnished, practical insights from the trenches of scaling AI inference infrastructure without having to experience the challenges firsthand.

Syllabus

Keynote: Zero to 50 ExaFLOPS in Under a Year: Lessons from the Trenches - Hagay Lupesko

Taught by

Linux Foundation
