
An Optimized Linux Stack for GenAI Workloads

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Explore how to optimize Linux environments for generative AI workloads in this keynote presentation from the Cloud Native Computing Foundation. Learn about the challenges of running GenAI applications on Linux, particularly the complexity of AI runtime toolchains and dependencies across heterogeneous GPU devices. These challenges are most acute in containerized environments, where host and guest operating systems must maintain compatible GPU drivers and software stacks. Discover how CNCF's Flatcar Linux project addresses them with an immutable system design optimized for both host and guest systems, including support for cross-platform, cross-GPU WebAssembly workloads. Understand how the WasmEdge WebAssembly runtime and its LlamaEdge framework support diverse AI models, making Flatcar Linux a strong candidate for containerized GenAI deployments. Gain insight into Flatcar's fundamentals and WebAssembly runtime support, examine WasmEdge's portable AI model capabilities and inference applications, and see a complete GenAI application demonstrated on Flatcar in both GPU and CPU environments.
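To illustrate the portability the talk describes, a LlamaEdge-style inference server can be started with a single WasmEdge command. The following is a sketch based on the public LlamaEdge quick start, not the speaker's exact demo; it assumes the WasmEdge WASI-NN GGML plugin is already installed, and the model URL, file name, and prompt-template value are illustrative:

```shell
# Fetch a GGUF-format model (repository and file name are illustrative)
curl -LO https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/resolve/main/llama-2-7b-chat.Q5_K_M.gguf

# Preload the model into WasmEdge's WASI-NN backend and start the
# inference server; the same .wasm binary runs unchanged on GPU or
# CPU-only hosts, with the plugin selecting the available backend
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:llama-2-7b-chat.Q5_K_M.gguf \
  llama-api-server.wasm --prompt-template llama-2-chat
```

Because the application ships as a portable .wasm binary rather than a container image pinned to a specific GPU driver stack, a host such as Flatcar needs only the WasmEdge runtime and its plugin, not a matching in-container CUDA toolchain.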

Syllabus

Keynote: An Optimized Linux Stack for GenAI Workloads - Michael Yuan, WasmEdge

Taught by

CNCF [Cloud Native Computing Foundation]

