Cloud-Native AI: Wasm in Portable, Secure AI/ML Workloads
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Watch a 29-minute conference talk exploring WebAssembly (Wasm) as a solution for running AI/ML workloads in cloud-native environments. Learn how Wasm enables the execution of open AI models such as Llama 3, Grok by xAI, and Mixtral across diverse cloud and edge platforms while maintaining strong performance. Discover the benefits of combining Rust and WebAssembly for AI/ML applications, with particular emphasis on portability, speed, and security. Through practical demonstrations, examine how AI inference models are deployed with a Wasm runtime in Kubernetes environments, and understand how these workloads can be orchestrated and executed across different devices. Gain insights into current approaches to AI deployment, particularly relevant for cloud-native practitioners and AI/ML enthusiasts seeking to understand modern deployment strategies.
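The talk's Kubernetes deployment step can be sketched as a manifest like the one below. This is a minimal illustration, not the speakers' exact demo: it assumes a cluster node whose containerd is configured with a WasmEdge shim, and the handler name (wasmedge), pod name, and image reference are all hypothetical placeholders.

```yaml
# RuntimeClass telling the kubelet to schedule matching pods onto
# a node runtime that can execute Wasm modules (assumed handler name).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmedge
handler: wasmedge   # must match the shim configured in containerd on the node
---
# Pod running a Wasm-packaged inference workload via that RuntimeClass.
apiVersion: v1
kind: Pod
metadata:
  name: llama-inference   # hypothetical name
spec:
  runtimeClassName: wasmedge
  containers:
  - name: inference
    image: registry.example.com/llama3-wasm:latest  # hypothetical Wasm image
```

Because the workload is a Wasm module rather than a native container image, the same manifest pattern can target any node architecture that has a Wasm runtime installed, which is the portability argument the talk makes.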
Syllabus
Cloud-Native AI: Wasm in Portable, Secure AI/ML Workloads - Miley Fu & Michael Yuan, Second State
Taught by
CNCF [Cloud Native Computing Foundation]