Trusted AI Anywhere - Kata Containers and Confidential Containers at Scale
OpenInfra Foundation via YouTube
Overview
Learn how to implement trusted AI solutions at scale using Kata Containers and Confidential Containers in this conference talk. Explore the challenges of running AI on private data while protecting model intellectual property at GPU scale, and understand the Trust Gap through three key pillars: Cryptographic Compute (MPC/HE), Software Sandboxes, and Trusted Execution Environments. Discover why 2025 represents a strategic inflection point, as modern GPUs and Kubernetes infrastructure finally align for secure AI deployment. Examine a Kubernetes-native implementation path featuring RuntimeClass with Kata/Confidential Containers, Dynamic Resource Allocation (DRA) for secure device assignment, the peer-pods architecture, full-stack attestation, and memory-safe implementations. Review flexible deployment models, including on-premises, cloud VM-as-a-Service, and edge computing with Arm CCA. Finally, gain insight into the hardware outlook, covering Blackwell, Arm CCA, and TDISP, for building upstream-first trusted AI solutions across GPU fleets.
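The RuntimeClass-based path the talk describes can be sketched as a minimal pair of Kubernetes manifests: a RuntimeClass that maps to a confidential Kata handler, and a pod that opts into it while requesting a GPU. This is a hedged illustration, not material from the talk itself — the names `kata-cc`, `kata-qemu-tdx`, and the container image are placeholder assumptions; the actual handler name depends on how Kata/Confidential Containers is installed on the cluster.

```yaml
# RuntimeClass selecting a confidential Kata handler
# (handler name is illustrative; it varies by installation)
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-cc            # hypothetical class name
handler: kata-qemu-tdx     # assumed TEE-backed Kata handler
---
# Pod opting into the confidential runtime and requesting a GPU
apiVersion: v1
kind: Pod
metadata:
  name: confidential-inference
spec:
  runtimeClassName: kata-cc            # run inside the TEE-backed sandbox
  containers:
  - name: model-server
    image: registry.example.com/model-server:latest  # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1              # GPU assignment; DRA offers a newer path
```

With manifests like these applied via `kubectl apply -f`, workloads are scheduled into hardware-isolated VMs rather than ordinary runc containers, which is the foundation the talk builds on for attestation and secure GPU assignment.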
Syllabus
Trusted AI Anywhere: Kata Containers & Confidential Containers at Scale
Taught by
OpenInfra Foundation