How Ant Group Achieves Data Security in LLM Inference with Kata Based Confidential Computing
OpenInfra Foundation via YouTube
Overview
Learn how Ant Group uses Kata Containers with Confidential Containers (CoCo) to secure sensitive data during large language model (LLM) inference in this 32-minute conference talk. Explore the role of confidential computing in protecting both model data and user privacy within cloud-native architectures as LLMs become increasingly widespread. Discover solutions to key technical challenges, including application measurement, compatibility issues after disabling file sharing, persistent storage encryption, image-pull performance optimization, securing the Kubernetes control plane and kubectl access, and managing hardware dependencies. Examine HyperGPU, a Trusted Execution Environment (TEE) implementation on commodity hardware that creates secure inference environments to prevent data breaches. Gain insight into how Kata-based confidential computing can significantly enhance data security, and understand its broader applications in cloud-native AI deployments.
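The talk's own configuration is not reproduced here, but in a typical Kata-based CoCo deployment the confidential runtime is selected per pod via a Kubernetes RuntimeClass. A minimal sketch, assuming an illustrative handler name and image (actual handler names depend on the Confidential Containers operator and TEE hardware in use):

```yaml
# RuntimeClass pointing at a Kata handler configured for a TEE.
# The handler name "kata-qemu-coco-dev" is an assumption for this
# sketch; real names vary by CoCo install and hardware (TDX, SEV-SNP).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-cc
handler: kata-qemu-coco-dev
---
# Pod requesting the confidential runtime: the container runs inside a
# hardware-isolated guest VM rather than sharing the host kernel.
apiVersion: v1
kind: Pod
metadata:
  name: llm-inference          # hypothetical workload name
spec:
  runtimeClassName: kata-cc
  containers:
  - name: inference
    image: registry.example.com/llm-inference:latest  # illustrative image
    resources:
      limits:
        memory: "16Gi"
```

With this shape, only pods that opt into the `kata-cc` RuntimeClass pay the TEE overhead; ordinary workloads on the same cluster keep using the default runtime.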
Taught by
OpenInfra Foundation