How Ant Group Achieves Data Security in LLM Inference with Kata Based Confidential Computing
OpenInfra Foundation via YouTube
Overview
Learn how Ant Group uses Kata Containers-based Confidential Containers (CoCo) to secure sensitive data during Large Language Model inference in this 32-minute conference talk. Explore the critical role of confidential computing in protecting both model data and user privacy within cloud-native architectures as LLMs become increasingly widespread. Discover solutions to key technical challenges, including application measurement, compatibility issues after disabling file sharing, persistent storage encryption, image pull performance optimization, Kubernetes control plane and kubectl security, and hardware dependency management. Examine HyperGPU, a Trusted Execution Environment (TEE) implementation on commodity hardware that creates secure inference environments to prevent data breaches. Gain insight into how Kata-based confidential computing can significantly enhance data security, and understand its broader applications in cloud-native AI deployments.
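As background for the talk's subject: in Kubernetes, Kata-based confidential pods are typically selected through a RuntimeClass, so that a workload's containers run inside a TEE-backed VM instead of a shared host kernel. A minimal sketch follows; the handler name, pod name, and image are illustrative assumptions, not details from the talk.

```yaml
# Hypothetical illustration: the RuntimeClass handler name ("kata-cc")
# and the container image are placeholders, not taken from the talk.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-cc
handler: kata-cc          # must match a handler configured in the node's container runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: llm-inference
spec:
  runtimeClassName: kata-cc   # run the pod inside a Kata confidential VM (TEE)
  containers:
  - name: inference
    image: registry.example.com/llm-inference:latest  # placeholder image
```

With this kind of setup, the confidential guest can attest its measurement before secrets (such as disk encryption keys for persistent storage) are released to it, which is the class of problem the talk addresses.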
Syllabus
How Ant Group Achieves Data Security in LLM Inference with Kata Based Confidential Computing
Taught by
OpenInfra Foundation