
How Ant Group Achieves Data Security in LLM Inference with Kata Based Confidential Computing

OpenInfra Foundation via YouTube

Overview

Learn how Ant Group uses Kata-based Confidential Containers (CoCo) to secure sensitive data during Large Language Model inference in this 32-minute conference talk. Explore the critical role of confidential computing in protecting both model data and user privacy within cloud-native architectures as LLMs become increasingly widespread. Discover solutions to key technical challenges, including application measurement, compatibility issues after disabling file sharing, persistent storage encryption, image-pull performance optimization, Kubernetes control plane and kubectl security, and hardware dependency management. Examine HyperGPU, a Trusted Execution Environment (TEE) implementation on commodity hardware that creates secure inference environments to prevent data breaches. Gain comprehensive insight into how Kata-based confidential computing can significantly enhance data security, and understand its broader applications in cloud-native AI deployments.
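In Kubernetes, Kata-based confidential computing of the kind the talk describes is typically opted into through a RuntimeClass, so that a pod's containers launch inside a hardware-isolated confidential VM rather than a shared kernel. The sketch below is illustrative only: the RuntimeClass name `kata-qemu-tdx` and the container image are assumptions based on common Confidential Containers deployments, not details from the talk.

```yaml
# Hypothetical pod spec: run an LLM inference server inside a Kata
# confidential VM. The RuntimeClass name depends on the cluster's
# CoCo installation (kata-qemu-tdx is a common name on Intel TDX hosts).
apiVersion: v1
kind: Pod
metadata:
  name: llm-inference-tee
spec:
  runtimeClassName: kata-qemu-tdx        # assumed CoCo runtime class
  containers:
  - name: inference
    image: registry.example.com/llm-server:latest   # placeholder image
    resources:
      limits:
        memory: "16Gi"                   # guest VM memory for model weights
```

With such a spec, the scheduler places the pod on a node offering that RuntimeClass, and the Kata runtime boots the container in a TEE-backed guest, so model weights and user prompts are processed in encrypted memory isolated from the host.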

Syllabus

How Ant Group Achieves Data Security in LLM Inference with Kata Based Confidential Computing

Taught by

OpenInfra Foundation

