Confidential Computing for Scaling Inference Workloads
Confidential Computing Consortium via YouTube
Overview
Learn how confidential computing technologies can be leveraged to scale machine learning inference workloads while maintaining data privacy and security in this 37-minute conference talk by Julian Stephen from the Confidential Computing Consortium. Explore the intersection of confidential computing and AI inference, understanding how trusted execution environments and hardware-based security features can protect sensitive data during model inference operations. Discover practical approaches for implementing confidential computing solutions in production inference systems, including considerations for performance optimization, scalability challenges, and security guarantees. Examine real-world use cases where confidential computing enables organizations to process sensitive data for AI inference while meeting compliance requirements and maintaining data sovereignty. Gain insights into the technical architecture and implementation strategies for deploying confidential computing in inference pipelines, including hardware requirements, software frameworks, and integration patterns with existing ML infrastructure.
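The attestation-gated flow the talk covers, releasing sensitive data for inference only after the trusted execution environment proves its identity, can be sketched in outline. Everything below is a hypothetical illustration: the `AttestationReport` structure, the expected measurement, and the `run_inference` stub stand in for a real TEE SDK (e.g. for SGX, SEV-SNP, or TDX), whose actual APIs and signature-verification chains differ.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical stand-in for a hardware-signed attestation report; a real
# TEE produces and signs this in hardware, rooted in a vendor key.
@dataclass
class AttestationReport:
    measurement: str   # hash of the code/model loaded in the enclave
    signature: str     # would be checked against the vendor's root of trust

# Assumed known-good measurement of the inference service (illustrative).
EXPECTED_MEASUREMENT = hashlib.sha256(b"model-v1").hexdigest()

def verify_report(report: AttestationReport) -> bool:
    # Real verification validates the full hardware signature chain; here
    # we compare only the measurement, to show the gating pattern.
    return report.measurement == EXPECTED_MEASUREMENT

def run_inference(report: AttestationReport, sensitive_input: bytes) -> str:
    # Sensitive data is released to the enclave only after attestation succeeds.
    if not verify_report(report):
        raise PermissionError("enclave attestation failed; refusing to send data")
    # Placeholder for the call into the attested enclave.
    return f"prediction for {len(sensitive_input)} bytes"

report = AttestationReport(measurement=EXPECTED_MEASUREMENT, signature="...")
print(run_inference(report, b"patient-record"))
```

The design point this sketch illustrates is that the data owner, not the infrastructure operator, decides whether the measured environment is trustworthy before any sensitive bytes leave their control.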
Syllabus
Confidential Computing for Scaling Inference Workloads – Julian Stephen
Taught by
Confidential Computing Consortium