Confidential Computing for Scaling Inference Workloads
Confidential Computing Consortium via YouTube
Overview
Learn how confidential computing can be leveraged to scale machine learning inference workloads while maintaining data privacy and security in this 37-minute conference talk by Julian Stephen of the Confidential Computing Consortium. Explore the intersection of confidential computing and AI inference, and see how trusted execution environments and hardware-based security features protect sensitive data during model inference. Discover practical approaches to deploying confidential computing in production inference systems, including performance optimization, scalability challenges, and security guarantees. Examine real-world use cases where confidential computing enables organizations to run AI inference on sensitive data while meeting compliance requirements and preserving data sovereignty. Gain insight into the technical architecture and implementation strategies for confidential inference pipelines, including hardware requirements, software frameworks, and integration patterns with existing ML infrastructure.
Syllabus
Confidential Computing for Scaling Inference Workloads – Julian Stephen
Taught by
Confidential Computing Consortium