Overview
Explore the complexities of and solutions for implementing confidential AI systems that protect user privacy while leveraging GPU technology in this 47-minute conference talk from the Linux Foundation. Discover how individual open source technologies can be integrated to configure, deploy, and manage confidential Trusted Execution Environments (TEEs), and understand the challenges of combining multiple components into coherent, secure, and efficient solutions. Learn how attestation and key management methodologies vary across use cases and stakeholders, including cloud, model, and service owners.

Examine the additional complexity introduced by confidential GPUs, particularly the increased load times that affect services serving multiple models. Understand the key components and design decisions essential for enabling confidential AI, including the implications of different trust models and the performance tradeoffs they entail. Follow a detailed end-to-end demonstration of deploying an inference service on Nvidia H100 GPUs and an AMD-based TEE, with a focus on protecting both the model and the user input. Gain insight into why no single confidential AI solution fits all scenarios, and determine which design approach works best for specific requirements.
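The attestation-and-key-management pattern the talk discusses can be sketched in a few lines: a key broker (acting for the model owner) releases the model-decryption key only after the TEE's attestation evidence matches an expected workload measurement. The names below (`KeyBroker`, `release_key`, the evidence dictionary) are illustrative assumptions, not from any real attestation SDK; in practice the evidence would be a signed report from, e.g., AMD SEV-SNP or the NVIDIA GPU attestation service.

```python
import hashlib
import hmac
import secrets

# Measurement the model owner expects from the approved inference workload.
# (Hypothetical value; real measurements come from the TEE hardware report.)
EXPECTED_MEASUREMENT = hashlib.sha256(b"inference-service-v1").hexdigest()


class KeyBroker:
    """Releases the model key only to a TEE that proves the expected state."""

    def __init__(self, model_key: bytes):
        self._model_key = model_key

    def release_key(self, evidence: dict) -> bytes:
        # A real verifier would check the report's signature chain back to
        # the silicon vendor; here we only compare the claimed measurement,
        # using a constant-time comparison to avoid timing side channels.
        claimed = evidence.get("measurement", "")
        if not hmac.compare_digest(claimed, EXPECTED_MEASUREMENT):
            raise PermissionError("attestation failed: measurement mismatch")
        return self._model_key


broker = KeyBroker(model_key=secrets.token_bytes(32))
key = broker.release_key({"measurement": EXPECTED_MEASUREMENT})
```

Which party runs the broker, and which measurements it accepts, is exactly where the talk's trust models diverge: a cloud owner, model owner, and service owner may each demand different evidence before keys are released.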
Syllabus
Towards Confidential AI for the Masses! - Julian Stephen & Michael Le, IBM
Taught by
Linux Foundation