YouTube

Enabling Verifiable AI Transparency With Confidential Computing With ManaTEE

OpenSSF via YouTube

Overview

Learn how to implement verifiable AI transparency using confidential computing through ManaTEE, an open-source framework originally developed at TikTok and now a Confidential Computing Consortium project. Discover the fundamentals of confidential computing and its critical role in ensuring integrity during AI model evaluation. Explore how ManaTEE leverages Trusted Execution Environments (TEEs) to deliver verifiable AI transparency, making it possible to evaluate even proprietary, closed-source models while maintaining security. Examine the ManaTEE workflow through demonstrations of its secure, auditable Jupyter Notebook interface, including capabilities for loading benchmarks, running custom evaluation code, and analyzing outputs within a protected environment. Watch a live demonstration of ManaTEE evaluating an AI model to understand how it generates cryptographically verifiable results that balance model confidentiality with transparency requirements. Gain insights into how confidential computing and ManaTEE can strengthen trust, privacy, and transparency in modern AI systems, addressing the growing need for secure data analytics and model evaluation.
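To make the idea of "cryptographically verifiable results" concrete, here is a minimal Python sketch. It is not ManaTEE's actual API: it only illustrates the measurement step, hashing the evaluation code, benchmark inputs, and outputs into a single digest that an auditor can recompute. In a real TEE, such a measurement would additionally be signed with a hardware-backed attestation key.

```python
import hashlib
import json

def measure_evaluation(eval_code: str, benchmark: dict, results: dict) -> str:
    """Return a SHA-256 digest over an evaluation's code, inputs, and outputs.

    Hypothetical helper for illustration only; names and structure are
    assumptions, not ManaTEE's interface.
    """
    record = json.dumps(
        {"code": eval_code, "benchmark": benchmark, "results": results},
        sort_keys=True,  # canonical key order so the digest is reproducible
    )
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

# The evaluator publishes the digest alongside the results; an auditor with
# the same artifacts recomputes it and checks that the two match.
digest = measure_evaluation(
    eval_code="score = correct / total",
    benchmark={"name": "demo-benchmark", "items": 100},
    results={"score": 0.87},
)
recomputed = measure_evaluation(
    eval_code="score = correct / total",
    benchmark={"name": "demo-benchmark", "items": 100},
    results={"score": 0.87},
)
assert digest == recomputed  # identical artifacts yield identical measurements
```

Any change to the code, inputs, or reported results changes the digest, which is what lets a third party detect tampering without ever seeing the proprietary model weights.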

Syllabus

Enabling Verifiable AI Transparency With Confidential Computing With ManaTEE - Yonggil Choi, TikTok

Taught by

OpenSSF

