
Linux Foundation

Building Trust in ML - Mapping the Model Lifecycle for ML Integrity and Transparency

Linux Foundation via YouTube

Overview

Learn how to build trust and security in machine learning systems through comprehensive lifecycle mapping and transparency frameworks in this 31-minute conference talk from the Linux Foundation. Explore the critical security challenges facing open machine learning models and datasets, including data poisoning attacks, supply chain vulnerabilities, and malicious backdoors hidden in pre-trained models on platforms like Hugging Face.

Discover Atlas, a framework that combines open specifications for data and software supply chain provenance, including the Coalition for Content Provenance and Authenticity (C2PA) and Supply-chain Levels for Software Artifacts (SLSA), with transparency logs and trusted hardware to create attestable ML pipelines. Examine the three core verification mechanisms of Atlas: cryptographic artifact authentication, hardware-based attestation of ML systems, and comprehensive provenance tracking across ML pipelines. Gain insights into safeguarding all layers of the ML lifecycle, and see a practical demonstration of how Atlas integrates multiple open-source tools to build an end-to-end ML lifecycle transparency system that addresses the growing security risks in AI application development.
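To make the first verification mechanism concrete, here is a minimal sketch of cryptographic artifact authentication in general: a producer records a SHA-256 digest of each model artifact in a manifest, and a consumer recomputes the digest before use to detect tampering. This is an illustrative example only, not Atlas's actual implementation; the function names and manifest format are hypothetical.

```python
import hashlib
import json
from pathlib import Path


def digest_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def write_manifest(artifacts: list[Path], manifest: Path) -> None:
    """Record each artifact's digest so consumers can check integrity later."""
    entries = {p.name: digest_file(p) for p in artifacts}
    manifest.write_text(json.dumps(entries, indent=2))


def verify_artifact(path: Path, manifest: Path) -> bool:
    """Recompute a downloaded artifact's digest and compare to the manifest."""
    entries = json.loads(manifest.read_text())
    return entries.get(path.name) == digest_file(path)
```

A real pipeline would additionally sign the manifest (e.g., with Sigstore) and log it in a transparency log, so the manifest itself cannot be silently replaced along with the artifact.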

Syllabus

Building Trust in ML: Mapping the Model Lifecycle for ML Integrity and Transparency - Marcela Melara

Taught by

Linux Foundation
