Overview
Learn how to build trust and security in machine learning systems through comprehensive lifecycle mapping and transparency frameworks in this 31-minute conference talk from the Linux Foundation. Explore the critical security challenges facing open machine learning models and datasets, including data poisoning attacks, supply chain vulnerabilities, and malicious backdoors hidden in pre-trained models on platforms like Hugging Face. Discover Atlas, an innovative framework that combines open specifications for data and software supply chain provenance, including Coalition for Content Provenance and Authenticity (C2PA) and Supply-chain Levels for Software Artifacts (SLSA), with transparency logs and trusted hardware to create attestable ML pipelines. Examine the three core verification mechanisms of Atlas: cryptographic artifact authentication, hardware-based attestation of ML systems, and comprehensive provenance tracking across ML pipelines. Gain insights into safeguarding all layers of the ML lifecycle and see a practical demonstration of how Atlas integrates multiple open-source tools to build an end-to-end ML lifecycle transparency system that addresses the growing security risks in AI application development.
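The first of the three verification mechanisms, cryptographic artifact authentication, boils down to checking a downloaded model or dataset against a digest recorded in a provenance log at publication time. The sketch below illustrates the general idea only; the function names are hypothetical and this is not Atlas's actual implementation.

```python
import hashlib
import tempfile

# Compute a SHA-256 digest of a file in streaming fashion, so large
# model weight files do not need to fit in memory.
def sha256_digest(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Verify a downloaded artifact against a digest recorded in a
# provenance log when the artifact was published.
def verify_artifact(path, expected_digest):
    return sha256_digest(path) == expected_digest

# Demo: record a digest for a stand-in model file, then verify it.
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as f:
    f.write(b"pretend these bytes are model weights")
    model_path = f.name

recorded = sha256_digest(model_path)   # published alongside the model
ok = verify_artifact(model_path, recorded)

# Simulate tampering (e.g. a backdoored re-upload): verification fails.
with open(model_path, "ab") as f:
    f.write(b"malicious payload")
tampered_ok = verify_artifact(model_path, recorded)

print(ok, tampered_ok)  # True False
```

In practice the recorded digest would come from a signed, append-only transparency log rather than a local variable, which is what prevents an attacker who swaps the artifact from also swapping the expected digest.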
Syllabus
Building Trust in ML: Mapping the Model Lifecycle for ML Integrity and Transparency - Marcela Melara
Taught by
Linux Foundation