Overview
Learn how to establish trust and integrity in machine learning systems through tamper-proof metadata records in this conference talk. Explore the critical importance of model provenance and integrity verification in AI systems, addressing the current gap in standardized approaches for verifying model origins and detecting tampering. Discover the OpenSSF Model Signing project, a PKI-agnostic method for creating verifiable claims on ML artifact bundles, and understand how this approach extends beyond model signing to encompass datasets and associated files within a unified manifest. Examine the foundation this creates for comprehensive AI supply-chain solutions that both enhance security and reduce development costs. Investigate practical applications such as querying dataset origins for specific models and identifying models trained on compromised datasets before production deployment. Understand how merging model signing with model cards, SLSA, and AI-BOM information enables powerful metadata analysis using tools like GUAC, establishing the groundwork for advanced AI supply-chain security capabilities.
Syllabus
From Model to Trust: Building Upon Tamper-proof ML Metadata Records - Mihai Maruseac, Google
Taught by
OpenSSF