Overview
Learn about implementing zero trust principles for machine learning models in this 20-minute conference talk from MLOps World. Discover critical security vulnerabilities in ML systems through research from Protect AI's collaboration with Hugging Face, which scanned 1.41 million model repositories and identified 352,000 unsafe or suspicious issues across 51,700 ML models. Explore the most prevalent threat to ML models, Model Serialization Attacks (MSAs), and understand how these attacks compromise model safety.

Examine real-world scan results that reveal the current state of ML model security across popular repositories. Gain insights into developing a zero trust approach to ML model deployment and usage, essential for protecting production ML systems from sophisticated attacks. Access practical guidance on verifying ML model safety before use, supported by findings from Protect AI's publicly available Insights DB.
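To make the threat concrete, here is a minimal sketch (not code from the talk) of how a Model Serialization Attack works with Python's `pickle`, which many ML model formats wrap: deserializing a file can execute arbitrary code via `__reduce__`. The `MaliciousModel` class and `safe_loads` helper below are illustrative names, and `print` stands in for a real payload such as `os.system`; the restricted `Unpickler` follows the pattern documented in the `pickle` module for blocking global lookups.

```python
import io
import pickle

# Illustrative only: why pickle-based model files enable Model
# Serialization Attacks, and one zero-trust countermeasure.

class MaliciousModel:
    def __reduce__(self):
        # pickle.load() calls the returned callable with these args;
        # a real attack would use os.system, print() is a harmless stand-in.
        return (print, ("arbitrary code ran during model load",))

payload = pickle.dumps(MaliciousModel())  # what an attacker would publish

# Naive consumer: "loading the model" silently executes the payload.
pickle.loads(payload)

# Zero-trust consumer: refuse to resolve ANY global, so the payload's
# callable can never be looked up, let alone executed.
class ZeroTrustUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    return ZeroTrustUnpickler(io.BytesIO(data)).load()

safe_loads(pickle.dumps({"weights": [0.1, 0.2]}))  # plain data is fine
try:
    safe_loads(payload)
except pickle.UnpicklingError as exc:
    print("rejected:", exc)
```

In practice, blanket-blocking globals is too strict for real model files, which is why scanners like the ones discussed in the talk inspect serialized opcodes instead, but the sketch shows the core zero-trust idea: never let untrusted bytes choose what code runs.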
Syllabus
Zero Trust For Machine Learning Models
Taught by
MLOps World: Machine Learning in Production