Overview
Explore the critical security vulnerabilities in Python's pickle serialization format and its dangerous implications for machine learning systems in this 39-minute conference talk. Learn how malicious actors can exploit pickle files to inject harmful code into ML models, similar to how Agent Smith might tamper with Neo's Kung Fu upload in The Matrix. Discover the mechanics behind these "Betrayal ML" attacks, where seemingly innocent model files can contain hidden malicious payloads that execute when loaded. Examine real-world examples of pickle-based attacks and understand why this serialization method poses such significant risks to AI and ML deployments. Gain insights into emerging detection capabilities and defensive strategies to protect your machine learning infrastructure from these sophisticated supply chain attacks. Master the technical details of how pickle deserialization can be weaponized and develop the knowledge needed to identify and mitigate these threats in your own ML workflows.
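The attack mechanics and defenses described above can be sketched in a few lines of Python. This is an illustrative example, not code from the talk: the `Payload` and `SafeUnpickler` names are hypothetical, and a real attack payload would invoke something like `os.system` rather than a harmless expression.

```python
import io
import pickle

# --- Attack mechanism ----------------------------------------------------
# pickle lets any object define __reduce__, which returns a callable plus
# arguments that the *loader* will invoke. This is the core primitive behind
# pickle-based model attacks: a "model file" is really a program.
class Payload:
    def __reduce__(self):
        # Harmless stand-in for attacker code; eval runs at load time.
        return (eval, ("21 * 2",))

malicious_bytes = pickle.dumps(Payload())

# Merely *loading* the bytes executes the payload -- no method call needed.
obj = pickle.loads(malicious_bytes)
print(obj)  # 42 -- the result of the attacker-chosen expression

# --- One defensive strategy ----------------------------------------------
# A restricted Unpickler (an approach documented in Python's pickle docs)
# whitelists which globals an untrusted pickle is allowed to resolve.
class SafeUnpickler(pickle.Unpickler):
    ALLOWED = {("builtins", "list"), ("builtins", "dict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

try:
    SafeUnpickler(io.BytesIO(malicious_bytes)).load()
except pickle.UnpicklingError as e:
    print(e)  # blocked global: builtins.eval
```

Note that an allow-list unpickler only mitigates, not eliminates, the risk; for untrusted ML model files, formats that store only tensors (rather than arbitrary objects) avoid the problem entirely.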
Syllabus
Death by (Python) Pickle: "Betrayal ML" - Kadi McKean & Andy Lewis, ReversingLabs
Taught by
Linux Foundation