Overview
Explore the critical security vulnerabilities in Python's pickle serialization format and its dangerous implications for machine learning systems in this 39-minute conference talk. Learn how malicious actors can exploit pickle files to inject harmful code into ML models, similar to how Agent Smith might tamper with Neo's Kung Fu upload in The Matrix. Discover the mechanics behind these "Betrayal ML" attacks, where seemingly innocent model files can contain hidden malicious payloads that execute when loaded. Examine real-world examples of pickle-based attacks and understand why this serialization method poses such significant risks to AI and ML deployments. Gain insights into emerging detection capabilities and defensive strategies to protect your machine learning infrastructure from these sophisticated supply chain attacks. Master the technical details of how pickle deserialization can be weaponized and develop the knowledge needed to identify and mitigate these threats in your own ML workflows.
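The attack described above hinges on pickle's `__reduce__` protocol: a pickled object can instruct the loader to call an arbitrary callable with arbitrary arguments at load time. The minimal sketch below (class and allow-list names are illustrative, not taken from the talk) shows a harmless payload firing on `pickle.loads`, followed by one common mitigation, a restricted `Unpickler` that allow-lists which globals may be resolved:

```python
import io
import pickle

class MaliciousModel:
    """Stand-in for a tampered 'model file' object (name is hypothetical)."""
    def __reduce__(self):
        # pickle records this callable + args and invokes them on load.
        # Here it is a harmless eval; an attacker could use os.system instead.
        return (eval, ("40 + 2",))

blob = pickle.dumps(MaliciousModel())   # bytes an attacker would ship as a "model"
result = pickle.loads(blob)             # executes eval("40 + 2") during loading
print(result)                           # -> 42

# One defensive strategy: refuse to resolve any global not explicitly allowed.
class SafeUnpickler(pickle.Unpickler):
    ALLOWED = {("builtins", "dict"), ("builtins", "list")}

    def find_class(self, module, name):
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)

try:
    SafeUnpickler(io.BytesIO(blob)).load()
except pickle.UnpicklingError as exc:
    print("defense:", exc)               # builtins.eval is blocked
```

Note that allow-listing only narrows the attack surface; untrusted pickle files are best avoided entirely in favor of data-only formats such as safetensors or ONNX.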
Syllabus
Death by (Python) Pickle: "Betrayal ML" - Kadi McKean & Andy Lewis, ReversingLabs
Taught by
Linux Foundation