
Loading Models, Launching Shells - Abusing AI File Formats for Code Execution

DEFCONConference via YouTube

Overview

Explore how trusted AI file formats can be weaponized for malicious code execution in this 19-minute DEF CON 33 conference talk. Discover the hidden dangers lurking in seemingly safe file formats like .onnx, .h5, and .npz that are commonly used in machine learning workflows. Learn how formats including ONNX, HDF5, Feather, YAML, JSON, and NPZ can be exploited to deliver reverse shells and stealth payloads without requiring exploits or unsafe configurations, relying solely on the default behavior of widely used ML libraries such as onnx, h5py, pyarrow, and numpy. Witness a live demonstration showing a healthcare chatbot silently executing malicious code when these formats are deserialized, with no user interaction or security alerts. Gain practical insights into payload delivery techniques, threat detection methods, and hardening strategies to protect AI systems against these sophisticated attacks that transform trusted data containers into malware carriers.

Syllabus

DEF CON 33 - Loading Models, Launching Shells: Abusing AI File Formats for Code Execution - C Parzian

Taught by

DEFCONConference

