Overview
Learn how to protect AI models from sophisticated side-channel attacks that can extract proprietary neural networks directly from edge hardware through this 19-minute conference talk. Discover a groundbreaking defense technique that leverages stochastic training to create multiple model versions and unpredictably switch between them during inference, achieving approximately 50% reduction in side-channel leakage with minimal accuracy impact. Explore practical demonstrations on real devices from major manufacturers including Nvidia, ARM, NXP, and Google's Coral TPUs, and understand how attackers can exploit hardware vulnerabilities to steal valuable AI intellectual property. Master the technical implementation of layer-wise parameter selection that provides quadratic security improvements over whole-model switching approaches, and see how clever repurposing of ReLU activation functions enables conditional logic on edge TPUs that lack native control flow support. Gain insights into protecting AI deployments without requiring new chip designs or proprietary compiler access, making this security approach immediately applicable to existing edge AI systems in potentially hostile environments.
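The two core ideas described above — picking layer parameters at random from several stochastically trained versions, and using ReLU as a branchless selector on hardware without native control flow — can be sketched roughly as follows. This is a hypothetical illustration, not the talk's actual implementation; the function names, shapes, and the NumPy setting are assumptions.

```python
import numpy as np

rng = np.random.default_rng()

def sample_model(versions_per_layer):
    """Pick one weight version per layer, independently and at random,
    for each inference pass.

    `versions_per_layer` is a list where entry i stacks K trained
    versions of layer i's weights, shape (K, out_dim, in_dim).
    Layer-wise selection yields K**L distinct composite models for L
    layers, versus only K when switching whole models at once -- the
    combinatorial blow-up behind the security improvement the talk
    describes.
    """
    return [v[rng.integers(len(v))] for v in versions_per_layer]

def relu(x):
    return np.maximum(x, 0.0)

def relu_select(c, a, b):
    """Branchless multiplexer built from ReLU.

    For a selector c in {0.0, 1.0}: relu(c) == c and relu(1 - c) == 1 - c,
    so this returns `a` when c == 1 and `b` when c == 0 -- conditional
    logic expressed with only ops an edge TPU natively supports.
    """
    return relu(c) * a + relu(1.0 - c) * b
```

At inference time one would call `sample_model` (or `relu_select` with a random selector feeding the weight choice) on every pass, so the power and electromagnetic signature observed by an attacker varies unpredictably between runs.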
Syllabus
Stochastic Training for Side-Channel Resilient AI
Taught by
EDGE AI FOUNDATION