
Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks

IEEE via YouTube

Overview

Explore the detection and mitigation of backdoor attacks in deep neural networks in this IEEE conference talk. Delve into the lack of transparency in DNNs that makes them vulnerable to hidden triggers overriding normal classification. Learn about a robust and generalizable system for identifying backdoors and reconstructing possible triggers. Discover multiple mitigation techniques, including input filters, neuron pruning, and unlearning. Examine the efficacy of these techniques through extensive experiments on various DNNs and against different backdoor injection methods. Gain insights into the security risks posed by backdoor attacks in applications such as biometric authentication systems and self-driving cars. Understand the key intuitions behind detecting backdoors and the design overview of the detection process. Review experiment setups, backdoor detection performance, and a brief summary of mitigation strategies.
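The key intuition behind detection is that a backdoored label can be reached from any input with an unusually small trigger, so after reverse-engineering a candidate trigger per label, the infected label stands out as an outlier in trigger size. The sketch below is a minimal illustration of that outlier test using the median absolute deviation (MAD) anomaly index with the paper's threshold of 2; the per-label L1 norms are assumed to have already been produced by trigger reverse-engineering, and the numbers are synthetic.

```python
import numpy as np

def anomaly_indices(l1_norms):
    """MAD-based anomaly index for each label's reconstructed trigger size.

    A label whose minimal trigger is far smaller than the rest is a likely
    backdoor target. The 1.4826 factor makes MAD a consistent estimator of
    the standard deviation under a normality assumption.
    """
    x = np.asarray(l1_norms, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return np.abs(x - med) / (1.4826 * mad)

# Hypothetical trigger L1 norms for a 10-class model: label 3 needs a far
# smaller trigger than the others, suggesting an injected backdoor.
norms = [95, 102, 88, 12, 99, 105, 91, 97, 100, 94]
idx = anomaly_indices(norms)
suspects = [i for i, a in enumerate(idx) if a > 2.0]
print(suspects)  # label 3 is flagged
```

A robust statistic like MAD is used rather than mean and standard deviation because the backdoored label itself would otherwise distort the estimate it is being tested against.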

Syllabus

Intro
Neural Networks: Powerful yet Mysterious
How do we test DNNs?
What about untested samples?
Definition of Backdoor
Prior Work on Injecting Backdoor
Defense Goals and Assumptions
Key Intuition of Detecting Backdoor
Design Overview: Detection
Experiment Setup
Backdoor Detection Performance
Brief Summary of Mitigation
One More Thing
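One mitigation summarized in the talk, neuron pruning, removes neurons that fire strongly on trigger-stamped inputs but contribute little on clean inputs. The sketch below is a minimal illustration with synthetic activations for one layer; the array shapes, the planted backdoor neurons, and the pruning count are all illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic activations of 16 neurons over 100 inputs.
clean_act = rng.random((100, 16))
trig_act = clean_act.copy()
trig_act[:, [2, 7]] += 5.0  # assumed backdoor neurons fire on the trigger

# Rank neurons by how much more they activate on triggered vs clean inputs,
# then prune (mask out) the most trigger-sensitive ones.
diff = trig_act.mean(axis=0) - clean_act.mean(axis=0)
order = np.argsort(diff)[::-1]
keep_mask = np.ones(16, dtype=bool)
keep_mask[order[:2]] = False  # prune the top-2 suspicious neurons
print(sorted(order[:2].tolist()))  # the planted backdoor neurons
```

In practice the pruning is applied to a late layer of the real model and stopped once the backdoor's attack success rate drops, trading a small amount of clean accuracy for removal of the trigger behavior.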

Taught by

IEEE Symposium on Security and Privacy

