Improving Transferability of Adversarial Examples with Input Diversity - CAP6412 Spring 2021
University of Central Florida via YouTube
Overview
Explore a lecture on improving the transferability of adversarial examples using input diversity in machine learning. Delve into the objectives, transformations, and related work in this field. Examine the methodology behind the family of Fast Gradient Sign Method (FGSM) attacks and the diverse input patterns approach. Understand the relationships between the different attack variants and learn about attacking ensembles of networks. Review the experimental setup, including attacks on single networks and on ensembles, as well as ablation studies. Gain insights from the NIPS 2017 adversarial competition and draw conclusions on the effectiveness of input diversity in enhancing adversarial example transferability.
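The core idea covered in the lecture can be sketched in a few lines: instead of computing the attack gradient on the raw image, a random resize-and-pad transformation is applied to the input with some probability before each FGSM step. The sketch below is a minimal, hedged illustration, not the paper's reference implementation; the function names (`diverse_input`, `di_fgsm_step`), the low-ratio bound, and the use of nearest-neighbour resizing are assumptions made for brevity.

```python
import numpy as np

def nn_resize(img, size):
    """Nearest-neighbour resize of an HxWxC array to size x size
    (stand-in for a proper image-resize routine)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def diverse_input(x, p=0.5, low_ratio=0.9, rng=None):
    """With probability p, randomly resize x and zero-pad it back to
    its original spatial size (the 'diverse input pattern');
    otherwise return x unchanged."""
    rng = rng or np.random.default_rng()
    if rng.random() >= p:
        return x
    h = x.shape[0]
    rnd = int(rng.integers(int(low_ratio * h), h))   # random smaller size
    small = nn_resize(x, rnd)
    out = np.zeros_like(x)
    top = int(rng.integers(0, h - rnd + 1))          # random pad offsets
    left = int(rng.integers(0, h - rnd + 1))
    out[top:top + rnd, left:left + rnd] = small
    return out

def di_fgsm_step(x, grad_fn, eps_step=2 / 255, p=0.5, rng=None):
    """One iterative-FGSM step where the loss gradient (grad_fn) is
    evaluated on the randomly transformed input rather than on x."""
    g = grad_fn(diverse_input(x, p=p, rng=rng))
    return np.clip(x + eps_step * np.sign(g), 0.0, 1.0)
```

Setting `p=0` recovers the plain iterative FGSM update, which is the "relationships" point the syllabus refers to: the diverse-input variants contain the standard FGSM family as a special case.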
Syllabus
Introduction
Objectives
Transformations
Related Work
Methodology
Family of FGSM
Diverse Input Patterns Methods
Relationships
Attacking Ensemble Networks
Experiment Setup
Attacking Single Networks
Attacking an Ensemble of Networks
Ablation Studies
NIPS 2017 Adversarial Competition
Conclusion
Taught by
UCF CRCV