Building Security Around ML - Adversarial Machine Learning Defense Strategies
AI Engineer via YouTube
Overview
Explore the critical challenges of adversarial machine learning in this 25-minute conference talk that examines why attack methods continue to outpace defensive capabilities despite over a decade of research since 2013. Learn about the growing relevance of adversarial examples in the context of agentic multimodal large language models and discover practical approaches to defending these advanced AI systems. Understand the fundamental reasons why neural networks and other machine learning models remain vulnerable to imperceptible input changes, and gain insights into current defensive strategies from a cybersecurity perspective. The presentation draws from extensive research experience in detecting and defending against attacks on ML systems, providing both theoretical understanding and practical guidance for building more secure machine learning implementations in an era of increasingly sophisticated AI agents.
Syllabus
Building security around ML: Dr. Andrew Davis
Taught by
AI Engineer