Overview
Explore how machine learning pipelines can be exploited to backdoor models in this 35-minute conference talk from BSidesLV. Delve into the concept of incubated ML exploits, in which attackers inject backdoors by abusing input-handling bugs in ML tools. Learn how ML model serialization bugs in popular tools can be systematically exploited to construct backdoors, including the development of malicious artifacts such as polyglot and ambiguous files. Discover the contributions made to Fickling, a pickle security tool designed for ML use cases, and the guidelines formulated for security researchers and ML practitioners. Understand how incubated ML exploits represent a new class of threats, one that underscores the need for a comprehensive approach to ML security combining system-security issues with model vulnerabilities.
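As background for the serialization bugs the talk covers: Python's `pickle` format can encode arbitrary function calls that run at load time, which is the core primitive behind pickle-based model backdoors (and the reason tools like Fickling exist). The sketch below is a minimal, benign illustration — a harmless `eval` of an arithmetic expression stands in for the attacker-controlled code a real exploit would embed in a model file.

```python
import pickle

class Payload:
    """A benign stand-in for a malicious object embedded in a model file."""
    def __reduce__(self):
        # __reduce__ tells pickle how to reconstruct this object:
        # "call this callable with these arguments". An attacker would
        # substitute something like os.system here; we use a harmless eval.
        return (eval, ("40 + 2",))

# Serializing the object records the call, not the object's state...
blob = pickle.dumps(Payload())

# ...so simply loading the bytes executes the call.
result = pickle.loads(blob)
print(result)  # the eval ran at load time
```

This is why loading an untrusted pickled model is equivalent to running untrusted code, and why scanning tools inspect the pickle opcode stream rather than calling `pickle.loads` on suspect files.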
Syllabus
Ground Truth, Wed, Aug 7, 12:30 - Wed, Aug 7, CDT
Taught by
BSidesLV