Overview
Learn to identify and analyze security risks unique to AI and machine learning systems in this conference talk, which addresses the growing need for AI-specific threat modeling. Discover why traditional threat modeling frameworks fall short against emerging AI risks such as data poisoning, model inversion, and adversarial inputs, which can compromise not only security but also fairness, reliability, and user trust. Explore both conventional security frameworks and AI-specific methodologies like MAESTRO for systematically evaluating threats in machine learning environments. Gain hands-on knowledge of practical techniques and open-source tools for assessing AI systems against model tampering, data leakage, and adversarial attacks. The session covers the fundamentals of AI threat modeling without requiring advanced technical prerequisites, making it accessible both to security engineers looking to expand their expertise and to developers building AI-powered products who need to understand the security implications of their work.
Syllabus
Spandana Gorantla - Navigating the AI Minefield: Threat Modeling for Emerging AI Risks
Taught by
LASCON