XAI in Cybersecurity - Understanding the Why Behind AI Data-Driven Decisions
Overview
Explore the critical importance of explainable artificial intelligence (XAI) in cybersecurity through this 26-minute conference talk on the growing need for transparency in AI-driven security decisions. Learn why traditional AI models in security operate as "black boxes," and discover how explainable AI techniques can illuminate the reasoning behind critical security alerts and vulnerability assessments. Examine real-world use cases showing how XAI enables faster incident response and builds trust in AI systems — particularly relevant as cloud intrusions have risen 75% year-over-year and 84% of company codebases contain vulnerabilities in open-source components. Understand the challenge of trusting AI models that flag critical vulnerabilities without clear explanations, and explore practical approaches to building transparent, trustworthy, and resilient AI-driven security systems that can explain their decision-making and help security teams better understand and respond to threats.
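To make the talk's core idea concrete — an explanation should show *which inputs* drove a security decision — here is a minimal sketch in the spirit of additive attribution methods such as SHAP. The feature names, weights, and threshold are hypothetical, invented for illustration; a real system would learn them from data:

```python
# Toy linear alert scorer: score = sum(weight[f] * value[f]); flag if score
# exceeds a threshold. Because the model is linear, each feature's signed
# contribution IS its explanation -- the simplest case of the additive
# attributions that methods like SHAP generalize to complex models.
# All feature names, weights, and the threshold below are hypothetical.
WEIGHTS = {
    "failed_logins": 0.8,   # repeated authentication failures
    "bytes_out_mb": 0.05,   # outbound data volume
    "off_hours": 1.2,       # activity outside business hours
    "known_ip": -1.5,       # source IP seen before (reduces suspicion)
}
THRESHOLD = 2.0

def explain_alert(event: dict) -> dict:
    """Return the score, the flag decision, and per-feature contributions."""
    contributions = {f: WEIGHTS[f] * event.get(f, 0.0) for f in WEIGHTS}
    score = sum(contributions.values())
    return {
        "score": score,
        "flagged": score > THRESHOLD,
        # Sort so analysts see the strongest drivers of the decision first.
        "contributions": dict(
            sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
        ),
    }

# Example event: 4 failed logins, 10 MB exfiltrated, off-hours, unknown IP.
event = {"failed_logins": 4, "bytes_out_mb": 10.0, "off_hours": 1, "known_ip": 0}
report = explain_alert(event)
print(report["flagged"], report["contributions"])
```

Instead of an opaque "alert fired," an analyst sees that failed logins and off-hours activity dominated the score — the kind of "why" the talk argues security teams need before they can trust and act on AI-driven alerts.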
Syllabus
18. Rashmi Nagpal: XAI in Cybersecurity: Understanding the "Why" behind the AI data-driven decisions
Taught by
x33fcon