Explainable AI to Analyze Internal Decision Mechanism of Deep Neural Networks
Institute for Pure & Applied Mathematics (IPAM) via YouTube
Overview
Explore the latest advancements in explainable artificial intelligence (XAI) for analyzing the internal decision mechanisms of deep neural networks in this 54-minute conference talk. Delve into the importance of ensuring the safe use of complex AI systems in critical domains such as the military, finance, human resources, and autonomous driving. Discover recent approaches to clarifying the internal decisions of deep neural networks, methods for automatically correcting unreliable internal nodes, and the reasons behind unstable nodes in some networks. Gain valuable insights into the field of XAI and its applications in enhancing the reliability and transparency of AI systems.
Syllabus
Jaesik Choi - Explainable AI to Analyze Internal Decision Mechanism of Deep Neural Networks
Taught by
Institute for Pure & Applied Mathematics (IPAM)