Explainable ML in the Wild: When Not to Trust Your Explanations
Association for Computing Machinery (ACM) via YouTube
Overview
Dive into a comprehensive tutorial on the limitations and potential pitfalls of explainable machine learning. Explore real-world scenarios in which explanations may be unreliable, presented by Shalmali Joshi, Chirag Agarwal, and Himabindu Lakkaraju of Harvard. Learn to critically evaluate and interpret machine learning explanations and to recognize when to exercise caution in trusting them. Gain insight into the challenges of applying explainable ML in practice and discover strategies for building more robust and trustworthy AI systems. This 91-minute session, part of the FAccT 2021 conference, equips data scientists, researchers, and AI practitioners with essential knowledge for navigating the complex landscape of explainable machine learning in real-world contexts.
Syllabus
Tutorial: Explainable ML in the Wild: When Not to Trust Your Explanations
Taught by
ACM FAccT Conference