Inference for Interpretable Machine Learning: Feature Importance and Beyond
Centre de recherches mathématiques - CRM via YouTube
Overview
This lecture from the Colloque des sciences mathématiques du Québec (CSMQ) features Genevera Allen from Columbia University discussing the critical challenge of ensuring trustworthiness in machine learning interpretations. Explore how feature importance and interpretability methods can be verified and trusted when making crucial societal, scientific, and business decisions. Learn about Allen's empirical stability study revealing that feature interpretations are generally less reliable than predictions, and discover a new statistical inference framework for quantifying uncertainty in feature importance and higher-order feature patterns. The presentation introduces a distribution-free approach to test whether features significantly contribute to any machine learning model's predictive ability, demonstrated through scientific case studies and illustrative examples. Particularly valuable for researchers and practitioners concerned with trust, transparency, and accountability in machine learning systems.
Syllabus
Genevera Allen: Inference for Interpretable Machine Learning: Feature Importance and Beyond
Taught by
Centre de recherches mathématiques - CRM