Towards Compositional Interpretability for XAI
Institute for Pure & Applied Mathematics (IPAM) via YouTube
Overview
Watch a 53-minute lecture from IPAM's Naturalistic Approaches to Artificial Intelligence Workshop in which Sean Tull of Quantinuum presents a mathematical framework for AI model interpretability. The talk shows how category theory and string diagrams can be used to analyze deterministic, probabilistic, and quantum AI models through a compositional lens, and examines the interpretability characteristics of various models, including neural networks, transformers, rule-based systems, causal models, and 'DisCo' models in NLP. It introduces novel approaches to behavioral explanation, such as influence arguments, diagram surgery, and rewrite explanations, that become possible with Compositionally Interpretable (CI) models, and surveys ongoing research in Compositional Intelligence at Quantinuum, presented in collaboration with Robin Lorenz, Stephen Clark, Ilyas Khan, and Bob Coecke.
Syllabus
Sean Tull - Towards Compositional Interpretability for XAI - IPAM at UCLA
Taught by
Institute for Pure & Applied Mathematics (IPAM)