Pragmatic Interpretability - A Human-AI Cooperation Approach
USC Information Sciences Institute via YouTube
Overview
Explore the concept of pragmatic interpretability in machine learning models in this 53-minute talk by Shi Feng, a postdoctoral researcher at the University of Chicago whose work focuses on human-AI cooperation in natural language processing. Delve into the challenges of understanding how AI models work and their potential for intelligence augmentation. Examine a more practical approach to interpretability that emphasizes modeling human needs in AI cooperation. Learn about evaluating and optimizing human-AI teams as unified decision-makers, and discover how models can learn to explain selectively. Investigate methods for incorporating human intuition into models and explanations outside the context of working with AI. Conclude with a discussion of how models can pragmatically infer information about their human teammates.
Syllabus
Pragmatic Interpretability
Taught by
USC Information Sciences Institute