Improving Human-AI Collaboration by Adapting to User Trust
USC Information Sciences Institute via YouTube
Overview
Explore how AI systems can adapt their behavior to users' trust levels to improve human-AI collaboration in this research seminar from USC Information Sciences Institute. Learn about the critical balance between under-reliance and over-reliance on AI assistance in high-stakes decision-making, where users either ignore accurate AI advice because their trust is too low or accept incorrect recommendations because it is too high. Discover trust-adaptive interventions, such as supporting explanations during low-trust moments and counter-explanations during high-trust moments, demonstrated in studies of laypeople answering science questions and doctors making medical diagnoses. Examine findings showing up to a 38% reduction in inappropriate reliance and a 20% improvement in decision accuracy with these adaptive approaches. Understand how forced pauses can promote deliberation and curb over-reliance when users place high trust in AI systems. Gain insights into human-centered AI design principles that foster appropriate reliance and enhance collaborative decision-making under uncertainty, presented by PhD researcher Tejas Srinivasan from USC's GLAMOR Lab.
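To make the trust-adaptive idea concrete, here is a minimal Python sketch of how an assistant might pick an intervention from an estimated trust score. The thresholds, names (`select_intervention`, `Intervention`), and the simple score-based trigger are illustrative assumptions, not the seminar's actual method.

```python
from dataclasses import dataclass

# Hypothetical trust thresholds; the real work would calibrate these empirically.
LOW_TRUST = 0.3
HIGH_TRUST = 0.7

@dataclass
class Intervention:
    kind: str       # "supporting_explanation", "counter_explanation", or "none"
    rationale: str

def select_intervention(estimated_trust: float) -> Intervention:
    """Choose an intervention given an estimated user-trust score in [0, 1]."""
    if estimated_trust < LOW_TRUST:
        # Low trust: users risk ignoring accurate AI advice, so offer a
        # supporting explanation that justifies the recommendation.
        return Intervention("supporting_explanation",
                            "bolster warranted reliance when trust is low")
    if estimated_trust > HIGH_TRUST:
        # High trust: users risk accepting incorrect advice, so surface a
        # counter-explanation (possibly with a forced pause) to prompt deliberation.
        return Intervention("counter_explanation",
                            "encourage scrutiny when trust is high")
    # Moderate trust: leave the interaction unchanged.
    return Intervention("none", "trust appears calibrated")

if __name__ == "__main__":
    for trust in (0.1, 0.5, 0.9):
        print(f"trust={trust}: {select_intervention(trust)}")
```

The design choice worth noting is that the intervention depends only on the user's estimated trust, not on whether the AI happens to be correct, which is what makes the approach deployable when ground truth is unavailable at decision time.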
Syllabus
Improving Human-AI Collaboration by Adapting to User Trust
Taught by
USC Information Sciences Institute