
Improving Human-AI Collaboration by Adapting to User Trust

USC Information Sciences Institute via YouTube

Overview

Explore how AI systems can adapt their behavior based on user trust levels to improve human-AI collaboration in this research seminar from USC Information Sciences Institute. Learn about the critical balance between under-reliance and over-reliance on AI assistance in high-stakes decision-making scenarios, where users either ignore accurate AI advice due to low trust or accept incorrect recommendations due to excessive trust.

Discover trust-adaptive interventions, including supporting explanations during low-trust moments and counter-explanations during high-trust situations, demonstrated through studies involving laypeople answering science questions and doctors making medical diagnoses. Examine research findings showing up to a 38% reduction in inappropriate reliance and a 20% improvement in decision accuracy through these adaptive approaches. Understand how forced pauses can promote deliberation and reduce over-reliance when users place high trust in AI systems.

Gain insights into human-centered AI design principles that facilitate appropriate reliance and enhance collaborative decision-making in uncertainty-rich environments, presented by PhD researcher Tejas Srinivasan from USC's GLAMOR Lab.

Syllabus

Improving Human-AI Collaboration by Adapting to User Trust

Taught by

USC Information Sciences Institute

