Helpful AI Models - You Can't Always Get What You Want, But You Might Get What You Need
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
This plenary talk explores how AI models should be optimized for human-computer collaboration rather than for accuracy or user satisfaction alone. Three research examples demonstrate effective human-AI teamwork: vocabulary learning through adaptive flashcard scheduling that combines perceived and actual helpfulness; strategic negotiation assistance in the board game Diplomacy using grounded statement analysis and value functions; and collaborative fact-checking, in which computers help humans identify false claims while avoiding overconfident errors. The talk shows how measuring and optimizing human-computer workflows can lead to more effective AI systems, with insights into balancing human versus computer skills and choosing appropriate evaluation datasets. The speaker, Jordan Boyd-Graber of the University of Maryland, draws on extensive research in human-centered AI applications, including topic modeling, question answering, and machine translation, offering perspectives on creating AI systems that truly enhance human capabilities rather than simply replacing them.
Syllabus
July 24th, 2025 — 11:00 CEST
Taught by
Center for Language & Speech Processing (CLSP), JHU