Overview
Explore the theoretical foundations and practical challenges of human-AI collaborative decision making in this 45-minute conference talk. Examine how humans and AI models can work together to make better decisions than either could achieve independently, moving beyond classical Bayesian assumptions to more computationally tractable frameworks. Learn about tractable agreement protocols that allow human-AI collaboration to converge on accuracy-improving decisions without requiring perfect rationality from either party, and discover how Aumann's classical agreement theorem can be extended using realistic assumptions about knowledge and computational power.

Investigate the complex dynamics that arise when AI models may prioritize their designers' interests over users' interests, and understand how market competition between AI providers can mitigate these alignment problems. Analyze the "market alignment" assumption and its role in ensuring users can advance their goals effectively even when individual AI providers are not perfectly aligned. Gain insights into Nash equilibrium outcomes in competitive AI markets and their implications for collaborative decision making.

The presentation synthesizes findings from three research papers covering tractable agreement protocols, collaborative prediction through information aggregation, and emergent alignment through market competition.
Syllabus
Agreement and Alignment for Human-AI Collaborative Decision Making
Taught by
Simons Institute