Overview
This seminar explores the critical limitations of Large Language Models (LLMs) regarding working memory. Learn how human working memory functions as an active cognitive system that enables temporary storage, processing, and use of information, a capability current LLMs lack. The speaker, Jen-Tse (Jay) Huang of Johns Hopkins University, demonstrates this limitation through three experimental scenarios: the Number Guessing Game, the Yes or No Game, and Math Magic. Discover how these experiments reveal that leading LLM families fail to exhibit human-like cognitive behavior in tasks requiring working memory, highlighting a significant obstacle to achieving artificial general intelligence. Dr. Huang, a postdoctoral researcher at the Center for Language and Speech Processing with publications in top AI venues including ICLR 2024, presents this research to encourage the development of LLMs with improved working memory.
Syllabus
LLMs Do Not Have Human-Like Working Memories
Taught by
USC Information Sciences Institute