Overview
Explore the world of language agents in this talk presented by Shunyu Yao of Princeton University. Delve into the Cognitive Architectures for Language Agents (CoALA) framework, which provides a systematic approach to understanding and evaluating AI systems that use large language models (LLMs) to interact with the world. Learn about three practical benchmarks - WebShop, InterCode, and Collie - designed to develop and assess language agents through web interaction, code execution, and grammar-constrained generation, respectively. Discover how these scalable benchmarks offer simple yet faithful evaluation metrics without relying on human preference labeling or LLM-based scoring. Gain insights into future directions for language agent research from Yao, a final-year PhD student in the Princeton NLP Group supported by the Harold W. Dodds Fellowship.
Syllabus
On Formulating and Evaluating Language Agents
Taught by
USC Information Sciences Institute