Overview
Explore cutting-edge AI research in this 18-minute video examining two studies that challenge conventional understanding of large language models. The first paper, "Why Reasoning Fails to Plan: A Planning-Centric Analysis of Long-Horizon Decision Making in LLM Agents," by researchers from the University of Notre Dame, Stanford University, the University of Edinburgh, Yale University, Purdue University, the University of Oxford, and UIUC, investigates the limitations of current LLMs in complex planning scenarios and long-horizon decision making. The second study, "Context Structure Reshapes the Representational Geometry of Language Models," conducted by researchers at Google DeepMind and the Princeton Neuroscience Institute, reveals how context structure fundamentally alters the internal representations of language models, moving beyond the traditional next-token prediction paradigm. Gain insights into the latest developments in AI reasoning, planning mechanisms, and the geometric properties of neural language representations that are shaping the future of artificial intelligence research.
Syllabus
Google is cooking: Beyond the 'Next-Token' Manifold
Taught by
Discover AI