Overview
Explore a 40-minute lecture from Harvard University that investigates why large language models (LLMs) improve when they generate additional "reasoning tokens" — that is, a longer chain of thought (CoT) — at inference time, and which aspects of task complexity most strongly determine the optimal amount of reasoning. The presentation covers the paper "Critical Thinking: Which Kinds of Complexity Govern Optimal Reasoning Length?" by Celine Lee and Alexander M. Rush of Cornell University and Keyon Vafa of Harvard University, using the metaphor of a Möbius strip to explain nonlinear AI reasoning processes. Learn about test-time compute scaling and how different kinds of complexity affect AI reasoning capabilities in this insightful academic exploration of advanced AI concepts.
Syllabus
AI Reasoning on a Möbius Strip (Harvard)
Taught by
Discover AI