Physics of Language Models: Knowledge Storage, Extraction, and Manipulation
Harvard CMSA via YouTube
Overview
Explore a detailed research seminar in which Yuanzhi Li, a machine learning professor at Carnegie Mellon University and a Microsoft Research scientist, delves into how knowledge is stored and managed in large language models (LLMs). Learn about experiments on synthetic biography datasets that reveal surprising limitations in how LLMs store and retrieve information, even when they achieve perfect training performance. Discover the critical role of data augmentation in pushing knowledge into token embeddings, and understand why LLMs struggle to manipulate knowledge without chain-of-thought reasoning. Examine the relationship between model size and knowledge capacity, including whether current parameter scales are sufficient to store human-level knowledge. Gain insights into fundamental questions about LLM architecture, training methodology, and the physics-inspired approach to understanding these powerful AI systems.
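As a rough illustration of the kind of synthetic biography data and augmentation the talk describes, here is a minimal Python sketch. The person, attributes, and paraphrase templates are invented for illustration and are not the actual dataset from the research; the general idea is that training on several rewrites and reorderings of the same facts is what helps the model store knowledge in the name's token embeddings.

```python
import random

# One synthetic person; all attribute values here are invented examples.
PERSON = {
    "name": "Anya Briar Forger",
    "birth_date": "October 2, 1996",
    "university": "MIT",
    "employer": "Meta Platforms",
}

# Multiple paraphrase templates per attribute. Seeing each fact phrased
# several ways is the data-augmentation effect discussed in the seminar.
TEMPLATES = {
    "birth_date": [
        "{name} was born on {birth_date}.",
        "{name}'s date of birth is {birth_date}.",
    ],
    "university": [
        "{name} studied at {university}.",
        "{name} received a degree from {university}.",
    ],
    "employer": [
        "{name} works for {employer}.",
        "{name} is employed by {employer}.",
    ],
}

def augmented_biography(person: dict, rng: random.Random) -> str:
    """Build one biography variant: pick a random paraphrase for each
    fact, then shuffle sentence order (a second form of augmentation)."""
    sentences = [rng.choice(options).format(**person)
                 for options in TEMPLATES.values()]
    rng.shuffle(sentences)
    return " ".join(sentences)

rng = random.Random(0)
for _ in range(3):  # several distinct variants of the same person's facts
    print(augmented_biography(PERSON, rng))
```

Each run of the loop emits the same underlying facts in a different surface form, which is the property the seminar's experiments vary to test when knowledge becomes extractable rather than merely memorized.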
Syllabus
Yuanzhi Li | Physics of Language Models: Knowledge Storage, Extraction, and Manipulation
Taught by
Harvard CMSA