Overview
Watch a 46-minute technical talk exploring how physics-inspired methodologies lead to new discoveries in Large Language Models (LLMs). Learn about two key mechanisms discovered through scientific observation and experimentation: dormant attention heads that deactivate when irrelevant to tasks, and random guessing behavior in two-hop reasoning scenarios. Follow along as UC Berkeley PhD student Tianyu Guo demonstrates how these mechanisms were identified through careful observation, hypothesis formation, controlled experimentation, and real-world validation. Gain insights into how physics-based research approaches can advance our understanding of LLM behavior and functionality, with particular focus on model interpretability and causal inference.
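The "dormant attention head" idea above can be made concrete with a toy example. This is an illustrative sketch only, not the talk's actual methodology: it uses a single made-up attention head (random NumPy tensors) to show why a head whose value vectors are near zero contributes almost nothing to the residual stream, which is one simple signature a researcher might probe for.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def head_output(q, k, v):
    """Single-head scaled dot-product attention output."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

# Toy setup (hypothetical numbers): 5 tokens, 8-dim head.
d = 8
q = rng.normal(size=(5, d))
k = rng.normal(size=(5, d))

# An "active" head has value vectors of ordinary magnitude;
# a "dormant" head's values are near zero, so however its
# attention pattern looks, its contribution is negligible.
v_active = rng.normal(size=(5, d))
v_dormant = 1e-3 * rng.normal(size=(5, d))

def contribution(v):
    """Norm of the head's output as a crude activity measure."""
    return np.linalg.norm(head_output(q, k, v))

active = contribution(v_active)
dormant = contribution(v_dormant)
print(dormant < 0.1 * active)  # the dormant head contributes far less
```

In real interpretability work the measurement would be done on actual model weights and activations, typically by ablating a head and checking how much the loss changes; the point here is only that "dormant" is a quantitative, testable property rather than a metaphor.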
Syllabus
Understanding LLMs Like Physicists: Observation, Hypothesis, Experimentation, and Prediction
Taught by
Google TechTalks