
Understanding LLMs Like Physicists - Observation, Hypothesis, Experimentation, and Prediction

Google TechTalks via YouTube

Overview

Watch a 46-minute technical talk exploring how physics-inspired methodologies lead to new discoveries about Large Language Models (LLMs). Learn about two mechanisms uncovered through scientific observation and experimentation: dormant attention heads, which deactivate when irrelevant to a task, and random-guessing behavior in two-hop reasoning scenarios. Follow along as UC Berkeley PhD student Tianyu Guo demonstrates how these mechanisms were identified through careful observation, hypothesis formation, controlled experimentation, and real-world validation. Gain insight into how physics-based research approaches can advance our understanding of LLM behavior, with particular focus on model interpretability and causal inference.

Syllabus

Understanding LLMs Like Physicists: Observation, Hypothesis, Experimentation, and Prediction

Taught by

Google TechTalks
