
Theories of Neural Computation Underlying Learning, Imagination, Reasoning and Scaling - Of Mice and Machines

Stanford Physics via YouTube

Overview

Explore theoretical physics approaches to understanding neural computation in both biological and artificial systems in this Stanford Physics colloquium lecture. Delve into four remarkable abilities of brains and machines: learning new behaviors from single examples, creative imagination, language acquisition, and mathematical reasoning. Discover how mice navigate accurately in new environments on first encounter, examine how diffusion models generate exponentially many new images, understand how the structure of natural language governs how much data learning requires, and learn methods for improving mathematical reasoning in language models. Apply statistical mechanics, pattern formation, nonlinear dynamics, high-dimensional geometry, scaling analysis, and entropy control to derive quantitatively predictive theories of neural computation. Consider how artificial intelligence represents a new frontier for physics research, potentially yielding a fundamental scientific understanding of intelligence, much as biology once expanded physics into new realms of complexity.

Syllabus

Surya Ganguli - Applied Physics/Physics Colloquium

Taught by

Stanford Physics

