Theories of Neural Computation Underlying Learning, Imagination, Reasoning and Scaling - Of Mice and Machines
Overview
Explore theoretical physics approaches to understanding neural computation in both biological and artificial systems through this Stanford Physics colloquium lecture. Delve into four remarkable abilities of brains and machines: learning new behaviors from single examples, creative imagination, language acquisition, and mathematical reasoning. Discover how mice navigate accurately in new environments on first encounter, examine how diffusion models generate exponentially many new images, understand how the structure of natural language governs the amount of data required for learning, and learn about methods for improving mathematical reasoning in language models. Apply statistical mechanics, pattern formation, nonlinear dynamics, high-dimensional geometry, scaling analysis, and entropy control to derive quantitatively predictive theories of neural computation. Consider how artificial intelligence represents a new frontier for physics research, potentially yielding a fundamental scientific understanding of intelligence, much as biology once expanded physics into new realms of complexity.
Syllabus
Surya Ganguli - Applied Physics/Physics Colloquium
Taught by
Stanford Physics