Overview
Explore a technical lecture by Raphaël Millière of Macquarie University examining how Transformer-based neural networks develop variable binding capabilities, a fundamental aspect of symbolic computation and higher cognitive function. The lecture investigates synthetic programs with variable assignments to understand how these networks maintain and process complex reference chains without explicit architectural support for variable binding. Learn how the Transformer architecture's residual stream supports binding through learned subspace partitioning, and examine mechanistic interpretability techniques that reveal the network's progression from basic heuristics to a general algorithm for tracking variable assignments. Gain insight into the emergence of symbolic computation capabilities in neural architectures and their implications for cognitive science and artificial intelligence research.
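To make the task concrete, here is a minimal sketch of the kind of synthetic variable-assignment program described above, together with a reference solver that follows the chain of assignments back to a literal value. The function names and exact program format are illustrative assumptions, not the lecture's actual dataset.

```python
import random

def make_program(chain_len=4, n_distractors=2, seed=0):
    """Generate a toy assignment program whose final query requires
    following a chain of variable references back to a literal value.
    (Hypothetical format; the lecture's dataset may differ.)"""
    rng = random.Random(seed)
    names = iter("abcdefghijklmnop")
    head = next(names)
    value = rng.randint(0, 9)
    lines = [f"{head} = {value}"]       # the literal at the chain's root
    prev = head
    for _ in range(chain_len - 1):      # b = a; c = b; d = c; ...
        cur = next(names)
        lines.append(f"{cur} = {prev}")
        prev = cur
    for _ in range(n_distractors):      # assignments the query never uses
        lines.append(f"{next(names)} = {rng.randint(0, 9)}")
    return "\n".join(lines) + f"\nprint({prev})", value

def solve(program):
    """Reference solver: resolve each right-hand side as it appears."""
    env = {}
    *body, query = program.splitlines()
    for line in body:
        lhs, rhs = (s.strip() for s in line.split("="))
        env[lhs] = env.get(rhs, rhs)    # dereference if rhs is a variable
    target = query[len("print("):-1]
    return int(env[target])

prog, answer = make_program(seed=42)
assert solve(prog) == answer
```

A model trained to predict the printed value cannot succeed by copying the nearest number; it must track which variable currently binds to which value, which is exactly the capability the lecture's interpretability analysis probes.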
Syllabus
How Do Transformers Learn Variable Binding?
Taught by
Simons Institute