Overview
Explore a technical lecture by Raphaël Millière of Macquarie University examining how Transformer-based neural networks develop variable binding, a fundamental capability for symbolic computation and higher cognitive functions. Delve into an investigation of synthetic programs with variable assignments to understand how these networks maintain and process complex reference chains without explicit architectural support for binding. Learn how the Transformer's residual stream supports binding through learned partitioning into subspaces. Examine mechanistic interpretability techniques that reveal the network's progression from shallow heuristics to a general algorithm for tracking variable assignments. Gain insight into the emergence of symbolic computation in neural architectures and its implications for cognitive science and artificial intelligence research.
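To make the setup concrete, the sketch below generates toy programs of the kind the lecture describes: a literal assignment followed by chained variable-to-variable assignments, so answering a query requires following the reference chain. This is an illustrative assumption about the task format, not the talk's actual dataset or code; the function names and program syntax are invented for this example.

```python
import random

def make_program(num_vars=5, seed=0):
    """Generate a toy chained-assignment program (illustrative only).

    The first variable is bound to a literal digit; each later variable
    is bound to a randomly chosen earlier one, producing reference
    chains such as: a = 7; b = a; c = b; ...
    Returns the program lines and a variable to query.
    """
    rng = random.Random(seed)
    names = [chr(ord("a") + i) for i in range(num_vars)]
    lines = [f"{names[0]} = {rng.randint(0, 9)}"]
    for i in range(1, num_vars):
        lines.append(f"{names[i]} = {rng.choice(names[:i])}")
    return lines, rng.choice(names)

def resolve(lines, query):
    """Follow the reference chain to recover the literal value.

    Because every assignment references an earlier variable, each
    right-hand side can be resolved eagerly in a single pass.
    """
    env = {}
    for line in lines:
        lhs, rhs = (s.strip() for s in line.split("="))
        env[lhs] = env.get(rhs, rhs)  # variable -> its value; literal -> itself
    return int(env[query])
```

A network trained to map such programs to the queried value must implement something like `resolve` internally; the lecture's interpretability analysis asks where and how that chain-following computation lives in the residual stream.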
Syllabus
How Do Transformers Learn Variable Binding?
Taught by
Simons Institute