What Transformers Can and Can't Do - A Logical Approach
USC Information Sciences Institute via YouTube
Overview
Explore the theoretical foundations of transformer neural networks through the lens of formal logic in this seminar presented by David Chiang of the University of Notre Dame at the USC Information Sciences Institute. Discover how transformers relate to formal logic systems, much as finite automata relate to regular expressions and monadic second-order logic. Learn about research proving that unique-hard attention transformers are exactly equivalent to first-order logic of strings with ordering, an equivalence that lets numerous expressivity results transfer from logic to these networks. Examine findings on constant-precision softmax attention transformers, which are equivalent to a temporal logic with counting operators, and see how this equivalence shows that deeper transformers are strictly more expressive than shallower ones. Gain insight into how these theoretical results accurately predict transformer behavior in practice, contributing to the growing body of theoretical analysis of neural networks at a time when understanding AI capabilities and limitations is increasingly critical.
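To make the central notion concrete, here is a minimal NumPy sketch (not code from the talk) contrasting unique-hard attention with standard softmax attention: unique-hard attention places all of its weight on the single highest-scoring position, and it is under this restriction that the equivalence with first-order logic of strings is proved.

```python
import numpy as np

def unique_hard_attention(scores: np.ndarray) -> np.ndarray:
    """One-hot weights on the single highest-scoring position
    (np.argmax breaks ties by taking the leftmost maximum)."""
    weights = np.zeros_like(scores, dtype=float)
    weights[np.argmax(scores)] = 1.0
    return weights

def softmax_attention(scores: np.ndarray) -> np.ndarray:
    """Standard softmax attention: weight spread across all positions."""
    e = np.exp(scores - scores.max())  # subtract max for numerical stability
    return e / e.sum()

scores = np.array([0.1, 2.3, 2.3, -0.5])
print(unique_hard_attention(scores))  # [0. 1. 0. 0.] -- exactly one position attended
print(softmax_attention(scores))      # a smooth distribution over all four positions
```

On the logic side, first-order logic of strings with ordering (FO[<]) defines exactly the star-free languages; for example, the sentence ¬∃x∃y (x < y ∧ Q_b(x) ∧ Q_a(y)) defines the language in which no a ever follows a b.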
Syllabus
What Transformers Can and Can’t Do: A Logical Approach
Taught by
USC Information Sciences Institute