Explore a game-theoretic interpretation of neural networks in this seminar lecture from Harvard's Center of Mathematical Sciences and Applications. Discover how ReLU neural networks can be understood as zero-sum, turn-based stopping games that run in reverse to the network's forward pass: the input serves as the terminal reward and the output gives the initial game values. Learn how biases define rewards and weights determine state-transition probabilities, with a Max player and a Min player competing to maximize and minimize the accumulated reward, respectively. Understand the equivalence between evaluating a ReLU network and the Shapley-Bellman backward recursion for game values, which yields path-integral expressions for network outputs and bounds derived from the monotonicity of the Shapley operator. Examine how entropic regularization extends this framework to Softplus neural networks, providing new mathematical insight into this widely used activation function. Gain exposure to cutting-edge research bridging neural network theory with game theory and optimal control, presented by Yiannis Vlassopoulos of the Athena Research Center in collaboration with Stéphane Gaubert.
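The correspondence described above can be sketched in a few lines of code. This is an illustrative sketch, not the authors' implementation: it assumes each layer computes `max(W v + b, 0)` elementwise, reads that operation as one step of a Bellman-type backward recursion with the input as terminal reward, and shows Softplus as the standard entropic (log-sum-exp) smoothing of ReLU with temperature `t`.

```python
import numpy as np

def relu_layer(W, b, v):
    """One backward-recursion step: v_k = max(W v_{k+1} + b, 0).

    Read as a game operator, b supplies the reward at each state and the
    rows of W weight the transitions; the outer max is the stopping choice.
    """
    return np.maximum(W @ v + b, 0.0)

def run_network(weights, biases, x):
    """Forward pass of a ReLU network = backward recursion of game values.

    The input x plays the role of the terminal reward; the returned vector
    plays the role of the initial game-state values (the network output).
    """
    v = x  # terminal reward of the game
    for W, b in zip(weights, biases):
        v = relu_layer(W, b, v)
    return v  # initial game values = network output

def softplus_layer(W, b, v, t=1.0):
    """Entropically regularized step: t*log(1 + exp((W v + b)/t)).

    As t -> 0 this recovers the ReLU step; written in a numerically
    stable form, max(z, 0) + t*log(1 + exp(-|z|/t)).
    """
    z = W @ v + b
    return np.maximum(z, 0.0) + t * np.log1p(np.exp(-np.abs(z) / t))
```

As a sanity check, shrinking the temperature `t` in `softplus_layer` makes its output converge to that of `relu_layer`, mirroring how the entropic regularization recovers the unregularized game values in the limit.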