Overview
Explore cutting-edge research in algorithmic learning theory through this conference session featuring six technical presentations from leading researchers.

Delve into the computational complexity of learning regular expressions and discover why certain pattern recognition tasks remain fundamentally difficult. Examine the Large Average Subtensor Problem, covering its ground-state properties, algorithmic approaches, and inherent computational barriers. Learn about the Planted Number Partitioning Problem and its implications for understanding structured optimization challenges. Study uniform convergence theory that extends beyond classical Glivenko-Cantelli results, exploring new theoretical frameworks for statistical learning. Investigate optimal bounds for Tyler's M-estimator under elliptical distributions, with applications to robust statistical estimation. Conclude with an analysis of Talagrand's concentration inequalities applied to Gaussian processes, examining both upper and lower bounds on expected soft maxima over finite index sets.

Each presentation offers deep theoretical insight into fundamental problems in machine learning, statistical learning theory, and computational complexity, making this session essential viewing for researchers and advanced students in these fields.
Syllabus
On the Hardness of Learning Regular Expressions 0:00-9:43
Large Average Subtensor Problem: Ground-State, Algorithms, and Algorithmic Barriers 9:45-
The Planted Number Partitioning Problem 21:12-
Uniform Convergence Beyond Glivenko-Cantelli 33:47-
Optimal Bounds for Tyler’s M-Estimator for Elliptical Distributions 44:22-
Talagrand Meets Talagrand: Upper and Lower Bounds on Expected Soft Maxima of Gaussian Processes with Finite Index Sets 56:43-
Taught by
Fields Institute