Overview
Explore cutting-edge research in algorithmic learning theory through this conference session featuring six technical presentations from leading researchers. Topics include:

- Phase transitions in logistic regression with large weights
- Optimal L2 regularization for high-dimensional continual linear regression
- Quantitative convergence analysis of projected stochastic gradient descent for non-convex optimization
- Variance reduction in stochastic optimization via proximal point methods
- Accelerated mirror descent for non-Euclidean star-convex functions
- DS-compatible log-linear reliability models with KL-Prox EM algorithms

Each presentation addresses fundamental challenges in machine learning optimization, offering theoretical insights and practical solutions. The session covers mathematical foundations, convergence guarantees, sample complexity analysis, and generalization properties across a range of optimization landscapes, making it valuable for researchers and practitioners in machine learning theory, optimization, and statistical learning.
Syllabus
Session 4 | 37th International Conference on Algorithmic Learning Theory (ALT 2026) and ShaiFest
Taught by
Fields Institute