Overview
Explore current research in algorithmic learning theory in this 42-minute session from the 37th International Conference on Algorithmic Learning Theory (ALT 2026) and ShaiFest, hosted by the Fields Institute. The session features four presentations. Avrim Blum and Donya Saless present regularized robustly reliable learners, examining how to build learning algorithms that maintain reliability guarantees under varying conditions. Kasper Green Larsen, Chirag Pabbaraju, and Abhishek Shetty discuss learning with monotone adversarial corruptions, focusing on algorithms that tolerate systematically corrupted data. Navid Ardeshir, Samuel Deng, Daniel Hsu, and Jingwen Liu present group-realizable multi-group learning through empirical risk minimization, exploring how a single learned predictor can perform well across different demographic groups. Finally, Kasper Green Larsen, Markus Engelund Mathiasen, and Clement Svendsen demonstrate improved replicable boosting using a majority-of-majorities approach, addressing reproducibility challenges in ensemble learning.
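As background for the third talk, empirical risk minimization (ERM) simply selects the hypothesis with the lowest average loss on the training sample. The sketch below is illustrative only and is not taken from the presentation; the hypothesis class (threshold classifiers) and all names are hypothetical.

```python
# Illustrative sketch (not from the talk): empirical risk minimization (ERM)
# over a finite hypothesis class of threshold classifiers.

def erm(hypotheses, data):
    """Return the hypothesis with the lowest empirical (0-1) risk on `data`."""
    def empirical_risk(h):
        return sum(1 for x, y in data if h(x) != y) / len(data)
    return min(hypotheses, key=empirical_risk)

# Hypothetical class: threshold classifiers h_t(x) = 1 if x >= t else 0.
hypotheses = [lambda x, t=t: int(x >= t) for t in range(11)]

# Toy sample: points below 4 labeled 0, points above 5 labeled 1.
data = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (9, 1)]

best = erm(hypotheses, data)
print([best(x) for x, _ in data])  # → [0, 0, 0, 1, 1, 1]
```

Multi-group learning asks when this same one-line principle, run on the pooled sample, also achieves low risk on each subgroup separately, which is the question the talk's title refers to.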
Syllabus
Regularized Robustly Reliable Learners (00:11)
Learning with Monotone Adversarial Corruptions (13:25)
Group-realizable multi-group learning by minimizing empirical risk (26:08)
Improved Replicable Boosting with Majority-of-Majorities (32:08)
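The final talk's title refers to a majority-of-majorities construction. The specific algorithm is not described on this page; the sketch below only illustrates the generic two-level aggregation pattern the name suggests, with entirely hypothetical weak learners.

```python
# Illustrative sketch (hypothetical, not the authors' algorithm): combining
# weak learners with a two-level "majority of majorities" vote.

def majority(votes):
    """Return the majority label among binary (+1/-1) votes."""
    return 1 if sum(votes) >= 0 else -1

def majority_of_majorities(groups_of_learners, x):
    """Take a majority vote within each group, then a majority across groups."""
    inner = [majority([h(x) for h in group]) for group in groups_of_learners]
    return majority(inner)

# Three hypothetical groups of three constant weak learners.
groups = [
    [lambda x: 1, lambda x: 1, lambda x: -1],   # inner vote: +1
    [lambda x: -1, lambda x: -1, lambda x: 1],  # inner vote: -1
    [lambda x: 1, lambda x: 1, lambda x: 1],    # inner vote: +1
]
print(majority_of_majorities(groups, 0.0))  # → 1
```

In the replicability literature, nesting votes this way is a natural device for making the final output insensitive to which particular sample was drawn; how the talk instantiates and improves it is what the presentation covers.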
Taught by
Fields Institute