Explore advanced algorithmic analysis techniques in this Members' Colloquium lecture, which examines alternatives to traditional worst-case analysis of computational problems. Learn how worst-case analysis, while providing robust performance guarantees, often falls short for many fundamental problems where more nuanced approaches are necessary. Discover the emerging field of "beyond worst-case analysis" and its applications across diverse areas, including clustering, linear programming, and neural network training.

Delve into a specific application to online learning theory, focusing on regret-minimization problems, where traditional worst-case analysis shows that online learnability is characterized by the Littlestone dimension of the hypothesis class. Understand how this characterization proves brittle under a "smoothed" adversary model, in which adversarially chosen inputs undergo small random perturbations, revealing that online learnability becomes characterized by the VC dimension instead.

Examine concrete examples demonstrating how the Littlestone dimension can be infinite even for simple classes such as one-dimensional threshold functions, while the corresponding VC dimension remains small. Gain insights into proof techniques that extend to online versions of the Komlós problem in discrepancy theory, representing collaborative research with UC Berkeley and MIT published in the Journal of the ACM.
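The threshold example above can be made concrete with a short sketch (an illustration of the standard binary-search argument, not code from the lecture): one-dimensional thresholds h_t(x) = 1[x ≥ t] on [0, 1] have VC dimension 1, yet a worst-case adversary can force any deterministic online learner to err on every round by halving the interval of consistent thresholds, which is the intuition behind the class's infinite Littlestone dimension.

```python
def adversary_vs_learner(learner, rounds=30):
    """Halving adversary against a deterministic online learner.

    Maintains an interval (lo, hi) of thresholds consistent with all
    labels revealed so far, queries its midpoint, and labels it the
    opposite of the learner's prediction -- so the learner makes a
    mistake every round while the labels stay realizable by some
    threshold in the class.
    """
    lo, hi = 0.0, 1.0
    mistakes = 0
    for _ in range(rounds):
        x = (lo + hi) / 2.0
        pred = learner(x)
        label = 1 - pred      # flip the prediction: forced mistake
        mistakes += 1
        if label == 1:        # label 1[x >= t] = 1 means t <= x
            hi = x
        else:                 # label 0 means t > x
            lo = x
    return mistakes

# Example: a (hypothetical) learner that always uses threshold 0.5
# still errs on all 30 rounds.
print(adversary_vs_learner(lambda x: 1 if x >= 0.5 else 0, rounds=30))
# → 30
```

Under the smoothed model discussed in the lecture, each adversarial input is perturbed by small random noise, which breaks exactly this kind of knife-edge binary search and restores learnability at a rate governed by the VC dimension.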