Overview
Watch a 44-minute lecture from the Joint IFML/MPG Symposium in which Han Zhao from the University of Illinois Urbana-Champaign explores the theoretical limitations of linear scalarization in multi-task learning (MTL). Delve into the ongoing debate between Specialized Multi-Task Optimizers (SMTOs) and traditional scalarization approaches, examining whether scalarization can fully explore the Pareto front in linear MTL models. Learn about the multi-surface structure of feasible regions in under-parametrized models, understand the necessary and sufficient conditions for full exploration, and discover why scalarization fails to trace the complete Pareto front. Explore extensions of these theoretical findings to nonlinear neural networks, and examine recent developments in online Chebyshev scalarization for controlled search over Pareto-optimal solutions.
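The two scalarization schemes the lecture contrasts can be illustrated with a toy sketch (the quadratic task losses and the grid search below are hypothetical choices for illustration, not the lecture's setup): linear scalarization minimizes a weighted sum of the task losses, while Chebyshev scalarization minimizes the weighted maximum, and each preference weight yields a (possibly different) point on the trade-off curve between the tasks.

```python
import numpy as np

# Toy two-task trade-off: task 1 prefers theta = 0, task 2 prefers theta = 1.
# These quadratic losses are hypothetical, chosen only to make the
# scalarization schemes easy to compare.
def losses(theta):
    return np.array([(theta - 0.0) ** 2, (theta - 1.0) ** 2])

# Linear scalarization: weighted sum of task losses.
def linear(L, w):
    return w @ L

# Chebyshev scalarization: weighted maximum of task losses.
def chebyshev(L, w):
    return np.max(w * L)

# Brute-force minimization over a parameter grid (a sketch, not an optimizer).
THETAS = np.linspace(-0.5, 1.5, 401)

def minimize(scalarize, w):
    vals = [scalarize(losses(t), w) for t in THETAS]
    return THETAS[int(np.argmin(vals))]

if __name__ == "__main__":
    for w1 in (0.2, 0.5, 0.8):
        w = np.array([w1, 1.0 - w1])
        print(w1, minimize(linear, w), minimize(chebyshev, w))
```

In this convex toy problem both schemes sweep out the full trade-off curve as the weights vary; the lecture's point is that in richer settings (e.g. the multi-surface feasible regions of under-parametrized linear MTL models) the weighted sum can miss parts of the Pareto front that the Chebyshev objective can still reach.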
Syllabus
Revisiting Scalarization in Multi-Task Learning
Taught by
Simons Institute