
Revisiting Scalarization in Multi-Task Learning - Theory and Limitations

Simons Institute via YouTube

Overview

Watch a 44-minute lecture from the Joint IFML/MPG Symposium in which Han Zhao from the University of Illinois Urbana-Champaign explores the theoretical limitations of linear scalarization in multi-task learning (MTL). Delve into the ongoing debate between Specialized Multi-Task Optimizers (SMTOs) and traditional scalarization approaches, examining whether scalarization can fully explore the Pareto front in linear MTL models. Learn about the multi-surface structure of the feasible region in under-parameterized models, understand necessary and sufficient conditions for full exploration, and discover why linear scalarization fails to trace the complete Pareto front. Explore extensions of these theoretical findings to nonlinear neural networks, and examine recent developments in online Chebyshev scalarization for controlled search of Pareto optimal solutions.
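To make the two scalarization schemes mentioned above concrete, here is a minimal sketch (not taken from the lecture) contrasting linear scalarization, which minimizes a weighted sum of task losses, with Chebyshev scalarization, which minimizes the weighted maximum deviation from an ideal point. The toy two-task losses, weights, and grid search below are illustrative assumptions; for convex losses like these both schemes reach Pareto optimal points, whereas the lecture's results concern settings where linear scalarization cannot trace the full Pareto front.

```python
import numpy as np

# Two toy quadratic task losses over a single shared parameter x.
def loss1(x):
    return (x - 0.0) ** 2  # task 1 prefers x = 0

def loss2(x):
    return (x - 1.0) ** 2  # task 2 prefers x = 1

xs = np.linspace(-0.5, 1.5, 2001)  # candidate parameter values

def linear_scalarization(w1, w2):
    """Minimize the weighted sum w1*L1(x) + w2*L2(x) over the grid."""
    obj = w1 * loss1(xs) + w2 * loss2(xs)
    return xs[np.argmin(obj)]

def chebyshev_scalarization(w1, w2, z=(0.0, 0.0)):
    """Minimize the weighted max deviation from an ideal point z."""
    obj = np.maximum(w1 * (loss1(xs) - z[0]), w2 * (loss2(xs) - z[1]))
    return xs[np.argmin(obj)]

# With equal weights, both methods trade the tasks off evenly:
print(linear_scalarization(1.0, 1.0))     # 0.5
print(chebyshev_scalarization(1.0, 1.0))  # 0.5
# Sweeping the weights sweeps the solution along the Pareto front:
print(linear_scalarization(3.0, 1.0))     # 0.25, closer to task 1's optimum
```

Varying the weight vector is what "exploring the Pareto front" means here; the lecture's question is whether every Pareto optimal point of a linear MTL model is reachable this way.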

Syllabus

Revisiting Scalarization in Multi-Task Learning

Taught by

Simons Institute

