All Models Are Wrong, Some Are Useful: Model Selection with Limited Labels
Scalable Parallel Computing Lab, SPCL @ ETH Zurich via YouTube
Overview
This conference talk presents MODEL SELECTOR, a framework for efficiently selecting a pretrained classifier with minimal labeled data. Learn how this approach can reduce labeling costs by up to 94.15% compared to baseline methods when identifying the optimal model for deployment. The presentation covers the complete framework, explaining how MODEL SELECTOR samples only the most informative examples for labeling across diverse datasets. Follow along as speaker Patrik Okanovic from ETH Zurich's Scalable Parallel Computing Lab shares experimental results across 18 model collections on 16 datasets, demonstrating the framework's effectiveness at identifying both the best model and near-best models whose accuracy lies within 1% of the optimum. The talk progresses from an introduction through the framework, a detailed exploration of the MODEL SELECTOR approach, a comprehensive evaluation, and concluding insights.
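To make the idea of label-efficient model selection concrete, here is a minimal illustrative sketch. It is not the paper's exact algorithm; it assumes a simple disagreement-based heuristic (query the example on which the candidate models disagree most, then pick the model with the best accuracy on the queried labels). All function and variable names are hypothetical.

```python
import numpy as np

def select_model(predictions, get_label, budget):
    """Illustrative label-efficient model selection (not MODEL SELECTOR itself).

    predictions: (n_models, n_examples) array of each model's predicted class
    get_label:   oracle returning the true label of example i (costly to call)
    budget:      number of labels we are allowed to request
    Returns the index of the model with the best accuracy on the queried labels.
    """
    n_models, n_examples = predictions.shape
    labeled = []                      # indices of examples already labeled
    correct = np.zeros(n_models)      # per-model count of correct predictions
    for _ in range(budget):
        # Disagreement score: fraction of models differing from the majority vote.
        scores = np.empty(n_examples)
        for i in range(n_examples):
            votes = predictions[:, i]
            majority = np.bincount(votes).argmax()
            scores[i] = np.mean(votes != majority)
        scores[labeled] = -1.0        # never re-query a labeled example
        i = int(scores.argmax())      # most informative remaining example
        y = get_label(i)              # spend one label from the budget
        labeled.append(i)
        correct += (predictions[:, i] == y)
    return int(correct.argmax())
```

With a handful of queried labels, the model ranking on the labeled subset already identifies the best candidate, which is the core intuition behind reducing labeling cost: disagreement regions are where models are distinguishable, so labels spent there are maximally informative.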
Syllabus
00:00 Introduction
03:46 Framework
04:46 Model Selector
11:30 Evaluation
24:13 Conclusion
Taught by
Scalable Parallel Computing Lab, SPCL @ ETH Zurich