Overview
Explore a comprehensive benchmark framework designed to standardize the evaluation of hyperparameter optimization (HPO) methods across diverse machine learning tasks. Learn about carps (Comprehensive Automated Research Performance Studies), which enables systematic comparison of N optimizers on M benchmark tasks, addressing the critical need for robust HPO evaluation in machine learning model development. Discover how the framework tackles four essential HPO task types: blackbox optimization, multi-fidelity optimization, multi-objective optimization, and multi-fidelity-multi-objective optimization.

Examine the framework's extensive collection of 3,336 tasks from 5 community benchmark collections and 28 variants of 9 optimizer families, the largest available library for evaluating and comparing HPO methods. Understand the lightweight interface that connects optimizers with benchmark tasks, along with the integrated analysis pipeline that supports comprehensive optimizer evaluation.

Investigate the framework's approach to computational efficiency through representative task subset selection, which uses star discrepancy minimization to identify 10-30 diverse tasks per task type while maintaining evaluation quality. Gain insights into the baseline results established for future comparisons, and learn how the framework supports dynamic subset recomputation as new benchmarks become available, making it a valuable tool for advancing HPO research and standardization.
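To make the idea of a lightweight interface between optimizers and benchmark tasks concrete, here is a minimal Python sketch of an ask/tell-style loop. This is an illustrative assumption, not the actual carps API: all class and function names (TrialInfo, TrialValue, QuadraticTask, RandomSearch, run) are hypothetical, and the toy random-search optimizer stands in for the real optimizer families the framework wraps.

```python
"""Sketch only: an ask/tell interface that decouples optimizers from tasks,
so any of N optimizers can be run against any of M benchmark tasks.
Names and signatures are illustrative, not the carps library's API."""
from dataclasses import dataclass
import random


@dataclass
class TrialInfo:
    config: dict                 # hyperparameter configuration to evaluate
    budget: float | None = None  # fidelity (e.g. epochs); None for blackbox tasks


@dataclass
class TrialValue:
    cost: float  # objective value (a list of costs would model multi-objective tasks)


class QuadraticTask:
    """Toy benchmark task: minimize (x - 0.3)^2 over x in [0, 1]."""
    space = {"x": (0.0, 1.0)}

    def evaluate(self, trial: TrialInfo) -> TrialValue:
        x = trial.config["x"]
        return TrialValue(cost=(x - 0.3) ** 2)


class RandomSearch:
    """Toy optimizer; real optimizer families would sit behind the same
    ask/tell interface so any optimizer can run on any task."""
    def __init__(self, space):
        self.space = space

    def ask(self) -> TrialInfo:
        # Propose a configuration by sampling uniformly from the search space.
        return TrialInfo({k: random.uniform(lo, hi) for k, (lo, hi) in self.space.items()})

    def tell(self, trial: TrialInfo, value: TrialValue) -> None:
        pass  # random search ignores feedback; model-based optimizers would update here


def run(optimizer, task, n_trials: int) -> TrialValue:
    """Generic loop shared by every optimizer/task pairing."""
    best = TrialValue(cost=float("inf"))
    for _ in range(n_trials):
        trial = optimizer.ask()
        value = task.evaluate(trial)
        optimizer.tell(trial, value)
        best = min(best, value, key=lambda v: v.cost)
    return best


if __name__ == "__main__":
    task = QuadraticTask()
    print(run(RandomSearch(task.space), task, n_trials=50).cost)
```

Because the optimizer only sees ask/tell calls and the task only exposes a search space and an evaluate function, adding a new optimizer or a new benchmark collection does not require touching the other side of the interface, which is the design property the overview above attributes to the framework.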
Syllabus
carps: A Framework for Comparing N Hyperparameter Optimizers on M Benchmarks
Taught by
AutoML Seminars