Overview
This 46-minute AutoML seminar talk by Thomas Nagler and Lennart Schneider presents research on improving the generalization of hyperparameter optimization. It examines how reshuffling resampling splits during hyperparameter optimization can improve a model's generalization to unseen data, challenging the conventional practice of fixing a single train-validation split. The talk covers a theoretical analysis of the asymptotic behavior of validation loss surfaces and of expected regret bounds, connecting the benefits of reshuffling to the signal and noise characteristics of the problem. It then presents evidence from controlled simulations and large-scale experiments showing that reshuffling can make a single train-validation holdout protocol more competitive with cross-validation while reducing computational cost, and closes with practical takeaways for implementing more effective hyperparameter optimization in machine learning workflows.
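The core idea described above can be sketched in a few lines: instead of evaluating every hyperparameter candidate on one fixed train-validation split, a fresh split is drawn for each evaluation, so no single split's noise dominates the selection. The sketch below is a minimal, hypothetical illustration using closed-form ridge regression on synthetic data; the data, grid, and function names are assumptions for illustration, not material from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: linear signal plus noise (purely illustrative).
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(scale=0.5, size=n)

def ridge_fit_predict(X_tr, y_tr, X_va, lam):
    """Closed-form ridge regression: fit on the train part, predict on validation."""
    w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]), X_tr.T @ y_tr)
    return X_va @ w

def holdout_mse(lam, tr, va):
    """Validation MSE of ridge with penalty lam on a given 80/20 split."""
    pred = ridge_fit_predict(X[tr], y[tr], X[va], lam)
    return float(np.mean((y[va] - pred) ** 2))

candidates = np.logspace(-3, 2, 20)  # hyperparameter grid for the ridge penalty

# Fixed protocol: one holdout split, reused for every candidate.
# HPO can then overfit the noise of that particular split.
idx = np.random.default_rng(1).permutation(n)
tr_fixed, va_fixed = idx[:160], idx[160:]
losses_fixed = [holdout_mse(lam, tr_fixed, va_fixed) for lam in candidates]
best_fixed = candidates[int(np.argmin(losses_fixed))]

# Reshuffled protocol: a fresh holdout split for each candidate evaluation.
resh_rng = np.random.default_rng(2)
losses_resh = []
for lam in candidates:
    idx = resh_rng.permutation(n)
    losses_resh.append(holdout_mse(lam, idx[:160], idx[160:]))
best_resh = candidates[int(np.argmin(losses_resh))]

print(f"best lambda, fixed split:      {best_fixed:.4g}")
print(f"best lambda, reshuffled split: {best_resh:.4g}")
```

Note that with reshuffling, the validation losses of different candidates are no longer comparable on the same split; the talk's theoretical analysis addresses why selection can nevertheless generalize better on average.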
Syllabus
Reshuffling Resampling Splits Can Improve Generalization of Hyperparameter Optimization
Taught by
AutoML Seminars