
Problem-Dependent Resampling Techniques

EIT Digital via Coursera

Overview

This course is designed for data scientists, machine learning practitioners, and researchers who want to understand how resampling techniques must be adapted to the structure of the problem at hand. You will learn how standard validation methods such as cross-validation can fail when applied blindly, and how to design problem-dependent resampling strategies for spatial data, pair-input data, and other dependent observation structures. The course also covers spatial cross-validation, dependency-aware evaluation design, and statistical testing methods to assess whether performance estimates are reliable. By the end of the course, you will be able to choose and construct appropriate resampling strategies that reflect the true structure of your data and provide trustworthy performance estimates.

Syllabus

  • Evaluating spatial models with spatial cross-validation
    • In the first module, we describe how cross-validation-based estimates of model performance can be optimistic on spatial data sets. We discuss how spatial autocorrelation, an inherent property of geographical data, causes this optimistic bias in the cross-validation procedure, and how the problem should be tackled. To account for the effects of spatial autocorrelation, we introduce spatial cross-validation, a modified version of cross-validation designed for evaluating model prediction performance on spatial data sets. Furthermore, we present the motivation behind spatial cross-validation from an industry perspective and show how the method can also be utilized in data sampling.
  • Learning with pair-input data
    • Pair-input data are encountered in many applications and have unique properties that must be taken into account. In this module, we first discuss what pair-input data are and what key characteristics they have, using drug-target interactions as an example. We then examine how dependencies emerge between pair-input observations and how those dependencies can be used to categorize them. Building on this categorization, we finally explore how to modify performance evaluation methods to obtain reliable estimates of out-of-sample prediction performance for pair-input data, mathematically formulating the required modifications to the selection of training observations.
  • Permutation testing
    • In this module, we learn how to determine suitable statistical tests for given machine learning tasks, using the well-known Wilcoxon test for classifier evaluation as an example. We also cover common pitfalls in model performance estimation: we see how it is possible to obtain a very good performance estimate even when there is no real pattern in the data, and how careless feature selection can cause optimistically biased performance estimates in cross-validation. Lastly, we go through the permutation test, which allows us to measure the statistical significance of a model performance estimate.
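To make the first module's idea concrete, here is a minimal sketch of spatially blocked fold assignment: points are grouped into grid cells by location, and whole cells (rather than individual points) are dealt into folds, so spatially autocorrelated neighbors never straddle a train/test split. The grid-based blocking, the `block_size` parameter, and the function name are illustrative assumptions, not the course's exact method.

```python
import math
import random

def spatial_block_folds(coords, n_folds=5, block_size=10.0):
    """Assign each point index to a fold via its spatial grid block,
    so nearby points land in the same fold together."""
    # Map each point to a grid block by flooring its coordinates.
    block_of = {}
    for i, (x, y) in enumerate(coords):
        block_of[i] = (math.floor(x / block_size), math.floor(y / block_size))
    # Shuffle the distinct blocks and deal them round-robin into folds.
    blocks = sorted(set(block_of.values()))
    rng = random.Random(0)
    rng.shuffle(blocks)
    fold_of_block = {b: k % n_folds for k, b in enumerate(blocks)}
    folds = [[] for _ in range(n_folds)]
    for i, b in block_of.items():
        folds[fold_of_block[b]].append(i)
    return folds
```

Because fold membership is decided at the block level, testing on one fold while training on the rest keeps a spatial buffer between the two sets, which is what removes the optimistic bias of a random split.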
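For the second module, the key observation is that a test pair can depend on the training set through either of its members. A common way to characterize this, sketched below for drug-target pairs, is to label each test pair by whether its drug and/or target also appear in training; the label names and function are illustrative assumptions, not the course's formal categorization.

```python
def categorize_pairs(train_pairs, test_pairs):
    """Label each test (drug, target) pair by which of its members
    also occur somewhere in the training pairs."""
    train_drugs = {d for d, t in train_pairs}
    train_targets = {t for d, t in train_pairs}
    labels = {}
    for d, t in test_pairs:
        if d in train_drugs and t in train_targets:
            labels[(d, t)] = "both_seen"      # easiest setting
        elif t in train_targets:
            labels[(d, t)] = "new_drug"       # drug unseen in training
        elif d in train_drugs:
            labels[(d, t)] = "new_target"     # target unseen in training
        else:
            labels[(d, t)] = "both_new"       # hardest setting
    return labels
```

Reporting performance separately per category (or restricting the training set so a chosen category is the only one present) is what turns a single misleading average into a reliable out-of-sample estimate for the setting you actually care about.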
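The permutation test from the third module can be sketched in a few lines: repeatedly shuffle the true labels to destroy any real pattern, recompute the performance metric each time, and ask how often the shuffled runs match or beat the observed score. Accuracy is used here as the metric and the add-one p-value smoothing is a common convention; both are assumptions for illustration.

```python
import random

def permutation_test(y_true, y_pred, n_perm=1000, seed=0):
    """Return (observed accuracy, permutation p-value): the p-value is
    the fraction of label shuffles whose accuracy >= the observed one."""
    def accuracy(a, b):
        return sum(x == y for x, y in zip(a, b)) / len(a)

    observed = accuracy(y_true, y_pred)
    rng = random.Random(seed)
    shuffled = list(y_true)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)  # break the label-prediction relationship
        if accuracy(shuffled, y_pred) >= observed:
            count += 1
    # Add-one smoothing keeps the estimated p-value strictly positive.
    return observed, (count + 1) / (n_perm + 1)
```

A small p-value says the observed score is unlikely under shuffled labels, i.e. the model has found a genuine pattern; a large one warns that an equally good score arises by chance, which is exactly the pitfall of evaluating on data with no real signal.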

Taught by

Jonne Pohjankukka and Asja Kamenica

