
EIT Digital

Performance measures and validation methods

EIT Digital via Coursera

Overview

This course is ideal for data scientists, machine learning practitioners, researchers, and graduate students who want to move beyond basic metrics and develop the statistical intuition required for reliable model evaluation in production and research environments.

Understanding how to reliably evaluate machine learning models is essential for building systems that perform well in real-world settings. In this course, you’ll learn modern techniques for assessing classification performance using Receiver Operating Characteristic (ROC) analysis and interpreting key metrics such as Area Under the Curve (AUC) and the Concordance Index (C-index). You’ll also explore a practical framework for supervised learning, focusing on how algorithms select optimal models based on performance measures and how statistical principles support reliable decision-making.

The course concludes with a real-world case study using biosignal data, where you’ll apply advanced cross-validation strategies to handle datasets with repeated measurements and ensure unbiased performance estimates. By the end of the course, you’ll be able to evaluate models rigorously, choose appropriate validation methods, and design machine learning workflows that generalize to new data.

Syllabus

  • Classification performance evaluation using the receiver operating characteristic
    • In the first module, we describe how the classification performance of a machine learning model can be estimated using the receiver operating characteristic (ROC). We explain how the ROC is constructed by evaluating the model's classification performance at multiple decision thresholds, and why it is a better measure of classification performance than simple classification accuracy or misclassification rate. Furthermore, we discuss the closely related area under the curve (AUC) and the equivalent concordance index (C-index), which summarize a classifier's performance over the whole ROC curve.
  • Case study: Metal ion concentration prediction
    • In this module, an interpretation of supervised machine learning methods simply as abstract mappings from a sample of data to a predictive hypothesis is presented. As an important special case that covers a surprisingly large portion of learning algorithms, we consider methods that select an optimal hypothesis based on a given measure of how well hypotheses fit a sample of data. The measure can be a straightforward measure of the prediction performance of a hypothesis on the sample, such as classification accuracy or regression error. However, it can also be something more complicated and seemingly more distant from the learning objective, such as a function measuring the distance of Voronoi partitions from the sample points, as is the case with the nearest neighbor methods we consider as example methods. Furthermore, the resampling- and cross-validation-based model selection methods considered in the third module are also examples of this framework. The law of large numbers is revisited, and the so-called bounded-differences conditions under which it holds for arbitrary performance measures on a sample of data are considered.
  • Case study: Pain assessment from biosignal data
    • In this module, a case study on pain assessment from biosignal data is considered, in which cross-validation based model performance estimation is conducted with non-independent data sample points. The independence assumption is violated when the data set consists of repeated measurements from the same subject. Because of these independence violations, standard leave-one-out cross-validation cannot be used, since it leads to biased performance estimates. Instead, with repeated-measurement data a leave-subject-out cross-validation method is utilized, which answers the statistical question of how well the model estimates the experienced pain of new patients not seen in the model training phase.
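To make the first module's ideas concrete, here is a minimal pure-Python sketch of ROC construction and of the AUC computed as a concordance index (the C-index equivalence the module mentions). The function names and example data are illustrative, not from the course materials.

```python
# ROC: sweep the decision threshold and record (FPR, TPR) at each setting.
def roc_points(labels, scores):
    """(false positive rate, true positive rate) at every decision threshold."""
    thresholds = sorted(set(scores), reverse=True)
    P = sum(labels)                 # number of positives
    N = len(labels) - P             # number of negatives
    points = [(0.0, 0.0)]           # strictest threshold: predict nothing positive
    for t in thresholds:
        tp = sum(1 for y, s in zip(labels, scores) if s >= t and y == 1)
        fp = sum(1 for y, s in zip(labels, scores) if s >= t and y == 0)
        points.append((fp / N, tp / P))
    return points

# AUC as the C-index: the fraction of (positive, negative) pairs in which
# the positive example receives the higher score; ties count as 1/2.
def c_index(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    concordant = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                concordant += 1.0
            elif p == n:
                concordant += 0.5
    return concordant / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.4, 0.5, 0.3, 0.6, 0.2]
print(c_index(labels, scores))  # 8 of 9 pos/neg pairs are concordant, i.e. 8/9
```

Because the C-index averages over all positive/negative pairs, it matches the area under the ROC curve without ever fixing a single threshold, which is exactly why it is a more informative summary than accuracy at one cut-off.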
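The second module's framework, selecting the hypothesis that best fits the sample under some performance measure, can be sketched with a toy hypothesis class of one-dimensional threshold classifiers scored by training accuracy. This is an illustrative sketch, not code from the course.

```python
# Hypothesis class: h_t(x) = 1 if x >= t else 0, one hypothesis per threshold t.
# Fit measure: plain classification accuracy on the training sample.
def fit_threshold_classifier(xs, ys):
    """Return the threshold maximizing training accuracy, plus that accuracy."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(xs)):       # candidate hypotheses: thresholds at the data
        acc = sum(1 for x, y in zip(xs, ys) if (x >= t) == (y == 1)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

xs = [0.1, 0.3, 0.4, 0.8, 0.9, 1.2]
ys = [0, 0, 0, 1, 1, 1]
t, acc = fit_threshold_classifier(xs, ys)
print(t, acc)  # the sample is perfectly separable at t = 0.8
```

Swapping the accuracy expression for a regression error, or for a more indirect criterion such as the Voronoi-based measure the module mentions for nearest neighbor methods, changes the measure but not the overall "map a sample to the best-fitting hypothesis" structure.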
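The leave-subject-out splitting scheme from the third module can be sketched as follows: every fold holds out all measurements from one subject, so repeated measurements from the same person never appear in both the training and test sets. The function name and subject IDs below are illustrative assumptions.

```python
# Leave-subject-out cross-validation: one fold per distinct subject, with the
# held-out fold containing every repeated measurement from that subject.
def leave_subject_out_splits(subject_ids):
    """Yield (train_indices, test_indices) pairs, one fold per subject."""
    for s in sorted(set(subject_ids)):
        test = [i for i, sid in enumerate(subject_ids) if sid == s]
        train = [i for i, sid in enumerate(subject_ids) if sid != s]
        yield train, test

# Six measurements from three subjects (repeated measurements per subject).
subject_ids = ["p1", "p1", "p2", "p2", "p2", "p3"]
for train, test in leave_subject_out_splits(subject_ids):
    print(train, test)
```

Standard leave-one-out would instead hold out a single measurement while the same subject's other measurements remain in the training set, which is exactly the leakage that biases the performance estimate on repeated-measurement data.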

Taught by

Jonne Pohjankukka and Asja Kamenica

