Supervised machine learning and performance evaluation

EIT Digital via Coursera

Overview

This course is designed for data scientists, machine learning practitioners, and graduate students who want to understand how to evaluate and select models reliably in real-world applications. It is particularly relevant for learners working with predictive models who need to ensure their results generalise beyond the training data. You’ll learn the statistical foundations behind performance estimation and gain hands-on experience with essential techniques such as cross-validation, model selection, and nested resampling. By the end of the course, you’ll be equipped to design robust evaluation workflows and make confident, evidence-based modeling decisions.

Syllabus

  • Performance evaluation on data
    • In the first module, the basic concepts of evaluating the prediction performance of artificial intelligence-based systems on a sample of data are introduced. It is explained on an intuitive level why, and under what conditions, performance evaluation on a sample can be expected to work in the first place. First, the fundamental assumption that the sample is independent and identically distributed (i.i.d.) is presented. Under this assumption, it is described how the performance estimate on the sample converges to the true performance as a function of the sample size, a phenomenon referred to as the law of large numbers. Finally, it is briefly demonstrated how the speed of this convergence depends on the properties of the data distribution, and in which cases it can become impractically slow or fail to occur at all.
  • Basics of supervised machine learning
    • In this module, supervised machine learning methods are interpreted simply as abstract mappings from a sample of data to a predictive hypothesis. As an important special case that covers a surprisingly large portion of learning algorithms, we consider methods that select an optimal hypothesis based on a given measure of how well hypotheses fit a sample of data. The measure can be a straightforward measure of the prediction performance of a hypothesis on the sample, such as classification accuracy or regression error. However, it can also be something more complicated and seemingly more distant from the learning objective, such as a function measuring the distance of Voronoi partitions from the sample points, as is the case with the nearest neighbor methods we consider as example methods. Furthermore, the resampling- and cross-validation-based model selection methods considered in the third module are also instances of this framework. The law of large numbers is revisited, and the so-called bounded differences conditions under which it holds for arbitrary performance measures on a sample of data are considered.
  • Performance evaluation with cross-validation
    • In this module, resampling techniques for performance evaluation, such as splitting the sample into training and test parts as well as the averaged variant known as cross-validation, are considered. Moreover, methods for model selection based on these resampling approaches, including the selection of hyperparameter values, feature subsets, or learning algorithms, are considered. It is observed that this type of model selection is itself a learning algorithm in the same sense as the methods of the second module that select the optimal hypothesis according to some performance measure, the performance measure in this case being the resampling method. Accordingly, to measure the expected prediction performance of hypotheses obtained by such a model selector, one has to apply resampling techniques to the model selector itself, resulting in nested resampling methods that include splitting the sample into training, validation, and test parts as well as nested cross-validation.
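
The first module's central claim, that a performance estimate on an i.i.d. sample converges to the true performance as the sample grows, can be illustrated with a small simulation. This is a sketch for illustration only: the 80% "true accuracy" and the Bernoulli model of correct/incorrect predictions are assumptions, not course material.

```python
import numpy as np

rng = np.random.default_rng(0)
true_accuracy = 0.8  # assumed true performance of a hypothetical classifier

# Each prediction is modeled as an i.i.d. Bernoulli trial: correct with
# probability 0.8. By the law of large numbers, the empirical accuracy on
# the sample converges to the true accuracy as the sample size grows.
for n in (10, 100, 10_000, 1_000_000):
    correct = rng.random(n) < true_accuracy
    print(n, round(float(correct.mean()), 4))
```

Running this, the estimates at small `n` can be well off the true value, while the million-sample estimate sits very close to 0.8, which is exactly the convergence behaviour the module analyses.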
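The second module's picture of a learner as an abstract mapping from a sample to a hypothesis can be made concrete with a minimal sketch. The one-dimensional data and the class of threshold hypotheses below are assumptions for illustration: the "learning algorithm" simply scans candidate hypotheses and returns the one that best fits the sample under a chosen performance measure (here, classification accuracy).

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy 1-D sample: negatives centered at -1, positives centered at +1.
x = np.concatenate([rng.normal(-1, 1, 200), rng.normal(1, 1, 200)])
y = np.concatenate([np.zeros(200), np.ones(200)])

# The learner maps the sample to a hypothesis of the form "predict 1 if
# x > t" by choosing the threshold t with the best fit (accuracy) on the
# sample — a direct instance of selecting an optimal hypothesis under a
# performance measure.
thresholds = np.linspace(-3, 3, 121)
accuracies = [float(np.mean((x > t) == y)) for t in thresholds]
best_t = thresholds[int(np.argmax(accuracies))]
print(round(float(best_t), 2))  # near the optimal boundary between the classes
```

The same template covers the module's other examples: swapping accuracy for a regression error, or for a more indirect fit measure such as the Voronoi-based one used by nearest neighbor methods, changes only the measure being optimised, not the shape of the mapping.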
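The third module's nested resampling idea can be sketched as follows; the toy data, the 1-D k-nearest-neighbour learner, and the fold counts are assumptions for illustration. An inner cross-validation acts as the model selector (choosing the hyperparameter k), and an outer cross-validation estimates the performance of that whole selection procedure on data it never used.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy 1-D sample with two overlapping classes, shuffled for fold splitting.
x = np.concatenate([rng.normal(-1, 1, 100), rng.normal(1, 1, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])
perm = rng.permutation(len(x))
x, y = x[perm], y[perm]

def knn_predict(x_train, y_train, x_test, k):
    """Plain k-nearest-neighbour majority vote in one dimension."""
    dist = np.abs(x_test[:, None] - x_train[None, :])
    nearest = np.argsort(dist, axis=1)[:, :k]
    return (y_train[nearest].mean(axis=1) > 0.5).astype(float)

def cv_accuracy(x_s, y_s, k, folds=5):
    """Cross-validated accuracy of k-NN on a sample (the inner loop)."""
    accs = []
    for test_idx in np.array_split(np.arange(len(x_s)), folds):
        train_idx = np.setdiff1d(np.arange(len(x_s)), test_idx)
        pred = knn_predict(x_s[train_idx], y_s[train_idx], x_s[test_idx], k)
        accs.append(np.mean(pred == y_s[test_idx]))
    return float(np.mean(accs))

# Outer loop: treat "select k by inner CV, then train" as one learning
# algorithm, and estimate its performance on held-out outer test folds.
ks = (1, 3, 5)
outer_accs = []
for test_idx in np.array_split(np.arange(len(x)), 5):
    train_idx = np.setdiff1d(np.arange(len(x)), test_idx)
    best_k = max(ks, key=lambda k: cv_accuracy(x[train_idx], y[train_idx], k))
    pred = knn_predict(x[train_idx], y[train_idx], x[test_idx], best_k)
    outer_accs.append(float(np.mean(pred == y[test_idx])))
print(round(float(np.mean(outer_accs)), 3))
```

The key point the module makes is visible in the structure: the outer test folds never influence the choice of k, so the outer average estimates the performance of the model selector itself rather than of any single fixed k.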

Taught by

Jonne Pohjankukka and Asja Kamenica
