
YouTube

Evaluation Metrics, Overfitting and Underfitting in Machine Learning Models - Episode 3.5

Donato Capitella via YouTube

Overview

Learn essential model evaluation techniques and common training challenges in this 16-minute educational video that explores metrics like accuracy, precision, and recall for assessing model performance. Dive into the concepts of overfitting and underfitting, understanding their impact on model training, and discover practical solutions including regularization techniques (L1/L2, Dropout) and data augmentation strategies. Follow along with a hands-on bonus lab demonstrating evaluation methods using the MNIST dataset and learn to construct confusion matrices for better model assessment. Download accompanying mindmaps and reference materials to reinforce understanding of these crucial machine learning concepts.
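As a taste of the metrics the video covers, here is a minimal sketch (not taken from the video itself) that tallies the four confusion-matrix cells for a binary classifier and derives accuracy, precision, and recall from them; the labels and predictions are made-up illustrative data:

```python
# Hypothetical binary labels and predictions (illustrative only).
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# Tally the confusion-matrix cells for the positive class (1).
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives

accuracy = (tp + tn) / len(y_true)  # fraction of all predictions that are correct
precision = tp / (tp + fp)          # of predicted positives, how many were right
recall = tp / (tp + fn)             # of actual positives, how many were found
```

The same cell counts, arranged in a 2x2 grid, form the confusion matrix built in the bonus lab, just generalized there to MNIST's ten digit classes.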

Syllabus

- Introduction
- Evaluating Models
- Accuracy, Precision and Recall
- Confusion Matrix
- Overfitting and Underfitting
- Regularization L1/L2, Dropout
- Data Augmentation and Improving Models
- Bonus Lab: Evaluating the MNIST Model
- Bonus Lab: Building the Confusion Matrix
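To illustrate the dropout regularization mentioned in the syllabus, the sketch below shows a plain-Python version of "inverted" dropout, the common formulation in which surviving activations are scaled up during training so no rescaling is needed at inference time; the helper function is hypothetical and not code from the video:

```python
import random

def dropout(activations, p=0.5, training=True):
    """Inverted dropout: during training, zero each activation with
    probability p and scale survivors by 1/(1-p) so the expected value
    of each unit is unchanged. At inference time, pass inputs through."""
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if random.random() < keep else 0.0
            for a in activations]
```

By randomly silencing units, the network cannot rely on any single activation, which discourages the kind of co-adaptation that contributes to overfitting.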

Taught by

Donato Capitella

