Improving Neural Networks with PyTorch

via CodeSignal

Overview

This course walks learners through diagnosing and improving an underperforming neural network in PyTorch, using core deep learning techniques: dropout, early stopping, and batch normalization.
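Unit 1 of the syllabus below centers on a train/evaluate loop that tracks validation loss to detect overfitting. As a rough sketch of that pattern (the toy data, model sizes, and hyperparameters here are illustrative assumptions, not the course's exact code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy data: binary classification on random features.
X_train, y_train = torch.randn(64, 10), torch.randint(0, 2, (64,))
X_val, y_val = torch.randn(32, 10), torch.randint(0, 2, (32,))

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate matters (Unit 1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    optimizer.step()

    model.eval()  # evaluation during training: switch off training-only behavior
    with torch.no_grad():
        val_loss = loss_fn(model(X_val), y_val).item()
    # A train loss that keeps falling while val_loss rises signals overfitting.
    print(f"epoch {epoch}: train={loss.item():.4f} val={val_loss:.4f}")
```

Comparing the two printed curves per epoch is the simplest overfitting check the later units build on.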

Syllabus

  • Unit 1: Training and Evaluating a Simple Neural Network in PyTorch
    • Learning Rate Impact on Training
    • Building a Neural Network from Scratch
    • Implementing Model Evaluation During Training
    • Detecting Overfitting in Neural Networks
  • Unit 2: Adding Dropout to Neural Networks in PyTorch
    • Adding Dropout to Prevent Overfitting
    • Parameterizing Dropout for Flexible Models
    • Fixing Dropout Mode for Stable Validation
    • Applying Varied Dropout in Deep Networks
  • Unit 3: Early Stopping in PyTorch: Preventing Overfitting During Training
    • Visualizing the Early Stopping Counter
    • Debugging Early Stopping Comparison Logic
    • Adding Minimum Delta for Early Stopping
    • Saving the Best Model with Checkpoints
  • Unit 4: Adding Batch Normalization to Neural Networks in PyTorch
    • Adding Batch Normalization to MLP
    • Fixing Batch Normalization Dimension Mismatch
    • Batch Normalization in Deeper Networks
    • Comparing Models With and Without BatchNorm
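Unit 2's lessons (adding dropout, parameterizing the rate, and fixing the train/eval mode) can be sketched roughly as follows; the class name and dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DropoutMLP(nn.Module):
    # dropout_rate is a constructor parameter, making the model flexible
    def __init__(self, in_dim=10, hidden=32, out_dim=2, dropout_rate=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p=dropout_rate),  # randomly zeroes activations, train mode only
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = DropoutMLP(dropout_rate=0.3)
x = torch.randn(4, 10)

# "Fixing dropout mode": model.eval() disables dropout, so validation
# outputs are deterministic; forgetting this makes metrics noisy.
model.eval()
with torch.no_grad():
    out1, out2 = model(x), model(x)
assert torch.equal(out1, out2)  # identical in eval mode
```

In deeper networks, a common variation is passing different dropout rates per layer, which the same constructor-parameter pattern supports.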
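Unit 3 revolves around an early-stopping counter with a minimum improvement threshold. A minimal sketch of that logic, assuming a patience counter and `min_delta` as described in the lesson titles (not CodeSignal's exact implementation):

```python
class EarlyStopper:
    """Stop when validation loss hasn't improved by at least min_delta
    for `patience` consecutive epochs."""

    def __init__(self, patience=3, min_delta=0.01):
        self.patience = patience
        self.min_delta = min_delta
        self.best_loss = float("inf")
        self.counter = 0

    def step(self, val_loss):
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss  # real improvement: reset the counter
            self.counter = 0           # (save a best-model checkpoint here)
        else:
            self.counter += 1          # improvement too small, or none
        return self.counter >= self.patience

stopper = EarlyStopper(patience=2, min_delta=0.05)
losses = [1.0, 0.8, 0.79, 0.78]  # tiny improvements below min_delta don't count
stops = [stopper.step(l) for l in losses]
# stops == [False, False, False, True]
```

The comparison direction (`<` against `best_loss - min_delta`) is exactly the kind of detail the "Debugging Early Stopping Comparison Logic" exercise targets: flipping it silently disables stopping.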
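For Unit 4, a minimal sketch of adding batch normalization to an MLP, with the layer widths chosen here purely for illustration:

```python
import torch
import torch.nn as nn

# BatchNorm1d must match the width of the layer it follows; passing the
# wrong size (e.g. nn.BatchNorm1d(10) after a Linear that outputs 32
# features) raises a runtime error — the "dimension mismatch" exercise.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.BatchNorm1d(32),  # normalizes the 32 hidden features across the batch
    nn.ReLU(),
    nn.Linear(32, 2),
)

x = torch.randn(8, 10)  # BatchNorm1d needs batch size > 1 in train mode
out = model(x)
```

In deeper networks the same `Linear → BatchNorm1d → ReLU` block is repeated per hidden layer, and comparing training curves with and without the BatchNorm layers shows its stabilizing effect.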
