Overview

This course dives into how neural networks learn from data. You'll implement loss functions to measure prediction error, build intuition for the mechanics of gradient descent, master the backpropagation algorithm for computing gradients, and use an optimizer to update network weights.

Syllabus
- Unit 1: Mean Squared Error Loss
  - Fixing the Mean Squared Error (MSE) Loss Function in an R Neural Network
  - Implementing Mean Squared Error Loss with Loops in R
  - Mean Squared Error Loss with Vectorized R Operations
  - Handling Single Sample and Batch MSE in a Simple Neural Network
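The vectorized MSE that Unit 1 builds toward can be sketched as follows (the function name `mse_loss` is illustrative, not from the course materials):

```r
# Mean squared error: the average of squared differences between
# predictions and targets.
mse_loss <- function(y_pred, y_true) {
  mean((y_pred - y_true)^2)
}

# The same function handles a single sample or a whole batch,
# because mean() averages over all elements it is given.
mse_loss(c(1, 3), c(1, 1))  # (0 + 4) / 2 = 2
```

Vectorized arithmetic like `(y_pred - y_true)^2` replaces the explicit loop version and is both shorter and faster in R.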
- Unit 2: Gradient Descent Fundamentals
  - Implementing 1D Gradient Descent in R
  - Experimenting with Learning Rate in Gradient Descent (R)
  - Fixing Gradient Descent for a Quadratic Function in R
  - Early Stopping in Gradient Descent
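A minimal sketch of the 1D gradient descent loop from Unit 2, assuming the quadratic objective f(x) = (x − 3)² (the specific function and names are illustrative):

```r
# Gradient of f(x) = (x - 3)^2 is 2 * (x - 3).
grad_f <- function(x) 2 * (x - 3)

# Repeatedly step against the gradient, scaled by the learning rate.
gradient_descent_1d <- function(x0, lr = 0.1, n_steps = 100) {
  x <- x0
  for (i in seq_len(n_steps)) {
    x <- x - lr * grad_f(x)
  }
  x
}

gradient_descent_1d(0)  # converges toward the minimum at x = 3
```

Experimenting with `lr` shows the trade-off the unit explores: small values converge slowly, while for this function any `lr` above 1 makes the iterates diverge.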
- Unit 3: Backpropagation in Neural Networks
  - Correct Use of Activation Derivative in Backward Pass
  - Fixing Backpropagation Gradient Calculation in DenseLayer (R)
  - Implementing the Backward Pass for a Dense Layer in R
  - Check Gradient Shapes in Dense Layer Backward Pass
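The dense-layer backward pass in Unit 3 can be sketched like this, assuming a layer computing `Y = X %*% W + b` (the function name `dense_backward` is illustrative):

```r
# Backward pass for a dense layer Y = X %*% W + b.
# X: (batch, in), W: (in, out), grad_out: dL/dY with shape (batch, out).
# If the layer has an activation, grad_out must already be multiplied
# elementwise by the activation's derivative.
dense_backward <- function(X, W, grad_out) {
  list(
    dW = t(X) %*% grad_out,   # (in, out): same shape as W
    db = colSums(grad_out),   # (out): one bias gradient per output unit
    dX = grad_out %*% t(W)    # (batch, in): propagated to the previous layer
  )
}
```

A quick sanity check, as in the shape-checking lesson, is that `dW` matches `dim(W)` and `dX` matches `dim(X)`; shape mismatches here are the most common backpropagation bug.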
- Unit 4: Backpropagation in Multilayer Networks
  - Implementing the Derivative of MSE Loss in R
  - Implementing Backpropagation Through All Layers of an MLP in R
  - Manual Backpropagation in a Simple MLP (R Implementation)
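Unit 4 starts the backward pass from the loss itself. A sketch of the MSE derivative that seeds backpropagation (the name `mse_grad` is illustrative):

```r
# Derivative of MSE with respect to the predictions:
# dL/dy_pred = 2 * (y_pred - y_true) / n, where n is the number of elements.
mse_grad <- function(y_pred, y_true) {
  2 * (y_pred - y_true) / length(y_pred)
}

mse_grad(c(2, 2), c(1, 3))  # c(1, -1)
```

This gradient is then passed through each layer's backward function in reverse layer order, which is the chain rule applied layer by layer across the MLP.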
- Unit 5: Training Neural Networks with SGD
  - Implementing SGD Parameter Updates in an R Neural Network
  - Extracting Mini-Batches from Shuffled Data in R
  - Fixing the SGD Optimizer Update Rule in an MLP (R)
  - Single Training Step for a Neural Network in R
  - Implementing a Mini-Batch Training Loop for an MLP in R
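The two core pieces of Unit 5, the SGD update rule and mini-batch extraction, can be sketched as follows (function names are illustrative):

```r
# SGD update rule: move each parameter against its gradient,
# scaled by the learning rate.
sgd_update <- function(param, grad, lr) {
  param - lr * grad
}

# Split shuffled row indices into mini-batches of (at most) batch_size.
make_batches <- function(n, batch_size) {
  idx <- sample(n)  # shuffle once per epoch
  split(idx, ceiling(seq_along(idx) / batch_size))
}
```

A training loop then iterates over epochs and, within each epoch, over the batches from `make_batches`, running a forward pass, backpropagation, and `sgd_update` on every parameter for each batch.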