Overview

This course dives into how neural networks learn from data. You'll implement loss functions to measure prediction error, build intuition for the mechanics of gradient descent, master the backpropagation algorithm for computing gradients, and use an optimizer to update network weights.

Syllabus
- Unit 1: Measuring Error: Loss Functions (MSE)
  - Mastering the MSE Loss Function
  - MSE Loss the Manual Way
  - Vectorized MSE for Neural Networks
  - Comparing MSE for Single and Batch
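As a preview of the kind of code Unit 1 builds toward, here is a minimal NumPy sketch of a vectorized MSE; the function name and the sample values are illustrative, not the course's own exercises:

```python
import numpy as np

def mse(y_pred, y_true):
    """Mean squared error over all elements, vectorized with NumPy."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return np.mean((y_pred - y_true) ** 2)

# The same function handles a single prediction and a whole batch:
single = mse([2.5], [3.0])          # one sample -> 0.25
batch = mse([[2.5], [0.0], [2.0]],  # three samples, averaged together
            [[3.0], [-0.5], [2.0]])
```

Because `np.mean` averages over every element, no special-casing is needed to compare single-sample and batch losses.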
- Unit 2: Gradient Descent: The Path to Minimizing Loss
  - Taking the First Step Downhill
  - Exploring the Power of Learning Rate
  - Keep the Gradient Moving
  - Stopping Gradient Descent at the Right Time
  - Writing the Heart of Gradient Descent
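The ideas in Unit 2 (repeated downhill steps, a learning rate, and a stopping rule) can be sketched in a few lines of plain Python; the function signature and tolerance here are illustrative assumptions, not the course's reference solution:

```python
def gradient_descent(grad, x0, lr=0.1, tol=1e-6, max_steps=1000):
    """Minimize a 1-D function given its derivative `grad`.

    Stops when the update shrinks below `tol` or after `max_steps`.
    """
    x = x0
    for _ in range(max_steps):
        step = lr * grad(x)   # learning rate scales the downhill step
        x -= step
        if abs(step) < tol:   # stop once progress stalls
            break
    return x

# Minimize f(x) = (x - 3)^2, whose derivative is 2 * (x - 3):
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Raising `lr` speeds up convergence until it overshoots; the `tol` check is one common way of "stopping gradient descent at the right time."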
- Unit 3: Backpropagation: Calculating Gradients in a Layer
  - Activation Derivatives in Backpropagation
  - Mastering Weight Gradients in Backpropagation
  - Building Gradients for Neural Layers
  - Gradient Shape Check in Backpropagation
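Unit 3's pieces (activation derivative, weight gradients, and a shape check) fit together as in this sketch of one dense layer's backward pass; the sigmoid activation and the helper's name are assumptions for illustration:

```python
import numpy as np

# Backward pass through one dense layer whose forward pass was
# z = x @ W + b, followed by a = sigmoid(z).
def layer_backward(x, z, grad_a, W):
    sig = 1.0 / (1.0 + np.exp(-z))
    grad_z = grad_a * sig * (1.0 - sig)   # apply the activation derivative
    grad_W = x.T @ grad_z                 # gradient w.r.t. the weights
    grad_b = grad_z.sum(axis=0)           # gradient w.r.t. the biases
    grad_x = grad_z @ W.T                 # gradient passed to the previous layer
    return grad_W, grad_b, grad_x

# Shape check: each gradient must match the shape of what it differentiates.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))   # batch of 4 samples, 3 inputs each
W = rng.normal(size=(3, 2))   # 3 inputs -> 2 units
b = np.zeros(2)
z = x @ W + b
grad_W, grad_b, grad_x = layer_backward(x, z, np.ones((4, 2)), W)
assert grad_W.shape == W.shape and grad_b.shape == b.shape and grad_x.shape == x.shape
```

The final assertion is the shape check in miniature: a mismatch there almost always signals a transposed matrix somewhere in the chain rule.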
- Unit 4: Backpropagation Through the Multi-Layer Perceptron
  - Getting Gradients Going the Right Way
  - Backward Pass Through the Network
  - Build and Probe a Tiny MLP
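A tiny MLP of the kind Unit 4 works with can be built and probed like this; the two-layer architecture, sigmoid hidden activation, and finite-difference probe are illustrative choices, not the course's prescribed network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))                     # batch of 5 inputs
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # linear output layer
y = rng.normal(size=(5, 1))

# Forward pass, caching intermediates for the backward pass.
z1 = x @ W1 + b1
a1 = sigmoid(z1)
z2 = a1 @ W2 + b2
loss = np.mean((z2 - y) ** 2)

# Backward pass: gradients flow from the loss back through each layer.
grad_z2 = 2 * (z2 - y) / y.size          # d(MSE)/d(output)
grad_W2 = a1.T @ grad_z2
grad_b2 = grad_z2.sum(axis=0)
grad_a1 = grad_z2 @ W2.T                 # hand the gradient to the hidden layer
grad_z1 = grad_a1 * a1 * (1 - a1)        # sigmoid derivative
grad_W1 = x.T @ grad_z1
grad_b1 = grad_z1.sum(axis=0)

# Probe: compare one analytic gradient against a finite difference.
eps = 1e-5
W1_plus = W1.copy()
W1_plus[0, 0] += eps
loss_plus = np.mean((sigmoid(x @ W1_plus + b1) @ W2 + b2 - y) ** 2)
numeric = (loss_plus - loss) / eps
```

Probing a hand-written backward pass against finite differences like this is a standard way to catch sign and transpose errors before training.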
- Unit 5: Updating Weights with Stochastic Gradient Descent (SGD)
  - Making Neural Networks Learn
  - Mini Batch Magic for Neural Networks
  - Getting Gradient Steps Right
  - One Step of Neural Network Learning
  - Building the Neural Network Training Loop
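Putting the whole course together, a training loop of the shape Unit 5 culminates in might look like this; the single linear neuron, learning rate, batch size, and epoch count are all illustrative assumptions:

```python
import numpy as np

# Fit y = 2x + 1 with a single linear neuron using mini-batch SGD.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 1))
y = 2 * X + 1

w, b = 0.0, 0.0
lr, batch_size = 0.1, 16
for epoch in range(200):
    idx = rng.permutation(len(X))              # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        xb = X[idx[start:start + batch_size]]
        yb = y[idx[start:start + batch_size]]
        pred = xb * w + b                        # forward pass
        grad_w = np.mean(2 * (pred - yb) * xb)   # d(MSE)/dw on this mini-batch
        grad_b = np.mean(2 * (pred - yb))        # d(MSE)/db on this mini-batch
        w -= lr * grad_w                         # one SGD step per mini-batch
        b -= lr * grad_b
```

Each pass of the inner loop is "one step of neural network learning": a forward pass, a mini-batch gradient, and a weight update. After training, `w` and `b` land close to the true values 2 and 1.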