Overview

This course builds on single layers to construct a complete Multi-Layer Perceptron (MLP). You'll learn to stack layers, explore activation functions such as ReLU and Softmax, and understand why weight initialization matters for effective training.

Syllabus
- Unit 1: Stacking Layers: Building a Multi-Layer Perceptron (MLP)
  - Passing Data Through the MLP
  - Aligning Layers in Your Neural Network
  - Build Your Own MLP Class
  - Deepening Your Neural Network Design
  - Build a Complete Neural Network
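Unit 1's central idea, stacking layers so each layer's output feeds the next layer's input, can be sketched as follows. This is a minimal illustration, not the course's actual implementation; the `MLP` class and the constructor signature of `DenseLayer` are assumptions for this example.

```python
import numpy as np

class DenseLayer:
    """One fully connected layer: output = inputs @ weights + biases."""
    def __init__(self, n_inputs, n_neurons):
        self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
        self.biases = np.zeros((1, n_neurons))

    def forward(self, inputs):
        return inputs @ self.weights + self.biases

class MLP:
    """Stack layers; adjacent sizes must align (layer i's outputs = layer i+1's inputs)."""
    def __init__(self, layer_sizes):
        self.layers = [DenseLayer(n_in, n_out)
                       for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)  # each output becomes the next input
        return x

x = np.random.randn(4, 3)    # batch of 4 samples with 3 features each
mlp = MLP([3, 8, 2])         # 3 -> 8 -> 2
print(mlp.forward(x).shape)  # (4, 2)
```

Note how "aligning layers" falls out of the `zip`: a size mismatch between consecutive layers would raise a shape error in the matrix multiply.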
- Unit 2: The ReLU Activation Function: Powering Modern Neural Networks
  - Debugging the ReLU Activation Function
  - Adding ReLU Power to DenseLayer
  - Build the ReLU Activation Function
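For Unit 2, the ReLU activation itself is a one-liner: zero out negatives, pass positives through unchanged. A minimal sketch (the function names here are illustrative, not the course's API):

```python
import numpy as np

def relu(x):
    # ReLU(x) = max(0, x), applied elementwise
    return np.maximum(0.0, x)

def relu_grad(x):
    # Derivative used in backprop: 1 where x > 0, else 0
    return (x > 0).astype(x.dtype)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))       # zeros for the non-positive entries, 1.5 unchanged
print(relu_grad(x))  # 0 everywhere except the last entry
```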
- Unit 3: Output Layer Activations: Softmax and Linear
  - Making Softmax Work for Any Input
  - Verifying Softmax Output Validity
  - Build the Linear Activation Function
  - Matching Activations to Layers
  - Comparing Classification and Regression Outputs
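Unit 3's two output activations can be sketched as below. Subtracting the row maximum before exponentiating is the standard trick that makes softmax "work for any input" (it prevents overflow without changing the result), and a valid softmax output is non-negative and sums to 1 per row. The function names are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    # Shift by the row max so np.exp never overflows; the result is unchanged
    # because softmax is invariant to adding a constant to every logit.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def linear(z):
    # Identity activation: regression outputs are left unbounded
    return z

logits = np.array([[2.0, 1.0, 1000.0]])  # huge logit would overflow naive exp
p = softmax(logits)
print(np.all(p >= 0), np.isclose(p.sum(), 1.0))  # validity check: True True
```

Softmax pairs with a classification output layer (a probability per class); the linear activation pairs with a regression output layer (an unbounded real value).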
- Unit 4: Weight Initialization Strategies
  - Random Weight Initialization in Action
  - Fixing Uniform Initialization for Layers
  - Mastering Xavier Initialization
  - He Uniform Initialization in Practice
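The two named strategies in Unit 4 can be sketched as follows. Xavier (Glorot) uniform draws from ±sqrt(6 / (fan_in + fan_out)) and suits tanh/sigmoid layers; He uniform draws from ±sqrt(6 / fan_in) and suits ReLU layers. Function names and signatures here are illustrative, not the course's API.

```python
import numpy as np

def xavier_uniform(n_in, n_out, rng=None):
    # Xavier/Glorot uniform: limit = sqrt(6 / (fan_in + fan_out))
    rng = rng or np.random.default_rng()
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

def he_uniform(n_in, n_out, rng=None):
    # He uniform: limit = sqrt(6 / fan_in), scaled for ReLU's zeroed half
    rng = rng or np.random.default_rng()
    limit = np.sqrt(6.0 / n_in)
    return rng.uniform(-limit, limit, size=(n_in, n_out))

w = he_uniform(256, 128)
print(w.shape)                                  # (256, 128)
print(np.abs(w).max() <= np.sqrt(6.0 / 256))    # all draws inside the bound
```

Both scale the spread of initial weights to the layer's fan-in (and fan-out, for Xavier), keeping activation variance roughly constant from layer to layer.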