Overview

This course builds on single layers to construct a complete Multi-Layer Perceptron (MLP). You'll learn to stack layers, explore activation functions such as ReLU and Softmax, and understand why weight initialization matters for effective training.

Syllabus
- Unit 1: The MLP Architecture: Activations & Initialization (see the forward-pass sketch below the syllabus)
  - Implementing Forward Propagation in a Multi-Layer Perceptron
  - Fixing Layer Dimensions in a Multi-Layer Perceptron
  - Building a Multi-Layer Perceptron Function in R
  - Expanding an MLP with an Additional Layer
  - Building a Multi-Layer Perceptron from Scratch in R
- Unit 2: ReLU Activation and Flexible Layer Design in R MLPs (see the ReLU sketch below)
  - Fixing the ReLU Activation Function for Matrix Inputs
  - Implementing ReLU Activation in Your Neural Network
  - Implementing the ReLU Activation Function in R
- Unit 3: Output Layer Activation Functions: Softmax and Linear in R MLPs (see the softmax sketch below)
  - Implementing Numerically Stable Softmax in R
  - Verifying Softmax Outputs as Valid Probability Distributions
  - Implementing the Linear Activation Function for Regression Tasks
  - Debugging Output Activation Functions in Neural Networks
  - Building Neural Networks with Classification and Regression Output Activations
- Unit 4: Weight Initialization Strategies for Neural Networks in R (see the initialization sketch below)
  - Implementing Random Scaled Weight Initialization for Neural Networks
  - Fixing He Uniform Weight Initialization for Neural Networks
  - Implementing Xavier Normal Weight Initialization for Neural Networks
  - Implementing He Uniform Weight Initialization in Neural Networks
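Unit 1 centers on forward propagation through stacked layers. Below is a minimal sketch of the idea in base R, assuming a two-layer network with hypothetical names (`mlp_forward`, `W1`, `b1`, `W2`, `b2`) and a sigmoid hidden activation; the course's own exercises may structure this differently.

```r
# Illustrative forward-pass sketch only; names and layer sizes are assumptions,
# not the course's exercise code.
sigmoid <- function(z) 1 / (1 + exp(-z))

mlp_forward <- function(X, W1, b1, W2, b2) {
  # X: n_samples x n_features input matrix
  # Hidden layer: affine transform followed by a sigmoid activation
  Z1 <- sweep(X %*% W1, 2, b1, "+")
  A1 <- sigmoid(Z1)
  # Output layer: affine transform; the output activation is applied by the caller
  sweep(A1 %*% W2, 2, b2, "+")
}

# Example: 4 samples, 3 features, 5 hidden units, 2 outputs
set.seed(42)
X  <- matrix(rnorm(4 * 3), nrow = 4)
W1 <- matrix(rnorm(3 * 5), nrow = 3); b1 <- rep(0, 5)
W2 <- matrix(rnorm(5 * 2), nrow = 5); b2 <- rep(0, 2)
mlp_forward(X, W1, b1, W2, b2)
```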
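Unit 2's lessons revolve around a ReLU that behaves correctly on matrix inputs. A minimal sketch, assuming base R only; the function name `relu` is illustrative, not necessarily what the exercises use.

```r
# ReLU that works element-wise on vectors and matrices (illustrative sketch)
relu <- function(x) {
  # pmax() keeps the dimensions of a matrix input, so the output
  # has the same shape as x with every negative entry clamped to 0
  pmax(x, 0)
}

# Example: a 2 x 3 matrix of pre-activations
Z <- matrix(c(-1.5, 0.2, 3.0, -0.7, 0.0, 1.1), nrow = 2)
relu(Z)
```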
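Unit 3 includes a numerically stable softmax. A minimal sketch, assuming each row of the input matrix holds one sample's logits; the name `softmax` is illustrative.

```r
# Numerically stable softmax over the rows of a logit matrix (illustrative sketch)
softmax <- function(logits) {
  # Subtracting each row's maximum before exp() prevents overflow;
  # softmax is unchanged by adding a constant to every entry of a row
  shifted <- sweep(logits, 1, apply(logits, 1, max), "-")
  exps <- exp(shifted)
  # Normalize each row so it forms a valid probability distribution
  sweep(exps, 1, rowSums(exps), "/")
}

# Example: large logits no longer overflow, and each row sums to 1
logits <- matrix(c(1000, 1001, 1002,
                   -5,   0,    5), nrow = 2, byrow = TRUE)
probs <- softmax(logits)
probs
rowSums(probs)
```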
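Unit 4 covers He uniform and Xavier normal initialization. A minimal sketch of the two formulas, assuming weight matrices shaped fan_in x fan_out; the function names are illustrative.

```r
# He uniform: U(-limit, limit) with limit = sqrt(6 / fan_in),
# commonly paired with ReLU layers (illustrative sketch)
he_uniform_init <- function(fan_in, fan_out) {
  limit <- sqrt(6 / fan_in)
  matrix(runif(fan_in * fan_out, min = -limit, max = limit),
         nrow = fan_in, ncol = fan_out)
}

# Xavier (Glorot) normal: N(0, sd^2) with sd = sqrt(2 / (fan_in + fan_out)),
# commonly paired with sigmoid/tanh layers (illustrative sketch)
xavier_normal_init <- function(fan_in, fan_out) {
  sd <- sqrt(2 / (fan_in + fan_out))
  matrix(rnorm(fan_in * fan_out, mean = 0, sd = sd),
         nrow = fan_in, ncol = fan_out)
}

# Example: initialize weights for a 3 -> 5 layer with each scheme
set.seed(1)
he_uniform_init(3, 5)
xavier_normal_init(3, 5)
```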