Overview

This course builds on single layers to construct a complete Multi-Layer Perceptron (MLP). You'll learn to stack layers, explore activation functions such as ReLU and Softmax, and understand why weight initialization matters for effective training.

Syllabus
- Unit 1: The MLP Architecture: Activations & Initialization
  - Implementing Forward Propagation in a Multi-Layer Perceptron
  - Fix Layer Dimensions in a Multi-Layer Perceptron
  - Building a Multi-Layer Perceptron (MLP) Class from Scratch
  - Adding a Fourth Layer to a Multi-Layer Perceptron
  - Building a Multi-Layer Perceptron from Scratch
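Unit 1's through-line is the forward pass: each layer computes weights · inputs + biases, applies an activation, and hands its output to the next layer. A minimal JavaScript sketch of that idea; the `dense` and `forward` names and the layer-object shape are illustrative, not the course's actual API:

```js
// One dense layer: weights · inputs + biases, row by row.
// weights: [outSize][inSize], biases: [outSize], inputs: [inSize]
function dense(weights, biases, inputs) {
  return weights.map((row, i) =>
    row.reduce((sum, w, j) => sum + w * inputs[j], biases[i])
  );
}

// Chain layers: each layer's activated output feeds the next.
function forward(layers, input) {
  return layers.reduce(
    (x, layer) => layer.activation(dense(layer.weights, layer.biases, x)),
    input
  );
}

const relu = v => v.map(x => Math.max(0, x));
const net = [
  { weights: [[0.1, -0.2], [0.4, 0.3]], biases: [0.1, 0.0], activation: relu },
  { weights: [[0.5, -0.1]], biases: [0.2], activation: relu },
];
console.log(forward(net, [1.0, 2.0])); // 2 inputs -> 2 hidden -> 1 output
```

Note the dimension constraint the "Fix Layer Dimensions" exercise hinges on: each layer's input size must equal the previous layer's output size.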
- Unit 2: ReLU Activation and Flexible Activation Functions in MLPs
  - Fix the ReLU Activation Function for Neural Networks
  - Implementing ReLU Activation Function in Neural Network
  - Implement the ReLU Activation Function for Neural Networks
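ReLU itself is a one-liner: clamp negatives to zero, pass positives through. A sketch (the course exercises likely wrap it differently); the unit's "flexible activation" idea is the same one used in the forward-pass sketch above, where each layer carries its activation as a plain function value:

```js
// ReLU applied element-wise to a layer's pre-activation vector:
// negative values are clamped to zero, positive values pass through.
function relu(values) {
  return values.map(x => Math.max(0, x));
}

console.log(relu([-2, -0.5, 0, 3])); // [0, 0, 0, 3]
```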
- Unit 3: Output Layer Activation Functions: Softmax and Linear in MLPs
  - Implement Numerically Stable Softmax Function
  - Verifying Softmax Outputs Sum to One
  - Implementing Linear Activation Function for Neural Network Regression
  - Debugging Neural Network Output Activations
  - Building Classification and Regression Neural Networks
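The two output activations in this unit serve different tasks: softmax turns logits into probabilities that sum to one (classification), while the linear identity leaves values unbounded (regression). The standard stability trick the "Numerically Stable Softmax" exercise targets is to subtract the maximum logit before exponentiating; the e^(-max) factor cancels in the ratio, so the result is unchanged but Math.exp can no longer overflow. A sketch:

```js
// Numerically stable softmax: shift logits by their max before
// exponentiating, then normalize so the outputs sum to one.
function softmax(logits) {
  const max = Math.max(...logits);
  const exps = logits.map(x => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

// Linear (identity) activation: used on regression output layers,
// where the network must emit unbounded real values.
function linear(values) {
  return values.slice();
}

const probs = softmax([1000, 1001, 1002]); // naive softmax would overflow here
console.log(probs.reduce((a, b) => a + b, 0)); // ~1, as the unit verifies
```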
- Unit 4: Weight Initialization Strategies for MLPs in JavaScript
  - Implement Random Scaled Weight Initialization for Neural Network
  - Fix the He Uniform Weight Initialization in Neural Network
  - Implementing Xavier Normal Initialization for Neural Networks
  - Implement He Uniform Weight Initialization for Neural Networks
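For reference, the two named schemes in this unit: He uniform samples weights uniformly from [-√(6/fanIn), √(6/fanIn)] and is the usual choice for ReLU layers, while Xavier (Glorot) normal samples from a Gaussian with variance 2/(fanIn + fanOut) and suits sigmoid/tanh layers. A JavaScript sketch of both; the Box-Muller transform is one assumed way to get Gaussian samples out of the uniform Math.random, not necessarily the course's approach:

```js
// He uniform: weights drawn uniformly from [-limit, limit] with
// limit = sqrt(6 / fanIn).
function heUniform(fanIn, fanOut) {
  const limit = Math.sqrt(6 / fanIn);
  return Array.from({ length: fanOut }, () =>
    Array.from({ length: fanIn }, () => (Math.random() * 2 - 1) * limit)
  );
}

// Xavier (Glorot) normal: weights drawn from N(0, 2 / (fanIn + fanOut)),
// with a Box-Muller transform supplying the Gaussian samples.
function xavierNormal(fanIn, fanOut) {
  const std = Math.sqrt(2 / (fanIn + fanOut));
  const gaussian = () => {
    const u = 1 - Math.random(); // shift to (0, 1] so Math.log(u) is finite
    const v = Math.random();
    return std * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
  };
  return Array.from({ length: fanOut }, () =>
    Array.from({ length: fanIn }, gaussian)
  );
}

const w = heUniform(64, 32); // 32 neurons, 64 inputs each
console.log(w.length, w[0].length); // 32 64
```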