
The MLP Architecture: Activations & Initialization

via CodeSignal

Overview

This course builds upon single layers to construct a complete Multi-Layer Perceptron (MLP). You'll learn to stack layers, explore different activation functions like ReLU and Softmax, and understand the importance of weight initialization for effective training.
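To make the idea of "stacking layers" concrete, here is a minimal sketch of what an MLP built from dense layers might look like. The class and attribute names (`DenseLayer`, `MLP`, `weights`, `biases`) are assumptions for illustration, not the course's actual code, and NumPy is used for the matrix math:

```python
import numpy as np

class DenseLayer:
    """One fully connected layer: output = inputs @ weights + biases."""
    def __init__(self, n_inputs, n_neurons):
        # Small random weights; better initialization strategies are the topic of Unit 4.
        self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
        self.biases = np.zeros((1, n_neurons))

    def forward(self, inputs):
        return inputs @ self.weights + self.biases

class MLP:
    """Stack of dense layers; layer_sizes like [4, 8, 3] gives layers 4->8 and 8->3."""
    def __init__(self, layer_sizes):
        self.layers = [DenseLayer(n_in, n_out)
                       for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

    def forward(self, x):
        # ReLU between hidden layers; the final layer returns raw outputs,
        # to be paired with Softmax or a linear activation (Unit 3).
        for layer in self.layers[:-1]:
            x = np.maximum(0, layer.forward(x))
        return self.layers[-1].forward(x)

x = np.random.randn(5, 4)      # batch of 5 samples, 4 features each
mlp = MLP([4, 8, 3])
out = mlp.forward(x)           # shape (5, 3)
```

Note how each layer's input size must match the previous layer's output size — the "aligning layers" exercise in Unit 1.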

Syllabus

  • Unit 1: Stacking Layers: Building a Multi-Layer Perceptron (MLP)
    • Passing Data Through the MLP
    • Aligning Layers in Your Neural Network
    • Build Your Own MLP Class
    • Deepening Your Neural Network Design
    • Build a Complete Neural Network
  • Unit 2: The ReLU Activation Function: Powering Modern Neural Networks
    • Debugging the ReLU Activation Function
    • Adding ReLU Power to DenseLayer
    • Build the ReLU Activation Function
  • Unit 3: Output Layer Activations: Softmax and Linear
    • Making Softmax Work for Any Input
    • Verifying Softmax Output Validity
    • Build the Linear Activation Function
    • Matching Activations to Layers
    • Comparing Classification and Regression Outputs
  • Unit 4: Weight Initialization Strategies
    • Random Weight Initialization in Action
    • Fixing Uniform Initialization for Layers
    • Mastering Xavier Initialization
    • He Uniform Initialization in Practice
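Unit 2's ReLU activation can be sketched in a few lines. This is a generic NumPy version, not the course's own implementation:

```python
import numpy as np

def relu(x):
    # Element-wise max(0, x): negative inputs become 0, positives pass through.
    return np.maximum(0, x)

values = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
activated = relu(values)  # negatives zeroed, positives unchanged
```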
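Unit 3's "Making Softmax Work for Any Input" likely refers to the standard numerical-stability trick: subtracting the row-wise maximum before exponentiating so that `exp` never overflows. A sketch, with the function name assumed:

```python
import numpy as np

def softmax(x):
    # Shift by the row-wise max; this leaves the result unchanged
    # mathematically but prevents overflow for large logits.
    shifted = x - np.max(x, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=-1, keepdims=True)

logits = np.array([[1.0, 2.0, 3.0],
                   [1000.0, 1001.0, 1002.0]])  # huge values, no overflow
probs = softmax(logits)
# Each row is a valid probability distribution: non-negative, sums to 1.
```

Verifying that every row sums to 1 and contains no NaN/inf values is exactly the "Verifying Softmax Output Validity" check described in the unit.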
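Unit 4 covers Xavier and He initialization. The standard uniform variants can be sketched as follows (function names are assumptions; the formulas are the widely used Glorot and He limits):

```python
import numpy as np

def xavier_uniform(n_in, n_out):
    # Xavier/Glorot uniform: limit = sqrt(6 / (n_in + n_out)),
    # commonly paired with tanh/sigmoid activations.
    limit = np.sqrt(6.0 / (n_in + n_out))
    return np.random.uniform(-limit, limit, size=(n_in, n_out))

def he_uniform(n_in, n_out):
    # He uniform: limit = sqrt(6 / n_in), designed for ReLU layers,
    # which zero out roughly half of their inputs.
    limit = np.sqrt(6.0 / n_in)
    return np.random.uniform(-limit, limit, size=(n_in, n_out))

w_xavier = xavier_uniform(4, 8)  # shape (4, 8), values in [-sqrt(0.5), sqrt(0.5)]
w_he = he_uniform(4, 8)          # shape (4, 8), values in [-sqrt(1.5), sqrt(1.5)]
```

Both keep the scale of activations roughly constant from layer to layer, which is why initialization matters for effective training.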
