Overview
Learn techniques for optimizing deep learning models and implement them using Python.
Syllabus
Introduction
- Optimizing deep learning models
- What you should know
- Using the exercise files
- The importance of optimizing deep learning models
- The bias-variance trade-off
- Lasso and ridge regularization
- Applying L1 regularization to a deep learning model
- Applying L2 regularization to a deep learning model
- Elastic Net regularization
- Dropout regularization
- Applying dropout regularization to a deep learning model
- Common loss functions in deep learning
- Batch gradient descent
- Stochastic gradient descent (SGD)
- Mini-batch gradient descent
- Adaptive Gradient Algorithm (AdaGrad)
- Root Mean Square Propagation (RMSProp)
- Adaptive Delta (AdaDelta)
- Adaptive Moment Estimation (Adam)
- Parameters versus hyperparameters
- Key hyperparameters in deep learning
- Methods for hyperparameter tuning
- Defining a tunable deep learning model in Keras
- Using KerasTuner for hyperparameter tuning
- Batch normalization
- Applying batch normalization to a deep learning model
- Gradient clipping
- Applying gradient clipping to a deep learning model
- Early stopping and checkpointing
- Learning rate scheduling
- Training a deep learning model using callbacks
- Continuing to optimize deep learning models
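The syllabus above covers, among other topics, mini-batch gradient descent and L2 (ridge) regularization. As a rough taste of those two ideas only (this is not course material, and the course itself works in Keras rather than plain NumPy), here is a minimal sketch of mini-batch gradient descent on a linear model with an L2 penalty; the function name and parameters are illustrative, not from the course:

```python
import numpy as np

def train_linear_model(X, y, lr=0.1, l2=0.01, batch_size=32, epochs=100, seed=0):
    """Mini-batch gradient descent on a linear model, with L2 (ridge) regularization.

    Minimizes mean squared error plus l2 * ||w||^2 over the weights.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        # Shuffle once per epoch, then walk through the data in mini-batches.
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            err = Xb @ w + b - yb
            # Gradient of the MSE loss plus the L2 penalty term (2 * l2 * w).
            grad_w = 2 * Xb.T @ err / len(batch) + 2 * l2 * w
            grad_b = 2 * err.mean()
            w -= lr * grad_w
            b -= lr * grad_b
    return w, b
```

With `l2=0.0` this recovers plain mini-batch gradient descent; a larger `l2` shrinks the learned weights toward zero, which is the bias-variance trade-off the regularization lessons explore. In Keras the same penalty is attached declaratively (e.g. via a kernel regularizer) rather than written into the update step by hand.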
Taught by
Frederick Nwanganga