Why Neural Networks Are So Deep - AlexNet Explained

CodeEmporium via YouTube

Classroom Contents

  1. 00:00 Introduction
  2. 00:15 Timeline of neural network research
  3. 01:52 Rise of GPUs and how they helped neural networks
  4. 02:18 ImageNet and how it helped computer vision research
  5. 02:56 How AlexNet came to be
  6. 04:09 AlexNet architecture and training at a high level
  7. 05:53 ReLU activation and how it removes vanishing gradients
  8. 08:30 Training on multiple GPUs and how it speeds up performance
  9. 10:09 Local Response Normalization to mimic lateral inhibition
  10. 14:27 Overlapping pooling
  11. 15:15 Why do deep networks overfit?
  12. 15:57 Data augmentation and how it curbs overfitting
  13. 17:02 Dropout and how it curbs overfitting
  14. 19:32 Putting it all together
  15. 20:24 Quiz Time
  16. 21:22 Summary
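Two of the chapters above center on ReLU (which avoids the vanishing gradients of saturating activations) and dropout (which curbs overfitting). As a minimal NumPy sketch of these two operations, not taken from the video itself:

```python
import numpy as np

def relu(x):
    # ReLU: max(0, x). Its gradient is 1 for positive inputs instead of
    # saturating like sigmoid/tanh, which mitigates vanishing gradients
    # in deep networks.
    return np.maximum(0.0, x)

def dropout(x, p=0.5, rng=None):
    # Dropout: zero each unit with probability p during training, and
    # scale survivors by 1/(1-p) ("inverted dropout") so the expected
    # activation matches test time, when dropout is disabled. Randomly
    # silencing units discourages co-adaptation and reduces overfitting.
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))                 # negative entries clamped to zero
print(dropout(relu(x), p=0.5)) # surviving entries doubled, others zeroed
```

At test time dropout is simply skipped; the inverted scaling during training is what keeps the train/test activations on the same scale.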
