Overview
This course covers the fundamentals of deep learning, including both theory and applications. Topics include neural net architectures (MLPs, CNNs, RNNs, graph nets, transformers), geometry and invariances in deep learning, backpropagation and automatic differentiation, learning theory and generalization in high dimensions, and applications to computer vision, natural language processing, and robotics.
Syllabus
- Lec 01. Introduction to Deep Learning
- Lec 02. How to Train a Neural Net
- Lec 03. Approximation Theory
- Lec 04. Architectures: Grids
- Lec 05. Architectures: Graphs
- Lec 06. Generalization Theory
- Lec 07. Scaling Rules for Optimization
- Lec 08. Architectures: Transformers
- Lec 09. Hacker's Guide to Deep Learning
- Lec 10. Architectures: Memory
- Lec 11. Representation Learning: Reconstruction-Based
- Lec 12. Representation Learning: Similarity-Based
- Lec 13. Representation Learning: Theory
- Lec 14. Generative Models: Basics
- Lec 15. Generative Models: Representation Learning Meets Generative Modeling
- Lec 16. Generative Models: Conditional Models
- Lec 17. Generalization: Out-of-Distribution (OOD)
- Lec 18. Transfer Learning: Models
- Lec 19. Transfer Learning: Data
- Lec 20. Scaling Laws
- Lec 21. Language Models
- Lec 23. Metrized Deep Learning
- Lec 24. Inference Methods for Deep Learning
- PyTorch Tutorial
Taught by
Prof. Sara Beery, Dr. Jeremy Bernstein, and Prof. Phillip Isola