Overview
Explore a thought-provoking analysis of deep neural networks in this 43-minute video lecture. Delve into the controversial idea that deep learning models, rather than discovering new representations of the data, behave like kernel machines, storing superpositions of the training data in their weights. Learn about kernel machines, tangent kernels, and path kernels before examining the main theorem and its proof. Understand the implications of this perspective for deep learning, including what it suggests about the interpretability of neural network weights, and challenge prevailing views on how deep neural networks work.
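The kernel-machine form discussed in the lecture can be illustrated with a minimal sketch. All names, the RBF kernel choice, and the toy data below are illustrative assumptions, not taken from the lecture; the point is only that the prediction is a weighted sum of similarities to stored training points, i.e. a "superposition" of the training data:

```python
import numpy as np

def rbf_kernel(x, x_prime, gamma=1.0):
    """Gaussian (RBF) kernel measuring similarity of two points."""
    return np.exp(-gamma * np.sum((x - x_prime) ** 2))

def kernel_machine_predict(x, train_X, coeffs, bias, kernel=rbf_kernel):
    """Kernel-machine form: y(x) = sum_i a_i * K(x, x_i) + b."""
    return sum(a * kernel(x, x_i) for a, x_i in zip(coeffs, train_X)) + bias

# Toy data: two stored training points with opposite coefficients.
train_X = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
coeffs = [1.0, -1.0]
bias = 0.0

# A query near the first training point is dominated by that point's
# kernel similarity, so its score is positive.
score = kernel_machine_predict(np.array([0.1, 0.0]), train_X, coeffs, bias)
print(score > 0)  # True
```

Note that the model never needs a learned feature representation here: the training points themselves, together with the coefficients, carry all the information used at prediction time.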
Syllabus
- Intro & Outline
- What is a Kernel Machine?
- Kernel Machines vs Gradient Descent
- Tangent Kernels
- Path Kernels
- Main Theorem
- Proof of the Main Theorem
- Implications & My Comments
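The tangent kernel from the syllabus can be sketched numerically. In the standard definition, the tangent kernel of two inputs under a model f(x; w) is the inner product of the gradients of f with respect to the weights w at those inputs. The tiny tanh model, the finite-difference gradient, and the sample values below are illustrative assumptions, not material from the lecture:

```python
import numpy as np

def model(x, w):
    """A tiny illustrative model: f(x; w) = tanh(w . x)."""
    return np.tanh(np.dot(w, x))

def weight_gradient(x, w, eps=1e-6):
    """Finite-difference approximation of grad_w f(x; w)."""
    grad = np.zeros_like(w)
    for i in range(len(w)):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[i] += eps
        w_minus[i] -= eps
        grad[i] = (model(x, w_plus) - model(x, w_minus)) / (2 * eps)
    return grad

def tangent_kernel(x1, x2, w):
    """Tangent kernel: K(x1, x2) = grad_w f(x1) . grad_w f(x2)."""
    return np.dot(weight_gradient(x1, w), weight_gradient(x2, w))

w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])
# The tangent kernel of a point with itself is a squared gradient
# norm, so it is always non-negative.
print(tangent_kernel(x, x, w) >= 0)  # True
```

The path kernel discussed next in the lecture averages this tangent kernel over the sequence of weight vectors visited during gradient-descent training, rather than evaluating it at a single w as above.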
Taught by
Yannic Kilcher