EfficientNetV2 - Smaller Models and Faster Training - Paper Explained
Aleksa Gordić - The AI Epiphany via YouTube
Overview
Explore a comprehensive video explanation of the EfficientNetV2 paper, which introduces smaller models and faster training techniques for image classification. Learn about progressive training, the Fused-MBConv layer, and a novel reward function for Neural Architecture Search (NAS). The video walks through the paper's key ideas: a high-level overview, a NAS review, the novel reward function, progressive training, stochastic depth regularization, and the results. Gain insight into how EfficientNetV2 achieves higher ImageNet top-1 accuracy than recent models such as NFNets and Vision Transformers while training faster.
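The progressive-training idea covered in the video pairs small images with weak regularization early in training and larger images with stronger regularization later. A minimal sketch of such a schedule, with hypothetical stage counts, image-size and dropout ranges (not the paper's exact values):

```python
# Hypothetical sketch of a progressive-training schedule: image size and
# regularization strength (here, dropout) grow together across stages.
def progressive_schedule(stage, num_stages=4,
                         size_range=(128, 300),
                         dropout_range=(0.1, 0.3)):
    """Linearly interpolate image size and dropout rate for a given stage."""
    t = stage / (num_stages - 1)  # 0.0 at the first stage, 1.0 at the last
    size = int(size_range[0] + t * (size_range[1] - size_range[0]))
    dropout = dropout_range[0] + t * (dropout_range[1] - dropout_range[0])
    return size, dropout

# Print the schedule for each training stage.
for stage in range(4):
    size, dropout = progressive_schedule(stage)
    print(f"stage {stage}: image size {size}, dropout {dropout:.2f}")
```

Each stage would then train the network for a fixed number of epochs at the returned image size and dropout rate before moving to the next stage.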
Syllabus
High-level overview
NAS review
Deep dive
Novel reward
Progressive training
Stochastic depth regularization
Results
Taught by
Aleksa Gordić - The AI Epiphany