Overview
Explore a novel technique called Selective-Backprop that accelerates deep neural network training by prioritizing high-loss examples. Learn how this method reduces computationally expensive backpropagation steps, leading to faster convergence rates compared to standard SGD and importance sampling approaches. Discover the evaluation results on CIFAR10, CIFAR100, and SVHN datasets across various modern image models, showing up to 3.5x faster convergence to target error rates. Understand how using stale forward pass results for selection can further accelerate training by 26%. Gain insights into this innovative approach that optimizes deep learning training efficiency by focusing on the most challenging examples.
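The core idea of prioritizing high-loss examples can be illustrated with a minimal sketch: each example's chance of being selected for the (expensive) backward pass grows with its loss percentile among recently observed losses. This is a simplified, hypothetical illustration, not the paper's implementation; the function name, the exponential synthetic loss history, and the `beta` exponent are assumptions made for the example.

```python
import numpy as np

def selection_probs(batch_losses, recent_losses, beta=1.0):
    """Sketch of loss-based selection: map each example's loss to its
    percentile within a history of recent losses, then raise that
    percentile to the power beta to get a keep-probability."""
    recent_losses = np.asarray(recent_losses)
    # Fraction of recent losses at or below each current loss.
    percentiles = np.array([(recent_losses <= l).mean() for l in batch_losses])
    return percentiles ** beta

rng = np.random.default_rng(0)
# Synthetic history of recently seen per-example losses (assumption).
recent = rng.exponential(1.0, size=1000)
batch = np.array([0.05, 0.5, 2.0, 5.0])

probs = selection_probs(batch, recent)
# Backprop would be run only on the kept (mostly high-loss) examples.
keep = rng.random(len(batch)) < probs
```

Because low-loss examples are kept with low probability, most backward passes are spent on the hardest examples, which is the source of the speedups described above.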
Syllabus
Accelerating Deep Learning by Focusing on the Biggest Losers
Taught by
Yannic Kilcher