Overview
Explore the historical development of backpropagation in this 27-minute educational video tracing the algorithm's origins and evolution. Discover why perceptron convergence learning became obsolete, why analytical methods proved impractical, and how numerical methods like gradient descent offered a viable alternative. Learn about gradient vectors and the direction of steepest ascent, efficient gradient computation, and the meaning of the "backward" in backpropagation. The video examines the contributions of key figures, including Cauchy (who invented gradient descent in 1847), Rumelhart, Hinton, and Williams, Paul Werbos, and others who developed this fundamental neural network training algorithm. Supported by extensive resources, including original papers, tutorials, and supplementary materials, the exploration concludes with a quiz and summary to reinforce understanding of backpropagation's historical development and mathematical foundations.
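For a concrete picture of the ideas covered in the 4:00–13:58 segments before watching: the gradient of a loss function points in the direction of steepest ascent, so a numerical method can minimize the loss by repeatedly stepping against it. Below is a minimal sketch of that update rule; the quadratic loss and all names are illustrative assumptions, not code from the video.

```python
import numpy as np

def loss(w):
    # Illustrative quadratic bowl with its minimum at w = (2, -3).
    return (w[0] - 2.0) ** 2 + (w[1] + 3.0) ** 2

def grad(w):
    # Gradient of the loss: the vector of partial derivatives.
    # It points in the direction of steepest ascent, so descending
    # means stepping in the opposite direction.
    return np.array([2.0 * (w[0] - 2.0), 2.0 * (w[1] + 3.0)])

w = np.zeros(2)            # arbitrary starting point
lr = 0.1                   # learning rate (step size)
for _ in range(200):
    w = w - lr * grad(w)   # gradient descent update

print(w, loss(w))          # converges toward [2, -3], loss near 0
```

Cauchy's 1847 method is essentially this update rule; the question the rest of the video addresses is how to compute the gradient efficiently when the function is a neural network.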
Syllabus
0:00 Why perceptron convergence learning could no longer be used
2:00 Why analytical methods are infeasible
4:00 How numerical methods like gradient descent are feasible
7:00 Gradient vector and direction of steepest ascent
13:58 An efficient way to compute gradients
17:32 What the "backward" in backpropagation means and why
18:48 How this efficient gradient computation method is used in neural networks (see the sketch after this syllabus)
20:08 Who invented backprop
23:22 Quiz Time
24:37 Summary
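As a companion to the 13:58–18:48 segments, here is a minimal sketch of why the gradient computation runs "backward": a forward pass caches intermediate values, and the chain rule is then applied from the output back toward the inputs, reusing those cached values. The single-neuron example and all names are illustrative assumptions, not code from the video.

```python
import math

# Forward pass through a tiny chain: y = sigmoid(w * x + b).
# Each intermediate value is returned so the backward pass can reuse it.
def forward(w, x, b):
    z = w * x + b                     # linear step
    y = 1.0 / (1.0 + math.exp(-z))    # sigmoid activation
    return z, y

# Backward pass: apply the chain rule from the output back toward the
# inputs, multiplying each local derivative by the incoming gradient.
def backward(w, x, y, dy):
    dz = dy * y * (1.0 - y)   # sigmoid'(z) expressed via the cached y
    dw = dz * x               # dz/dw = x
    db = dz                   # dz/db = 1
    dx = dz * w               # dz/dx = w
    return dw, db, dx

w, x, b = 0.5, 2.0, -1.0
z, y = forward(w, x, b)
dw, db, dx = backward(w, x, y, dy=1.0)
print(y, dw, db, dx)   # y = 0.5, dw = 0.5, db = 0.25, dx = 0.125
```

Because each local derivative is combined with one incoming gradient, every intermediate quantity is visited once; this single backward sweep is what makes backpropagation cheap compared to differentiating with respect to each parameter separately.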
Taught by
CodeEmporium