Multilevel Stochastic Gradient Descent for Risk-Averse PDE Constraint Optimization
Hausdorff Center for Mathematics via YouTube
Overview
Explore a mathematical lecture presenting a multilevel stochastic gradient descent framework designed for optimizing partial differential equation-constrained systems under uncertain input conditions, with particular focus on risk-averse objectives. Learn how this method uses parallel multilevel Monte Carlo estimators to approximate stochastic gradients while maintaining explicit control over both discretization bias and sampling errors from incomplete gradient information.

Discover the optimal computational resource management strategies that enable linear convergence in optimization steps without requiring the computational expense of sample average approximation for full gradient computation. Examine numerical experiments that demonstrate significant improvements in convergence speed and accuracy, measured against computational resource usage, compared to standard mini-batch stochastic gradient descent.

Understand how this approach particularly excels in high-dimensional control problems by leveraging parallel computing architectures and distributed multilevel data structures, making it especially valuable for complex optimization scenarios involving uncertain parameters in PDE-constrained systems.
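To make the core idea concrete, here is a minimal sketch of a multilevel Monte Carlo gradient estimator driving gradient descent on a toy stochastic quadratic objective. This is not the lecture's implementation: the level-dependent gradient `grad_level`, the sample-count schedule, and the toy problem are all illustrative assumptions, chosen only to show the telescoping-sum structure and the geometric decay of samples per level.

```python
import numpy as np

def grad_level(u, omega, level):
    # Toy level-l stochastic gradient: the exact gradient (u - omega)
    # plus an artificial discretization bias that decays like 2^(-level),
    # mimicking a finite-element approximation error.
    return (u - omega) + 2.0 ** (-level) * np.sin(u)

def mlmc_gradient(u, rng, max_level=4, n0=256):
    """Multilevel Monte Carlo estimate of E_omega[grad J(u, omega)].

    Telescoping sum E[g_L] = E[g_0] + sum_l E[g_l - g_{l-1}],
    with sample counts shrinking geometrically on finer levels.
    """
    est = np.zeros_like(u)
    for level in range(max_level + 1):
        n = max(n0 // 2 ** level, 1)  # fewer samples on finer, costlier levels
        omegas = rng.normal(loc=1.0, scale=0.5, size=(n,) + u.shape)
        if level == 0:
            samples = grad_level(u, omegas, 0)
        else:
            # Coupling: same omegas on both levels keeps the variance
            # of the correction term small.
            samples = grad_level(u, omegas, level) - grad_level(u, omegas, level - 1)
        est += samples.mean(axis=0)
    return est

def mlmc_sgd(u0, steps=200, lr=0.2, seed=0):
    # Plain gradient descent with the MLMC gradient estimate at each step.
    rng = np.random.default_rng(seed)
    u = u0.copy()
    for _ in range(steps):
        u -= lr * mlmc_gradient(u, rng)
    return u
```

Because the coupled corrections on finer levels have small variance, most samples can be spent on the cheap coarse level; this is the resource-allocation idea the lecture develops rigorously, here reduced to a fixed geometric sample schedule.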
Syllabus
David Schneiderhan: Multilevel Stochastic Gradient Descent for Risk-Averse PDE Constraint Optimization
Taught by
Hausdorff Center for Mathematics