
New Trends in Nonlinear Optimization

International Mathematical Union via YouTube

Overview

Explore new trends in nonlinear optimization in this 46-minute lecture by Yu-Hong Dai for the International Mathematical Union. Delve into advanced gradient methods, including steepest descent, conjugate gradient, and quasi-Newton approaches. Examine nonmonotone and efficient monotone gradient methods, with a focus on the Dai-Yuan monotone gradient method. Discover the BBQ stepsize theorem and its application to extreme eigenvalue problems. Investigate constrained optimization techniques, including augmented Lagrangian methods, interior point techniques, and smooth barrier augmented Lagrangian (SBAL) approaches. Compare penalty methods and augmented Lagrangian methods for optimization with linear vector constraints. Gain insights into cutting-edge algorithms for solving nonlinear optimization problems and their practical applications.
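The Barzilai-Borwein (BB) stepsize family that the lecture builds on can be illustrated with a short sketch. The quadratic below is an invented example, not one from the talk; the code shows the classic BB1 stepsize rule applied to a strictly convex quadratic.

```python
import numpy as np

def bb_gradient(A, b, x0, tol=1e-8, max_iter=500):
    """Minimal sketch of the BB gradient method for
    f(x) = 0.5 x^T A x - b^T x with A symmetric positive definite."""
    x = x0.astype(float)
    g = A @ x - b              # gradient of the quadratic
    alpha = 1.0                # initial stepsize (plain gradient step)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        # BB1 stepsize: alpha = (s^T s) / (s^T y), safeguarded against
        # a vanishing denominator
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0
        x, g = x_new, g_new
    return x

# Illustrative data: the minimizer solves A x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = bb_gradient(A, b, np.zeros(2))
```

Unlike exact steepest descent, the BB stepsize reuses information from the previous step, which is why the resulting iteration is typically nonmonotone in the objective value yet much faster in practice.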

Syllabus

Intro
Outline
Methods for Nonlinear Optimization
Gradient Methods: Steepest Descent Method (Cauchy 1847) — find the best point along the negative gradient direction
Conjugate Gradient (CG) Methods
Quasi-Newton (QN) Methods
2.1 Nonmonotone Gradient Methods
Properties and Extensions of the BB Method: Convergence Properties for Quadratic Optimization
Efficiency Evidence for Nonmonotone Gradient Methods
2.2 Efficient Monotone Gradient Methods?
Dai-Yuan Monotone Gradient Method: A Variant of the Yuan Stepsize (D. & Yuan 2005)
2.3 Equip BB with 2D Quadratic Termination Property?
BBQ Stepsize Theorem (2D quadratic termination)
BBQ for Extreme Eigenvalue Problems
2.4 Discussion
3.1 Hestenes Powell Augmented Lagrangian
Fletcher's Exact Penalty Function
3.2 General Constrained Optimization
Interior Point Technique for Inequality Constrained Optimization
Interior Point Methods vs Simplex Methods
3.3 Smooth Barrier Augmented Lagrangian (SBAL)
Advantages of SBAL
Discussion: SBALM for MINLP
Algorithms for Infeasible Stationary Points
Compare Penalty Method and ALM for OLVC
Some Concluding Remarks
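The augmented Lagrangian topics in the syllabus (Hestenes-Powell ALM and its comparison with penalty methods) can be sketched on a toy equality-constrained problem. The problem and penalty parameter below are invented for illustration and are not taken from the lecture.

```python
import numpy as np

# Toy problem: minimize ||x||^2 subject to a^T x = 1, with a = (1, 1).
# The exact solution is x* = (0.5, 0.5) with multiplier lambda* = 1.
a = np.array([1.0, 1.0])

def alm(rho=10.0, iters=20):
    """Minimal sketch of the Hestenes-Powell augmented Lagrangian method."""
    lam = 0.0                  # Lagrange multiplier estimate
    x = np.zeros(2)
    I = np.eye(2)
    for _ in range(iters):
        # Inner step: minimize the augmented Lagrangian
        #   L(x) = ||x||^2 - lam (a^T x - 1) + (rho/2)(a^T x - 1)^2.
        # It is quadratic here, so grad L = 0 gives the linear system
        #   (2 I + rho a a^T) x = (lam + rho) a.
        x = np.linalg.solve(2 * I + rho * np.outer(a, a), (lam + rho) * a)
        # Outer step: first-order multiplier update, lam <- lam - rho c(x)
        lam -= rho * (a @ x - 1.0)
    return x, lam

x_star, lam_star = alm()
```

The key contrast with a pure penalty method is visible in the multiplier update: ALM converges for a fixed, moderate rho, whereas a quadratic penalty method must drive rho to infinity (and the subproblems toward ill-conditioning) to enforce the constraint exactly.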

Taught by

International Mathematical Union

