Lecture plan
Each entry lists the date, the topics covered, the reading, and the accompanying slides and code.

Introduction, notion and existence of solutions  
Week 2
  Topics: Overview; Formulation of a minimisation problem; Definitions of minima; Lower semicontinuity and coercivity; Existence theorems for minima; First order necessary conditions; Second order necessary and sufficient conditions
  Reading: N&W, Chapters 1, 2.1; Minimisers of Optimisation Problems
  Slides and code: Introduction; Definitions
Week 3
  Topics: Convex functions; First and second order characterisations
  Reading: Basic Properties of Convex Functions
  Slides and code: Optimality conditions
Line search methods for free optimisation  
  Topics: Gradient descent method; Exact line search methods; Armijo conditions; Wolfe conditions
  Reading: N&W, Chapter 3.1; Convergence of descent methods with backtracking
  Slides and code: Convex functions; undamped_graddesc.py; graddesc_backtracking.py
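The topics above combine into a single algorithm. Below is a minimal sketch of gradient descent with Armijo backtracking, in the spirit of graddesc_backtracking.py; the quadratic test problem and all parameter values are illustrative assumptions, not course material:

```python
import numpy as np

def graddesc_backtracking(f, grad, x0, alpha0=1.0, rho=0.5, c=1e-4,
                          tol=1e-8, maxit=1000):
    """Gradient descent with Armijo backtracking line search (sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        alpha = alpha0
        # Backtrack until the Armijo (sufficient decrease) condition holds:
        # f(x + alpha*p) <= f(x) + c*alpha*<grad f(x), p>, with p = -g
        while f(x - alpha * g) > f(x) - c * alpha * (g @ g):
            alpha *= rho
        x = x - alpha * g
    return x

# Illustrative strictly convex quadratic: f(x) = 1/2 x'Ax - b'x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = graddesc_backtracking(f, grad, np.zeros(2))
```

For this quadratic the minimiser solves Ax = b, which gives a convenient check of the iteration.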

Week 4
  Topics: Exact line search; Goldstein conditions; Strong and weak Wolfe conditions; Algorithmic approach to the Wolfe conditions; Convergence rate of the gradient descent method; Newton's method; Damped Newton method; Regularisation of the damped Newton method
  Reading: N&W, Chapter 3.1; Convergence of descent methods with backtracking; N&W, Chapter 3.3; N&W, Chapter 3.5
  Slides and code: Backtracking gradient descent; Line searches
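The damped Newton method from this week can be sketched as follows: try the full Newton step first, then damp it by Armijo backtracking. The smooth, strictly convex test function is an illustrative assumption:

```python
import numpy as np

def damped_newton(f, grad, hess, x0, c=1e-4, tol=1e-10, maxit=100):
    """Damped Newton method: Newton direction plus Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)   # Newton direction
        alpha = 1.0                        # try the full Newton step first
        while f(x + alpha * p) > f(x) + c * alpha * (g @ p):
            alpha *= 0.5                   # damping
        x = x + alpha * p
    return x

# Illustrative strictly convex function: f(x) = ||x||^2 + exp(x1 + x2)
f = lambda x: x @ x + np.exp(x.sum())
grad = lambda x: 2 * x + np.exp(x.sum()) * np.ones(2)
hess = lambda x: 2 * np.eye(2) + np.exp(x.sum()) * np.ones((2, 2))
x_star = damped_newton(f, grad, hess, np.zeros(2))
```

Since the Hessian here is always positive definite, the Newton direction is a descent direction and the backtracking loop terminates.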
Week 5
  Topics: Linear Conjugate Gradient method; Fletcher–Reeves method; Polak–Ribière method; Zoutendijk's result; Summary
  Reading: N&W, Chapter 5; N&W, Chapter 3.2
  Slides and code: Gradient descent and Newton; CG methods
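The linear Conjugate Gradient method can be sketched as below (cf. N&W, Chapter 5); the small SPD test system is an illustrative assumption:

```python
import numpy as np

def linear_cg(A, b, x0=None, tol=1e-10, maxit=None):
    """Linear Conjugate Gradient method for Ax = b, A symmetric
    positive definite (sketch)."""
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    r = A @ x - b                  # residual = gradient of 1/2 x'Ax - b'x
    p = -r                         # first direction: steepest descent
    if maxit is None:
        maxit = len(b)             # exact termination in at most n steps
    for _ in range(maxit):
        if np.linalg.norm(r) < tol:
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)         # exact line search along p
        x = x + alpha * p
        r_new = r + alpha * Ap
        beta = (r_new @ r_new) / (r @ r)   # conjugacy-preserving coefficient
        p = -r_new + beta * p
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])     # illustrative SPD matrix
b = np.array([1.0, 2.0])
x_star = linear_cg(A, b)
```

On an n-dimensional SPD system, exact arithmetic terminates in at most n iterations, which is what makes CG attractive for large sparse problems.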
Theory and basic methods for constrained optimisation  
Week 6
  Topics: Constrained optimisation over convex sets; Feasible directions; First order optimality conditions for convex sets; Projections on convex sets; Gradient projection method; Normal cones; Conditions for linear constraints; Tangent cones for nonconvex sets; Necessary conditions for nonconvex sets; Linearised feasible directions
  Reading: Optimisation with convex constraints; N&W, Chapters 12.1–12.4^{1)}
  Slides and code: Nonlinear CG methods; Optimisation with convex constraints
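The gradient projection method iterates x_{k+1} = P_C(x_k − α ∇f(x_k)), projecting each gradient step back onto the feasible set. A minimal sketch, where the box constraint and its closed-form projection are illustrative assumptions:

```python
import numpy as np

def gradient_projection(grad, proj, x0, alpha=0.1, tol=1e-10, maxit=1000):
    """Gradient projection method: project each gradient step onto C."""
    x = proj(np.asarray(x0, dtype=float))  # start from a feasible point
    for _ in range(maxit):
        x_new = proj(x - alpha * grad(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Minimise f(x) = 1/2 ||x - c||^2 over the box C = [0, 1]^2,
# with c chosen outside C (illustrative assumption).
c = np.array([2.0, -0.5])
grad = lambda x: x - c                    # gradient of f
proj = lambda x: np.clip(x, 0.0, 1.0)     # closed-form projection onto the box
x_star = gradient_projection(grad, proj, np.zeros(2))
```

For this f the constrained minimiser is simply the projection of c onto the box, here (1, 0), which makes the iteration easy to verify.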
Week 7
  Topics: Constraint qualifications; LICQ; Farkas' Lemma; KKT conditions; Second order necessary and sufficient conditions
  Reading: N&W, Chapters 12.2–12.5
  Slides and code: Linear and nonconvex constraints; Constraint qualifications
Week 8
  Topics: Quadratic penalty method; Nonsmooth penalty methods; Augmented Lagrangian method
  Reading: N&W, Chapters 17.1–17.3
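The quadratic penalty method replaces an equality-constrained problem by unconstrained subproblems min_x f(x) + (μ/2) c(x)² for increasing μ. In the sketch below the test problem is chosen (as an illustrative assumption) so that each subproblem reduces to a linear system:

```python
import numpy as np

def quadratic_penalty(mus):
    """Quadratic penalty method for the illustrative problem
        min ||x||^2  subject to  c(x) = x1 + x2 - 1 = 0.
    Each subproblem min_x ||x||^2 + (mu/2) c(x)^2 is quadratic, so its
    stationarity condition is the linear system
        (2 I + mu e e^T) x = mu e,  with e = (1, 1)."""
    e = np.ones(2)
    x = np.zeros(2)
    for mu in mus:
        # Solve the subproblem exactly for the current penalty parameter.
        x = np.linalg.solve(2 * np.eye(2) + mu * np.outer(e, e), mu * e)
    return x

# Increasing penalty parameters drive the subproblem minimisers
# towards the constrained minimiser (1/2, 1/2).
x_star = quadratic_penalty([1.0, 10.0, 100.0, 1e4, 1e6])
```

Each subproblem minimiser is infeasible (the constraint is only satisfied in the limit μ → ∞), which is the characteristic behaviour of the quadratic penalty approach.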
Week 9
  Topics: Penalty methods for inequality constraints; Logarithmic barrier methods
  Reading: N&W, Chapters 17.1–17.2, 17.4; N&W, Chapter 19.1
  Slides and code: KKT conditions and more; Barrier methods
Advanced methods for free optimisation  
  Topics: Quasi-Newton methods; SR1 method; DFP and BFGS methods
  Reading: N&W, Chapters 6.1, 6.2, 6.4
Week 10
  Topics: BFGS method; Properties of quasi-Newton methods; Limited-memory BFGS; Trust region methods; Exact solution of the trust region problem
  Reading: N&W, Chapters 6.1, 6.2, 6.4, 7.2; N&W, Chapters 4.1, 4.2, 4.3
  Slides and code: SR1 and DFP methods; Quasi-Newton methods
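The BFGS update of the inverse Hessian approximation can be sketched as follows. The Rosenbrock test function and the plain Armijo backtracking (instead of a full Wolfe line search; the update is simply skipped when the curvature condition fails) are illustrative simplifications:

```python
import numpy as np

def bfgs(f, grad, x0, c=1e-4, tol=1e-8, maxit=500):
    """BFGS quasi-Newton method with Armijo backtracking (sketch)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                        # inverse Hessian approximation
    g = grad(x)
    for _ in range(maxit):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                       # quasi-Newton search direction
        alpha = 1.0
        while f(x + alpha * p) > f(x) + c * alpha * (g @ p):
            alpha *= 0.5                 # Armijo backtracking
        s = alpha * p
        g_new = grad(x + s)
        y = g_new - g
        if s @ y > 1e-12:                # curvature condition s'y > 0
            rho = 1.0 / (s @ y)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse update
        x, g = x + s, g_new
    return x

# Rosenbrock function (illustrative), with minimiser at (1, 1)
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
x_star = bfgs(f, grad, np.array([-1.2, 1.0]))
```

Updating the inverse approximation H directly avoids solving a linear system per iteration; limited-memory BFGS goes further and stores only a few recent (s, y) pairs instead of H itself.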
Week 11
  Topics: Cauchy point; Convergence of trust region methods; Dogleg method; Nonlinear least squares problems; Gauß–Newton method; Levenberg–Marquardt method
  Reading: N&W, Chapters 4.1, 4.2; N&W, Chapters 10.1, 10.3
  Slides and code: Trust region methods; Trust region methods, II
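The Gauß–Newton method for nonlinear least squares min ½‖r(x)‖² solves, at each step, the linearised problem J(x)p ≈ −r(x) in the least squares sense. A sketch; the exponential fitting data and the starting point are hypothetical:

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-12, maxit=50):
    """Gauss-Newton for min 1/2 ||r(x)||^2: repeatedly solve the
    linearised least squares problem J(x) p = -r(x), then step x += p."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        p, *_ = np.linalg.lstsq(J(x), -r(x), rcond=None)
        x = x + p
        if np.linalg.norm(p) < tol:
            break
    return x

# Hypothetical zero-residual fit of the model y = a * exp(b * t)
t = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * np.exp(0.5 * t)                    # data generated with a=2, b=0.5
r = lambda x: x[0] * np.exp(x[1] * t) - y    # residual vector
J = lambda x: np.column_stack([np.exp(x[1] * t),
                               x[0] * t * np.exp(x[1] * t)])  # Jacobian of r
x_star = gauss_newton(r, J, np.array([1.5, 0.4]))
```

On zero-residual problems such as this one, Gauß–Newton converges quadratically near the solution; Levenberg–Marquardt adds a regularisation term to make the step robust far from it.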
Advanced methods for constrained optimisation  
  Topics: Lagrangian duality; Primal and dual linear programmes; Weak duality
  Reading: N&W, Chapter 12.9
Week 12
  Topics: Strong duality for convex problems; Saddle point properties; Dual projected gradient method; Primal-dual interior point method for linear programming; Central path; Path-following methods
  Reading: N&W, Chapter 12.9; Lagrangian Duality; N&W, Chapters 13.1, 14.1
  Slides and code: Nonlinear least squares and Lagrangian duality; Lagrangian duality
Week 13
  Topics: Quadratic programming; Direct solution of equality constrained problems; Active set methods; SQP method (equality constraints); Newton's method for the KKT conditions; Merit functions; Maratos effect
  Reading: N&W, Chapters 16.1, 16.2, 16.5, 16.7; N&W, Chapters 18.1, 18.3, 15.4, 15.5
  Slides and code: Interior point methods; Quadratic programming
Week 14
  Topics: SQP method (inequality constraints)
  Reading: N&W, Chapters 18.1, 18.3
  Slides and code: SQP method
Summary  
  Topics: Thursday: Summary of theory; Friday: Summary of numerical methods; Questions
  Slides and code: Summary
Week 15
  Topics: Primal-dual methods for convex problems; Questions
  Slides and code: Some poor, handwritten notes
^{1)} Chapter 12.2 is particularly relevant here.