{"id":227,"date":"2014-06-19T09:13:22","date_gmt":"2014-06-19T17:13:22","guid":{"rendered":"http:\/\/research.engineering.ucdavis.edu\/biosport\/?page_id=227"},"modified":"2014-06-19T09:13:22","modified_gmt":"2014-06-19T17:13:22","slug":"nonlinear-programming-nlp","status":"publish","type":"page","link":"https:\/\/research.engineering.ucdavis.edu\/biosport\/sample-page\/lab-meetings\/nonlinear-programming-nlp\/","title":{"rendered":"Nonlinear Programming (NLP)"},"content":{"rendered":"<p>Lab Presentation on Nonlinear Programming (Constrained, Nonlinear Optimization)<\/p>\n<div id=\"parent-fieldname-text\">\n<h2>What is NLP?<\/h2>\n<p>Nonlinear programming refers to the an optimization scenario in which: the objective function is possibly nonlinear, and there are equality and\/or inequality constraints which are possibly nonlinear.<\/p>\n<p>My interest is in applying it to optimal control problems, which can have quite complicated nonlinear dynamics and constraints.<\/p>\n<h2>The Math:<\/h2>\n<p>The nonlinear, in\/equality constrained optimization problem can be written as:<\/p>\n<p>minxf(x)<\/p>\n<p>subject to\u00a0h(x)=0<\/p>\n<p>g(x)\u22640<\/p>\n<p>Where x is a vector of length n, f(x) returns a scalar, h(x) is a vector of length t, and g(x) is a vector of length m<\/p>\n<h2>How to solve this?<\/h2>\n<p>\u00a0First, look at a simpler problem: unconstrained optimization<\/p>\n<p>minxf(x)<\/p>\n<p>The unconstrained case has two approaches: the line search, and the trust region method.<\/p>\n<h3>Line Search<\/h3>\n<p>With the line search, we rearrange this to the 1-dimensional optimization problem:<\/p>\n<p>min\u03b1&gt;0f(x+\u03b1p), where\u00a0p\u00a0is search direction, commonly\u00a0p=\u2212\u2207f<\/p>\n<p>This is repeated until a stopping condition is met.<\/p>\n<dl>\n<dt><a href=\"http:\/\/biosport.ucdavis.edu\/lab-meetings\/nlp-images\/nocedal-ls\" rel=\"lightbox\"><img loading=\"lazy\" decoding=\"async\" title=\"Nocedal &amp; Wright 2006, Line Search, p 42\" alt=\"Nocedal &amp; Wright 2006, Line Search, p 42\" src=\"http:\/\/biosport.ucdavis.edu\/lab-meetings\/nlp-images\/nocedal-ls\/image_preview\" width=\"400\" height=\"248\" \/><\/a><\/dt>\n<dd><\/dd>\n<\/dl>\n<h3>Trust Region<\/h3>\n<p>With the trust region method, we approximate\u00a0f\u00a0with a quadratic model\u00a0m, then:<\/p>\n<p>minpm(x+p), where\u00a0x+p\u00a0is within the trust region\u00a0\u0394<\/p>\n<p>The model can be defined as:\u00a0m(x+p)=f+pT\u2207f+12pT\u22072p<\/p>\n<p>p\u00a0can be crudely minimized by defining\u00a0p=\u2212\u2207f\u2225\u2207f\u2225\u0394<\/p>\n<p>There are additional\u00a0subtleties\u00a0to this procedure, involving the trust region\u00a0\u0394, and the step definition\u00a0p.<\/p>\n<dl>\n<dt><a href=\"http:\/\/biosport.ucdavis.edu\/lab-meetings\/nlp-images\/nocedal-trust\" rel=\"lightbox\"><img loading=\"lazy\" decoding=\"async\" title=\"Nocedal &amp; Wright 2006, Trust Region, p 72 \" alt=\"Nocedal &amp; Wright 2006, Trust Region, p 72 \" src=\"http:\/\/biosport.ucdavis.edu\/lab-meetings\/nlp-images\/nocedal-trust\/image_preview\" width=\"400\" height=\"211\" \/><\/a><\/dt>\n<dd><\/dd>\n<\/dl>\n<h3>Stopping Conditions<\/h3>\n<p>For the unconstrained problem, we examine the gradient of the objective function\u00a0f(x), and when it is 0, we are at a local minimum.\u00a0<\/p>\n<p>This is a common stopping condition, although stopping conditions can be more complex.<\/p>\n<h3>Constrained Optimization<\/h3>\n<p>The next step is to add equality constraints:\u00a0h(x)=0<\/p>\n<p>We then write a new function, which is the\u00a0<em>Lagrangian<\/em>:<\/p>\n<p>L(x,\u03bb)=f(x)+\u03bbTh(x)<\/p>\n<p>where\u00a0\u03bb\u00a0is a vector of length t.<\/p>\n<p>The solution to this problem can be found with the following iterative process:<\/p>\n<p>1)\u00a0minxL(x,\u03bb)<\/p>\n<p>2) update\u00a0\u03bb\u00a0estimates<\/p>\n<p>For inequality constraints, we will consider the interior-point method:<\/p>\n<p>We add slack variables, changing the inequality constraints to equality constraints:<\/p>\n<p>g(x)\u22640\u21d2(g(x)+s)=0,s\u22650<\/p>\n<p>We then rewrite our Lagrangian in a way which incorporates this:<\/p>\n<p>L(x,s,\u03bbh,\u03bbg)=f(x)\u2212\u03bc\u2211mi=1lnsi+\u03bbThh(x)+\u03bbTg(g(x)+s)<\/p>\n<p>We have now introduced a penalty parameter,\u00a0\u03bc, which is progressively decreased.<\/p>\n<p>This allows the inequality boundaries to be reached in a controlled rate.<\/p>\n<h2>Final Algorithm<\/h2>\n<p>while\u00a0E(x,s)&gt;\u03f5TOL<\/p>\n<p>\u00a0 \u00a0 while\u00a0E(x,s;\u03bc)&gt;\u03f5\u03bc<\/p>\n<p>\u00a0 \u00a0 \u00a0 \u00a0 compute normal step<\/p>\n<p>\u00a0 \u00a0 \u00a0 \u00a0 compute new\u00a0\u03bb<\/p>\n<p>\u00a0 \u00a0 \u00a0 \u00a0 compute hessian<\/p>\n<p>\u00a0 \u00a0 \u00a0 \u00a0 compute tangential step<\/p>\n<p>\u00a0 \u00a0 \u00a0 \u00a0 if step passes merit function test<\/p>\n<p>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 take step<\/p>\n<p>\u00a0 \u00a0 \u00a0 \u00a0 else<\/p>\n<p>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 try second order correction<\/p>\n<p>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 maybe take step<\/p>\n<p>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 reduce trust region radius<\/p>\n<p>\u00a0 \u00a0 reduce\u00a0\u03bc,\u03f5\u03bc<\/p>\n<p>Done!<\/p>\n<h3>Normal Step<\/h3>\n<p>The normal step attempts to satisfy the linearized constraints in the problem.<\/p>\n<p>It does so in a manner similar to the basic trust region method; here the quadratic model to minimize is based of linearized constraints.<\/p>\n<h3>Tangential Step<\/h3>\n<p>The tangential step attempts to find optimality while moving tangentially to the gradients of the constraints.<\/p>\n<p>This is accomplished by using the Projected Conjugate Gradient method. Here the objective function does not contain the constraints.<\/p>\n<p>Instead, all steps are &#8220;projected&#8221; through a matrix which aligns them with the gradient of the constraints.<\/p>\n<h2>Test Cases<\/h2>\n<p>This is a previously covered optimal control example:<\/p>\n<p>Minimize the total energy consumed by a double integrator, from t=0 to t=1, meeting constraints.<\/p>\n<h3>State Space Equations:<\/h3>\n<p>&nbsp;<\/p>\n<div>x\u02d9=v<\/div>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<div>v\u02d9=u<\/div>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<div>w\u02d9=12u2<\/div>\n<p>&nbsp;<\/p>\n<h3>Objective Function:<\/h3>\n<p>&nbsp;<\/p>\n<div>minuw(1)<\/div>\n<p>&nbsp;<\/p>\n<h3>Equality Constraints:<\/h3>\n<p>&nbsp;<\/p>\n<div>x(0)=0,x(1)=0<\/div>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<div>v(0)=1,v(1)=\u22121<\/div>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<div>w(0)=0<\/div>\n<p>&nbsp;<\/p>\n<h3>Inequality Constraints:<\/h3>\n<p>&nbsp;<\/p>\n<div>l\u2212x(t)\u22650<\/div>\n<p>&nbsp;<\/p>\n<h3>Computation:<\/h3>\n<p>Using the Direct Collocation method, with 40 time points we get:<\/p>\n<p>160 variables to optimize<\/p>\n<p>122 equality constraints<\/p>\n<p>40 inequality constraints<\/p>\n<p>When solving, hit an iteration limit of 250 in 20 seconds.<\/p>\n<p>The error function at this point is less than 1.e-4<\/p>\n<p>The error function is basically the maximum violation of KKT conditions.<\/p>\n<p>2500 iterations of 160 variables takens around 2.5 minutes, and reduces error function to ~1. e-5<\/p>\n<p>1000 iterations of 320 variables takes around 3.5 minutes<\/p>\n<h3>Results:<\/h3>\n<dl>\n<dt><a href=\"http:\/\/biosport.ucdavis.edu\/lab-meetings\/nlp-images\/opt-test-x\" rel=\"lightbox\"><img loading=\"lazy\" decoding=\"async\" title=\"Optimization Test Case, Position\" alt=\"Optimization Test Case, Position\" src=\"http:\/\/biosport.ucdavis.edu\/lab-meetings\/nlp-images\/opt-test-x\/image_preview\" width=\"400\" height=\"301\" \/><\/a><\/dt>\n<dd><\/dd>\n<\/dl>\n<dl>\n<dt><a href=\"http:\/\/biosport.ucdavis.edu\/lab-meetings\/nlp-images\/opt-test-y\" rel=\"lightbox\"><img loading=\"lazy\" decoding=\"async\" title=\"Optimization Test Case, Velocity\" alt=\"Optimization Test Case, Velocity\" src=\"http:\/\/biosport.ucdavis.edu\/lab-meetings\/nlp-images\/opt-test-y\/image_preview\" width=\"400\" height=\"301\" \/><\/a><\/dt>\n<dd><\/dd>\n<\/dl>\n<dl>\n<dt><a href=\"http:\/\/biosport.ucdavis.edu\/lab-meetings\/nlp-images\/opt-test-w\" rel=\"lightbox\"><img loading=\"lazy\" decoding=\"async\" title=\"Optimization Test Case, Energy Output\" alt=\"Optimization Test Case, Energy Output\" src=\"http:\/\/biosport.ucdavis.edu\/lab-meetings\/nlp-images\/opt-test-w\/image_preview\" width=\"400\" height=\"301\" \/><\/a><\/dt>\n<dd><\/dd>\n<\/dl>\n<dl>\n<dt><a href=\"http:\/\/biosport.ucdavis.edu\/lab-meetings\/nlp-images\/opt-test-u\" rel=\"lightbox\"><img loading=\"lazy\" decoding=\"async\" title=\"Optimization Test Case, Thrust\" alt=\"Optimization Test Case, Thrust\" src=\"http:\/\/biosport.ucdavis.edu\/lab-meetings\/nlp-images\/opt-test-u\/image_preview\" width=\"400\" height=\"301\" \/><\/a><\/dt>\n<dd><\/dd>\n<\/dl>\n<p>Previously computed results using MATLAB&#8217;s solvers:<\/p>\n<dl>\n<dt><a href=\"http:\/\/biosport.ucdavis.edu\/lab-meetings\/nlp-images\/opt-test-matlab\" rel=\"lightbox\"><img loading=\"lazy\" decoding=\"async\" title=\"Optimization Test Case, Old Matlab Example\" alt=\"Optimization Test Case, Old Matlab Example\" src=\"http:\/\/biosport.ucdavis.edu\/lab-meetings\/nlp-images\/opt-test-matlab\/image_preview\" width=\"400\" height=\"312\" \/><\/a><\/dt>\n<dd><\/dd>\n<\/dl>\n<h2>References:<\/h2>\n<p>1.\u00a0Byrd, R. H., Hribar, M. E., &amp; Nocedal, J. (1999). An interior point algorithm for large-scale nonlinear programming \u2217.\u00a0<em>Society<\/em>,\u00a0<em>9<\/em>(4), 877-900.<\/p>\n<p>2.\u00a0Nocedal, J., &amp; Wright, S. (2006).\u00a0<em>Numerical Optimization<\/em>.\u00a0<\/p>\n<p>3.\u00a0Von Stryk, O. (1991).\u00a0<em>Numerical solution of optimal control problems by direct collocation<\/em>.\u00a0<em>International Series of Numerical Mathematics<\/em>\u00a0(Vol. 111, pp. 1-13). Citeseer. Retrieved from http:\/\/citeseerx.ist.psu.edu\/viewdoc\/download?doi=10.1.1.53.9817&amp;amp;rep=rep1&amp;amp;type=pdf<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Lab Presentation on Nonlinear Programming (Constrained, Nonlinear Optimization) What is NLP? Nonlinear programming refers to the an optimization scenario in which: the objective function is possibly nonlinear, and there are equality and\/or inequality constraints which are possibly nonlinear. My interest is in applying it to optimal control problems, which can \u2026 <a class=\"continue-reading-link\" href=\"https:\/\/research.engineering.ucdavis.edu\/biosport\/sample-page\/lab-meetings\/nonlinear-programming-nlp\/\"> Continue reading <span class=\"meta-nav\">&rarr; <\/span><\/a><\/p>\n","protected":false},"author":10,"featured_media":0,"parent":29,"menu_order":1,"comment_status":"closed","ping_status":"closed","template":"","meta":{"ngg_post_thumbnail":0,"footnotes":""},"class_list":["post-227","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/research.engineering.ucdavis.edu\/biosport\/wp-json\/wp\/v2\/pages\/227","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/research.engineering.ucdavis.edu\/biosport\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/research.engineering.ucdavis.edu\/biosport\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/research.engineering.ucdavis.edu\/biosport\/wp-json\/wp\/v2\/users\/10"}],"replies":[{"embeddable":true,"href":"https:\/\/research.engineering.ucdavis.edu\/biosport\/wp-json\/wp\/v2\/comments?post=227"}],"version-history":[{"count":0,"href":"https:\/\/research.engineering.ucdavis.edu\/biosport\/wp-json\/wp\/v2\/pages\/227\/revisions"}],"up":[{"embeddable":true,"href":"https:\/\/research.engineering.ucdavis.edu\/biosport\/wp-json\/wp\/v2\/pages\/29"}],"wp:attachment":[{"href":"https:\/\/research.engineering.ucdavis.edu\/biosport\/wp-json\/wp\/v2\/media?parent=227"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}