Nonlinear programming in MATLAB: the fmincon function

The basic form of this function is

x = fmincon(fun, x0, A, b, Aeq, beq, lb, ub, nonlcon, options)

where fun is the function whose minimum you want; it can be written in its own function file, as in the examples given above.

1. If fun contains n variables, such as x, y, z or x1, x2, x3, put them in a fixed order of your own choosing and refer to them inside fun uniformly as x(1), x(2), ..., x(n).

2. x0 is the initial guess; it has the same number of entries as there are variables.

3. A and b specify the linear inequality constraints A*x <= b. A has one row per inequality constraint and one column per variable; with a little linear algebra it is not hard to write down A and b.

4. Aeq and beq specify the linear equality constraints Aeq*x = beq; they are built the same way as A and b.

5. lb and ub are the lower and upper bounds on the variables; minus and plus infinity are written -Inf and Inf, and lb and ub should be vectors with n entries (one per variable).

6. nonlcon holds the nonlinear constraints. It is split into two parts, the nonlinear inequality constraints c and the nonlinear equality constraints ceq, and can be set up as follows:

function [c, ceq] = nonlcon1(x)
c = -x(1) + x(2)^2 - 4;
ceq = [];    % no nonlinear equality constraints

A complete call that ties all of these arguments together is sketched at the end of this part.

7. The last argument is options, which can be set using the optimset function, as mentioned above; see the help entry for optimset for the details.

For controlling the optimization, MATLAB provides 18 parameters; their specific meanings are:

options(1) - display control (default 0); set to 1 to show some results.
options(2) - precision control on the optimization point x (default 1e-4), e.g. options = optimset('TolX', 1e-8).
options(3) - precision control on the objective function value f (default 1e-4), e.g. options = optimset('TolFun', 1e-10).
options(4) - termination criterion on constraint violation (default 1e-6).
options(5) - algorithm selection; not commonly used.
options(6) - optimization method selection; 0 for the BFGS algorithm, 1 for the DFP algorithm.
options(7) - line search algorithm selection; 0 for the mixed interpolation algorithm, 1 for the cubic interpolation algorithm.
options(8) - function value display (lambda in goal-attainment problems).
options(9) - set to 1 to check the gradient supplied by the user.
options(10) - the number of function and constraint evaluations.
options(11) - the number of function gradient evaluations.
options(12) - the number of constraint evaluations.
options(13) - the number of equality constraints.
options(14) - the maximum number of function evaluations (default is 100 times the number of variables).
options(15) - the special goals used in goal-attainment problems.
options(16) - the minimum finite-difference step for the gradient during optimization.
options(17) - the maximum finite-difference step for the gradient during optimization.
options(18) - step length setting (default 1 or less).

foptions has by now been replaced by optimset and optimget; for details, check the functions optimset and optimget.
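Since the options(1..18) vector above belongs to the obsolete foptions interface, with optimset the same kind of settings are made by name. A minimal sketch (the particular tolerance values here are illustrative choices, not defaults from the article):

% Name-based settings playing the role of options(1), (2), (3) and (14) above;
% pass the resulting structure as the last argument of fmincon.
options = optimset('Display', 'iter', ...    % like options(1): show progress
                   'TolX', 1e-8, ...         % like options(2): tolerance on x
                   'TolFun', 1e-10, ...      % like options(3): tolerance on f
                   'MaxFunEvals', 1000);     % like options(14): cap on evaluations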

P.S. In the call x = fmincon(fun, x0, A, b, Aeq, beq, lb, ub, nonlcon, options), the arguments inside the parentheses are read from left to right, and only the leading ones need to be given. So you can write x = fmincon(fun, x0, A, b), or x = fmincon(fun, x0, A, b, Aeq, beq), or x = fmincon(fun, x0, A, b, Aeq, beq, lb, ub). If a constraint in the middle is absent, pass an empty matrix [] in its place, for example x = fmincon(fun, x0, A, b, [], [], lb, ub).
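To see how the pieces fit together, here is a minimal sketch of a complete call. The objective, starting point and constraint data are invented purely for illustration; only nonlcon1 refers to the constraint file written above.

% A minimal end-to-end call to fmincon (illustrative data throughout).
fun = @(x) (x(1) - 1)^2 + (x(2) - 2)^2;   % objective to minimize
x0  = [0; 0];                             % initial guess
A   = [1 1];   b = 2;                     % linear inequality: x(1) + x(2) <= 2
Aeq = [];      beq = [];                  % no linear equality constraints
lb  = [0; 0];  ub = [5; 5];               % bounds on the variables

options = optimset('Display', 'iter');    % show progress at each iteration
% nonlcon1 is the nonlinear-constraint file shown earlier
[x, fval] = fmincon(fun, x0, A, b, Aeq, beq, lb, ub, @nonlcon1, options)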

Analysis of the fmincon function (reprinted)

Command format:

[x, fval, exitflag, output, lambda, grad, hessian] = fmincon(fun, x0, A, b, Aeq, beq, lb, ub, nonlcon, options)

As described in the MATLAB help documentation, the algorithms used by the fmincon command are divided into those for large-scale optimization problems and those for medium-scale problems:

Large-Scale Optimization: the large-scale algorithm is a subspace trust-region method based on the interior-reflective Newton method described in [1] and [2]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG).

Medium-Scale Optimization: fmincon uses a sequential quadratic programming (SQP) method. In this method, the function solves a quadratic programming (QP) subproblem at each iteration. An estimate of the Hessian of the Lagrangian is updated at each iteration using the BFGS formula. A line search is performed, and the QP subproblem is solved using an active-set strategy.

Here I try to answer three questions:

1. What are large-scale optimization and medium-scale optimization?
2. What are the principles of the subspace trust-region and sequential quadratic programming methods provided by fmincon?
3. What are the BFGS formula and the line search?

Question 1

The so-called large-scale problem refers to problems in engineering, chemistry and other fields in which a very large number of optimization variables appear. Since the dimension of the independent variable is very high, such problems are solved by decomposing them into several low-dimensional subproblems. The medium-scale optimization problem is simply the concept MATLAB introduces as the counterpart of the large-scale problem: problems whose number of optimization variables is not large, solved with general-purpose algorithms such as Newton's method or the steepest descent method.

Question 2

For large-scale problems, fmincon uses the subspace trust-region optimization algorithm. This algorithm expands the objective function as a Taylor series in a neighborhood of x (x can be thought of as the initial guess you provide); this neighborhood is called the trust region, and the Taylor expansion is carried out to second order:

q(s) = g'*s + (1/2)*s'*H*s    (1)

where s is the step away from the current point and g and H are the gradient and Hessian of the objective there; the step is required to stay inside the trust region.

At this point the objective function is viewed through its local behaviour. Within such a neighborhood we look for a new point x1 that reduces the value of the objective function. This problem is simpler than the original one; in practice, however, solving this subproblem is still intolerable when the number of optimization variables is very large.

At the same time, note that because of the second-order Taylor expansion, we are required to supply the first-order derivatives of the function. If we cannot provide an expression for the first-order derivative, MATLAB cannot compute the second-order derivative (the Hessian matrix). So if we call fmincon without supplying a first-order derivative expression, fmincon gives up on the large-scale algorithm. A sketch of how the gradient can be supplied so that the large-scale algorithm is available is given below.
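As a rough sketch of that idea: fun2 below is a made-up illustrative objective, and GradObj / LargeScale are the old-style optimset option names of this article's era of MATLAB.

% fun2.m -- objective that also returns its gradient, so the large-scale
% (trust-region) algorithm has the first-order information it needs.
% (fun2 is an illustrative name; save it in its own file.)
function [f, g] = fun2(x)
f = x(1)^2 + 3*x(2)^2;      % illustrative objective
g = [2*x(1); 6*x(2)];       % its gradient, supplied analytically
end

% In the calling script: tell fmincon that the objective returns the
% gradient and that the large-scale algorithm should be used.
opts = optimset('GradObj', 'on', 'LargeScale', 'on');
x = fmincon(@fun2, [1; 1], [], [], [], [], [0; 0], [5; 5], [], opts);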

As mentioned earlier, directly solving the transformed problem is still intolerable. The subspace trust-region method therefore makes a further approximation and restricts the problem to a two-dimensional subspace of the trust region.

The sequential quadratic programming method transforms a nonlinear optimization problem with equality and inequality constraints (which may themselves be nonlinear) into a quadratic programming problem, and that quadratic programming problem is similar in form to formula (1); a standard statement of the QP subproblem is sketched below. The specific transformation process can be found at:

http://www.caam.rice.edu/adpadu/talks/sqp1.pdf
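For reference, the textbook form of that QP subproblem at the current iterate x_k can be written as follows; this standard statement is my addition, not taken from the article itself:

\begin{aligned}
\min_{d}\ & \nabla f(x_k)^{\mathsf T} d + \tfrac{1}{2}\, d^{\mathsf T} H_k\, d\\
\text{s.t.}\ & \nabla g_i(x_k)^{\mathsf T} d + g_i(x_k) \le 0,\\
 & \nabla h_j(x_k)^{\mathsf T} d + h_j(x_k) = 0,
\end{aligned}

where d is the search direction, g_i and h_j are the inequality and equality constraints, and H_k is the approximation to the Hessian of the Lagrangian maintained by the BFGS update discussed next.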

Question 3

For medium-scale problems, solving the quadratic programming subproblem involves the Hessian matrix. The approximate Hessian matrix is computed by a quasi-Newton method; quasi-Newton methods provide two formulas that can be used to iterate the Hessian matrix (or its inverse), the BFGS formula and the DFP formula, and the initial Hessian matrix can be chosen arbitrarily, for example as the identity matrix I.

The BFGS formula is as follows:

H(k+1) = H(k) + q(k)*q(k)'/(q(k)'*s(k)) - H(k)*s(k)*s(k)'*H(k)/(s(k)'*H(k)*s(k))    (3)

where s(k) is the step taken at iteration k and q(k) is the corresponding change in the gradient of the Lagrangian.
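As a small concrete illustration of formula (3), here is a sketch of one BFGS update step; bfgs_update is an invented helper name, not anything inside MATLAB, and the numbers are arbitrary.

% One BFGS update of an approximate Hessian H, following formula (3).
% s is the step between successive iterates, q the change in the gradient.
function H = bfgs_update(H, s, q)
H = H + (q*q')/(q'*s) - (H*(s*s')*H)/(s'*H*s);
end

% Example: start from the identity matrix, as mentioned above.
H = eye(2);
s = [0.1; -0.2];              % step taken
q = [0.3;  0.1];              % observed change in the gradient
H = bfgs_update(H, s, q);     % updated Hessian approximation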

To sum up: when fmincon runs, it first checks whether a gradient expression has been supplied. If there is one, it selects the large-scale algorithm (subspace trust region), which involves computing an approximate Hessian matrix; because the gradient formula has been provided, the Hessian matrix can be computed directly by finite differences. And if the user supplies a formula for the Hessian directly, that is used as-is.

If no gradient expression is provided, fmincon selects the SQP algorithm. In that algorithm the Hessian matrix is produced by the BFGS iteration starting from the initial Hessian matrix. Note that the q term in the BFGS formula requires the gradient of the objective function, so the finite-difference method is needed to approximate it when building the Hessian matrix.
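If you want to confirm which of the two algorithms a particular call ended up using, the fourth output argument of fmincon is a structure whose algorithm field names it. A minimal check, reusing the illustrative problem from earlier:

% Ask fmincon for its output structure to see which algorithm it chose.
fun = @(x) (x(1) - 1)^2 + (x(2) - 2)^2;    % same illustrative objective as above
x0  = [0; 0];
[x, fval, exitflag, output] = fmincon(fun, x0, [1 1], 2, [], [], [0; 0], [5; 5]);
disp(output.algorithm)     % name of the algorithm fmincon actually used
disp(output.iterations)    % number of iterations it took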
