Similar Documents (20 results)
1.
We present an algorithm for very large-scale linearly constrained nonlinear programming (LCNP) based on a limited-storage quasi-Newton method. In large-scale programming, solving the reduced Newton equation at each iteration can be expensive and may not be justified when far from a local solution; moreover, the amount of storage required by the reduced Hessian matrix, and even the computing time for its quasi-Newton approximation, may be prohibitive. An alternative based on the reduced truncated-Newton methodology, which has proved satisfactory for large-scale problems, is not recommended for very large-scale problems, since it requires an additional gradient evaluation and the solution of two systems of linear equations per minor iteration. We recommend a 2-step BFGS approximation of the inverse of the reduced Hessian matrix that does not require storing any matrix, since what is approximated is the matrix-vector product itself; it uses the reduced gradient and information from the two previous iterations and the so-termed restart iteration. A diagonal direct BFGS preconditioning is used.
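As an illustration of the matrix-free idea, the sketch below applies an inverse-Hessian approximation to the gradient using the standard limited-memory BFGS two-loop recursion with a memory of two iterations. This is an assumption-laden sketch, not the authors' 2-step formula: the paper's use of the restart iteration and its diagonal preconditioning differ in detail.

```python
import numpy as np

def two_step_bfgs_direction(g, s_list, y_list):
    """Matrix-free approximation of -(reduced Hessian)^{-1} @ g via the
    L-BFGS two-loop recursion with memory m = 2: only the curvature
    pairs (s, y) from the two previous iterations are stored, never a
    matrix.  Sketch only; the paper's 2-step BFGS formula differs."""
    q = g.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: most recent pair first.
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q = q - a * y
    # Initial scaling H0 = gamma * I (Shanno-Phua scaling).
    s_last, y_last = s_list[-1], y_list[-1]
    q = q * (np.dot(s_last, y_last) / np.dot(y_last, y_last))
    # Second loop: oldest pair first.
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * np.dot(y, q)
        q = q + (a - b) * s
    return -q  # quasi-Newton search direction
```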

2.
A new algorithm is presented for carrying out the large-scale unconstrained optimization required in variational data assimilation using the Newton method. The algorithm is referred to as the adjoint Newton algorithm. It is based on first- and second-order adjoint techniques, which allow the Newton line-search direction to be obtained by integrating a tangent linear model backwards in time (starting from a final condition with negative time steps). The error incurred in quasi-Newton type algorithms by approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables is thus completely eliminated, and the storage problem related to the Hessian disappears, since the explicit Hessian is not required. The adjoint Newton algorithm is applied to three one-dimensional models and to a two-dimensional limited-area shallow water equations model with both model-generated and First Global Geophysical Experiment data. We compare its performance with that of the truncated Newton, adjoint truncated Newton, and LBFGS methods. Our numerical tests indicate that the adjoint Newton algorithm is very efficient and finds the minima within three or four iterations for the problems tested here. In the case of the two-dimensional shallow water equations model, it improves upon the efficiencies of the truncated Newton and LBFGS methods by a factor of at least 14 in terms of the CPU time required to satisfy the same convergence criterion. The Newton, truncated Newton, and LBFGS methods are general-purpose unconstrained minimization methods. The adjoint Newton algorithm is useful only for optimal control problems where the model equations serve as strong constraints and the corresponding tangent linear model can be integrated backwards in time. When the backwards integration of the tangent linear model is ill-posed in the sense of Hadamard, the adjoint Newton algorithm may fail, so it must be used with some caution. A possible remedy for this weakness is proposed.

3.
The truncated Newton algorithm was devised by Dembo and Steihaug (Ref. 1) for solving large sparse unconstrained optimization problems. When far from a minimum, an accurate solution of the Newton equations may not be justified. Dembo's method solves these equations by the conjugate direction method, but truncates the iteration once a required degree of accuracy has been obtained. We present favorable numerical results obtained with the algorithm and compare them with those of existing codes for large-scale optimization.
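The truncation idea can be sketched as conjugate gradients on the Newton equations with an early-exit residual test. In the sketch below, `hess_vec` (a Hessian-vector product routine) and `eta` (the relative residual tolerance) are illustrative names; the forcing sequence and safeguards of Ref. 1 are more refined.

```python
import numpy as np

def truncated_newton_step(grad, hess_vec, eta=0.5, max_iter=50):
    """Approximately solve H d = -g by conjugate gradients (CG),
    truncating once ||H d + g|| <= eta * ||g||, and bailing out to
    the steepest-descent direction if negative curvature appears."""
    d = np.zeros_like(grad)
    r = -grad.copy()          # residual of H d = -g, with d = 0
    p = r.copy()
    tol = eta * np.linalg.norm(grad)
    for _ in range(max_iter):
        Hp = hess_vec(p)
        curv = p @ Hp
        if curv <= 0:                       # negative curvature: stop
            return d if d.any() else -grad
        alpha = (r @ r) / curv
        d += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) <= tol:    # truncation test
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d
```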

4.
The FEAST eigenvalue algorithm is a subspace iteration algorithm that uses contour integration to obtain the eigenvectors of a matrix whose eigenvalues are located in a user-defined region of the complex plane. By computing small numbers of eigenvalues in specific regions of the complex plane, FEAST naturally parallelizes the solution of eigenvalue problems by solving for multiple eigenpairs simultaneously. The traditional FEAST algorithm is implemented by directly solving collections of shifted linear systems of equations; in this paper, we describe a variation of the FEAST algorithm that solves the shifted linear systems inexactly with iterative Krylov subspace algorithms. We show that this iterative FEAST algorithm (which we call IFEAST) is mathematically equivalent to a block Krylov subspace method for solving eigenvalue problems. By using Krylov subspaces indirectly, through the solution of shifted linear systems, rather than directly in projecting the eigenvalue problem, IFEAST can solve eigenvalue problems using Krylov subspaces of very large dimension without ever storing a basis for those subspaces. IFEAST thus combines the flexibility and power of Krylov methods, requiring only matrix-vector multiplication for solving eigenvalue problems, with the natural parallelism of the traditional FEAST algorithm. We discuss the relationship between IFEAST and more traditional Krylov methods and provide numerical examples illustrating its behavior.
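The contour-integration core can be sketched in a few lines: discretize the resolvent integral by quadrature on a circle, apply it to a trial block, and perform a Rayleigh-Ritz step. The function and parameter names below are ours, and the sketch assumes a real symmetric matrix and uses dense direct solves; IFEAST would replace `np.linalg.solve` with an inexact Krylov solve of each shifted system.

```python
import numpy as np

def feast_subspace(A, center, radius, m0, n_quad=8, n_iter=4, seed=0):
    """One FEAST-style subspace iteration (dense sketch): apply a
    quadrature discretization of the contour integral of the resolvent
    (z I - A)^{-1} to a trial block Y, then Rayleigh-Ritz on the
    filtered subspace.  A is assumed real symmetric."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    I = np.eye(n)
    Y = rng.standard_normal((n, m0))
    for _ in range(n_iter):
        Q = np.zeros((n, m0))
        for k in range(n_quad):  # quadrature nodes on the circle
            theta = 2.0 * np.pi * (k + 0.5) / n_quad
            z = center + radius * np.exp(1j * theta)
            w = radius * np.exp(1j * theta) / n_quad  # dz/(2*pi*i) weight
            Q += (w * np.linalg.solve(z * I - A, Y)).real
        Q, _ = np.linalg.qr(Q)                 # orthonormal filtered basis
        lam, V = np.linalg.eigh(Q.T @ A @ Q)   # Rayleigh-Ritz
        Y = Q @ V
    return lam, Y
```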

5.
To efficiently solve a large-scale unconstrained minimization problem with a dense Hessian matrix, this paper proposes using an incomplete Hessian matrix to define a new modified Newton method, called the incomplete Hessian Newton method (IHN). A theoretical analysis shows that IHN is globally convergent and has a linear rate of convergence when the incomplete Hessian matrix is a properly selected symmetric positive definite matrix. It also shows that the Wolfe conditions hold in IHN with a line-search step length of one. As an important application, an effective IHN and a modified IHN, called the truncated-IHN method (T-IHN), are constructed for solving a large-scale chemical-database optimal projection mapping problem. T-IHN is shown to work well even with indefinite incomplete Hessian matrices. Numerical results confirm the theoretical results for IHN and demonstrate the promising potential of T-IHN as an efficient minimization algorithm.
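A minimal sketch of one IHN step follows. The banded selection is only one illustrative way to form an incomplete Hessian (function and parameter names are ours); the paper's theory requires only that the incomplete Hessian be symmetric positive definite.

```python
import numpy as np

def ihn_step(x, grad, hess, bandwidth=1):
    """One incomplete-Hessian Newton step (sketch): keep only a band of
    the dense Hessian as the 'incomplete' part and solve for the search
    direction.  The banded choice is illustrative; the paper asks only
    for a symmetric positive definite incomplete Hessian, and T-IHN
    tolerates indefinite ones."""
    H = hess(x)
    n = H.shape[0]
    offsets = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    H_inc = np.where(offsets <= bandwidth, H, 0.0)
    d = np.linalg.solve(H_inc, -grad(x))
    return x + d  # the paper shows the Wolfe conditions hold with step 1
```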

6.
A class of generalized variable penalty formulations for solving nonlinear programming problems is presented. The method poses a sequence of unconstrained optimization problems with mechanisms to control the quality of the approximation of the Hessian matrix, which is expressed in terms of the constraint functions and their first derivatives. The unconstrained problems are solved using a modified Newton algorithm. The method is particularly applicable to solution techniques where an approximate analysis step has to be used (e.g., constraint approximations), which often results in violation of the constraints. The generalized penalty formulation contains two floating parameters, which are used to meet the penalty requirements and to control the errors in the approximation of the Hessian matrix. A third parameter is used to vary the class of standard barrier or quasibarrier functions, forming a branch of the variable penalty formulation. Several possibilities for choosing these floating parameters are discussed. The numerical effectiveness of the algorithm is demonstrated on a relatively large set of test examples.

7.
A technique for deriving formulas for the second derivatives of a composite function with constrained variables is proposed. The original system of constraint equations is associated with a linear system of equations whose solution is used to determine the Hessian of the function. The resulting formulas are applied to discrete problems obtained by approximating optimal control problems using Runge-Kutta methods of various orders. For a particular optimal control problem, the numerical results obtained with the gradient method and Newton's method based on these formulas are described and analyzed in detail.

8.
Many optimization problems can be reformulated as a system of equations, which may then be solved by the generalized Newton method or the smoothing Newton method to recover a solution of the original problem. Such methods have been powerful tools for solving many optimization problems in the literature. In this paper, we propose a Newton-type algorithm for solving a class of monotone affine variational inequality problems (AVIPs for short). The proposed algorithm uses techniques based on both the generalized Newton method and the smoothing Newton method. In particular, we show that the algorithm finds an exact solution of the AVIP in a finite number of iterations under the assumption that the solution set of the AVIP is nonempty. Preliminary numerical results are reported.
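For intuition about the equation reformulation, here is a plain semismooth Newton sketch on the natural-residual reformulation of a box-constrained AVI with F(x) = Mx + q. This is our simplified stand-in, not the paper's algorithm, which blends generalized- and smoothing-Newton techniques and carries a finite-termination guarantee this sketch lacks.

```python
import numpy as np

def semismooth_newton_avi(M, q, l, u, x0, tol=1e-10, max_iter=50):
    """Semismooth Newton on Phi(x) = x - clip(x - (M x + q), l, u) = 0,
    the natural-residual reformulation of a box-constrained AVI.
    An element of the generalized Jacobian is I - D (I - M), where D is
    diagonal with ones where the clip is inactive.  Sketch only."""
    n = x0.size
    x = x0.astype(float)
    for _ in range(max_iter):
        z = x - (M @ x + q)
        Phi = x - np.clip(z, l, u)
        if np.linalg.norm(Phi) < tol:
            break
        inactive = ((z > l) & (z < u)).astype(float)
        J = np.eye(n) - inactive[:, None] * (np.eye(n) - M)
        x = x - np.linalg.solve(J, Phi)
    return x
```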

9.
A modified version of the truncated-Newton algorithm of Nash ([24], [25], [29]) is presented, differing from it only in the use of an exact Hessian-vector product, for carrying out the large-scale unconstrained optimization required in variational data assimilation. The exact Hessian-vector product is obtained by solving an optimal control problem with distributed parameters (i.e., the system under study occupies a spatial and temporal domain and is modeled by partial differential equations). The algorithm is referred to as the adjoint truncated-Newton algorithm. It is based on first- and second-order adjoint techniques, which yield a better approximation to the Newton line-search direction for the problem tested here. The adjoint truncated-Newton algorithm is applied to a limited-area shallow water equations model with model-generated data, where the initial conditions serve as control variables. We compare its performance with that of the original truncated-Newton method [29] and the LBFGS (limited-memory BFGS) method of Liu and Nocedal [23]. Our numerical tests yield results twice as fast as those obtained with the truncated-Newton algorithm, and faster than the LBFGS method in terms of both the number of iterations and the CPU time.
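For contrast, the usual inexact alternative that the adjoint formulation avoids is the finite-difference Hessian-vector product sketched below: it costs one extra gradient evaluation and carries an O(eps) truncation error, which the exact second-order-adjoint product eliminates at comparable cost.

```python
import numpy as np

def hess_vec_fd(grad, x, v, eps=1e-6):
    """One-sided finite-difference Hessian-vector product H(x) v,
    the quantity the adjoint truncated-Newton algorithm instead
    obtains exactly from a second-order adjoint integration."""
    return (grad(x + eps * v) - grad(x)) / eps
```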

10.
We describe an enhanced version of the primal-dual interior point algorithm in Lasdon, Plummer, and Yu (ORSA Journal on Computing, vol. 7, no. 3, pp. 321–332, 1995), designed to improve convergence with minimal loss of efficiency and to solve large sparse nonlinear problems which may not be convex. New features include (a) a backtracking linesearch using an L1 exact penalty function, (b) ensuring that search directions are downhill for this function by increasing Lagrangian Hessian diagonal elements when necessary, (c) a quasi-Newton option, where the Lagrangian Hessian is replaced by a positive definite approximation, (d) inexact solution of each barrier subproblem, in order to approach the central trajectory as the barrier parameter approaches zero, and (e) solution of the symmetric indefinite linear Newton equations using a multifrontal sparse Gaussian elimination procedure, as implemented in the MA47 subroutine from the Harwell Library (Rutherford Appleton Laboratory Report RAL-95-001, Oxfordshire, UK, Jan. 1995). Second derivatives of all problem functions are required when the true Hessian option is used. A Fortran implementation is briefly described. Computational results are presented for 34 smaller models coded in Fortran, where first and second derivatives are approximated by differencing, and for 89 larger GAMS models, where analytic first derivatives are available and finite differencing is used for second partials. The GAMS results are, to our knowledge, the first to show the performance of this promising class of algorithms on large sparse NLPs. For both small and large problems, both the true Hessian and quasi-Newton options are quite reliable and converge rapidly. Using the true Hessian, INTOPT is as reliable as MINOS on the GAMS models, although not as reliable as CONOPT. Computation times are considerably longer than for the other two solvers. However, interior point methods should be considerably faster than they are here when analytic second derivatives are available, and algorithmic improvements and problem preprocessing should further narrow the gap.

11.
This paper develops a numerical method of Newton type for solving systems of nonlinear equations described by nonsmooth continuous functions. We propose and justify a new generalized Newton algorithm based on graphical derivatives, which had not previously been used to derive Newton-type methods for nonsmooth equations. Based on advanced techniques of variational analysis and generalized differentiation, we establish the well-posedness of the algorithm, its local superlinear convergence, and its global convergence of Kantorovich type. Our convergence results hold with no semismoothness or Lipschitzian assumptions, as illustrated by examples. The algorithm and main results of the paper are compared with well-recognized semismooth and B-differentiable versions of Newton's method for nonsmooth Lipschitzian equations.

12.
We present a partitioning group correction (PGC) algorithm, based on a trust region and the conjugate gradient method, for large-scale sparse unconstrained optimization. In large sparse optimization, computing the whole Hessian matrix and solving the Newton-like equations at each iteration can be considerably expensive when a trust region method is adopted. The method depends on a symmetric consistent partition of the columns of the Hessian matrix and an inexact solution of the Newton-like equations by the conjugate gradient method. We allow the current direction to exceed the trust-region bound if it is a good descent direction. In addition, we study a method for dealing with sparse matrices that have a dense structural part. Good convergence properties are retained, and we contrast the computational behavior of our method with that of other algorithms. Our numerical tests show that the algorithm is promising and quite effective, and that its performance is comparable to or better than that of other available algorithms.
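The inexact trust-region solve follows the familiar Steihaug pattern sketched below (illustrative names; `hess_vec` would use the group-partitioned Hessian columns). The PGC algorithm departs from this sketch by optionally letting a good descent direction pass the bound `delta`.

```python
import numpy as np

def steihaug_cg(g, hess_vec, delta, tol=1e-8, max_iter=100):
    """Steihaug-style CG for min_d g.T d + 0.5 d.T H d, ||d|| <= delta,
    with H available only through products.  Stops on the trust-region
    boundary or on negative curvature."""
    d = np.zeros_like(g)
    r, p = -g.copy(), -g.copy()
    for _ in range(max_iter):
        Hp = hess_vec(p)
        curv = p @ Hp
        if curv <= 0:                                   # negative curvature
            return _to_boundary(d, p, delta)
        alpha = (r @ r) / curv
        if np.linalg.norm(d + alpha * p) >= delta:      # would leave region
            return _to_boundary(d, p, delta)
        d += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol:
            return d
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d

def _to_boundary(d, p, delta):
    """Return d + tau * p with ||d + tau * p|| = delta and tau >= 0."""
    a, b, c = p @ p, 2 * (d @ p), d @ d - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return d + tau * p
```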

13.
The quadratic programming aspects of a full-space successive quadratic programming (SQP) method are described. In particular, fill-in, matrix factor and active set updating, numerical stability, and indefiniteness of the Hessian matrix are discussed in conjunction with a sparse modification of the Bunch and Parlett factorization of symmetric indefinite (Kuhn-Tucker) matrices of the type often encountered in optimization. A new pivoting strategy, called constrained pivoting, is proposed to reduce fill-in and is compared with complete, partial, and threshold pivoting. It is shown that constrained pivoting often significantly reduces fill-in and thus the iterative computational burden associated with the factorization and solution of the Kuhn-Tucker conditions within the QP subproblem. A numerical algorithm for updating the lower triangular and diagonal factors is presented and shown to be very fast, usually requiring only about 5% of the cost of refactorization. Two active set strategies are also presented, with the options of adding inequalities either one or several at a time; in either case, the effects on matrix factor updating are shown to be small. Finally, a simple test is used to maintain descent directions in the quadratic program. Our sparse symmetric indefinite QP algorithm is tested in the context of a family of SQP algorithms that includes a full-space Newton method with analytical derivatives, a full-space BFGS method, and a range- and null-space decomposition (RND) method in which the projected Hessian is calculated from either analytical second derivatives or the BFGS update. Several chemical process optimization problems, with small and large degrees of freedom, are used as test problems. These include minimum-work calculations for multistage isothermal compression, minimum-area targeting for heat exchanger networks, and distillation optimizations involving some azeotropic and extractive distillations. Numerical results show uniformly that both the proposed QP and SQP algorithms, particularly the full-space Newton method, are reliable and efficient. No failures were experienced at either level.

14.
For the exact Newton method for solving monotone semidefinite complementarity problems (SDCPs), a linear system of equations must be solved exactly at each iteration. For large problems, this can be very expensive. In this paper, we propose a new inexact smoothing/continuation algorithm for the solution of large-scale monotone SDCPs. At each iteration the corresponding linear system of equations is solved only approximately. Under mild assumptions, the algorithm is shown to be both globally and superlinearly convergent.

15.
We show that, for an unconstrained optimization problem, the long-term optimal trajectory consists of a sequence of greatest-descent directions followed by a Newton step in the final iteration. The greatest-descent direction can be computed approximately using a Levenberg-Marquardt-like formula. This suggests the view that the Newton method approximates a Levenberg-Marquardt-like formula at a finite distance from the minimum point, instead of the standard view that the Levenberg-Marquardt formula is a way to approximate the Newton method. With the insight gained from this analysis, we develop a two-dimensional version of the Levenberg-Marquardt-like formula, using the two numerically largest components of the gradient vector to define new search directions. In this way, we avoid inverting a high-dimensional matrix and also reduce the storage requirements for the full Hessian matrix in problems with a large number of variables.
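A sketch of the resulting two-dimensional step follows. The interface names are ours, and `hess2(x, idx)`, returning the 2x2 second-derivative block for the chosen index pair, is an assumed helper; the point is that only a 2x2 system is ever solved.

```python
import numpy as np

def two_dim_lm_step(x, grad, hess2, mu):
    """Two-dimensional Levenberg-Marquardt-like step (sketch): damp and
    solve only in the two coordinates carrying the numerically largest
    gradient components, so a 2x2 system replaces the n x n one and the
    full Hessian is never formed or stored."""
    g = grad(x)
    idx = np.argsort(np.abs(g))[-2:]          # two largest |gradient| entries
    H2 = hess2(x, idx)                        # assumed 2x2 block helper
    d2 = np.linalg.solve(H2 + mu * np.eye(2), -g[idx])
    d = np.zeros_like(g)
    d[idx] = d2
    return x + d
```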

16.
Recently, a modification of the Newton method for finding a zero of a univariate function, with local cubic convergence, has been introduced. Here, we extend this modification to the multi-dimensional case: we introduce a modified Newton method for vector functions that converges locally cubically, without the need to compute higher derivatives. The case of multiple roots is not treated. Per iteration, the method requires one evaluation of the function vector and the solution of two linear systems with the Jacobian as coefficient matrix, where the Jacobian has to be evaluated twice. Since the additional computational effort is nearly that of an additional Newton step, the proposed method is useful especially in difficult cases where the number of iterations can be reduced by a factor of two in comparison with the Newton method. That such an improvement is possible is shown by a numerical example. The modified Newton method can also be advantageous when evaluating the function is more expensive than solving a linear system with the Jacobian as coefficient matrix; an example involving numerical quadrature is given. Finally, we briefly discuss possible extensions of the method to make it globally convergent.
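One classical scheme matching this cost profile exactly (one function evaluation, two Jacobian evaluations, two linear solves per iteration, local cubic convergence) is the midpoint Newton method sketched below; whether it coincides with the authors' modification is our assumption.

```python
import numpy as np

def midpoint_newton(F, J, x, tol=1e-12, max_iter=50):
    """Midpoint Newton iteration for F(x) = 0: evaluate the Jacobian
    again at the half Newton step and use it for the full update.
    Cost per iteration: one F evaluation, two J evaluations, two
    linear solves -- cubic local convergence for simple roots."""
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        y = x - 0.5 * np.linalg.solve(J(x), Fx)   # half Newton step
        x = x - np.linalg.solve(J(y), Fx)         # corrected full step
    return x
```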

17.
For the algebraic Riccati equation whose four coefficient matrices form a nonsingular M-matrix or an irreducible singular M-matrix K, the minimal nonnegative solution can be found by Newton's method or the doubling algorithm. When the two diagonal blocks of the matrix K have both large and small diagonal entries, the doubling algorithm often requires many more iterations than Newton's method, and Newton's method may then be more efficient. This has motivated us to study Newton-like methods that have higher-order convergence and are not much more expensive per iteration. We find that the Chebyshev method of order three and a two-step modified Chebyshev method of order four can be more efficient than Newton's method. For the Riccati equation, these two Newton-like methods are special cases of the Newton-Shamanskii method. We show that, starting with the zero initial guess or some other suitable initial guess, the sequence generated by the Newton-Shamanskii method converges monotonically to the minimal nonnegative solution. We also explain that the Newton-like methods can be used to great advantage when solving some Riccati equations involving a parameter.
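For a scalar equation f(x) = 0, the order-three Chebyshev step reads as below; in the Riccati setting the same expansion is applied to the matrix equation, with each correction obtained from a Sylvester-type solve rather than a division. This scalar form is an illustration, not the paper's matrix iteration.

```python
def chebyshev_step(f, df, d2f, x):
    """One step of Chebyshev's third-order method for a scalar root:
    x+ = x - (1 + L/2) * f/f', where L = f * f'' / (f')^2."""
    u = f(x) / df(x)                 # Newton correction
    L = f(x) * d2f(x) / df(x) ** 2   # curvature correction factor
    return x - (1.0 + 0.5 * L) * u
```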

18.
Drawing mainly on Wu's elimination method, we study polynomial-type Lagrange and Hamilton equations in constrained dynamics and propose a new algorithm for computing the constraints. Compared with earlier algorithms, the new algorithm requires neither computing the rank of the Hessian matrix nor testing the linear dependence of the equations, which greatly reduces the number of computational steps and simplifies the calculation. Moreover, expression swell during the computation is small, and in most cases there is none. The new algorithm can be implemented on a computer using symbolic computation software.

19.
This paper is devoted to globally convergent methods for solving large sparse systems of nonlinear equations with an inexact approximation of the Jacobian matrix. These methods include difference versions of the Newton method and various quasi-Newton methods. We propose a class of trust region methods together with a proof of their global convergence, and we describe an implementable globally convergent algorithm that can serve as a realization of these methods. Considerable attention is devoted to the application of conjugate gradient-type iterative methods to the solution of the linear subproblems. We prove that both the GMRES and the smoothed CGS methods, when well preconditioned, can be used for the construction of globally convergent trust region methods. The efficiency of our algorithm is demonstrated computationally on a large collection of sparse test problems.
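A minimal Newton-GMRES sketch of the inner-outer structure follows; globalization and preconditioning, which are central to the paper, are omitted, and `jac_vec` (a Jacobian-vector product, e.g. a difference approximation) is an illustrative name.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres(F, jac_vec, x, tol=1e-8, eta=1e-2, max_iter=50):
    """Inexact Newton for F(x) = 0: each system J(x) d = -F(x) is
    solved only approximately by GMRES, with the Jacobian available
    solely through matrix-vector products jac_vec(x, v)."""
    n = x.size
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        J = LinearOperator((n, n), matvec=lambda v: jac_vec(x, v))
        d, info = gmres(J, -Fx, rtol=eta)  # rtol: SciPy >= 1.12 keyword
        x = x + d
    return x
```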

20.
In this paper, we derive the shifted Jacobi operational matrix (JOM) of fractional derivatives, which is applied together with the spectral tau method for the numerical solution of general linear multi-term fractional differential equations (FDEs). A new approach implementing the shifted Jacobi operational matrix in combination with the shifted Jacobi collocation technique is introduced for the numerical solution of nonlinear multi-term FDEs. The main characteristic of this approach is that it reduces such problems to systems of algebraic equations, which greatly simplifies them. The proposed methods are applied to linear and nonlinear multi-term FDEs subject to initial or boundary conditions, and exact solutions are obtained for some test problems. Special attention is given to comparing the numerical results obtained by the new algorithm with those found by other known methods.
