Similar Literature
A total of 20 similar records were found (search time: 15 ms).
1.
In this paper, we first present an adaptive nonmonotone term to improve the efficiency of the nonmonotone line search, and then suggest an active set identification technique to obtain a more efficient descent direction, which improves the local convergence behavior of the algorithm and decreases the computational cost. By means of the adaptive nonmonotone line search and the active set identification technique, we put forward a globally convergent gradient-based method for nonnegative matrix factorization (NMF) within the alternating nonnegative least squares framework, in which we introduce a modified Barzilai-Borwein (BB) step size. The new modified BB step size and a larger-step-size strategy are exploited to accelerate convergence. Finally, the results of extensive numerical experiments using both synthetic and image datasets show that our proposed method is efficient in terms of computational speed.
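A minimal sketch (not the authors' exact algorithm) of how a safeguarded Barzilai-Borwein step can drive a projected-gradient update for one nonnegative least-squares subproblem of the ANLS framework, min_{H >= 0} ||V - W H||_F^2; the safeguard bounds and function names are illustrative assumptions.

```python
import numpy as np

def bb_projected_step(W, V, H, H_prev, grad_prev, lo=1e-10, hi=1e10):
    """One projected-gradient update for min_{H >= 0} ||V - W H||_F^2,
    using a safeguarded Barzilai-Borwein (BB1) step size."""
    grad = W.T @ (W @ H) - W.T @ V            # gradient of 0.5 * ||V - W H||_F^2
    s = (H - H_prev).ravel()                  # change in the iterate
    y = (grad - grad_prev).ravel()            # change in the gradient
    sy = float(s @ y)
    alpha = float(s @ s) / sy if sy > 0 else 1.0   # BB1 step; fall back when sy <= 0
    alpha = min(max(alpha, lo), hi)           # safeguard the step size
    H_new = np.maximum(H - alpha * grad, 0.0) # project onto the nonnegative orthant
    return H_new, H, grad                     # new iterate plus data for the next BB step
```

In an ANLS loop this update would alternate between H (with W fixed) and W (with H fixed), each pass wrapped in a nonmonotone line search of the kind the abstract describes.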

2.
The paper analyzes the rate of local convergence of the augmented Lagrangian method for nonlinear second-order cone optimization problems. Under the constraint nondegeneracy condition and the strong second-order sufficient condition, we demonstrate that the sequence of iterates generated by the augmented Lagrangian method converges locally to a local minimizer at a linear rate, with a ratio constant proportional to 1/τ, provided the penalty parameter τ is not less than a threshold. Importantly, the analysis does not require the strict complementarity condition. Supported by the National Natural Science Foundation of China under Project 10771026 and by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
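One hedged way to write the linear-rate statement described above, with θ denoting a problem-dependent constant and τ̄ the threshold mentioned in the abstract (the notation is illustrative, not the paper's):

```latex
\[
\big\|(x^{k+1},\lambda^{k+1})-(x^{*},\lambda^{*})\big\|
\;\le\; \frac{\theta}{\tau}\,
\big\|(x^{k},\lambda^{k})-(x^{*},\lambda^{*})\big\|,
\qquad \tau \ge \bar{\tau}.
\]
```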

3.
The spectral gradient method has proved to be effective for solving large-scale unconstrained optimization problems. It has recently been extended and combined with the projected gradient method for solving optimization problems on convex sets. This combination includes the use of nonmonotone line search techniques to preserve the fast local convergence. In this work we further extend the spectral choice of steplength to accept preconditioned directions when a good preconditioner is available. We present an algorithm that combines the spectral projected gradient method with preconditioning strategies to increase the local speed of convergence while keeping the global properties. We discuss implementation details for solving large-scale problems.
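A minimal sketch of a single (optionally preconditioned) spectral projected gradient trial step, assuming a user-supplied `project` routine that maps onto the convex feasible set and an optional preconditioner applied as the matrix `M_inv`; both names and the safeguard bounds are illustrative.

```python
import numpy as np

def spg_step(x, x_prev, g, g_prev, project, M_inv=None, lo=1e-10, hi=1e10):
    """One (optionally preconditioned) spectral projected gradient trial step."""
    s, y = x - x_prev, g - g_prev
    sy = float(s @ y)
    lam = float(s @ s) / sy if sy > 0 else 1.0          # spectral (BB) steplength
    lam = min(max(lam, lo), hi)                         # safeguard
    d = -lam * (M_inv @ g if M_inv is not None else g)  # preconditioned direction if available
    return project(x + d)                               # map the trial point back onto the feasible set
```

In practice the trial point returned here is then handed to a nonmonotone line search along the segment joining x and the projected point, which is what preserves the global properties mentioned above.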

4.
Augmented Lagrangian algorithms are very popular tools for solving nonlinear programming problems. At each outer iteration of these methods a simpler optimization problem is solved, for which efficient algorithms can be used, especially when the problems are large. The most famous Augmented Lagrangian algorithm for minimization with inequality constraints is the Powell-Hestenes-Rockafellar (PHR) method. The main drawback of PHR is that the objective function of the subproblems is not twice continuously differentiable. This is the main motivation for the introduction of many alternative Augmented Lagrangian methods. Most of them have interesting interpretations as proximal point methods for solving the dual problem when the original nonlinear programming problem is convex. In this paper a numerical comparison between many of these methods is performed using all the suitable problems of the CUTE collection. This author was supported by ProNEx MCT/CNPq/FAPERJ 171.164/2003, FAPESP (Grants 2001/04597-4, 2002/00094-0, and 2003/09169-6) and CNPq (Grant 302266/2002-0). This author was partially supported by CNPq-Brasil and CDCHT-Venezuela. This author was supported by ProNEx MCT/CNPq/FAPERJ 171.164/2003, FAPESP (Grant 2001/04597-4) and CNPq.
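For reference, for minimizing f(x) subject to g_i(x) ≤ 0, the PHR augmented Lagrangian and its multiplier update take the standard form below; the squared max term is precisely what prevents the subproblem objective from being twice continuously differentiable:

```latex
\[
L_{\rho}(x,\lambda) = f(x) + \frac{1}{2\rho}\sum_{i=1}^{m}
\Big(\max\{0,\,\lambda_i + \rho\, g_i(x)\}^{2} - \lambda_i^{2}\Big),
\qquad
\lambda_i^{+} = \max\{0,\,\lambda_i + \rho\, g_i(x)\}.
\]
```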

5.
In this paper, a Gauss-Newton method is proposed for the solution of large-scale nonlinear least-squares problems, by introducing a truncation strategy in the method presented in [9]. First, sufficient conditions are established for ensuring the convergence of an iterative method that employs a truncation scheme for computing the search direction as an approximate solution of a Gauss-Newton type equation. Then, a specific truncated Gauss-Newton algorithm is described, whose global convergence is ensured under standard assumptions, together with a superlinear convergence rate in the zero-residual case. The results of computational experiments on a set of standard test problems are reported.
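A minimal sketch (illustrative, not the paper's exact scheme) of a truncated Gauss-Newton direction: the normal equations (J^T J) d = -J^T r are solved only approximately by conjugate gradients, stopping once a relative-residual tolerance is met.

```python
import numpy as np

def truncated_gauss_newton_direction(J, r, tol=0.1, max_iter=50):
    """Approximate solution of the Gauss-Newton equation (J^T J) d = -J^T r by
    conjugate gradients, truncated once the normal-equation residual is small."""
    g = J.T @ r                          # gradient of 0.5 * ||r||^2
    d = np.zeros_like(g)
    res = -g.copy()                      # residual of the normal equations at d = 0
    p = res.copy()
    target = tol * np.linalg.norm(g)     # truncation threshold
    for _ in range(max_iter):
        Jp = J @ p
        curv = float(Jp @ Jp)            # p^T (J^T J) p
        if curv <= 0.0:                  # J is rank deficient along p: stop here
            break
        alpha = float(res @ res) / curv
        d += alpha * p
        res_new = res - alpha * (J.T @ Jp)
        if np.linalg.norm(res_new) <= target:
            break                        # truncation test satisfied
        beta = float(res_new @ res_new) / float(res @ res)
        p = res_new + beta * p
        res = res_new
    return d if d.any() else -g          # fall back to steepest descent if CG made no progress
```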

6.
鲁其辉, 朱道立. 《应用数学》, 2005, 18(4): 644-653.
This paper considers a system of variational inequalities with constraints. A decomposition algorithm based on augmented Lagrangian duality is proposed, and a convergence analysis of the algorithm is given.

7.
The spectral gradient method is a nonmonotone gradient method for large-scale unconstrained minimization. We strengthen the algorithm by modifications which globalize the method and present strategies for applying preconditioning techniques. The modified algorithm replaces a condition of uniform positive definiteness of the preconditioning matrices with mild conditions on the search directions. The result is a robust algorithm which is effective on very large problems. Encouraging numerical experiments are presented for a variety of standard test problems, for solving nonlinear Poisson-type equations, and also for finding molecular conformations by distance geometry.

8.
Linearly constrained optimization problems with simple bounds are considered in the present work. First, a preconditioned spectral gradient method is defined for the case in which no simple bounds are present. This algorithm can be viewed as a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. The spectral choice of steplength is embedded into the Hessian approximation and the whole process is combined with a nonmonotone line search strategy. The simple bounds are then taken into account by placing them in an exponential penalty term that modifies the objective function. The exponential penalty scheme defines the outer iterations of the process. Each outer iteration involves the application of the previously defined preconditioned spectral gradient method for linear equality constrained problems. Therefore, an equality constrained convex quadratic programming problem needs to be solved at every inner iteration. The associated extended KKT matrix remains constant unless the process is reinitiated. In ordinary inner iterations, only the right-hand side of the KKT system changes. Therefore, suitable sparse factorization techniques can be applied and exploited effectively. Encouraging numerical experiments are presented. This research was supported by FAPESP Grant 2001-04597-4 and Grant 903724-6, FINEP and FAEP-UNICAMP, and the Scientific Computing Center of UCV. The authors thank two anonymous referees whose comments helped us to improve the final version of this paper.

9.
In this paper, we investigate the numerical identification of the diffusion parameters in a linear parabolic problem. The identification is formulated as a constrained minimization problem. By using the augmented Lagrangian method, the inverse problem is reduced to a coupled nonlinear algebraic system, which can be solved efficiently with the preconditioned conjugate gradient method. Finally, we present some numerical experiments to show the efficiency of the proposed methods, even for identifying highly discontinuous parameters. This work was partially supported by the Research Council of Norway, Grant NFR-128224/431.

10.
In this paper we propose new globalization strategies for the Barzilai and Borwein gradient method, based on suitable relaxations of the monotonicity requirements. In particular, we define a class of algorithms that combine nonmonotone watchdog techniques with nonmonotone linesearch rules, and we prove the global convergence of these schemes. Then we perform an extensive computational study, which shows the effectiveness of the proposed approach in the solution of large-dimensional unconstrained optimization problems.
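A minimal sketch of the max-type nonmonotone line search that relaxations of this kind typically build on; the memory window, sufficient-decrease constant, and backtracking factor are illustrative choices, not the paper's exact rule.

```python
def nonmonotone_linesearch(f, x, d, g, f_hist, gamma=1e-4, sigma=0.5, max_backtracks=30):
    """Backtracking line search that accepts a step when f(x + t*d) is sufficiently
    below the maximum of the last few function values rather than f(x) alone."""
    f_ref = max(f_hist)                  # reference value over the memory window
    slope = float(g @ d)                 # directional derivative, assumed negative
    t = 1.0
    for _ in range(max_backtracks):
        if f(x + t * d) <= f_ref + gamma * t * slope:
            break                        # nonmonotone Armijo-type condition holds
        t *= sigma                       # backtrack
    return t

# f_hist would typically hold the last M function values (e.g. M = 10),
# updated after each accepted step
```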

11.
Vector optimization problems are a significant extension of multiobjective optimization, which has a large number of real-life applications. In vector optimization the preference order is induced by an arbitrary closed and convex cone, rather than the nonnegative orthant. We consider extensions of the projected gradient method to vector optimization, which work directly with vector-valued functions, without using scalar-valued objectives. We provide a direction which adequately substitutes for the projected gradient, and establish results which mirror those available for the scalar-valued case, namely stationarity of the cluster points (if any) without convexity assumptions, and convergence of the full sequence generated by the algorithm to a weakly efficient optimum in the convex case, under mild assumptions. We also prove that our results still hold when the search direction is only approximately computed.

12.
Active set strategies for two-dimensional and three-dimensional, unilateral and bilateral obstacle problems are described. Emphasis is given to algorithms resulting from the augmented Lagrangian (i.e., the primal-dual formulation of the discretized obstacle problems), for which convergence and the rate of convergence are considered. For the bilateral case, modifications of the basic primal-dual algorithm are also introduced and analyzed. Finally, efficient computer realizations based on multigrid and multilevel methods are suggested, and different aspects of the proposed techniques are investigated through numerical experiments.

13.
We consider a linesearch globalization of the local primal-dual interior-point Newton method for nonlinear programming introduced by El-Bakry, Tapia, Tsuchiya, and Zhang. The linesearch uses a new merit function that incorporates a modification of the standard augmented Lagrangian function and a weak notion of centrality. We establish a global convergence theory and present promising numerical results.

14.
Optimization, 2012, 61(6): 1107-1130.

We develop three algorithms to solve the subproblems generated by the augmented Lagrangian methods introduced by Iusem-Nasri (2010) for the equilibrium problem. The first algorithm that we propose incorporates the Newton method, and the other two are instances of the subgradient projection method. One of our algorithms is also capable of solving nondifferentiable equilibrium problems. Using well-known test problems, all algorithms introduced here are implemented and numerical results are reported to compare their performances.

15.
This paper further studies the Hestenes-Powell augmented Lagrangian function used for solving equality-constrained nonlinear programming problems by unconstrained minimization methods. Under suitable conditions, we establish the relationship between unconstrained minimizers of the Hestenes-Powell augmented Lagrangian over the space of the original problem variables and solutions of the original constrained problem, and we also give a relationship between unconstrained minimizers of the Hestenes-Powell augmented Lagrangian over the product space of the original variables and the multiplier variables and solutions of the original constrained problem. Hence, from a theoretical point of view, a solution of the original constrained problem and the corresponding Lagrange multipliers can be obtained not only by the well-known method of multipliers, but also by performing a single unconstrained minimization of the Hestenes-Powell augmented Lagrangian over the product space of the original variables and the multiplier variables.
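For reference, for minimizing f(x) subject to h(x) = 0, the Hestenes-Powell augmented Lagrangian has the standard form (c > 0 is the penalty parameter):

```latex
\[
L_{c}(x,\lambda) = f(x) + \lambda^{\top} h(x) + \frac{c}{2}\,\|h(x)\|^{2}.
\]
```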

16.
An active set subspace Barzilai-Borwein gradient algorithm for large-scale bound constrained optimization is proposed. The active sets are estimated by an identification technique. The search direction consists of two parts: some of the components are defined simply, while the other components are determined by the Barzilai-Borwein gradient method. In this work, a nonmonotone line search strategy that guarantees global convergence is used. Preliminary numerical results show that the proposed method is promising and competitive with the well-known method SPG on a subset of bound constrained problems from the CUTEr collection. This work was supported by the 973 Project under grant 2004CB719402 and the NSF project of China under grant 10471036.
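A minimal sketch of the kind of active-set estimate and two-part search direction the abstract describes; the tolerance eps and the exact identification rule are illustrative assumptions, not the paper's precise technique.

```python
import numpy as np

def active_set_bb_direction(x, g, lower, upper, alpha_bb, eps=1e-8):
    """Estimate the active bounds, then build a two-part search direction:
    active components step onto their bound, free components take a BB step."""
    act_low = (x - lower <= eps) & (g > 0)     # lower bounds estimated active
    act_up = (upper - x <= eps) & (g < 0)      # upper bounds estimated active
    d = -alpha_bb * g                          # spectral (BB) step on the free components
    d[act_low] = lower[act_low] - x[act_low]   # simply defined components: hit the lower bound
    d[act_up] = upper[act_up] - x[act_up]      # simply defined components: hit the upper bound
    return d
```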

17.
An exact augmented Lagrangian function for solving nonlinear nonconvex programming problems with inequality constraints is studied. Under suitable assumptions, a correspondence is given between local minimizers of the original constrained problem and unconstrained local minimizers of the augmented Lagrangian over the space of the original problem variables. Furthermore, under certain assumptions on the global solutions, some correspondences are provided between global solutions of the original constrained problem and global minimizers of the augmented Lagrangian over a compact subset of the original variable space. Hence, in theory, a method of multipliers that uses the augmented Lagrangian given in this paper as the auxiliary function can find an optimal solution of the inequality-constrained nonlinear programming problem together with the corresponding Lagrange multipliers.

18.
A fully derivative-free spectral residual method for solving large-scale nonlinear systems of equations is presented. It uses the residual vector in a systematic way as a search direction, a spectral steplength that produces a nonmonotone process, and a globalization strategy that allows for this nonmonotone behavior. The global convergence analysis of the combined scheme is presented. An extensive set of numerical experiments is also reported, indicating that the new combination is competitive and frequently better than well-known Newton-Krylov methods for large-scale problems.
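A minimal sketch of the core idea (in the spirit of derivative-free spectral residual methods, with illustrative safeguards): the residual itself serves as the search direction and a spectral coefficient scales it, so no Jacobians or matrix factorizations are needed.

```python
import numpy as np

def spectral_residual_step(F, x, x_prev, F_x, F_prev, lo=1e-10, hi=1e10):
    """One derivative-free spectral residual step for F(x) = 0: the residual
    itself is the search direction, scaled by a spectral coefficient."""
    s, y = x - x_prev, F_x - F_prev
    sy = float(s @ y)
    sigma = float(s @ s) / sy if abs(sy) > 1e-16 else 1.0   # spectral coefficient
    sigma = np.sign(sigma) * min(max(abs(sigma), lo), hi)   # safeguard its magnitude
    x_new = x - sigma * F_x                                 # no Jacobian, no factorization
    return x_new, F(x_new)

# in a full method this step is wrapped in the nonmonotone globalization
# strategy that the abstract describes
```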

19.
The infinite-dimensional gradient method is applied to the iterative solution of quasilinear elliptic boundary value problems. Earlier results on uniformly monotone problems are extended to a general case within the scope of Hilbert space well-posedness. Linear convergence is proved in the Sobolev norm.

20.
This paper proposes a new algorithm for solving constrained optimization problems: a projected-gradient-type center method. Under the assumptions of continuous differentiability and nondegeneracy, its global convergence is proved. The algorithm is computationally simple and flexible in form.
