Similar Documents
20 similar documents found (search time: 15 ms)
1.
We generalize the subgradient optimization method for nondifferentiable convex programming to utilize conditional subgradients. Firstly, we derive the new method and establish its convergence by generalizing convergence results for traditional subgradient optimization. Secondly, we consider a particular choice of conditional subgradients, obtained by projections, which leads to an easily implementable modification of traditional subgradient optimization schemes. To evaluate the subgradient projection method we consider its use in three applications: uncapacitated facility location, two-person zero-sum matrix games, and multicommodity network flows. Computational experiments show that the subgradient projection method performs better than traditional subgradient optimization; in some cases the difference is considerable. These results suggest that our simple modification may improve subgradient optimization schemes significantly. This finding is important as such schemes are very popular, especially in the context of Lagrangean relaxation.
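To make the projection idea concrete, here is a minimal sketch (not the authors' implementation) of one iteration of a box-constrained subgradient method in which the step direction is the projection of the negative subgradient onto the tangent cone at the current iterate; the helper names, the box feasible set, and the step rule are illustrative assumptions.

```python
import numpy as np

def projected_subgradient_step(x, g, lo, hi, step):
    """One iteration of a box-constrained subgradient method where the step
    direction is the projection of -g onto the tangent cone of the box at x
    (a hedged sketch of one projection-based conditional subgradient choice)."""
    d = -g.copy()
    # At active bounds, drop the components of -g that point outside the box.
    at_lo = np.isclose(x, lo)
    at_hi = np.isclose(x, hi)
    d[at_lo] = np.maximum(d[at_lo], 0.0)
    d[at_hi] = np.minimum(d[at_hi], 0.0)
    # Move along the projected direction, then project back onto the box.
    return np.clip(x + step * d, lo, hi)

# Tiny illustration: minimize f(x) = |x1 - 2| + |x2 + 1| over the box [0, 1] x [0, 1].
f = lambda x: abs(x[0] - 2.0) + abs(x[1] + 1.0)
subgrad = lambda x: np.array([np.sign(x[0] - 2.0), np.sign(x[1] + 1.0)])
x = np.array([0.5, 0.5])
lo, hi = np.zeros(2), np.ones(2)
for k in range(1, 201):
    x = projected_subgradient_step(x, subgrad(x), lo, hi, step=1.0 / k)
print(x, f(x))   # approaches the corner (1, 0) of the box
```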

2.
Subgradient projectors play an important role in optimization and for solving convex feasibility problems. For every locally Lipschitz function, we can define a subgradient projector via generalized subgradients even if the function is not convex. The paper consists of three parts. In the first part, we study basic properties of subgradient projectors and give characterizations of when a subgradient projector is a cutter, a local cutter, or a quasi-nonexpansive mapping. We present global and local convergence analyses of subgradient projectors. Many examples are provided to illustrate the theory. In the second part, we investigate the relationship between the subgradient projector of a prox-regular function and the subgradient projector of its Moreau envelope. We also characterize when a mapping is the subgradient projector of a convex function. In the third part, we focus on linearity properties of subgradient projectors. We show that, under appropriate conditions, a linear operator is a subgradient projector of a convex function if and only if it is a convex combination of the identity operator and a projection operator onto a subspace. In general, neither a convex combination nor a composition of subgradient projectors of convex functions is a subgradient projector of a convex function.
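For orientation, the subgradient projector referred to here is usually built from a fixed selection s(x) of a (generalized) subgradient of f at x; a sketch of the standard construction, with the convention that nothing moves when f(x) <= 0 or s(x) = 0, is

$$
G_f(x) \;=\;
\begin{cases}
x - \dfrac{f(x)}{\|s(x)\|^{2}}\, s(x), & \text{if } f(x) > 0 \text{ and } s(x) \neq 0,\\[1ex]
x, & \text{otherwise,}
\end{cases}
\qquad s(x) \in \partial f(x).
$$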

3.
A generalized subgradient method is considered which uses the subgradients at previous iterations as well as the subgradient at the current point. This method is a direct generalization of the usual subgradient method. We provide two sets of convergence conditions for the generalized subgradient method. Our results provide a larger class of sequences which converge to a minimum point and more freedom of adjustment to accelerate the speed of convergence. Most of this research was performed when the first author was visiting the Decision and Information Systems Department, College of Business, Arizona State University.
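As a rough illustration of a direction built from the current and previous subgradients, here is a short sketch; the fixed weights, memory length, and step rule are hypothetical choices and not the convergence conditions studied in the paper.

```python
import numpy as np

def subgradient_with_memory(f, subgrad, x0, weights=(0.7, 0.2, 0.1), iters=500):
    """Sketch of a subgradient method whose search direction combines the
    current subgradient with those from previous iterations (illustrative
    weights and divergent-series steps; not the paper's conditions)."""
    x, past = np.asarray(x0, dtype=float), []
    best_x, best_f = x.copy(), f(x)
    for k in range(1, iters + 1):
        past.insert(0, subgrad(x))          # most recent subgradient first
        past = past[: len(weights)]
        d = sum(w * g for w, g in zip(weights, past))
        x = x - (1.0 / k) * d               # divergent-series step sizes
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# Example: minimize the nonsmooth function f(x) = ||x - (1, -2)||_1.
f = lambda x: np.abs(x - np.array([1.0, -2.0])).sum()
g = lambda x: np.sign(x - np.array([1.0, -2.0]))
print(subgradient_with_memory(f, g, x0=[5.0, 5.0]))
```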

4.
Using a general approach which provides sequential optimality conditions for a general convex optimization problem, we derive necessary and sufficient optimality conditions for composed convex optimization problems. Further, we give sequential characterizations for a subgradient of the precomposition of a K-increasing lower semicontinuous convex function with a K-convex and K-epi-closed (continuous) function, where K is a nonempty convex cone. We prove that several results from the literature dealing with sequential characterizations of subgradients are obtained as particular cases of our results. We also improve the above mentioned statements.

5.
In this paper, we study the influence of noise on subgradient methods for convex constrained optimization. The noise may be due to various sources, and is manifested in inexact computation of the subgradients and function values. Assuming that the noise is deterministic and bounded, we discuss the convergence properties for two cases: the case where the constraint set is compact, and the case where this set need not be compact but the objective function has a sharp set of minima (for example, when the function is polyhedral). In both cases, using several different stepsize rules, we prove convergence to the optimal value within some tolerance that is given explicitly in terms of the errors. In the first case, the tolerance is nonzero, but in the second case, the optimal value can be obtained exactly, provided the size of the error in the subgradient computation is below some threshold. We then extend these results to objective functions that are the sum of a large number of convex functions, in which case an incremental subgradient method can be used.
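A minimal sketch of this setting (illustrative only): the subgradient oracle is perturbed by a bounded error of size at most eps, the constraint set is a compact box, and diminishing steps are used. The noise model, step rule, and function below are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_projected_subgradient(f, subgrad, project, x0, eps=0.05, iters=2000):
    """Sketch of a projected subgradient method whose subgradient evaluations
    carry a bounded error of magnitude at most eps (simulated here by uniform
    perturbations); the best value found approaches the optimum up to an
    eps-dependent tolerance."""
    x = np.asarray(x0, dtype=float)
    best = f(x)
    for k in range(1, iters + 1):
        g = subgrad(x) + rng.uniform(-eps, eps, size=x.shape)  # inexact subgradient
        x = project(x - g / np.sqrt(k))                        # diminishing steps
        best = min(best, f(x))
    return best

# Example: f(x) = ||x||_1 over the compact box [-1, 1]^3 (optimal value 0).
f = lambda x: np.abs(x).sum()
g = lambda x: np.sign(x)
proj = lambda x: np.clip(x, -1.0, 1.0)
print(noisy_projected_subgradient(f, g, proj, x0=np.ones(3)))
```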

6.
Subgradient methods are popular for solving nondifferentiable optimization problems because of their relative ease of implementation, but they are not always robust and require a careful design of strategies in order to yield an effective procedure for any given class of problems. In this paper, we present an approach for solving the Euclidean distance multifacility location problem (EMFLP) using conjugate or deflected subgradient based algorithms along with suitable line-search strategies. The subgradient deflection method considered is the Average Direction Strategy (ADS) embedded within the Variable Target Value Method (VTVM). We also investigate the generation of two types of subgradients to be employed in conjunction with ADS. The first type is a simple valid subgradient that assigns zero values to the contributions corresponding to the nondifferentiable terms in the objective function, so that the subgradient is obtained by summing the contributions corresponding to the differentiable terms alone. The second type expends more effort to derive a low-norm member of the subdifferential in order to enhance the prospect of obtaining a descent direction. Furthermore, a special Newton-based line search that exploits the nondifferentiability of the problem is also designed to be implemented in the developed algorithm in order to study its impact on the convergence behavior. Various combinations of the above strategies are composed and evaluated on a set of test problems. The results show that a modification of the VTVM method, along with the first or a certain combination of the two subgradient generation strategies and the use of a suitable line-search technique, provides promising results. An alternative block-halving step-size strategy used within VTVM in conjunction with the proposed line-search method yields a competitive second-choice performance.
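For reference, the Average Direction Strategy deflection is commonly written as the sum of the normalized current subgradient and the normalized previous direction; a hedged one-function sketch (safeguards and the VTVM target updates are omitted) is shown below.

```python
import numpy as np

def ads_direction(g, d_prev):
    """Average Direction Strategy style deflection (sketch): the new search
    direction averages the normalized current subgradient and the normalized
    previous direction; on the first iteration the plain subgradient is used."""
    if d_prev is None or np.linalg.norm(d_prev) == 0.0:
        return g
    return g / np.linalg.norm(g) + d_prev / np.linalg.norm(d_prev)
```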

7.
A readily implementable algorithm is given for minimizing a (possibly nondifferentiable and nonconvex) locally Lipschitz continuous function f subject to linear constraints. At each iteration a polyhedral approximation to f is constructed from a few previously computed subgradients and an aggregate subgradient, which accumulates the past subgradient information. This approximation and the linear constraints generate constraints in the search direction finding subproblem, which is a quadratic programming problem. Then a stepsize is found by an approximate line search. All the algorithm's accumulation points are stationary. Moreover, the algorithm converges when f happens to be convex.
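A hedged sketch of the kind of direction-finding subproblem meant here, in our own notation (the paper's exact subproblem may differ): given the current iterate x_k, bundle subgradients g_j with linearization errors alpha_j, an aggregate pair (g-tilde, alpha-tilde), and linear constraints Ax <= b, the search direction d and scalar v solve

$$
\min_{d,\,v}\; v + \tfrac{1}{2}\|d\|^{2}
\quad\text{s.t.}\quad
-\alpha_j + g_j^{\top} d \le v \ \ (j \in J_k),\qquad
-\tilde{\alpha} + \tilde{g}^{\top} d \le v,\qquad
A(x_k + d) \le b.
$$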

8.
An aggregate subgradient method for nonsmooth convex minimization   (total citations: 2; self-citations: 0; citations by others: 2)
A class of implementable algorithms is described for minimizing any convex, not necessarily differentiable, function f of several variables. The methods require only the calculation of f and one subgradient of f at designated points. They generalize Lemarechal's bundle method. More specifically, instead of using all previously computed subgradients in search direction finding subproblems that are quadratic programming problems, the methods use an aggregate subgradient which is recursively updated as the algorithms proceed. Each algorithm yields a minimizing sequence of points, and if f has any minimizers, then this sequence converges to a solution of the problem. Particular members of this algorithm class terminate when f is piecewise linear. The methods are easy to implement and have flexible storage requirements and computational effort per iteration that can be controlled by a user. Research sponsored by the Institute of Automatic Control, Technical University of Warsaw, Poland, under Project R.I.21.

9.
In this paper, we propose a new algorithm for solving a bilevel equilibrium problem in a real Hilbert space. In contrast to most other projection-type algorithms, which require solving subproblems at each iteration, the subgradient method proposed in this paper requires only the calculation, at each iteration, of two subgradients of convex functions and one projection onto a convex set. Hence, our algorithm has a low computational cost. We prove a strong convergence theorem for the proposed algorithm and apply it to solving the equilibrium problem over the fixed point set of a nonexpansive mapping. Some numerical experiments and comparisons are given to illustrate our results. Also, an application to Nash–Cournot equilibrium models of a semioligopolistic market is presented.

10.
A readily implementable algorithm is proposed for minimizing any convex, not necessarily differentiable, function f of several variables subject to a finite number of linear constraints. The algorithm requires only the calculation of f at designated feasible points. At each iteration a lower polyhedral approximation to f is constructed from a few previously calculated subgradients and an aggregate subgradient. The recursively updated aggregate subgradient accumulates the subgradient information to deal with the nondifferentiability of f. The polyhedral approximation and the linear constraints generate constraints in the search direction finding subproblem, which is a quadratic programming problem. Then a step length is found by an approximate line search. It is shown that the algorithm converges to a solution, if one exists.

11.
N. Karmitsa, Optimization, 2016, 65(8): 1599-1614
Typically, practical nonsmooth optimization problems involve functions with hundreds of variables. Moreover, there are many practical problems where the computation of even one subgradient is either a difficult or an impossible task. In such cases, the usual subgradient-based optimization methods cannot be used. However, derivative-free methods are applicable since they do not use explicit computation of subgradients. In this paper, we propose an efficient diagonal discrete gradient bundle method for derivative-free, possibly nonconvex, nonsmooth minimization. The convergence of the proposed method is proved for semismooth functions, which are not necessarily differentiable or convex. The method is implemented in Fortran 95, and the numerical experiments confirm the usability and efficiency of the method, especially in the case of large-scale problems.

12.
In this paper, results on the strong convergence of subgradients of convex functions along a given direction are presented; that is, the relative compactness (with respect to the norm) of the union of subdifferentials of a convex function along a given direction is investigated.

13.
In this note, some questions concerning the strong convergence of subgradients of convex functions along a given direction are recalled and posed. It is shown that some open problems in the literature are linked to that of the existence of limits of subgradients from subdifferentials along a given segment.

14.
In many instances, the exact evaluation of an objective function and its subgradients can be computationally demanding. By way of example, we cite problems that arise within the context of stochastic optimization, where the objective function is typically defined via multi-dimensional integration. In this paper, we address the solution of such optimization problems by exploring the use of successive approximation schemes within subgradient optimization methods. We refer to this new class of methods as inexact subgradient algorithms. With relatively mild conditions imposed on the approximations, we show that the inexact subgradient algorithms inherit properties associated with their traditional (i.e., exact) counterparts. Within the context of stochastic optimization, the conditions that we impose allow a relaxation of requirements traditionally imposed on steplengths in stochastic quasi-gradient methods. Additionally, we study methods in which steplengths may be defined adaptively, in a manner that reflects the improvement in the objective function approximations as the iterations proceed. We illustrate the applicability of our approach by proposing an inexact subgradient optimization method for the solution of stochastic linear programs. This work was supported by Grant Nos. NSF-DDM-89-10046 and NSF-DDM-9114352 from the National Science Foundation.
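A toy sketch of the successive-approximation idea for a stochastic objective (the objective, sample-size rule, and step sizes are illustrative assumptions, not the paper's): the expectation defining f and its subgradient is replaced by a sample average whose accuracy improves as the iterations proceed.

```python
import numpy as np

rng = np.random.default_rng(1)

def inexact_subgradient_sa(x0, iters=300):
    """Sketch of an inexact subgradient scheme for the stochastic objective
    f(x) = E[|x - Z|] with Z ~ N(1, 1): at iteration k the subgradient
    E[sign(x - Z)] is approximated by a sample average whose size grows
    with k, so the approximation error shrinks as the iterations proceed."""
    x = float(x0)
    for k in range(1, iters + 1):
        z = rng.normal(1.0, 1.0, size=10 * k)   # finer approximation each iteration
        g = np.mean(np.sign(x - z))             # approximate subgradient of E|x - Z|
        x -= g / k
    return x                                     # approaches the median of Z, i.e. 1.0

print(inexact_subgradient_sa(5.0))
```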

15.
We study subgradient projection type methods for solving non-differentiable convex minimization problems and monotone variational inequalities. The methods can be viewed as a natural extension of subgradient projection type algorithms, and are based on using non-Euclidean projection-like maps, which generate interior trajectories. The resulting algorithms are easy to implement and rely on a single projection per iteration. We prove several convergence results and establish rate of convergence estimates under various and mild assumptions on the problem’s data and the corresponding step-sizes. We dedicate this paper to Boris Polyak on the occasion of his 70th birthday.
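A minimal sketch of a non-Euclidean, interior-trajectory update of this flavor on the unit simplex (the classical entropic, multiplicative update; the paper's projection-like maps and step-size rules may differ):

```python
import numpy as np

def entropic_mirror_descent(subgrad, x0, steps, iters=500):
    """Sketch of a subgradient scheme with a non-Euclidean (entropic)
    projection-like map on the unit simplex: iterates stay strictly positive
    and only one multiplicative update plus a renormalization is needed per
    iteration."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = subgrad(x)
        x = x * np.exp(-steps(k) * g)   # multiplicative update, keeps x > 0
        x = x / x.sum()                 # renormalize onto the simplex
    return x

# Example: minimize f(x) = max_i c_i * x_i over the simplex, c = (1, 2, 4).
c = np.array([1.0, 2.0, 4.0])
sg = lambda x: c * (np.arange(3) == np.argmax(c * x))   # a subgradient of f at x
x = entropic_mirror_descent(sg, np.ones(3) / 3, steps=lambda k: 0.5 / np.sqrt(k + 1))
print(x)   # tends toward the equalizing point proportional to (1, 1/2, 1/4)
```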

16.
For optimization problems with computationally demanding objective functions and subgradients, inexact subgradient methods (IXS) have been introduced by using successive approximation schemes within subgradient optimization methods (Au et al., 1994). In this paper, we develop alternative solution procedures when the primal-dual information of IXS is utilized. This approach is especially useful when the projection operation onto the feasible set is difficult. We also demonstrate its applicability to stochastic linear programs.

17.
In this paper, we present a new hybrid conjugate gradient algorithm for unconstrained optimization. This method is a convex combination of the Liu-Storey conjugate gradient method and the Fletcher-Reeves conjugate gradient method. We also prove that the search direction of any hybrid conjugate gradient method that is a convex combination of two conjugate gradient methods satisfies the Dai-Liao conjugacy condition and at the same time agrees with the Newton direction under a suitable condition. Furthermore, this property does not depend on any line search. Next, we also prove that, modulo the value of the parameter t, the Newton direction condition is equivalent to the Dai-Liao conjugacy condition. The strong Wolfe line search conditions are used. The global convergence of this new method is proved. Numerical comparisons show that the present hybrid conjugate gradient algorithm is efficient.
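To fix notation, a sketch of the type of hybridization described, in our own symbols (theta_k in [0, 1] is the convex-combination parameter; the paper's rule for choosing theta_k may differ): with g_k = grad f(x_k), y_{k-1} = g_k - g_{k-1} and s_{k-1} = x_k - x_{k-1}, the direction is d_k = -g_k + beta_k d_{k-1}, where

$$
\beta_k = (1-\theta_k)\,\beta_k^{LS} + \theta_k\,\beta_k^{FR},
\qquad
\beta_k^{LS} = \frac{g_k^{\top} y_{k-1}}{-\,d_{k-1}^{\top} g_{k-1}},
\qquad
\beta_k^{FR} = \frac{\|g_k\|^{2}}{\|g_{k-1}\|^{2}},
$$

and the Dai-Liao conjugacy condition mentioned above reads $d_k^{\top} y_{k-1} = -t\, s_{k-1}^{\top} g_k$.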

18.
An algorithm is described for finding the minimum of any convex, not necessarily differentiable, function f of several variables. The algorithm yields a sequence of points tending to the solution of the problem, if any; it requires the calculation of f and one subgradient of f at designated points. Its rate of convergence is estimated for convex and also for twice differentiable convex functions. It is an extension of the method of conjugate gradients, and terminates when f is quadratic. Editor's note. The communications of Wolfe and Lemarechal which follow — received almost simultaneously — display different points of view, but deal with the same problem and use similar techniques. They are preliminary versions of promising attacks on the problem of minimizing a convex, but not necessarily differentiable, function of many variables. MATHEMATICAL PROGRAMMING STUDY 3 entitled Nondifferentiable optimization is to be devoted to this subject.

19.
We study subgradient methods for computing the saddle points of a convex-concave function. Our motivation comes from networking applications where dual and primal-dual subgradient methods have attracted much attention in the design of decentralized network protocols. We first present a subgradient algorithm for generating approximate saddle points and provide per-iteration convergence rate estimates on the constructed solutions. We then focus on Lagrangian duality, where we consider a convex primal optimization problem and its Lagrangian dual problem, and generate approximate primal-dual optimal solutions as approximate saddle points of the Lagrangian function. We present a variation of our subgradient method under the Slater constraint qualification and provide stronger estimates on the convergence rate of the generated primal sequences. In particular, we provide bounds on the amount of feasibility violation and on the primal objective function values at the approximate solutions. Our algorithm is particularly well-suited for problems where the subgradient of the dual function cannot be evaluated easily (equivalently, the minimum of the Lagrangian function at a dual solution cannot be computed efficiently), thus impeding the use of dual subgradient methods.
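A small sketch of the primal-dual subgradient idea on a toy Lagrangian (the problem, step size, and averaging rule are illustrative assumptions): minimize x^2 subject to 1 - x <= 0, whose Lagrangian L(x, mu) = x^2 + mu(1 - x) has the saddle point (x*, mu*) = (1, 2); the method descends in x, ascends in mu, and reports averaged iterates.

```python
def primal_dual_subgradient(iters=5000, step=0.01):
    """Sketch of a primal-dual subgradient method on L(x, mu) = x^2 + mu*(1 - x):
    gradient descent in the primal variable, projected gradient ascent in the
    dual variable, with running averages approximating the saddle point."""
    x, mu = 0.0, 0.0
    x_avg, mu_avg = 0.0, 0.0
    for k in range(1, iters + 1):
        gx = 2.0 * x - mu            # partial derivative of L in x
        gmu = 1.0 - x                # partial derivative of L in mu
        x, mu = x - step * gx, max(0.0, mu + step * gmu)
        x_avg += (x - x_avg) / k     # averaged iterates
        mu_avg += (mu - mu_avg) / k
    return x_avg, mu_avg

print(primal_dual_subgradient())     # roughly (1, 2)
```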

20.
In this paper we present a new approach for constructing subgradient schemes for different types of nonsmooth problems with convex structure. Our methods are primal-dual since they are always able to generate a feasible approximation to the optimum of an appropriately formulated dual problem. Besides other advantages, this useful feature provides the methods with a reliable stopping criterion. The proposed schemes differ from the classical approaches (divergent series methods, mirror descent methods) by the presence of two control sequences. The first sequence is responsible for aggregating the support functions in the dual space, and the second one establishes a dynamically updated scale between the primal and dual spaces. This additional flexibility allows us to guarantee the boundedness of the sequence of primal test points even in the case of an unbounded feasible set (however, we always assume the uniform boundedness of subgradients). We present variants of the subgradient schemes for nonsmooth convex minimization, minimax problems, saddle point problems, variational inequalities, and stochastic optimization. In all situations our methods are proved to be optimal from the viewpoint of worst-case black-box lower complexity bounds.
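Hedged, in our own notation following the simple dual-averaging template (the paper's exact scheme may differ): with subgradients g_i aggregated in the dual space using weights lambda_i, a prox-function d(x), and a scaling sequence beta_k, the primal test points are generated by

$$
x_{k+1} \;=\; \arg\min_{x \in X} \left\{ \sum_{i=0}^{k} \lambda_i \langle g_i, x\rangle \;+\; \beta_{k+1}\, d(x) \right\}.
$$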
