17 similar documents found (search time: 875 ms)
1.
For nondifferentiable functions of max type, the aggregate function provides a smooth approximation. Using this technique, a smoothing-equations algorithm for solving the linear complementarity problem is given: the complementarity problem is first reformulated as an equivalent system of nonsmooth equations, which the aggregate function then smooths, turning the task into solving a system of smooth equations. Numerical experiments on a set of test problems demonstrate the effectiveness and stability of the algorithm.
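The aggregate function referred to above is the log-sum-exp smoothing of the max; a minimal sketch in Python (the function name and the LCP usage below are illustrative, not taken from the paper):

```python
import numpy as np

def aggregate_max(values, p):
    """Aggregate (log-sum-exp) smoothing of max(values):
    (1/p) * log(sum_i exp(p * v_i)), which overestimates the max
    by at most log(n)/p and tightens as p grows."""
    v = np.asarray(values, dtype=float)
    m = v.max()  # shift the exponent for numerical stability
    return m + np.log(np.exp(p * (v - m)).sum()) / p
```

For the nonsmooth LCP reformulation min(x_i, (Mx+q)_i) = 0, one possible smoothing of each component is -aggregate_max([-x_i, -(Mx+q)_i], p), giving a smooth system of the kind the abstract describes.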
2.
This paper further studies the aggregate function, a class of smooth approximations to the max function. It is pointed out that the limit of the first derivative of the aggregate function, as the smoothing parameter is driven to its limit, is exactly a subgradient of the max function; hence the aggregate function not only uniformly approximates the max function but also carries its first-order information and characterizes its first-order behavior well. A brief analysis of the smoothing parameter then shows how changes in the parameter affect the aggregate function. The results are illustrated geometrically with the plus function and the absolute-value function as examples.
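The first-order property above can be checked numerically: the partial derivatives of the aggregate function with respect to its components form a softmax weight vector, which concentrates on the maximizing components as the parameter grows, yielding in the limit a convex combination supported on the active components, i.e. a subgradient of the max. A sketch under that interpretation (names are illustrative):

```python
import numpy as np

def aggregate_weights(values, p):
    """Partial derivatives of the aggregate function w.r.t. its components:
    softmax weights w_i = exp(p*v_i) / sum_j exp(p*v_j).
    The weights sum to 1 and concentrate on the maximizing components
    as p grows, approaching a subgradient of the max function."""
    v = np.asarray(values, dtype=float)
    e = np.exp(p * (v - v.max()))  # shifted for numerical stability
    return e / e.sum()
```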
3.
4.
With the aid of a new kind of differential, this paper presents a Newton method for the nonsmooth systems of equations formed by max functions and their smooth compositions, and finally proves that this Newton method is globally convergent.
5.
Based on piecewise quadratic polynomial equations, this paper constructs a smoothing max function with an active-set strategy. A direct method is given for computing the index set of the component functions involved in the smoothing max function, which reduces the piecewise quadratic polynomial equation to an ordinary quadratic polynomial equation. Using properties of the roots of quadratic polynomial equations, a numerically stable evaluation strategy for the smoothing max function is presented; the function is shown to be once continuously differentiable, with a gradient that is locally Lipschitz continuous and strongly semismooth. Since the smoothing max function depends only on the component functions with larger values, it is well suited to problems whose max functions have many complicated components. To verify its efficiency, a smoothing algorithm for unconstrained minimax problems with many complicated component functions is constructed from this function; numerical experiments demonstrate the feasibility and effectiveness of the smoothing max function.
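The paper's active-set construction is not reproduced here, but a classical piecewise-quadratic C¹ smoothing of the plus function max(x, 0) illustrates what a piecewise quadratic smoothing looks like (a sketch for illustration only, not the paper's function):

```python
def pq_plus(x, tau):
    """A classical piecewise-quadratic C^1 smoothing of max(x, 0):
    0 for x <= 0, x^2/(2*tau) for 0 < x <= tau, x - tau/2 for x > tau.
    Value and first derivative match at the breakpoints 0 and tau,
    and the approximation error is at most tau/2."""
    if x <= 0.0:
        return 0.0
    if x <= tau:
        return x * x / (2.0 * tau)
    return x - tau / 2.0
```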
6.
7.
8.
9.
Hager and Zhang [4] proposed a new nonlinear conjugate gradient method (the HZ method for short) and proved its global convergence for strongly convex problems under the Wolfe and Goldstein line searches. Whether the HZ method is globally convergent for nonconvex problems under the standard Armijo line search, however, remains unknown. This paper proposes a conservative HZ conjugate gradient method and proves its global convergence for nonconvex optimization problems under the Armijo line search. Some numerical results are also reported to test the efficiency of the method.
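A bare-bones sketch of the HZ direction combined with Armijo backtracking, on which a conservative variant would build (the step-size constants and safeguards here are simplifications of my own, not the authors'):

```python
import numpy as np

def hz_cg(f, grad, x0, iters=200, delta=1e-4):
    """Hager-Zhang nonlinear CG with backtracking Armijo line search.
    beta_HZ = (y - 2 d ||y||^2 / (d.y))^T g_new / (d.y)."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        # Armijo backtracking: accept t with f(x+t d) <= f(x) + delta*t*(g.d)
        t, gd = 1.0, g @ d
        while f(x + t * d) > f(x) + delta * t * gd:
            t *= 0.5
            if t < 1e-12:
                break
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        dy = d @ y
        if abs(dy) < 1e-16:
            beta = 0.0  # safeguard: restart with steepest descent
        else:
            beta = ((y - 2.0 * d * (y @ y) / dy) @ g_new) / dy
        d = -g_new + beta * d
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-8:
            break
    return x
```

On a strongly convex quadratic f(x) = 0.5 xᵀAx − bᵀx, the iterates approach the solution A⁻¹b.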
10.
By introducing the generalized gradient, the directional Newton method for a single equation in n unknowns is extended to the nonsmooth case. A convergence theorem for the method is proved under a semismoothness condition, and the existence of a solution together with an a priori error bound is established.
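For a single smooth equation f(x) = 0 in n unknowns, the directional Newton step along the gradient (which a generalized gradient element replaces in the nonsmooth case) reads x ← x − f(x) g / ‖g‖². A minimal sketch for the smooth case (names are illustrative):

```python
import numpy as np

def directional_newton(f, grad, x0, iters=50, tol=1e-10):
    """Directional Newton for one equation f(x) = 0 in n unknowns:
    step along g = grad(x), i.e. x <- x - f(x) * g / ||g||^2.
    In the nonsmooth case, grad returns a generalized gradient element."""
    x = x0.astype(float)
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < tol:
            break
        g = grad(x)
        x = x - fx * g / (g @ g)
    return x
```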
11.
This paper proposes a new class of conjugate gradient methods related to the HS method. Under the strong Wolfe line search, the methods guarantee sufficient descent of the search direction, and global convergence is proved without assuming the objective function to be convex. A particular instance of the new class is also given; by tuning the parameter ρ, its effectiveness on the given test functions is verified.
12.
Conjugate gradient optimization algorithms depend on the search directions, which vary with the choice of the parameter in them. In this note, conditions are given on the parameter in the conjugate gradient directions to ensure the descent property of the search directions. Global convergence of such a class of methods is discussed. It is shown that, using a reverse modulus of continuity function and a forcing function, the new method for unconstrained optimization works for continuously differentiable functions with a modification of the Curry-Altman step-size rule and a bounded level set. Combining the PR method with the new method, the PR method is modified to have the global convergence property. Numerical experiments show that the new methods are efficient in comparison with the FR conjugate gradient method.
13.
We capitalize upon the known relationship between pairs of orthogonal and minimal residual methods (or, biorthogonal and quasi-minimal residual methods) in order to estimate how much smaller the residuals or quasi-residuals of the minimizing methods can be compared to those of the corresponding Galerkin or Petrov–Galerkin method. Examples of such pairs are the conjugate gradient (CG) and the conjugate residual (CR) methods, the full orthogonalization method (FOM) and the generalized minimal residual (GMRES) method, the CGNE and BiCG versions of applying CG to the normal equations, as well as the biconjugate gradient (BiCG) and the quasi-minimal residual (QMR) methods. Also the pairs consisting of the (bi)conjugate gradient squared (CGS) and the transpose-free QMR (TFQMR) methods can be added to this list if the residuals at half-steps are included, and further examples can be created easily.

The analysis is more generally applicable to the minimal residual (MR) and quasi-minimal residual (QMR) smoothing processes, which are known to provide the transition from the results of the first method of such a pair to those of the second one. By an interpretation of these smoothing processes in coordinate space we deepen the understanding of some of the underlying relationships and introduce a unifying framework for minimal residual and quasi-minimal residual smoothing. This framework includes the general notion of QMR-type methods.
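The MR smoothing process mentioned above can be sketched in a few lines: given iterates x_k with residuals r_k, it forms s_k = s_{k−1} + η_k (x_k − s_{k−1}) with η_k chosen to minimize the norm of the smoothed residual, so the smoothed residual norms never increase (a sketch with variable names of my own):

```python
import numpy as np

def mr_smoothing(xs, rs):
    """Minimal residual (MR) smoothing of an iterate/residual sequence.
    t_k = t_{k-1} + eta * (r_k - t_{k-1}) with eta minimizing ||t_k||,
    hence ||t_k|| <= min(||t_{k-1}||, ||r_k||): the smoothed residual
    norms are monotonically nonincreasing."""
    s, t = np.array(xs[0], float), np.array(rs[0], float)
    norms = [np.linalg.norm(t)]
    for x, r in zip(xs[1:], rs[1:]):
        d = r - t
        dd = d @ d
        eta = 0.0 if dd == 0.0 else -(t @ d) / dd  # 1-D least squares
        s = s + eta * (np.asarray(x, float) - s)
        t = t + eta * d
        norms.append(np.linalg.norm(t))
    return s, norms
```

If r_k = b − A x_k, then t_k = b − A s_k holds by induction, since s and t are updated with the same coefficient η.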
14.
1. Introduction. Consider smooth compositions of max-type functions of the form:

f(x) = g(x, max_{j in J_1} f_{1j}(x), ..., max_{j in J_m} f_{mj}(x)),   (1.1)

where x in R^n; J_i, i = 1, ..., m, are finite index sets; and g and f_{ij}, j in J_i, i = 1, ..., m, are continuously differentiable on R^{m+n} and R^n, respectively. This class of nonsmooth funct…
15.
16.
17.
Saman Babaie-Kafaki, 4OR: A Quarterly Journal of Operations Research, 2013, 11(4): 361-374
In order to propose a scaled conjugate gradient method, the memoryless BFGS preconditioned conjugate gradient method suggested by Shanno and the spectral conjugate gradient method suggested by Birgin and Martínez are hybridized following Andrei's approach. Since the proposed method is designed based on a revised form of a modified secant equation suggested by Zhang et al., one of its interesting features is applying the available function values in addition to the gradient values. It is shown that, for uniformly convex objective functions, search directions of the method fulfill the sufficient descent condition, which leads to global convergence. Numerical comparisons of the implementations of the method and an efficient scaled conjugate gradient method proposed by Andrei, made on a set of unconstrained optimization test problems of the CUTEr collection, show the efficiency of the proposed modified scaled conjugate gradient method in the sense of the performance profile introduced by Dolan and Moré.