Similar Literature
10 similar documents retrieved.
1.
In this paper, we propose a new trust region affine scaling method for nonlinear programming with simple bounds. Our method is an interior-point trust region method with a new scaling technique: the scaling matrix depends on the distances from the current iterate to the boundaries, the gradient of the objective function, and the trust region radius. This scaling technique differs from existing ones and is motivated by our analysis of the linear programming case. The trial step is obtained by minimizing the quadratic approximation to the objective function over the scaled trust region. We prove that the algorithm guarantees that at least one accumulation point of the iterates is a stationary point. Preliminary numerical experience on problems with simple bounds from the CUTEr collection is also reported. The results show that our method is effective and competitive with the well-known LANCELOT package, and that the new scaling technique might be a good alternative to the one used in the fmincon subroutine of the MATLAB Optimization Toolbox.
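The core idea above — a diagonal scaling built from distances to the bounds, used to shape the trust-region trial step — can be sketched in a few lines. This is a minimal illustration using a Coleman–Li-style distance-based scaling and a scaled Cauchy (steepest-descent) step; the paper's actual scaling matrix also involves the gradient and the trust region radius, and its trial step minimizes the full quadratic model.

```python
import numpy as np

def scaled_cauchy_step(x, g, l, u, delta):
    """One affine-scaled steepest-descent trial step for
    min f(x) s.t. l <= x <= u inside a trust region of radius delta.
    Illustrative Coleman-Li-style scaling: each component is scaled
    by its distance to the bound the negative gradient points toward."""
    d = np.where(g > 0, x - l, u - x)      # distance toward the descent side
    D = np.minimum(np.sqrt(d), 1.0)        # diagonal scaling matrix
    p = -D**2 * g                          # scaled steepest-descent direction
    norm = np.linalg.norm(p / np.maximum(D, 1e-12))
    if norm > delta:                       # stay inside the scaled trust region
        p *= delta / norm
    # fraction-to-boundary rule keeps the trial point strictly interior
    step = np.clip(p, 0.995 * (l - x), 0.995 * (u - x))
    return x + step
```

A single call from a strictly interior point yields a feasible trial point with a lower objective value on convex model problems.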

2.
3.
In this paper, a derivative-free trust region method based on probabilistic models, combined with a new nonmonotone line search technique, is considered for nonlinear programming with linear inequality constraints. The proposed algorithm builds probabilistic polynomial interpolation models for the objective function, using probabilistic or random models within a classical affine scaling trust region framework. The new backtracking line search technique guarantees descent of the objective function and keeps the iterates in the feasible region. To avoid the strict complementarity hypothesis, under conditions weaker than the strong second-order sufficient condition, we give a new and simpler identification function to construct the affine scaling matrix. Global and fast local convergence of the algorithm are shown, and results of numerical experiments are reported to demonstrate its effectiveness.
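The nonmonotone backtracking idea mentioned above can be sketched generically: a trial step is accepted when it improves on the maximum of the last few objective values rather than the current one, which allows occasional increases. This is a minimal Grippo-style sketch under assumed names; the paper's version additionally maintains feasibility with respect to the linear inequality constraints and works with probabilistic models.

```python
import numpy as np

def nonmonotone_backtracking(f, x, g, d, f_hist, c=1e-4, beta=0.5, max_iter=50):
    """Nonmonotone Armijo backtracking line search.
    Accepts step length t when f(x + t*d) lies below the *maximum*
    of the recent objective values in f_hist (plus the usual Armijo
    decrease term), instead of below f(x) itself."""
    f_ref = max(f_hist)                    # reference: worst recent value
    t = 1.0
    for _ in range(max_iter):
        if f(x + t * d) <= f_ref + c * t * (g @ d):
            return t
        t *= beta                          # shrink and retry
    return t
```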

4.
5.
6.
7.
Preconditioning techniques are important in solving linear problems because they improve their computational properties. Scaling is the most widely used preconditioning technique in linear optimization algorithms; it reduces the condition number of the constraint matrix, improves the numerical behavior of the algorithms, and reduces the number of iterations required to solve linear problems. Graphics processing units (GPUs) have gained a lot of popularity in recent years and have been applied to the solution of linear optimization problems. In this paper, we review and implement ten scaling techniques, with a focus on their parallel implementation on GPUs. All of these techniques have been implemented in the MATLAB and CUDA environment. Finally, a computational study on the Netlib set is presented to establish the practical value of the GPU-based implementations. On average, the speedup gained from the GPU implementations of all scaling methods is about 7×.
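To make the condition-number claim concrete, here is one classic member of the family of scaling techniques such surveys cover: geometric-mean row/column equilibration. This is a serial NumPy sketch, not the paper's GPU/CUDA implementation, and the function name is illustrative.

```python
import numpy as np

def geometric_mean_scaling(A, sweeps=5):
    """Geometric-mean equilibration: alternately divide each row and
    column by the geometric mean of its largest and smallest nonzero
    magnitudes, driving the entries of diag(r) @ A @ diag(c) toward 1."""
    r = np.ones(A.shape[0])
    c = np.ones(A.shape[1])
    for _ in range(sweeps):
        B = np.abs(r[:, None] * A * c[None, :])
        B = np.where(B > 0, B, np.nan)              # ignore structural zeros
        r /= np.sqrt(np.nanmax(B, axis=1) * np.nanmin(B, axis=1))
        B = np.abs(r[:, None] * A * c[None, :])
        B = np.where(B > 0, B, np.nan)
        c /= np.sqrt(np.nanmax(B, axis=0) * np.nanmin(B, axis=0))
    return r, c
```

On a badly scaled matrix the equilibrated version has a much smaller condition number, which is exactly the effect scaling is used for in LP solvers.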

8.
In this paper, we use a spectral scaled structured BFGS formula to approximate projected Hessian matrices in an exact penalty approach for solving constrained nonlinear least-squares problems. We show that this spectral scaling formula has a good self-correcting property. The reported numerical results show that the spectral scaled structured BFGS method outperforms the standard structured BFGS method.
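The generic form of a spectral scaled BFGS update can be written down directly: the previous approximation is multiplied by a spectral scaling factor before the standard rank-two BFGS correction is applied. This sketch shows the plain (unstructured) version and assumes the curvature condition y·s > 0 holds; the paper applies the idea to structured approximations of projected Hessians in a least-squares penalty setting.

```python
import numpy as np

def spectral_scaled_bfgs_update(B, s, y):
    """Scaled BFGS update: scale B by tau = (y.s)/(s.B.s), then apply
    the usual BFGS correction so the secant condition B1 @ s = y holds.
    Assumes y @ s > 0 (curvature condition)."""
    sBs = s @ B @ s
    tau = (y @ s) / sBs                    # spectral scaling factor
    Bs = tau * (B @ s)
    B1 = tau * B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
    return B1
```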

9.
In this paper, we present an interior point method for nonlinear programming that avoids the use of a penalty function or a filter. We use an adaptively perturbed primal-dual interior point framework to compute trial steps, and a central path technique keeps the iterates bounded away from 0 and from deviating too much from the central path. A trust-funnel-like strategy is adopted to drive convergence, and second-order correction (SOC) steps are used to achieve fast local convergence by avoiding the Maratos effect. Furthermore, the presented algorithm does not suffer the blocking of productive steps that other trust-funnel-like algorithms may suffer. We show that, under second-order sufficient conditions and strict complementarity, the full Newton step (combined with an SOC step) is accepted by the algorithm near the solution, and hence the algorithm is locally superlinearly convergent. Encouraging numerical results are reported.
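The perturbed primal-dual framework underlying such methods can be illustrated on the simplest case, a bound-constrained quadratic program. This sketch applies one Newton step to the mu-perturbed KKT system; the target mu > 0 is what keeps the iterates off the boundary and near the central path. It is a generic illustration only — none of the paper's adaptive perturbation, trust-funnel, or SOC machinery appears here.

```python
import numpy as np

def perturbed_kkt_newton_step(H, c, x, z, mu):
    """One Newton step on the perturbed KKT conditions of
    min 0.5 x'Hx + c'x  s.t.  x >= 0:
        H x + c - z = 0,   x_i z_i = mu for all i,   (x, z) > 0.
    Returns the primal and dual step directions (dx, dz)."""
    n = len(x)
    J = np.block([[H, -np.eye(n)],
                  [np.diag(z), np.diag(x)]])       # KKT Jacobian
    F = np.concatenate([H @ x + c - z, x * z - mu])  # perturbed residual
    d = np.linalg.solve(J, -F)
    return d[:n], d[n:]
```

A single step reduces the perturbed residual while keeping the iterate strictly positive, which is the basic contraction that interior point convergence theory builds on.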

10.
The PSO algorithm is itself a linear time-varying discrete system. Existing studies of convergence conditions for PSO transform it, under certain assumptions, into a linear time-invariant discrete system. The mathematical model of such a system is identical to that of the one-step stationary linear iteration method for solving systems of linear equations, so the stability analysis of the linear time-invariant discrete system reduces to a convergence analysis of the one-step stationary linear iteration scheme. This provides a new perspective and method for studying the convergence of the PSO algorithm.
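The reduction described above can be made concrete. With the random attraction coefficients frozen at a constant phi (the usual simplifying assumption), the per-particle PSO dynamics become a linear time-invariant recurrence z_{k+1} = M z_k + b — the same shape as a one-step stationary linear iteration x_{k+1} = G x_k + f — and convergence is equivalent to the spectral radius of M being below 1. The function name here is illustrative.

```python
import numpy as np

def pso_converges(w, phi):
    """Deterministic per-particle PSO model with inertia w and frozen
    attraction coefficient phi = c1*r1 + c2*r2:
        v_{k+1} = w*v_k + phi*(p - x_k)
        x_{k+1} = x_k + v_{k+1}
    In state form z = (x, v) this is z_{k+1} = M z_k + b; the particle
    converges to the attractor p iff the spectral radius of M is < 1."""
    M = np.array([[1 - phi, w],
                  [-phi,    w]])
    return bool(max(abs(np.linalg.eigvals(M))) < 1)
```

For example, the common setting w = 0.7 with moderate phi lies inside the stability region, while an inertia weight above 1 does not.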
