Similar Documents
 19 similar documents found (search time: 147 ms)
1.
BP neural networks can effectively approximate nonlinear systems, but the traditional steepest-descent search suffers from slow convergence. This paper recasts BP network training as an optimization problem and replaces steepest descent with a conjugate gradient algorithm for the search iterations, greatly improving the convergence speed.
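As an illustration of replacing steepest descent with conjugate gradient in network training, the following is a minimal, hypothetical sketch (not the paper's implementation): Fletcher-Reeves nonlinear conjugate gradient with a backtracking Armijo line search fitting a tiny 1-4-1 tanh network. Gradients are taken by finite differences purely for brevity; a real BP implementation would use backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (50, 1))
y = np.sin(np.pi * X)                         # nonlinear target to approximate

def unpack(w):                                # 1-4-1 network, tanh hidden layer
    return w[:4].reshape(1, 4), w[4:8], w[8:12].reshape(4, 1), w[12:]

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    return float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))

def num_grad(w, eps=1e-6):                    # finite-difference gradient (for brevity only)
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

w = rng.normal(scale=0.5, size=13)
g = num_grad(w)
d = -g                                        # first direction: plain steepest descent
for k in range(200):
    alpha, bt = 0.5, 0
    while bt < 30 and loss(w + alpha * d) > loss(w) + 1e-4 * alpha * (g @ d):
        alpha *= 0.5; bt += 1                 # backtracking (Armijo) line search
    w = w + alpha * d
    g_new = num_grad(w)
    beta = (g_new @ g_new) / (g @ g)          # Fletcher-Reeves coefficient
    d = -g_new + beta * d
    if g_new @ d >= 0:                        # safeguard: restart if not a descent direction
        d = -g_new
    g = g_new
print("final MSE:", loss(w))
```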

2.
Research on a PID Control Method Based on an Improved BP Neural Network   Cited by: 8 (self-citations: 1, other citations: 8)
史春朝  张国山 《计算机仿真》2006,23(12):156-159
To overcome the slow convergence of steepest descent and its tendency to become trapped in local minima, a new PID control method based on an improved BP neural network is proposed. The method combines a neural network with a PID control strategy, retaining both the network's capacity for self-learning, adaptation, and approximation of arbitrary functions and the simple structure of a conventional PID controller. The controller is trained with the Fletcher-Reeves conjugate gradient method, which helps the network avoid local minima and accelerates training, and this improved conjugate gradient method is used to correct the neural PID controller's parameters online. An implementation on the Matlab platform is given, and simulation results show that the control method is effective.
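For reference, the incremental (velocity-form) PID law that such a neural PID controller typically adjusts online is given below in its textbook form; this is a standard expression, not necessarily the exact one used in the paper. Here \(e(k)\) is the tracking error and \(K_p\), \(K_i\), \(K_d\) are the gains the network tunes.

\[ u(k) = u(k-1) + K_p\,[e(k)-e(k-1)] + K_i\,e(k) + K_d\,[e(k)-2e(k-1)+e(k-2)] \]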

3.
Building on the DY (Dai-Yuan) conjugate gradient method, an improved conjugate gradient method for unconstrained optimization problems is proposed. The method satisfies the sufficient descent condition under the standard Wolfe line search, and the algorithm is globally convergent. Numerical results demonstrate its effectiveness. Finally, the algorithm is applied to nonlinear parameter estimation of an SO2 oxidation reaction kinetics model, with satisfactory results.
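For context, the textbook form of the DY (Dai-Yuan) search direction and of the standard Wolfe line-search conditions referred to here is shown below; the paper's specific modification is not reproduced.

\[ d_k = -g_k + \beta_k^{DY} d_{k-1}, \qquad \beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^{\top}(g_k - g_{k-1})} \]

\[ f(x_k+\alpha_k d_k) \le f(x_k) + c_1\,\alpha_k\, g_k^{\top} d_k, \qquad \nabla f(x_k+\alpha_k d_k)^{\top} d_k \ge c_2\, g_k^{\top} d_k, \qquad 0 < c_1 < c_2 < 1 \]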

4.
The steepest-descent constant modulus algorithm (SD-CMA) is a variant of the constant modulus algorithm that is widely used because of its computational simplicity. To analyze how array-system factors such as the convergence (step-size) factor, element spacing, and number of elements affect SD-CMA, these factors are studied systematically through MATLAB simulation. The experimental data show that they have a pronounced influence on SD-CMA adaptive beamforming, providing a reference for further optimization of adaptive beamforming algorithms.
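To make the quantities studied here concrete, below is a minimal, assumed simulation sketch (not the paper's experiments) of the steepest-descent CMA (CMA 2-2) weight update on a uniform linear array, showing where the step-size factor mu, the element spacing d, and the element count N enter; the angles, powers, and snapshot count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, mu, K = 8, 0.5, 5e-3, 4000          # elements, spacing (wavelengths), step size, snapshots

def steer(theta_deg):                      # ULA steering vector
    n = np.arange(N)
    return np.exp(1j * 2 * np.pi * d * n * np.sin(np.radians(theta_deg)))

s = np.exp(1j * 2 * np.pi * rng.random(K))                      # unit-modulus desired signal (10 deg)
i = 0.7 * (rng.normal(size=K) + 1j * rng.normal(size=K))        # Gaussian interferer (-40 deg)
noise = 0.1 * (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K)))
X = np.outer(steer(10.0), s) + np.outer(steer(-40.0), i) + noise

w = np.zeros(N, dtype=complex); w[0] = 1.0                      # initial weight vector
for k in range(K):
    x = X[:, k]
    y = np.conj(w) @ x                     # array output
    e = y * (abs(y) ** 2 - 1.0)            # CMA 2-2 error term
    w = w - mu * np.conj(e) * x            # steepest-descent weight update
print("output modulus after adaptation:", abs(np.conj(w) @ X[:, -1]))
```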

5.
This paper studies classification and discrimination methods in data mining. Building on support vector machine (SVM) theory, an orthogonally corrected conjugate gradient SVM (CGM-OC-SVM) is proposed. The algorithm retains the strengths of the PRP-SVM algorithm while remedying its weaknesses; it solves relatively large-scale stochastic convex quadratic programming problems and overcomes the slow convergence of steepest-descent SVM. A radial basis function inner-product classifier is used as the kernel function, making the algorithm more general. The algorithm is also implemented in software.
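The radial basis function (Gaussian) kernel mentioned as the kernel function has the standard form sketched below; the bandwidth parameter sigma and the toy data are assumptions for illustration, and this is not the paper's CGM-OC-SVM solver.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-||A[i] - B[j]||^2 / (2 * sigma^2))."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-np.maximum(sq, 0.0) / (2 * sigma**2))

X = np.random.default_rng(0).normal(size=(5, 3))
K = rbf_kernel(X, X)
print(K.shape, np.allclose(np.diag(K), 1.0))   # (5, 5) True
```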

6.
To address the limitation that robust adaptive beamforming algorithms for the general-rank signal model are too computationally complex for real-time applications, a recursive robust adaptive beamforming algorithm is proposed based on subspace tracking theory and the gradient descent method. It effectively reduces the computational load, improves system performance, and increases the signal-to-interference-plus-noise ratio of the array output. Simulation results show that the algorithm converges quickly and is highly robust.

7.
To address shortcomings of current artificial neural network learning algorithms, a hybrid of variable-step steepest descent and the conjugate gradient method is used to train the network, and an artificial neural network model for load forecasting is built. A short-term power load forecasting system based on Delphi is described; it consists of four modules: load-forecast data query, forecasting methods, result query, and chart output. Practical results show that the hybrid algorithm outperforms traditional algorithms in global convergence and convergence speed, and that the short-term load forecasting system built on it achieves satisfactory accuracy.

8.
An Improved Learning Algorithm for Synergetic Neural Networks Based on Gradient Dynamics   Cited by: 3 (self-citations: 0, other citations: 3)
Based on a study of the gradient-dynamics process of synergetic neural networks, and to address the slow convergence of the learning process, an improved learning algorithm for synergetic neural networks based on gradient dynamics is proposed. The algorithm analyzes the influence of unbalanced attention parameters on learning and simplifies the selection of the initial adjoint vectors; by introducing optimization theory, the problem is reduced to a nonlinear optimization problem and the conjugate gradient method is used in place of gradient descent, accelerating the convergence of the learning process. Image recognition experiments on a Chinese character image database and a face image database show that the algorithm achieves a higher recognition rate than other learning algorithms and converges to a minimum more quickly.

9.
Based on a study of the gradient-dynamics process of synergetic neural networks, and to address the slow convergence of the learning process, an improved learning algorithm for synergetic neural networks based on gradient dynamics is proposed. The algorithm analyzes the influence of unbalanced attention parameters on learning and simplifies the selection of the initial adjoint vectors; by introducing optimization theory, the problem is reduced to a nonlinear optimization problem and the conjugate gradient method is used in place of gradient descent, accelerating the convergence of the learning process. Image recognition experiments on a Chinese character image database and a face image database show that the algorithm achieves a higher recognition rate than other learning algorithms and converges to a minimum more quickly.

10.
In research on anti-jamming optimization of radar systems, adaptive digital filtering is a principal technique, and subarray partitioning is the basis of partially adaptive digital beamforming for two-dimensional planar arrays. To reduce the high engineering complexity of partitioning a two-dimensional planar array into subarrays, a subarray partitioning method combining the equal-noise-power method with a genetic algorithm (GA) is proposed; it keeps the elements of each subarray closely adjacent while avoiding grating lobes. To improve the sidelobe level of subarray-level partially adaptive digital beamforming, a normalization-based partially adaptive beamforming algorithm is adopted, which effectively suppresses interference and markedly improves the sidelobe level. A two-dimensional planar-array signal model is built and the proposed methods are simulated; the simulation results confirm their effectiveness and provide a basis for optimizing radar system performance.

11.
Algorithms for accelerated convergence of adaptive PCA   Cited by: 3 (self-citations: 0, other citations: 3)
We derive and discuss adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja and Karhunen (1985), Sanger (1989), and Xu (1993). It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: (1) gradient descent; (2) steepest descent; (3) conjugate direction; and (4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.
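As a point of reference for the "traditional gradient descent PCA algorithms" the authors compare against, below is a minimal sketch of Oja's single-unit rule with a fixed gain on toy Gaussian data (an assumed setup, not the authors' accelerated algorithms); the paper's steepest-descent, conjugate-direction, and Newton-Raphson variants replace the fixed gain with adaptively computed steps.

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D Gaussian data whose dominant principal direction is known from C.
C = np.array([[3.0, 1.0], [1.0, 1.0]])
X = rng.multivariate_normal([0, 0], C, size=5000)

w = rng.normal(size=2)
eta = 1e-3                            # fixed gain: the source of slow convergence
for x in X:
    y = w @ x                         # neuron output
    w = w + eta * y * (x - y * w)     # Oja's rule: Hebbian term with implicit normalization

true_pc = np.linalg.eigh(C)[1][:, -1]           # dominant eigenvector of C
cos_sim = abs(w @ true_pc) / np.linalg.norm(w)
print("alignment with true first principal component:", round(cos_sim, 3))
```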

12.
A two-stage algorithm combining the advantages of an adaptive genetic algorithm and a modified Newton method is developed for effective training of feedforward neural networks. The genetic algorithm, with adaptive reproduction, crossover, and mutation operators, is used to search for the initial weights and biases of the neural network, while the modified Newton method, similar to the BFGS algorithm, is used to improve network training performance. Benchmark tests show that the two-stage algorithm is superior to many conventional ones: steepest descent, steepest descent with adaptive learning rate, conjugate gradient, and Newton-based methods, and is suitable for small networks in engineering applications. In addition to numerical simulation, the effectiveness of the two-stage algorithm is validated by experiments on system identification and vibration suppression.
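A minimal sketch of the two-stage idea under simplifying assumptions: random restarts stand in for the adaptive genetic algorithm of stage one, and SciPy's BFGS (which the paper says its modified Newton method resembles) performs the stage-two refinement on a small test function. This is illustrative only, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(w):                        # stand-in for a network training loss
    return float(np.sum(100.0 * (w[1:] - w[:-1]**2)**2 + (1 - w[:-1])**2))

rng = np.random.default_rng(0)

# Stage 1: coarse global search (random restarts as a stand-in for the GA).
candidates = rng.uniform(-2, 2, size=(200, 4))
w0 = min(candidates, key=rosenbrock)

# Stage 2: quasi-Newton (BFGS) refinement starting from the best candidate.
result = minimize(rosenbrock, w0, method="BFGS")
print("stage-1 loss:", round(rosenbrock(w0), 3), " stage-2 loss:", round(result.fun, 6))
```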

13.
In this article, we introduce accelerated algorithms for linear discriminant analysis (LDA) and feature extraction from unimodal multiclass Gaussian data. Current adaptive methods based on the gradient descent optimization technique use a fixed or a monotonically decreasing step size in each iteration, which results in a slow convergence rate. Here, we use a variable step size, optimally computed in each iteration using the steepest descent method, in order to accelerate the convergence of the algorithm. Based on the new adaptive algorithm, we present a self-organizing neural network for adaptive computation of the square root of the inverse covariance matrix \(\Sigma^{-1/2}\) and use it (i) in a network for optimal feature extraction from Gaussian data and (ii) in cascaded form with a principal component analysis network for LDA. Experimental results demonstrate fast convergence and high stability of the algorithm and justify its advantages for on-line pattern recognition applications with stationary and non-stationary input data.
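For a quadratic objective \(f(x) = \tfrac{1}{2}x^{\top}Ax - b^{\top}x\) with gradient \(g_k = Ax_k - b\), the "optimally computed" step size along the steepest-descent direction has the closed form below; this is the textbook illustration, whereas the paper derives its step for the LDA criterion.

\[ \alpha_k = \arg\min_{\alpha} f(x_k - \alpha g_k) = \frac{g_k^{\top} g_k}{g_k^{\top} A g_k} \]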

14.
Improvement and Convergence Analysis of the BP Neural Network Algorithm   Cited by: 1 (self-citations: 0, other citations: 1)
The mathematical theory of BP neural networks is studied, and the strengths and weaknesses of several popular BP learning algorithms are analyzed in detail. To remedy the slow convergence of the standard BP algorithm and its tendency to fall into local minima, and guided by the Fletcher-Reeves line search method, a BP algorithm based on an improved conjugate gradient method is proposed. The algorithm is analyzed in depth theoretically, and its detailed reasoning and procedure are described. The BP network trained with the algorithm is then applied to function approximation. Simulation results show that the improved scheme does improve the convergence behavior of the algorithm during training, increases convergence speed, and achieves satisfactory approximation results.

15.
Optimization for training neural nets   Cited by: 14 (self-citations: 0, other citations: 14)
Various techniques of optimizing criterion functions to train neural-net classifiers are investigated. These techniques include three standard deterministic techniques (variable metric, conjugate gradient, and steepest descent), and a new stochastic technique. It is found that the stochastic technique is preferable on problems with large training sets and that the convergence rates of the variable metric and conjugate gradient techniques are similar.

16.
This paper presents a numerical investigation of the spectral conjugate directions formulation for optimizing unconstrained problems. A novel modified algorithm is proposed based on the conjugate gradient coefficient method. The algorithm employs the Wolfe inexact line search conditions to determine the optimum step length at each iteration and selects the appropriate conjugate gradient coefficient accordingly. The algorithm is evaluated through several numerical experiments using various unconstrained functions. The results indicate that the algorithm is highly stable, regardless of the starting point, and has better convergence rates and efficiency compared to classical methods in certain cases. Overall, this research provides a promising approach to solving unconstrained optimization problems.

17.
Proposed in this paper is a new conjugate gradient method with smoothing \(L_{1/2}\) regularization based on a modified secant equation for training neural networks, where a descent search direction is generated by selecting an adaptive learning rate based on the strong Wolfe conditions. Two adaptive parameters are introduced such that the new training method possesses both the quasi-Newton property and the sufficient descent property. As shown in the numerical experiments for five benchmark classification problems from the UCI repository, compared with the other conjugate gradient training algorithms, the new training algorithm has roughly the same or even better learning capacity, but significantly better generalization capacity and network sparsity. Under mild assumptions, a global convergence result of the proposed training method is also proved.

18.
The conjugate gradient method is an effective method for large-scale unconstrained optimization problems. Recent research has proposed conjugate gradient methods based on secant conditions to establish fast convergence of the methods. However, these methods do not always generate a descent search direction. In contrast, Y. Narushima, H. Yabe, and J.A. Ford [A three-term conjugate gradient method with sufficient descent property for unconstrained optimization, SIAM J. Optim. 21 (2011), pp. 212–230] proposed a three-term conjugate gradient method which always satisfies the sufficient descent condition. This paper makes use of both ideas to propose descent three-term conjugate gradient methods based on particular secant conditions, and then shows their global convergence properties. Finally, numerical results are given.

19.
陇盛  陶蔚  张泽东  陶卿 《软件学报》2022,33(4):1231-1243
Compared with gradient descent, the adaptive gradient method (AdaGrad) uses the arithmetic mean of past squared gradients to retain geometric information about the historical data and obtains tighter convergence bounds when handling sparse data. On the other hand, Nesterov's accelerated gradient (NAG) method adds a momentum operation on top of gradient descent and, when solving smooth convex optimization problems, achieves an order-of…
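For reference, minimal sketches of the two updates contrasted in this abstract, in standard textbook form on a toy quadratic objective; the hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def grad(x):                          # gradient of a simple quadratic test objective ||x||^2
    return 2.0 * x

x_ada = np.array([5.0, -3.0]); G = np.zeros(2); lr, eps = 0.5, 1e-8
x_nag = np.array([5.0, -3.0]); v = np.zeros(2); lr_nag, mom = 0.05, 0.9

for _ in range(100):
    # AdaGrad: per-coordinate step scaled by accumulated squared gradients.
    g = grad(x_ada)
    G += g**2
    x_ada -= lr * g / (np.sqrt(G) + eps)

    # Nesterov's accelerated gradient: gradient evaluated at the look-ahead point.
    g_look = grad(x_nag + mom * v)
    v = mom * v - lr_nag * g_look
    x_nag += v

print("AdaGrad iterate:", x_ada, " NAG iterate:", x_nag)
```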
