Similar Documents
Found 19 similar documents (search time: 109 ms)
1.
A Multiplier-Method Stochastic Single-Point Online Gradient Algorithm for Pi-Sigma Neural Networks*   (Total citations: 1; self-citations: 0; by others: 1)
喻昕  邓飞  唐利霞 《计算机应用研究》2011,28(11):4074-4077
When a gradient algorithm is used to train a Pi-sigma neural network, convergence becomes very slow if the weights are chosen too small. The ordinary penalty-function method can overcome this drawback, but it requires the penalty factor to tend to infinity and its penalty term involves a nondifferentiable absolute value, which makes numerical solution difficult. To overcome these shortcomings, a stochastic single-point online gradient algorithm based on the multiplier method is proposed: using optimization theory, the constrained training problem is converted into an unconstrained one, and the multiplier method is applied to the network error function. The convergence rate and stability of the algorithm are analyzed theoretically, and simulation results verify its effectiveness.
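The key device here is the multiplier (augmented Lagrangian) method, which replaces a penalty factor that must grow to infinity with a finite penalty plus a multiplier update. Below is a minimal sketch of that idea: a pi-sigma network trained with single-sample online gradients, under an assumed constraint g(W) = 1 - ||W||^2 <= 0 that keeps the weights from being too small. The constraint form, network size, and all constants are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pi_sigma(W, theta, x):
    """Pi-sigma net: product of linear summing units through a sigmoid."""
    h = W @ x + theta                           # summing layer, shape (J,)
    return 1.0 / (1.0 + np.exp(-np.prod(h))), h

# Teacher-student toy task so the target is realizable by the model.
W_t, th_t = rng.standard_normal((2, 2)), rng.standard_normal(2)
X = rng.uniform(-1, 1, (50, 2))
Y = np.array([pi_sigma(W_t, th_t, x)[0] for x in X])

W, theta = 0.3 * rng.standard_normal((2, 2)), 0.3 * rng.standard_normal(2)
lr, rho, lam = 0.1, 1.0, 0.0                    # lam is the Lagrange multiplier

def g(W):                                       # assumed constraint g(W) <= 0,
    return 1.0 - np.sum(W * W)                  # i.e. keep ||W||^2 >= 1

for step in range(5000):
    i = rng.integers(len(X))                    # stochastic single-point sampling
    out, h = pi_sigma(W, theta, X[i])
    p = np.prod(h)
    d_out = (out - Y[i]) * out * (1 - out)      # dE/dp
    d_h = d_out * p / np.where(np.abs(h) < 1e-12, 1e-12, h)  # dE/dh_j
    grad_W = np.outer(d_h, X[i])
    viol = max(0.0, g(W) + lam / rho)           # augmented-Lagrangian term
    grad_W += -2.0 * rho * viol * W             # gradient of the augmentation
    W -= lr * grad_W
    theta -= lr * d_h
    if step % 500 == 0:                         # occasional multiplier update
        lam = max(0.0, lam + rho * g(W))

err = np.mean([(pi_sigma(W, theta, x)[0] - y) ** 2 for x, y in zip(X, Y)])
print("train MSE:", err, " constraint g(W):", g(W))
```

Unlike a pure penalty method, rho stays finite; feasibility is driven by the multiplier update instead.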

2.
李刚  吴潮  赵建平 《测控技术》2018,37(7):23-26
To address the heavy computational load, slow convergence, and limited prediction accuracy of current wind power forecasting models, this paper uses the fruit fly optimization algorithm (FOA) to dynamically tune the initial parameters of a neural network. Since the adaptive FOA is computationally simple and converges quickly, combining it with an Elman neural network lowers the model's prediction error and speeds up convergence. Simulations comparing the proposed method with conventional forecasting models show that it is effective.
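For reference, the core FOA loop is only a few lines. The sketch below optimizes a single scalar with the standard distance/smell construction; in the paper this fitness would be the Elman network's prediction error as a function of candidate initial parameters, so the toy quadratic, swarm size, and step range here are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(s):
    """Assumed objective: stands in for the Elman network's prediction
    error evaluated at candidate initial parameters."""
    return (s - 3.0) ** 2

# Swarm state: the position of the fly group.
x_axis, y_axis = rng.uniform(0, 1, 2)
best_smell, best_s = np.inf, None

for it in range(200):
    # Each fly takes a random step around the group position.
    X = x_axis + rng.uniform(-1, 1, 30)
    Y = y_axis + rng.uniform(-1, 1, 30)
    D = np.sqrt(X**2 + Y**2)           # distance to the origin
    S = 1.0 / D                        # smell concentration judgment value
    smell = fitness(S)
    i = np.argmin(smell)               # best (lowest-error) fly
    if smell[i] < best_smell:
        best_smell, best_s = smell[i], S[i]
        x_axis, y_axis = X[i], Y[i]    # swarm flies toward the best fly

print("best candidate parameter:", best_s, " error:", best_smell)
```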

3.
To address the slow convergence of the traditional BP algorithm, its tendency to fall into local minima, and the lack of a unified theory guiding network structure design, this paper analyzes the problems of common improved algorithms in neural network optimization and, by fusing the ant colony algorithm with BP and introducing an amplification factor, proposes a comprehensively improved BP algorithm. The amplification factor mitigates BP's tendency to fall into local minima, the ant colony algorithm guides the design of the network structure, and the slow convergence is greatly improved. Finally, the improved and traditional BP algorithms are both applied to coal-mine gas prediction; analysis of the experimental results shows that the improved algorithm outperforms traditional BP in both running time and accuracy.

4.
An Improved BP Neural Network Algorithm and Its Application   (Total citations: 3; self-citations: 0; by others: 3)
To address the slow convergence of the traditional BP algorithm, its tendency to fall into local minima, and the lack of a unified theory guiding network structure design, this paper analyzes the problems of common improved algorithms in neural network optimization and, by fusing the ant colony algorithm with BP and introducing an amplification factor, proposes a comprehensively improved BP algorithm. The amplification factor mitigates BP's tendency to fall into local minima, the ant colony algorithm guides the design of the network structure, and the slow convergence is greatly improved. Finally, the improved and traditional BP algorithms are both applied to coal-mine gas prediction; analysis of the experimental results shows that the improved algorithm outperforms traditional BP in both running time and accuracy.
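Entries 3 and 4 report the same study. One plausible reading of the amplification factor is a constant added to the sigmoid derivative so that deltas do not vanish when units saturate; the sketch below shows that reading on a toy BP network. The paper's exact definition of the factor, and its ant-colony structure search, are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy 2-2-1 network on XOR. The amplification factor `amp` is added to the
# sigmoid derivative so deltas do not vanish in saturated regions (an assumed
# form of the idea; the paper's definition may differ).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0.], [1.], [1.], [0.]])
W1, b1 = rng.standard_normal((2, 2)), np.zeros(2)
W2, b2 = rng.standard_normal((2, 1)), np.zeros(1)
lr, amp = 0.5, 0.1

for epoch in range(10000):
    H = sigmoid(X @ W1 + b1)
    O = sigmoid(H @ W2 + b2)
    dO = (O - Y) * (O * (1 - O) + amp)      # amplified output delta
    dH = (dO @ W2.T) * (H * (1 - H) + amp)  # amplified hidden delta
    W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

print("XOR outputs:", O.ravel().round(2))
```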

5.
Solving Systems of Nonlinear Equations Based on Evolution Strategies   (Total citations: 1; self-citations: 0; by others: 1)
Traditional algorithms for solving systems of nonlinear equations are sensitive to the initial point and, being serial, run slowly. This paper proposes an evolution strategy algorithm for solving such systems. The algorithm fully exploits the population search and global convergence properties of evolution strategies, finds the roots of a nonlinear system quickly, and effectively overcomes the classical methods' sensitivity to the initial point and their slow speed. Simulations show that the algorithm achieves higher solution quality and efficiency than classical algorithms, improved genetic algorithms, and neural network methods, providing an effective approach for solving systems of nonlinear equations.
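A (mu + lambda) evolution strategy for root finding needs only a residual function and plus-selection; it minimizes the squared residual norm, so no derivatives or good initial points are required. The toy system and constants below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def residual(p):
    """Assumed toy system: x^2 + y^2 = 4, x*y = 1 (stands in for any F(x)=0)."""
    x, y = p
    return np.array([x * x + y * y - 4.0, x * y - 1.0])

def fitness(p):
    return np.sum(residual(p) ** 2)       # ES minimizes the squared residual norm

mu, lam, sigma = 10, 40, 0.5
pop = rng.uniform(-3, 3, (mu, 2))         # initial points need not be good ones

for gen in range(200):
    parents = pop[rng.integers(mu, size=lam)]
    children = parents + sigma * rng.standard_normal((lam, 2))
    union = np.vstack([pop, children])    # (mu + lambda) plus-selection
    pop = union[np.argsort([fitness(p) for p in union])[:mu]]
    sigma *= 0.98                         # simple step-size decay

best = pop[0]
print("root estimate:", best, " residual:", residual(best))
```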

6.
A Nonlinear Predictive Controller Based on Chaos Optimization   (Total citations: 2; self-citations: 2; by others: 2)
For the control of nonlinear systems, this paper organically combines neural network identification, chaos optimization, and predictive control to propose a new nonlinear predictive controller. The controller uses a neural network as the prediction model and a chaos optimization algorithm as the rolling (receding-horizon) optimization strategy, avoiding the complicated gradient computations and matrix inversions of conventional nonlinear predictive control. In addition, the neural network is trained with a BP algorithm that uses a chaos-driven adaptive learning rate, improving the network's convergence ability and speed. Simulation studies demonstrate the controller's effectiveness and real-time performance.
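The rolling-optimization step can be realized with a logistic-map chaotic search over the control range: the chaotic variable visits the interval ergodically, so no gradients or matrix inversions are needed. The cost function and constants below are illustrative stand-ins for the neural predictor's rolling-horizon cost.

```python
import numpy as np

def cost(u):
    """Assumed stand-in for the rolling-horizon cost built on the neural
    predictor: a simple nonconvex function of the control input."""
    return np.sin(3 * u) + 0.5 * (u - 1.0) ** 2

u_lo, u_hi = -2.0, 3.0
z = 0.345                         # chaotic variable; avoid fixed points like 0, 0.75
best_u, best_J = None, np.inf

for k in range(2000):
    z = 4.0 * z * (1.0 - z)       # logistic map in the fully chaotic regime (mu = 4)
    u = u_lo + (u_hi - u_lo) * z  # carrier: map the chaos onto the control range
    J = cost(u)
    if J < best_J:
        best_u, best_J = u, J

print("chaos-search optimum: u =", best_u, " J =", best_J)
```

In a full controller this search would be re-run at every sampling instant with the predictor evaluated over the horizon; a second, finer chaotic search around best_u is also common.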

7.
Gradient descent is the most common optimization algorithm for training convolutional neural networks, and its performance directly determines whether training converges. This paper analyzes two problems of current gradient optimization algorithms, overshoot that harms convergence and the lack of learning rate adaptivity, and proposes an adaptive gradient optimization algorithm with a differential term, aiming to improve the convergence of the optimization process while raising the convergence rate. First, to deal with large overshoot, the iteration is reformulated and, drawing on classical control theory, a differential term is introduced to counteract the weight updates lagging behind the actual change of the gradient. Then an adaptive mechanism is introduced to handle the poor convergence and slow convergence rate caused by an ill-suited learning rate. Next, using the Cauchy-Schwarz and Young inequalities, the worst-case performance bound (regret bound) of the new algorithm is proved to be ■. Finally, simulations on the MNIST and CIFAR-10 benchmark data sets verify the effectiveness of the new algorithm; the results show that the combination of the differential term and the adaptive mechanism effectively improves the convergence of gradient descent and yields a clear performance gain.
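One way to read the differential term is as the D-component of a PID controller applied to the gradient signal, combined with an RMSProp-style per-weight adaptive scale. The sketch below shows that combination; the paper's exact update may differ, and all constants here are assumptions.

```python
import numpy as np

def adaptive_d_grad(grad_fn, w, lr=0.1, beta=0.999, kd=0.5, steps=500):
    """Sketch: the update uses grad + kd * (grad - prev_grad), which damps
    overshoot by reacting to the gradient's rate of change, plus an
    RMSProp-style second-moment scale for learning-rate adaptivity."""
    v = np.zeros_like(w)                      # running second moment
    g_prev = np.zeros_like(w)
    for t in range(steps):
        g = grad_fn(w)
        v = beta * v + (1 - beta) * g * g
        d = g + kd * (g - g_prev)             # differential term on the gradient
        w = w - lr * d / (np.sqrt(v) + 1e-8)
        g_prev = g
    return w

# Usage on a toy quadratic with a poorly scaled axis.
grad = lambda w: np.array([0.1 * w[0], 10.0 * w[1]])
print(adaptive_d_grad(grad, np.array([5.0, 5.0])))
```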

8.
张星航  郭艳  李宁  孙保明 《计算机科学》2017,44(10):99-102, 133
When classical compressed sensing theory is applied to estimating the direction of arrival (DOA) of antenna array signals, a basis mismatch problem arises. Grid-less compressive sensing based on the alternating direction method of multipliers (ADMM) solves that problem but still converges slowly. To remedy this, an ADMM with adaptive penalty (AP-ADMM) algorithm is proposed: the initial value of the penalty term is chosen adaptively according to the noise power of the input signal, and the penalty term of the objective function is adjusted adaptively as the algorithm iterates. Compared with the conventional algorithm, AP-ADMM converges markedly faster while preserving convergence accuracy and the probability of successful DOA recovery. Simulation results verify the effectiveness of the new algorithm.
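The adaptive-penalty mechanism is easiest to see on a generic ADMM. The sketch below applies the standard residual-balancing rule (raise rho when the primal residual dominates, lower it when the dual residual dominates) to ADMM for LASSO; the paper embeds this kind of adaptation in grid-less compressive sensing for DOA, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def lasso_admm_adaptive(A, b, lam=0.1, rho=1.0, iters=300):
    """ADMM for min 0.5||Ax-b||^2 + lam||z||_1 with x = z, using
    residual balancing to adapt the penalty rho during the iterations."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u is the scaled dual
    AtA, Atb = A.T @ A, A.T @ b
    for k in range(iters):
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))
        z_old = z
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0)  # soft threshold
        u = u + x - z
        r = np.linalg.norm(x - z)                 # primal residual
        s = rho * np.linalg.norm(z - z_old)       # dual residual
        if r > 10 * s:        # primal residual dominates: increase penalty
            rho *= 2.0; u /= 2.0                  # rescale scaled dual with rho
        elif s > 10 * r:      # dual residual dominates: decrease penalty
            rho /= 2.0; u *= 2.0
    return z

A = rng.standard_normal((40, 80))
x_true = np.zeros(80); x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(40)
print("recovered support:", np.nonzero(np.abs(lasso_admm_adaptive(A, b)) > 0.1)[0])
```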

9.
喻昕  唐利霞  于琰 《计算机科学》2013,40(12):116-121
A momentum term is incorporated into the error function of the asynchronous gradient training algorithm for Ridge Polynomial neural networks, which effectively improves the algorithm's convergence efficiency. The convergence of the asynchronous gradient algorithm with momentum for Ridge Polynomial networks is analyzed theoretically, and the algorithm's monotonicity and convergence (both strong and weak) are established. These convergence properties are important for choosing the learning rate and the initial weights so that training is efficient. Computer simulations confirm the efficiency of the asynchronous gradient algorithm with momentum and the correctness of the theoretical analysis.

10.
The BP (error back-propagation) algorithm is one of the most widely used algorithms for feedforward neural networks. Building on a study of feedforward networks and the traditional BP algorithm, problems with the traditional algorithm are identified. By introducing a measure of network complexity, a new improved algorithm is proposed, named complexity-based BP. The algorithm can delete redundant connections and even nodes, and by dynamically adjusting the learning step size it avoids slow convergence and repeated oscillation. Experiments show that, to a certain extent, the algorithm has advantages over traditional BP.
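A minimal sketch of the two mechanisms, under assumed forms: simple magnitude pruning as the complexity control, and a bold-driver rule (grow the step while the error falls, shrink it after an increase) as the dynamic step size. A linear model stands in for the network.

```python
import numpy as np

rng = np.random.default_rng(10)

# (1) complexity control: delete connections whose weights stay negligible;
# (2) dynamic step size: grow while improving, halve after an error increase.
X = rng.standard_normal((200, 5))
y = X[:, :2] @ np.array([2.0, -1.0])          # only 2 of 5 inputs matter

w, mask = rng.standard_normal(5), np.ones(5)
lr, prev_err = 0.01, np.inf
for epoch in range(300):
    err_vec = X @ (w * mask) - y
    err = np.mean(err_vec ** 2)
    grad = 2 * X.T @ err_vec / len(X)
    w -= lr * grad * mask                     # pruned connections stay deleted
    lr = lr * 1.05 if err < prev_err else lr * 0.5   # dynamic step size
    prev_err = err
    if epoch == 150:                          # prune once mid-training
        mask = (np.abs(w) > 0.05).astype(float)

print("kept connections:", np.nonzero(mask)[0], " weights:", (w * mask).round(2))
```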

11.
In this paper, the deterministic convergence of an online gradient method with penalty and momentum is investigated for training two-layer feedforward neural networks. The monotonicity of the new error function with the penalty term during the training iteration is first proved. Based on this result, we show that the weights remain uniformly bounded during the training process and that the algorithm is deterministically convergent. Sufficient conditions are also provided for both weak and strong convergence.
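The update rule analyzed here, which also underlies several related entries in this list (9, 13-15, 17), has a compact skeleton: the per-sample gradient of the error plus the penalty gradient, plus a momentum multiple of the previous increment. The sketch below uses a linear model in place of the two-layer network; hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Online gradient with a weight-norm penalty lam*||w||^2/2 folded into the
# error function and a momentum term tau * previous increment.
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.standard_normal(100)

w = np.zeros(3)
dw_prev = np.zeros(3)
eta, lam, tau = 0.05, 1e-3, 0.8

for epoch in range(50):
    for i in range(len(X)):                          # one sample per update
        grad = (X[i] @ w - y[i]) * X[i] + lam * w    # error grad + penalty grad
        dw = -eta * grad + tau * dw_prev             # momentum on the increment
        w, dw_prev = w + dw, dw

print("learned weights:", w.round(3))
```

The penalty keeps the weight norm bounded during training, which is exactly the property the paper exploits to prove convergence.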

12.
Xiong Y  Wu W  Kang X  Zhang C 《Neural Computation》2007,19(12):3356-3368
A pi-sigma network is a class of feedforward neural networks with product units in the output layer. The online gradient algorithm is the simplest and most often used training method for feedforward neural networks. A problem arises, however, when it is applied to pi-sigma networks: the update increment of the weights may become very small, especially early in training, resulting in very slow convergence. To overcome this difficulty, we introduce an adaptive penalty term into the error function so as to increase the magnitude of the update increment when it is too small. This strategy brings about faster convergence, as shown by the numerical experiments carried out in this letter.
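A minimal sketch of the adaptive-penalty effect: when the raw increment is too small, an extra component inflates it so training keeps moving. The trigger, its decay, and the saturated toy objective below are assumptions, not the letter's exact penalty term.

```python
import numpy as np

# Early in pi-sigma training the weight increments can become vanishingly
# small; here any too-small update is boosted to a floor value, and the
# floor decays so the iterates can settle near the optimum.
sig = lambda w: 1.0 / (1.0 + np.exp(-w))
grad = lambda w: 2 * (sig(w) - 0.9) * sig(w) * (1 - sig(w))  # d/dw (sig(w)-0.9)^2

w, lr, floor = -6.0, 0.1, 1e-2     # start deep in a saturated (flat) region
for t in range(2000):
    step = lr * grad(w)
    if 0 < abs(step) < floor:
        step = np.sign(step) * floor   # boost the vanishing increment
    w -= step
    floor *= 0.999                     # let updates settle late in training

print("w =", w, "(target is logit(0.9) ~= 2.197)")
```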

13.
This paper investigates an online gradient method with penalty for training feedforward neural networks with linear output. A usual penalty is considered, which is a term proportional to the norm of the weights. The main contribution of this paper is to theoretically prove the boundedness of the weights in the network training process. This boundedness is then used to prove an almost sure convergence of the algorithm to the zero set of the gradient of the error function.

14.
In this paper, we study the convergence of an online gradient method with inner-product penalty and adaptive momentum for feedforward neural networks, assuming that the training samples are permuted stochastically in each cycle of iteration. Both two-layer and three-layer neural network models are considered, and two convergence theorems are established. Sufficient conditions are proposed to prove weak and strong convergence results. The algorithm is applied to the classical two-spiral problem and to the identification of a Gabor function to support these theoretical findings.

15.
This paper investigates the split-complex back-propagation algorithm with momentum and penalty for training complex-valued neural networks. The momentum term is used to accelerate the convergence of the algorithm, and the penalty term is used to control the magnitude of the network weights. Sufficient conditions on the learning rate, the momentum factor, the penalty coefficient, and the activation functions are proposed to establish the theoretical results. We theoretically prove the boundedness of the network weights during the training process, which is usually taken as a precondition for convergence analysis in the literature. The monotonicity of the error function and the convergence of the algorithm are also guaranteed.
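A split-complex activation applies a real activation separately to the real and imaginary parts, avoiding the singularities of fully complex activations. The sketch below trains a single complex neuron with the split-complex gradient plus momentum and a weight-norm penalty; the single-neuron model and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Split-complex activation: a real sigmoid on each of Re(z) and Im(z).
sig = lambda t: 1.0 / (1.0 + np.exp(-t))
act = lambda z: sig(z.real) + 1j * sig(z.imag)
dact = lambda z: sig(z.real) * (1 - sig(z.real)) \
               + 1j * sig(z.imag) * (1 - sig(z.imag))

# One complex neuron y = act(w*x) fit to realizable toy targets.
X = rng.standard_normal(50) + 1j * rng.standard_normal(50)
w_true = 0.8 - 0.6j
Y = act(w_true * X)

w, dw_prev = 0.1 + 0.1j, 0.0
lr, tau, lam = 0.5, 0.5, 1e-4      # learning rate, momentum, penalty
for epoch in range(500):
    for x, y in zip(X, Y):
        z = w * x
        e = act(z) - y
        d = dact(z)
        # Split-complex gradient: real and imaginary channels handled separately;
        # one can check dE/dw = (e_R*d_R + i*e_I*d_I) * conj(x).
        g = (e.real * d.real + 1j * e.imag * d.imag) * np.conj(x)
        dw = -lr * (g + lam * w) + tau * dw_prev
        w, dw_prev = w + dw, dw

print("learned w:", w, " target:", w_true)
```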

16.
A class of neural networks that solve linear programming problems is analyzed. The neural networks considered are modeled by dynamic gradient systems that are constructed using a parametric family of exact (nondifferentiable) penalty functions. It is proved that for a given linear programming problem and sufficiently large penalty parameters, any trajectory of the neural network converges in finite time to its solution set. For the analysis, Lyapunov-type theorems are developed for the finite-time convergence of nonsmooth sliding-mode dynamic systems to invariant sets. The results are illustrated via numerical simulation examples.
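The dynamics in question are gradient flow on an exact (nondifferentiable) penalty function, which reaches the LP's solution set in finite time once the penalty parameter is large enough. Below is a forward-Euler sketch on a tiny assumed LP; problem data, K, and the step size are illustrative.

```python
import numpy as np

# Exact-penalty gradient dynamics for  min c^T x  s.t.  A x <= b:
#   E(x) = c^T x + K * sum_i max(0, a_i^T x - b_i)
# integrated with forward Euler (the discrete trajectory chatters in a
# sliding mode along active constraints, as the nonsmooth analysis predicts).
c = np.array([-1.0, -2.0])                     # i.e. maximize x1 + 2*x2
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 3.0, 3.0, 0.0, 0.0])        # x1+x2<=4, x1<=3, x2<=3, x>=0

K, dt = 50.0, 1e-3                             # penalty parameter, Euler step
x = np.array([5.0, 5.0])                       # start outside the feasible set
for t in range(20000):
    active = (A @ x - b) > 0                   # currently violated constraints
    subgrad = c + K * A[active].sum(axis=0)    # a subgradient of E at x
    x = x - dt * subgrad

print("trajectory endpoint:", x.round(3), "(LP optimum is [1, 3])")
```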

17.
In this brief, we consider an online gradient method with penalty for training feedforward neural networks. Specifically, the penalty is a term proportional to the norm of the weights. Its roles in the method are to control the magnitude of the weights and to improve the generalization performance of the network. By proving that the weights are automatically bounded during network training with penalty, we simplify the conditions required in the literature for convergence of the online gradient method. A numerical example is given to support the theoretical analysis.

18.
A new algorithm, mean field annealing (MFA), is applied to the graph-partitioning problem. The MFA algorithm combines characteristics of the simulated-annealing algorithm and the Hopfield neural network. MFA exhibits the rapid convergence of the neural network while preserving the solution quality afforded by simulated annealing (SA). The rate of convergence of MFA on graph bipartitioning problems is 10-100 times that of SA, with nearly equal quality of solutions. A new modification to mean field annealing is also presented which supports partitioning graphs into three or more bins, a problem that has previously resisted solution by neural networks. The temperature behavior of MFA during graph partitioning is analyzed approximately and shown to possess a critical temperature at which most of the optimization occurs. This temperature is analogous to the gain of the neurons in a neural network and can be used to tune such networks for better performance. The value of the repulsion penalty needed to force MFA (or a neural network) to divide a graph into equal-sized pieces is also estimated.
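A compact MFA loop for bipartitioning: spins in [-1, 1] are repeatedly set to tanh(h_i / T) while the temperature is lowered, and the quadratic term with coefficient alpha is the repulsion penalty that pushes the two parts toward equal size. The random graph, schedule, and constants below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

# Spin s_i in [-1, 1] encodes node i's side. Energy (to be minimized):
#   E = -1/2 * sum_ij W_ij s_i s_j + (alpha/2) * (sum_i s_i)^2
# so the mean field is h_i = sum_j W_ij s_j - alpha * sum_{j != i} s_j.
n = 20
W = (rng.random((n, n)) < 0.2).astype(float)
W = np.triu(W, 1); W = W + W.T                # random undirected graph

s = 0.1 * rng.uniform(-1, 1, n)               # near-zero spins at high T
alpha, T = 1.0, 5.0                           # repulsion penalty, temperature
while T > 0.05:                               # most optimization happens near
    for sweep in range(30):                   # the critical temperature
        for i in range(n):
            h = W[i] @ s - alpha * (s.sum() - s[i])
            s[i] = np.tanh(h / T)             # mean-field update
    T *= 0.9                                  # annealing schedule

part = np.sign(s)
cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n) if part[i] != part[j])
print("partition sizes:", (part > 0).sum(), (part < 0).sum(), " cut edges:", cut)
```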

19.
A blind multiuser detection algorithm based on a feedforward neural network is proposed. The feedforward network replaces the filter in the original detector, and the constrained constant-modulus cost function is solved via a penalty function, yielding iterative formulas for the network's weights and parameters and thereby realizing blind multiuser detection. Matlab simulation results show that the algorithm improves the system's bit-error-rate performance and speeds up convergence.
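The cost being minimized is the constant-modulus criterion with a quadratic penalty enforcing the constraint. The sketch below applies it to a plain linear detector in a toy synchronous CDMA model; the paper replaces the linear filter with a feedforward network, which is not reproduced here, and the anchoring constraint and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

# Blind multiuser detection skeleton: minimize the constant-modulus cost
#   J = E[(y^2 - 1)^2],  y = w . r,
# plus a quadratic penalty keeping the detector anchored to user 1's code.
N, K = 16, 3                                    # spreading gain, users
S = np.sign(rng.standard_normal((N, K))) / np.sqrt(N)   # spreading codes
amps = np.array([1.0, 2.0, 2.0])                # strong interferers

w = S[:, 0].copy()                              # init at the matched filter
lr, rho = 0.005, 5.0
for t in range(20000):
    bits = np.sign(rng.standard_normal(K))
    r = S @ (amps * bits) + 0.05 * rng.standard_normal(N)
    y = w @ r
    grad = 4 * (y**2 - 1) * y * r               # gradient of the CM term
    grad += 2 * rho * (w @ S[:, 0] - 1.0) * S[:, 0]   # penalty gradient
    w -= lr * grad

# Check: the detector output should track user 1's bit despite interference.
bits = np.sign(rng.standard_normal(K))
r = S @ (amps * bits)
print("detected:", np.sign(w @ r), " true bit of user 1:", bits[0])
```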
