Similar Documents
20 similar documents found (search time: 47 ms)
1.
In this paper, the deterministic convergence of an online gradient method with penalty and momentum is investigated for training two-layer feedforward neural networks. The monotonicity of the new error function with the penalty term during the training iteration is first proved. Building on this result, we show that the weights remain uniformly bounded during training and that the algorithm is deterministically convergent. Sufficient conditions are also provided for both weak and strong convergence.
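The penalty-plus-momentum update analyzed above can be sketched as follows; the L2 form of the penalty and all hyperparameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def penalized_momentum_step(w, grad, vel, lr=0.1, lam=1e-3, mu=0.9):
    # One online update: negative gradient of (error + 0.5*lam*||w||^2),
    # plus a momentum term mu times the previous weight increment.
    vel = mu * vel - lr * (grad + lam * w)
    return w + vel, vel

# Toy objective E(w) = 0.5*||w||^2, so grad = w; the weights should
# shrink toward zero, illustrating the boundedness the paper proves.
w = np.array([1.0, -2.0])
vel = np.zeros_like(w)
for _ in range(300):
    w, vel = penalized_momentum_step(w, grad=w, vel=vel)
```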

2.
This paper investigates an online gradient method with penalty for training feedforward neural networks with linear output. The penalty is the usual term proportional to the norm of the weights. The main contribution is a theoretical proof that the weights remain bounded during network training. This boundedness is then used to prove almost sure convergence of the algorithm to the zero set of the gradient of the error function.

3.
Research on an Accelerated Learning Algorithm for RBMs Based on Weight Momentum
李飞  高晓光  万开方 《自动化学报》2017,43(7):1142-1159
In theory, momentum methods can accelerate the training of restricted Boltzmann machine (RBM) networks. Simulation studies of existing momentum methods, however, show that their acceleration in RBM training is poor and gradually vanishes in the later stages of training. To address this, we first analyze existing momentum methods theoretically using the convergence theorem for Gibbs sampling, and prove that their acceleration is obtained at the cost of the network weights. We then study the network weights further and find that they contain a large amount of directional information about the true gradient, which can itself be used for training. On this basis, we propose a weight-momentum algorithm based on the network weights, and present simulation experiments. The results show that the proposed momentum algorithm achieves better acceleration, retains good acceleration in the later stages of training, and thus compensates well for the shortcomings of existing momentum methods.
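A minimal sketch of the baseline that such weight-momentum methods modify: contrastive-divergence (CD-1) training of an RBM weight matrix with a classical momentum term. The network size, toy data, and hyperparameters are illustrative assumptions, and biases are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_momentum_update(W, v0, vel, lr=0.2, mu=0.5):
    # One CD-1 step on the weight matrix W (visible x hidden), with a
    # classical momentum term on the weight increment.
    h0 = sigmoid(v0 @ W)                       # positive-phase hidden probs
    h_samp = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h_samp @ W.T)                 # mean-field reconstruction
    h1 = sigmoid(v1 @ W)                       # negative-phase hidden probs
    grad = (v0.T @ h0 - v1.T @ h1) / len(v0)   # positive minus negative phase
    vel = mu * vel + lr * grad
    return W + vel, vel

# Two complementary 4-bit patterns as toy training data.
data = np.array([[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]])
W = 0.1 * rng.standard_normal((4, 2))
vel = np.zeros_like(W)

def recon_err(W):
    return np.mean((data - sigmoid(sigmoid(data @ W) @ W.T)) ** 2)

err0 = recon_err(W)
for _ in range(1000):
    W, vel = cd1_momentum_update(W, data, vel)
err1 = recon_err(W)
```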

4.
In this paper, we study the convergence of an online gradient method with an inner-product penalty and adaptive momentum for feedforward neural networks, assuming that the training samples are permuted stochastically in each cycle of iteration. Both two-layer and three-layer network models are considered, and two convergence theorems are established. Sufficient conditions are proposed for weak and strong convergence. The algorithm is applied to the classical two-spiral problem and to a Gabor function identification problem to support the theoretical findings.

5.
In this brief, we consider an online gradient method with penalty for training feedforward neural networks. Specifically, the penalty is a term proportional to the norm of the weights; its roles are to control the magnitude of the weights and to improve the generalization performance of the network. By proving that the weights are automatically bounded during training with the penalty, we simplify the conditions required in the literature for convergence of the online gradient method. A numerical example supports the theoretical analysis.

6.
A Stochastic Single-Point Online Gradient Algorithm Based on the Multiplier Method for Pi-Sigma Neural Networks
喻昕  邓飞  唐利霞 《计算机应用研究》2011,28(11):4074-4077
When gradient algorithms are used to train pi-sigma neural networks, convergence can be very slow if the initial weights are chosen too small. The usual penalty-function approach can overcome this drawback, but it requires the penalty factor to tend to infinity, and the absolute-value penalty term is non-differentiable, which makes numerical solution difficult. To overcome these shortcomings, a stochastic single-point online gradient algorithm based on the multiplier method is proposed. Using optimization theory, the constrained problem is converted into an unconstrained one, and the multiplier method is used to minimize the network error function. The convergence rate and stability of the algorithm are analyzed theoretically, and simulation results verify its effectiveness.

7.
Xiong Y  Wu W  Kang X  Zhang C 《Neural computation》2007,19(12):3356-3368
A pi-sigma network is a class of feedforward neural networks with product units in the output layer. The online gradient algorithm is the simplest and most commonly used training method for feedforward neural networks. When it is applied to pi-sigma networks, however, the update increment of the weights may become very small, especially early in training, resulting in very slow convergence. To overcome this difficulty, we introduce an adaptive penalty term into the error function so as to enlarge the update increment of the weights when it is too small. This strategy brings about faster convergence, as shown by the numerical experiments carried out in this letter.
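One way the adaptive-penalty idea can be realised is to switch on a penalty that repels the weights from zero only while they are still small, so the product-unit gradients do not vanish. The switching rule and the values of `eps` and `lam` below are illustrative assumptions, not the letter's exact penalty:

```python
import numpy as np

def adaptive_penalty_grad(w, grad, eps=0.5, lam=0.05):
    # Gradient of E(w) - 0.5*lam*||w||^2, with the penalty active only
    # while the weights are small: the extra -lam*w term makes the descent
    # step push small weights away from zero, enlarging the update
    # increment. eps and lam are illustrative, not from the letter.
    w = np.asarray(w, dtype=float)
    if np.linalg.norm(w) < eps:
        return grad - lam * w
    return grad

w_small = np.array([0.1, 0.1])
g = adaptive_penalty_grad(w_small, np.zeros(2))  # nonzero despite zero raw grad
```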

8.
Research on Two Structure-Optimization Algorithms for Neural Networks
A new method is proposed that combines a pruning algorithm based on weight quasi-entropy with weight sensitivity. In the pruning algorithm, the weight quasi-entropy is added to the objective function as a penalty term, so that a multilayer feedforward network automatically constrains its weight distribution during learning; weight sensitivity is used as the simplification criterion, avoiding the randomness of pruning based on weight magnitude alone. To address the heavy computation and low efficiency of pruning when optimizing multi-input, multi-output networks, a fast constructive algorithm is also proposed that, based on the cascade-correlation (CC) algorithm, builds the network from a suitable initial structure. Simulation results show that this fast constructive algorithm is superior in convergence speed, efficiency, and even generalization performance.

9.
Accurate prediction of the downlink traffic of an educational-resource grid helps with grid load balancing and information-security management. Wavelet neural networks are well suited to modeling and nonlinear prediction of grid downlink traffic, which is random and uncertain. To address the slow convergence, large error, and poor stability of ordinary wavelet-network prediction models, a momentum term is added to the gradient-descent update of the network weights and parameters, and an improved algorithm is proposed that introduces a random-sample replacement mechanism for intermediate prediction results. Experimental results show that the algorithm effectively reduces training time and improves the accuracy and stability of the predictions.

10.
Traditional gradient algorithms converge slowly. To address this, a gradient algorithm that adds a penalty term to the traditional error function is proposed for training recurrent pi-sigma neural networks. The algorithm not only improves the generalization ability of the network but also overcomes the slow convergence caused by choosing the initial weights too small, converging faster than the gradient algorithm without the penalty term. The convergence of the penalized gradient algorithm is analyzed theoretically, and experiments verify its effectiveness.

11.
This paper proposes a new quantum neural network model with an improved activation function. First, to speed up learning, a momentum term is introduced into the weight-training process. Then, to handle classification problems in which adjacent classes overlap and have uncertain boundaries, the new network uses a superposition of hyperbolic tangent functions, which discriminate more strongly, as its hidden-layer activation function. Finally, the algorithm is applied to character recognition: in repeated experiments on digit, letter, and Chinese-character samples, compared with the original multilevel-activation quantum neural network and a BP network, the improved quantum network not only achieves a higher recognition rate but also needs markedly fewer training epochs than the original multilevel-activation quantum network. Simulation results confirm the advantages of the method.

12.
In the conventional backpropagation (BP) learning algorithm used to train the connecting weights of an artificial neural network (ANN), a sigmoidal activation function with a fixed slope is used. This limitation leads to slower training, because only the weights of the different layers are adjusted by the conventional BP algorithm. To accelerate convergence during training, in addition to the weight updates, the slope of the sigmoid function associated with each artificial neuron can also be adjusted using a newly developed learning rule. To achieve this, new BP learning rules for adjusting the slope of the activation function associated with each neuron are derived in this paper. The combined rules, for both connecting weights and sigmoid slopes, are then applied to the ANN structure to achieve faster training. In addition, two benchmark problems, classification and nonlinear system identification, are solved using the trained ANN. Simulation experiments demonstrate that the proposed BP learning rules for slope and weight adjustment generally provide superior convergence during training, as well as improved root mean square error and mean absolute deviation on both problems.
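A toy sketch of learning both a weight and a sigmoid slope by gradient descent on squared error for a single neuron, assuming plain chain-rule updates rather than the paper's exact rules; the data, initial values, and learning rate are illustrative:

```python
import numpy as np

def sigmoid(x, a):
    # Sigmoid with adjustable slope a.
    return 1.0 / (1.0 + np.exp(-a * x))

# Target generated by a steeper sigmoid; the neuron must learn an
# effective gain a*w of about 3 to fit it.
X = np.linspace(-2.0, 2.0, 41)
y = sigmoid(3.0 * X, 1.0)

def mse(w, a):
    return np.mean((sigmoid(w * X, a) - y) ** 2)

w, a, lr = 0.5, 1.0, 0.5
err0 = mse(w, a)
for _ in range(2000):
    z = w * X
    out = sigmoid(z, a)
    delta = (out - y) * out * (1.0 - out)  # dE/d(a*z) per sample
    w -= lr * np.mean(delta * a * X)       # chain rule: d(a*z)/dw = a*x
    a -= lr * np.mean(delta * z)           # chain rule: d(a*z)/da = z
err1 = mse(w, a)
```

Updating the slope alongside the weight gives the optimizer an extra direction in which to reduce the error, which is the source of the faster convergence the paper reports.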

13.
This paper describes a fast training algorithm for feedforward neural nets, applied to a two-layer neural network that classifies segments of speech as voiced, unvoiced, or silence. The classification method is based on five features computed for each speech segment and used as input to the network. The network weights are trained using a new fast training algorithm that minimizes the total least squares error between the actual output of the network and the corresponding desired output. The iterative training algorithm uses a quasi-Newton error-minimization method and employs a positive-definite approximation of the Hessian matrix to converge quickly to a locally optimal set of weights. Convergence is fast, with a local minimum typically reached within ten iterations; in terms of convergence speed, the algorithm compares favorably with other training techniques. When used for voiced-unvoiced-silence classification of speech frames, the network's performance compares favorably with current approaches. Moreover, the approach has the advantage of assuming no particular probability distribution for the input features.

14.
To overcome the tendency of the BP algorithm to fall into local minima, and to reduce the dimensionality of the sample data, a genetic neural network method based on principal component analysis (PCA) is proposed. Dimensionality reduction and decorrelation speed up convergence; an improved genetic algorithm optimizes the network weights; and the network is trained with a gradient-descent algorithm using momentum and an adaptive learning rate. MATLAB simulations show that the method outperforms the BP algorithm in both accuracy and convergence, and that, applied to an intrusion detection system, its detection rate and false-alarm rate are clearly better than those of traditional methods.
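The PCA preprocessing step described above can be sketched with a standard SVD-based projection; the toy data and the choice of k below are illustrative:

```python
import numpy as np

def pca_reduce(X, k):
    # Project the samples onto the top-k principal components of the
    # centred data: dimensionality reduction plus decorrelation.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T   # PCA scores; columns are uncorrelated

rng = np.random.default_rng(1)
t = rng.standard_normal(200)
# Three correlated features driven mostly by one latent factor t.
X = np.column_stack([t,
                     0.9 * t + 0.1 * rng.standard_normal(200),
                     -t + 0.1 * rng.standard_normal(200)])
Z = pca_reduce(X, 2)
```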

15.
The quantum gate circuit neural network (QGCNN) is a quantum neural network model whose topology or training algorithm is designed directly from quantum theory. Momentum updating adds momentum to the weight update, supplying a specific inertia as the weight vector changes and thereby preventing it from oscillating continually during training. By introducing the momentum-update principle into the basic QGCNN learning algorithm, a quantum gate circuit network algorithm with momentum (QGCMA) is proposed. The study shows that QGCMA maintains the network's 100% convergence rate while, at the same learning rate, converging faster than the basic algorithm.

16.
A blind multiuser detection algorithm based on a feedforward neural network is proposed, in which the network replaces the filter of the original detector. The constrained constant-modulus cost function is solved by means of a penalty function, yielding iterative formulas for the network weights and parameters and thus realizing blind multiuser detection. Matlab simulations show that the algorithm improves the system's bit-error-rate performance and speeds up convergence.

17.
姜雷  李新 《计算机时代》2010,(12):29-30
In standard BP neural-network training, the error function drives the weight adjustment and the weights are updated with a fixed learning rate, which often makes learning very slow or even non-convergent. Starting from the stability and speed of convergence, this paper analyzes the error function and the weight-update function, discusses the role of the learning rate in the algorithm in detail, and proposes a method that adjusts the learning rate dynamically according to the change in the error. The method is simple and practical; it effectively prevents divergence during training and improves the convergence speed and stability of the network.
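A minimal sketch of the dynamic learning-rate idea: raise the rate slightly when the error falls and cut it when the error rises; the factors and floor below are illustrative assumptions, not values from the article:

```python
def adapt_lr(lr, err_now, err_prev, up=1.05, down=0.7, floor=1e-6):
    # Error fell: cautiously raise the learning rate to speed training.
    # Error rose (or stalled): cut it sharply to restore stable descent.
    # up, down, and floor are illustrative, not from the article.
    return max(floor, lr * (up if err_now < err_prev else down))
```

Calling this once per epoch with the current and previous epoch errors gives a rate that grows during smooth descent and shrinks as soon as the update starts to diverge.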

18.
周娜  周燕屏 《计算机仿真》2004,21(9):117-119
An improved neural network model combining a genetic algorithm with the BP algorithm is proposed for runoff forecasting. The genetic algorithm first optimizes the distribution of the initial weights, locating a good search region in the solution space; the BP algorithm then searches this smaller region for the optimal solution. This speeds up convergence and avoids local minima. As an example, measured runoff data from the Yazikou station on the Qingjiang River are used to train the network and to predict the station's daily runoff. The results show that the method converges quickly and predicts accurately.

19.
General-purpose BP networks converge with difficulty on high-dimensional, large training sets. Building on methods such as momentum factors and adaptive learning-rate adjustment, constrained clustering is introduced to construct an ensemble of neural networks, improving training speed and diagnostic performance. First, a constrained clustering algorithm partitions the training set into several sub-sample sets of comparable size, and a corresponding subnetwork is trained on each. In addition, during diagnosis, besides the output variables of each subnetwork, a membership factor of the diagnostic data relative to each sub-training set is included. Finally, a BP-network diagnosis example on a real circuit board, with 25-dimensional sampled data and 38 fault classes, verifies the feasibility of the algorithm.

20.
Another Algorithm for Improving the Convergence Rate of BP Networks
陈玉芳  雷霖 《计算机仿真》2004,21(11):74-77
Raising the training speed of a BP network is an important part of improving its performance. Based on the error-backpropagation (BP) algorithm, this paper proposes a new training algorithm that modifies the traditional momentum method for BP networks, using dynamic weight adjustment to reduce training time. Simulation examples of the improved algorithm are provided, and the results show its advantage over the traditional BP algorithm on certain problems.
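One common "dynamic" modification of the traditional momentum method, offered here as an illustrative sketch rather than the article's exact rule, is to drop the accumulated momentum wherever it conflicts in sign with the new gradient step, so the weight change never overshoots on a sign flip:

```python
import numpy as np

def dynamic_momentum_step(w, grad, vel, lr=0.1, mu=0.9):
    # Keep the momentum term only where it agrees in sign with the new
    # gradient step; where they conflict, reset to the plain step.
    # lr and mu are illustrative hyperparameters.
    step = -lr * grad
    vel = np.where(vel * step > 0.0, mu * vel + step, step)
    return w + vel, vel

# Toy objective E(w) = 0.5*||w||^2, so grad = w.
w = np.array([2.0, -3.0])
vel = np.zeros_like(w)
for _ in range(200):
    w, vel = dynamic_momentum_step(w, grad=w, vel=vel)
```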
