Similar Literature
 20 similar references found (search time: 15 ms)
1.
Conventional gradient algorithms suffer from slow convergence. To address this problem, a gradient algorithm that adds a penalty term to the conventional error function is proposed for training recurrent pi-sigma neural networks. The algorithm not only improves the generalization ability of the network but also overcomes the slow convergence caused by overly small initial weights, converging faster than the gradient algorithm without a penalty term. The convergence of the gradient algorithm with penalty is analyzed theoretically, and experiments verify its effectiveness.

2.
This paper investigates an online gradient method with penalty for training feedforward neural networks with linear output. The penalty considered is the usual one, a term proportional to the norm of the weights. The main contribution of the paper is a theoretical proof that the weights remain bounded during network training. This boundedness is then used to prove almost sure convergence of the algorithm to the zero set of the gradient of the error function.
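As a rough illustration of the kind of update covered by such analyses, the following sketch (not taken from the paper) performs per-sample gradient descent on a one-hidden-layer network with linear output and an L2 weight penalty; the toy data, network size, learning rate, and penalty coefficient are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch: online (per-sample) gradient descent with an L2 weight penalty
# for a one-hidden-layer network with linear output. All sizes and constants
# are illustrative assumptions, not values from the cited paper.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))          # toy inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]            # toy targets
n_hidden, lr, lam = 8, 0.05, 1e-3              # penalty coefficient lambda

V = rng.normal(scale=0.3, size=(2, n_hidden))  # input-to-hidden weights
w = rng.normal(scale=0.3, size=n_hidden)       # hidden-to-output weights (linear output)

for epoch in range(50):
    for i in rng.permutation(len(X)):          # one sample per update ("online")
        h = np.tanh(X[i] @ V)                  # hidden activations
        err = h @ w - y[i]                     # linear output error
        # gradients of the penalized error 0.5*err^2 + 0.5*lam*(||w||^2 + ||V||^2)
        grad_w = err * h + lam * w
        grad_V = np.outer(X[i], err * w * (1 - h**2)) + lam * V
        w -= lr * grad_w
        V -= lr * grad_V
```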

3.
In this paper, the deterministic convergence of an online gradient method with penalty and momentum is investigated for training two-layer feedforward neural networks. The monotonicity of the new error function with the penalty term during the training iteration is proved first. Based on this result, we show that the weights are uniformly bounded during the training process and that the algorithm is deterministically convergent. Sufficient conditions are also provided for both weak and strong convergence.
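To make the form of the iteration concrete, here is a minimal sketch of a gradient step that combines an L2 penalty with classical momentum, Δw_k = −η(∇E(w_k) + λ·w_k) + β·Δw_{k−1}, applied to a toy quadratic error; the error function and constants are my own assumptions, not the paper's setting.

```python
import numpy as np

# Sketch of a gradient step with an L2 penalty term and a classical momentum
# term: delta_k = -lr * (grad E(w) + lam * w) + beta * delta_{k-1}.
# The quadratic toy error and all constants are assumptions for illustration.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
error_grad = lambda w: A @ w - b               # gradient of 0.5*w^T A w - b^T w

w = np.array([5.0, -5.0])
delta = np.zeros_like(w)
lr, lam, beta = 0.1, 1e-2, 0.8                 # step size, penalty, momentum
for k in range(200):
    delta = -lr * (error_grad(w) + lam * w) + beta * delta
    w = w + delta
print(w)                                       # approaches the minimizer of the penalized error
```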

4.
To overcome the slow convergence, susceptibility to local minima, and strong dependence on initial weights of feedforward neural network training, a backpropagation-based infinite-collapse iterative chaotic map particle swarm optimization (ICMICPSO) algorithm is proposed for training the parameters of feedforward neural networks (FNNs). Building on the error backpropagation and gradient information of the BP algorithm, the method introduces the concept of an ICMIC chaotic particle swarm: the ICMIC particle swarm (ICMICPS) serves as the global searcher and the gradient-descent information as the local searcher for adjusting the network weights and thresholds, so that the particles search the whole space on top of global optimization. Simulation comparisons with several algorithms show that the ICMICPSO-BPNN method clearly outperforms the others in both training and generalization ability.
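The ICMIC map is commonly written as x_{k+1} = sin(a/x_k); the sketch below generates such a chaotic sequence and uses it to perturb a particle position, which is only a hint of how a chaotic particle swarm can be built. The map parameter and the coupling with BP gradients are not from the abstract and should be treated as assumptions.

```python
import numpy as np

# Sketch of the ICMIC ("infinite collapses") chaotic map, often written as
# x_{k+1} = sin(a / x_k). It is used here only to generate a chaotic sequence
# that could perturb particle positions in a PSO; the hybrid with BP described
# in the abstract is not reproduced. The parameter 'a' and the usage are assumptions.
def icmic_sequence(x0=0.3, a=4.0, n=100):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(np.sin(a / xs[-1]))
    return np.array(xs)

chaos = icmic_sequence()
particle = np.array([0.5, -0.2])
perturbed = particle + 0.01 * chaos[:2]        # tiny chaotic perturbation of one particle
```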

5.
In this brief, we consider an online gradient method with penalty for training feedforward neural networks. Specifically, the penalty is a term proportional to the norm of the weights. Its roles in the method are to control the magnitude of the weights and to improve the generalization performance of the network. By proving that the weights are automatically bounded during network training with the penalty, we simplify the conditions required in the literature for convergence of the online gradient method. A numerical example is given to support the theoretical analysis.

6.
Learning and convergence analysis of neural-type structured networks   (Cited by: 6; self-citations: 0; by others: 6)
A class of feedforward neural networks, structured networks, has recently been introduced as a method for solving matrix algebra problems in an inherently parallel formulation. A convergence analysis for the training of structured networks is presented. Since the learning techniques used in structured networks are also employed in the training of neural networks, the issue of convergence is discussed not only from a numerical algebra perspective but also as a means of deriving insight into connectionist learning. Bounds on the learning rate are developed under which exponential convergence of the weights to their correct values is proved for a class of matrix algebra problems that includes linear equation solving, matrix inversion, and Lyapunov equation solving. For a special class of problems, the orthogonalized back-propagation algorithm, an optimal recursive update law for minimizing a least-squares cost functional, is introduced. It guarantees exact convergence in one epoch. Several learning issues are investigated.
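As a toy instance of treating a matrix algebra problem as a learning problem, the sketch below solves a linear system by gradient descent on a least-squares cost, with a step size chosen below the standard bound 2/λ_max(AᵀA); the matrix, vector, and bound are illustrative and are not taken from the paper.

```python
import numpy as np

# Sketch: solve A x = b by gradient "training" of the weight vector x on the
# cost 0.5*||A x - b||^2. The step size bound eta < 2 / lambda_max(A^T A) is the
# standard condition for convergence of this iteration and stands in for the
# kind of learning-rate bound discussed above; A and b are made up.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
eta = 1.9 / np.linalg.eigvalsh(A.T @ A).max()

x = np.zeros(2)
for _ in range(500):
    x -= eta * A.T @ (A @ x - b)               # gradient step on 0.5*||Ax - b||^2
print(x, np.linalg.solve(A, b))                # the two should agree closely
```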

7.
Online gradient methods are widely used for training feedforward neural networks. We prove in this paper a convergence theorem for an online gradient method with variable step size for backpropagation (BP) neural networks with one hidden layer. Unlike most convergence results, which are probabilistic and nonmonotone in nature, the convergence result established here is deterministic and monotone.

8.
In this paper, we study the convergence of an online gradient method with inner-product penalty and adaptive momentum for feedforward neural networks, assuming that the training samples are permuted stochastically in each cycle of iteration. Both two-layer and three-layer neural network models are considered, and two convergence theorems are established. Sufficient conditions are proposed to prove weak and strong convergence results. The algorithm is applied to the classical two-spiral problem and to the identification of a Gabor function to support these theoretical findings.

9.
In this paper, a batch gradient algorithm with adaptive momentum is considered, and a convergence theorem is presented for its use in training two-layer feedforward neural networks. Simple sufficient conditions are offered to guarantee both weak and strong convergence. Compared with existing general requirements, we do not restrict the error function to be quadratic or uniformly convex. A numerical example is supplied to illustrate the performance of the algorithm and support our theoretical findings.

10.
In the training of feedforward neural networks, it is usually suggested that the initial weights should be small in magnitude in order to prevent premature saturation. The aim of this paper is to point out the other side of the story: in some cases, the gradient of the error function is zero not only for infinitely large weights but also for zero weights. Slow convergence at the beginning of the training procedure is often the result of overly small initial weights. Therefore, we suggest that in these cases the initial weights should be neither too large nor too small. For instance, a typical range of choices might be (−0.4, −0.1) ∪ (0.1, 0.4), rather than (−0.1, 0.1) as suggested by the usual strategy. Our theory that medium-size weights should be used has also been extended to several commonly used transfer functions and error functions. Numerical experiments are carried out to support our theoretical findings.
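A minimal sketch of the suggested "medium-size" initialization, drawing each weight uniformly from (−0.4, −0.1) ∪ (0.1, 0.4); the interval endpoints follow the example in the abstract, while the network shape is an arbitrary assumption.

```python
import numpy as np

# Sketch of the "medium-size" initialization argued for above: draw each weight
# uniformly from (-0.4, -0.1) U (0.1, 0.4) instead of the usual (-0.1, 0.1), so
# no weight starts too close to zero or too large. The network shape is arbitrary.
def medium_init(shape, low=0.1, high=0.4, rng=np.random.default_rng(0)):
    magnitude = rng.uniform(low, high, size=shape)   # magnitude in (0.1, 0.4)
    sign = rng.choice([-1.0, 1.0], size=shape)       # choose which interval
    return sign * magnitude

W_hidden = medium_init((2, 8))
W_output = medium_init((8, 1))
```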

11.
A fast algorithm for feedforward neural networks   (Cited by: 2; self-citations: 0; by others: 2)
Feedforward neural networks have been widely used in nonlinear signal processing. The classical backpropagation algorithm is the standard learning algorithm for feedforward networks, but for many applications its convergence is slow. This paper proposes a new algorithm based on linearizing the nonlinear units of the network; in nonlinear signal processing it outperforms the traditional backpropagation algorithm in both accuracy and convergence speed.

12.
Reinforcement learning is an important approach to adaptive problems and is widely applied to learning control with continuous states, but it suffers from low efficiency and slow convergence. Building on a back propagation (BP) neural network and combining it with eligibility traces, an algorithm is proposed that realizes multi-step updates in the reinforcement learning process. It solves the problem of back-propagating the local gradient of the output layer to the hidden-layer nodes, thereby enabling fast updates of the hidden-layer weights, and an algorithm description is provided. An improved residual method is also proposed, which linearly weights and optimizes the weights of each layer during training, obtaining both the learning speed of gradient descent and the convergence of the residual-gradient method; applying it to the hidden-layer weight updates improves the convergence of the value function. The algorithm is verified and analyzed through a simulation of an inverted-pendulum balancing system. The results show that after a short period of learning the method successfully controls the inverted pendulum and significantly improves learning efficiency.
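For orientation, the sketch below shows the generic eligibility-trace (TD(λ)-style) multi-step weight update for a small value network, which is the mechanism the abstract builds on; the toy dynamics, reward, and constants are assumptions and do not reproduce the paper's improved residual method or the inverted-pendulum experiment.

```python
import numpy as np

# Minimal sketch of TD(lambda)-style learning with eligibility traces over the
# weights of a tiny value network: e <- gamma*lam*e + dV/dw, w <- w + alpha*delta*e.
# Environment, network, and constants are toy assumptions.
rng = np.random.default_rng(0)
V_w = rng.normal(scale=0.1, size=(3, 5))       # input-to-hidden weights
v_out = rng.normal(scale=0.1, size=5)          # hidden-to-output weights

def value_and_grads(s):
    h = np.tanh(s @ V_w)
    v = h @ v_out
    g_out = h                                   # dV/dv_out
    g_V = np.outer(s, v_out * (1 - h**2))       # dV/dV_w
    return v, g_V, g_out

alpha, gamma, lam = 0.05, 0.95, 0.8
e_V, e_out = np.zeros_like(V_w), np.zeros_like(v_out)
s = rng.uniform(-1, 1, 3)
for t in range(100):
    s_next = np.clip(s + rng.normal(scale=0.1, size=3), -1, 1)   # toy dynamics
    r = -np.sum(s_next**2)                                       # toy reward
    v, g_V, g_out = value_and_grads(s)
    v_next, _, _ = value_and_grads(s_next)
    delta = r + gamma * v_next - v               # TD error
    e_V = gamma * lam * e_V + g_V                # accumulate eligibility traces
    e_out = gamma * lam * e_out + g_out
    V_w += alpha * delta * e_V                   # multi-step weight update
    v_out += alpha * delta * e_out
    s = s_next
```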

13.
Multifeedback-Layer Neural Network   (Cited by: 1; self-citations: 0; by others: 1)
The architecture and training procedure of a novel recurrent neural network (RNN), referred to as the multifeedback-layer neural network (MFLNN), is described in this paper. The main difference of the proposed network compared to the available RNNs is that the temporal relations are provided by means of neurons arranged in three feedback layers, not by simple feedback elements, in order to enrich the representation capabilities of the recurrent networks. The feedback layers provide local and global recurrences via nonlinear processing elements. In these feedback layers, weighted sums of the delayed outputs of the hidden and of the output layers are passed through certain activation functions and applied to the feedforward neurons via adjustable weights. Both online and offline training procedures based on the backpropagation through time (BPTT) algorithm are developed. The adjoint model of the MFLNN is built to compute the derivatives with respect to the MFLNN weights which are then used in the training procedures. The Levenberg-Marquardt (LM) method with a trust region approach is used to update the MFLNN weights. The performance of the MFLNN is demonstrated by applying to several illustrative temporal problems including chaotic time series prediction and nonlinear dynamic system identification, and it performed better than several networks available in the literature.

14.
李享梅  赵天昀 《计算机应用》2005,25(12):2789-2791
Noting that the gradient-descent method used in BP neural networks has strong local but poor global search ability, while the genetic algorithm used in genetic neural networks has strong global but poor local search ability, a hybrid intelligence learning algorithm (HI algorithm) that combines the advantages of gradient descent and the genetic algorithm is proposed and applied to optimizing the connection weights of multilayer feedforward neural networks. The algorithm is designed and implemented, and it is shown both theoretically and experimentally that the HI neural network achieves better computational performance, faster convergence, and higher accuracy than the BP neural network and the genetic-algorithm-based neural network.

15.
Training feedforward networks with the Marquardt algorithm   (Cited by: 160; self-citations: 0; by others: 160)
The Marquardt algorithm for nonlinear least squares is presented and is incorporated into the backpropagation algorithm for training feedforward neural networks. The algorithm is tested on several function approximation problems, and is compared with a conjugate gradient algorithm and a variable learning rate algorithm. It is found that the Marquardt algorithm is much more efficient than either of the other techniques when the network contains no more than a few hundred weights.
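A minimal sketch of the Levenberg-Marquardt step used in such training, Δw = −(JᵀJ + μI)⁻¹Jᵀe, with the damping factor μ adapted according to whether the error decreased; the toy least-squares problem below stands in for a network, whose Jacobian would in practice be computed by backpropagation, and all constants are assumptions.

```python
import numpy as np

# Sketch of the core Levenberg-Marquardt step: delta_w = -(J^T J + mu*I)^{-1} J^T e,
# decreasing mu when a step lowers the error and increasing it otherwise.
# The toy exponential model is a stand-in for a neural network.
def residuals_and_jacobian(w, x, y):
    e = w[0] * np.exp(w[1] * x) - y              # residuals of a toy model
    J = np.stack([np.exp(w[1] * x), w[0] * x * np.exp(w[1] * x)], axis=1)
    return e, J

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.5 * x) + rng.normal(scale=0.01, size=x.size)

w, mu = np.array([1.0, 0.0]), 1e-2
for _ in range(50):
    e, J = residuals_and_jacobian(w, x, y)
    step = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ e)
    e_new, _ = residuals_and_jacobian(w + step, x, y)
    if np.sum(e_new**2) < np.sum(e**2):          # accept step, trust the model more
        w, mu = w + step, mu * 0.5
    else:                                        # reject step, damp more heavily
        mu *= 2.0
print(w)                                         # should be close to (2.0, -1.5)
```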

16.
In this work we present a new hybrid algorithm for feedforward neural networks, which combines unsupervised and supervised learning. In this approach, we use a Kohonen algorithm with a fuzzy neighborhood for training the weights of the hidden layers and gradient descent method for training the weights of the output layer. The goal of this method is to assist the existing variable learning rate algorithms. Simulation results show the effectiveness of the proposed algorithm compared with other well-known learning methods.

17.
A stochastic single-point online gradient algorithm with the multiplier method for Pi-sigma neural networks   (Cited by: 1; self-citations: 0; by others: 1)
喻昕  邓飞  唐利霞 《计算机应用研究》2011,28(11):4074-4077
When training Pi-sigma neural networks with gradient algorithms, overly small weight choices lead to slow convergence. The ordinary penalty-function method can overcome this drawback, but it requires the penalty factor to tend to infinity and its absolute-value penalty term is non-differentiable, which makes numerical solution difficult. To overcome these shortcomings, a stochastic single-point online gradient algorithm based on the multiplier method is proposed. Using optimization theory, the constrained problem is converted into an unconstrained one, and the multiplier method is used to solve the network error function. The convergence rate and stability of the algorithm are analyzed theoretically, and simulation results verify its effectiveness.
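To illustrate the multiplier (augmented Lagrangian) idea the abstract appeals to, a finite penalty factor plus a multiplier update instead of a penalty driven to infinity, here is a generic sketch on a toy equality-constrained problem; the objective, constraint, and constants are placeholders for the network error function and are not from the paper.

```python
import numpy as np

# Generic sketch of the multiplier (augmented Lagrangian) method: keep a finite
# penalty c and update a multiplier lam so the constrained optimum is reached,
# rather than driving a penalty factor to infinity. Toy objective and constraint.
f_grad = lambda w: 2 * (w - np.array([1.0, 2.0]))   # grad of ||w - (1, 2)||^2
g = lambda w: w[0] + w[1] - 2.0                      # equality constraint g(w) = 0
g_grad = np.array([1.0, 1.0])

w, lam, c, lr = np.zeros(2), 0.0, 5.0, 0.05
for outer in range(20):
    for _ in range(200):                             # minimize the augmented Lagrangian in w
        grad = f_grad(w) + (lam + c * g(w)) * g_grad
        w -= lr * grad
    lam += c * g(w)                                  # multiplier update
print(w)                                             # close to the constrained optimum (0.5, 1.5)
```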

18.
A new evolutionary neural network algorithm, GTEANN, is proposed. Based on the efficient Guo Tao algorithm, it searches the network structure space and the weight space simultaneously to achieve automated design of feedforward neural networks. The encoding scheme adopted is intuitive and effective; under this representation the learning of a neural network becomes a complex mixed integer-real nonlinear programming problem, and, for example, the crossover operation includes isomorphism and regularization handling of the networks. Preliminary experimental results show that the method converges and can automatically optimize the design of multilayer feedforward neural networks from training samples.

19.
This paper describes concepts that optimize an on-chip learning algorithm for implementation of VLSI neural networks with conventional technologies. The network considered comprises an analog feedforward network with digital weights and update circuitry, although many of the concepts are also valid for analog weights. A general, semi-parallel form of perturbation learning is used to accelerate hidden-layer update while the infinity-norm error measure greatly simplifies error detection. Dynamic gain adaption, coupled with an annealed learning rate, produces consistent convergence and maximizes the effective resolution of the bounded weights. The use of logarithmic analog-to-digital conversion, during the backpropagation phase, obviates the need for digital multipliers in the update circuitry without compromising learning quality. These concepts have been validated through network simulations of continuous mapping problems.

20.
Blind equalizers based on complex-valued feedforward neural networks yield better performance than linear equalizers for both linear and nonlinear communication channels. The learning algorithms are generally based on stochastic gradient descent, as they are simple to implement, but they show a slow convergence rate. In the blind equalization problem, the unavailability of the desired output signal and the presence of nonlinear activation functions make it difficult to apply the recursive least squares algorithm. In this letter, a new scheme using the recursive least squares algorithm is proposed for blind equalization. The weights of the output layer are learned using a modified version of the constant modulus algorithm cost function, while a neuron-space adaptation approach is used for the weights of the hidden layer. The proposed scheme results in faster convergence of the equalizer.
