Similar Documents
20 similar documents found
1.
This paper investigates an online gradient method with a penalty term for training feedforward neural networks with linear output. The usual penalty is considered: a term proportional to the norm of the weights. The main contribution of the paper is a theoretical proof that the weights remain bounded during network training. This boundedness is then used to prove almost sure convergence of the algorithm to the zero set of the gradient of the error function.
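As a rough illustration of the kind of update analyzed here, the sketch below performs one online gradient step with an L2 penalty proportional to the squared weight norm. The single linear-output unit and all names are simplifying assumptions, not the paper's construction.

```python
import numpy as np

def online_penalized_step(w, x, y, lr=0.01, lam=1e-4):
    """One online gradient step on 0.5*(w@x - y)**2 + 0.5*lam*||w||**2
    for a single linear-output unit (illustrative simplification)."""
    err = w @ x - y
    grad = err * x + lam * w   # data-fit gradient plus penalty gradient
    return w - lr * grad
```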

2.
A Three-Stage Learning Method for Feedforward Neural Networks Based on Complementary Genetic Operators
This paper proposes a new three-stage learning method for feedforward neural networks based on complementary genetic operators. The method divides the learning process into three stages. The first, structure identification, uses a genetic algorithm to select the number of hidden nodes and set the initial parameters, with efficient complementary genetic operators designed on the basis of an observed complementary effect between genetic operators. The second, parameter identification, applies a more efficient neural network algorithm such as the Levenberg-Marquardt (L-M) algorithm to further train the network parameters. The third is a pruning stage that seeks a minimal network structure to improve generalization. Over the whole learning process, a satisfactory balance is achieved between the controllability of learning and the approximation accuracy, complexity, and generalization ability of the network. Simulation results demonstrate the effectiveness of the method.

3.
The notion of counterclockwise (ccw) input-output (I-O) dynamics, introduced by Angeli (2006) to deal with questions of multistability in interconnected dynamical systems, is applied and further developed in order to analyze convergence and stability of neural networks. By pursuing a modular approach, we interpret a cellular nonlinear network (CNN) as a positive feedback of a parallel block of single-input-single-output (SISO) dynamical systems, the neurons, and a static multiple-input-multiple-output (MIMO) system that couples them (typically the so-called interconnection matrix). The analysis extends previously known results by enlarging the class of allowed neural dynamics to higher order neurons.

4.
Approximation of Multivariate Polynomial Functions by Three-Layer Feedforward Neural Networks
The paper first gives a constructive proof that for any multivariate polynomial of degree r there exists a three-layer feedforward neural network, with explicitly determined weights and number of hidden units, that approximates the polynomial to arbitrary accuracy. The weights are determined by the coefficients of the given polynomial and by the activation function, while the number of hidden units is determined by r and the dimension of the input. An algorithm and numerical examples are given, showing that the constructed networks approximate multivariate polynomial functions very efficiently. Specialized to univariate polynomials, the results are simpler and more efficient than the networks and algorithms proposed by Cao Feilong et al. The results are of theoretical and practical significance for constructing feedforward networks that approximate classes of multivariate polynomial functions, and they offer one route toward a general theory and method for constructing networks that approximate arbitrary functions.

5.
This paper introduces a new stochastic learning method for feedforward neural networks, with emphasis on its implementation, and discusses how to combine it with the BP algorithm to obtain a very practical neural network learning algorithm.

6.
In this paper, a batch gradient algorithm with adaptive momentum is considered and a convergence theorem is presented for its use in training two-layer feedforward neural networks. Simple sufficient conditions are offered to guarantee both weak and strong convergence. In contrast to existing general requirements, we do not restrict the error function to be quadratic or uniformly convex. A numerical example is supplied to illustrate the performance of the algorithm and support the theoretical findings.
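A minimal sketch of a batch gradient step with a momentum term is given below. The paper's adaptive rule for the momentum factor is not reproduced, and `grad_fn` is a hypothetical function returning the full-batch gradient of the error.

```python
import numpy as np

def batch_gradient_momentum(grad_fn, w0, lr=0.1, mu=0.9, n_iter=100):
    """Full-batch gradient descent with a fixed momentum factor mu
    (the paper's adaptive momentum rule is omitted)."""
    w, v = w0.copy(), np.zeros_like(w0)
    for _ in range(n_iter):
        g = grad_fn(w)        # gradient of the error over the whole training set
        v = mu * v - lr * g   # momentum accumulation
        w = w + v
    return w
```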

7.
In this paper, based on Wirtinger calculus, we introduce a quasi-Newton method for training complex-valued neural networks with analytic activation functions. Using the duality between Wirtinger calculus and multivariate real calculus, we prove a convergence theorem for the proposed method applied to the minimization of real-valued functions of complex variables. This lays the theoretical foundation for the complex quasi-Newton method and generalizes Powell's well-known result for the real-valued case. Simulation results are given to show the effectiveness of the method.

8.
A perceptron-style adaptive algorithm is proposed for feedforward neural networks with time-varying and/or nonlinear inputs. Its essence is to force the error between the actual and desired outputs to satisfy an asymptotically stable difference equation, rather than to minimize an error function by backpropagation. Singularity of the algorithm can be avoided by suitably arranging the augmented inputs.

9.
Fast Learning Algorithms for Feedforward Neural Networks
In order to improve the training speed of multilayer feedforward neural networks (MLFNN), we propose and explore two new fast backpropagation (BP) algorithms obtained (1) by changing the error function, using the exponential attenuation (bell impulse) function and the Fourier kernel function as alternatives, and (2) by introducing a hybrid conjugate-gradient algorithm for global optimization with a dynamic learning rate to overcome the conventional BP problems of getting stuck in local minima and slow convergence. Our experimental results demonstrate the effectiveness of the modified error functions, since training is faster than with existing fast methods. In addition, on real speech data our hybrid algorithm achieves a higher recognition rate than the Polak-Ribière conjugate-gradient and conventional BP algorithms, and it requires less training time, is less complicated, and is more robust than the Fletcher-Reeves conjugate-gradient and conventional BP algorithms.

10.
Sigma-Pi (Σ-Π) neural networks (SPNNs) are known to provide more powerful mapping capability than traditional feedforward neural networks. A unified convergence analysis of the batch gradient algorithm for SPNN learning is presented, covering three classes of SPNNs: Σ-Π-Σ, Σ-Σ-Π and Σ-Π-Σ-Π. The monotonicity of the error function during the iteration is also guaranteed.
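To make the Σ-Π structure concrete, the toy forward pass below forms product (Π) units over fixed input subsets and sums (Σ) them with trainable weights; the grouping and shapes are assumptions for illustration only.

```python
import numpy as np

def sigma_pi_forward(x, groups, w):
    """Σ-Π unit: each hidden 'Π' node multiplies a subset of the inputs,
    and the output 'Σ' node takes a weighted sum of those products."""
    pi = np.array([np.prod(x[list(g)]) for g in groups])  # product units
    return w @ pi                                          # summing output unit

# Example: three product units over a 3-dimensional input.
x = np.array([0.5, -1.0, 2.0])
out = sigma_pi_forward(x, groups=[(0, 1), (1, 2), (0, 2)], w=np.array([1.0, 0.5, -0.3]))
```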

11.
When feedforward networks are used for pattern classification, training them effectively has long been a concern. This paper first shows that an invertible linear transformation of the patterns does not change the required network structure, while the shear transformations from which such a linear transformation is composed do change the difficulty of network training to some extent. A method is therefore proposed in which suitable shear transformations of the patterns are used to reduce the training difficulty and improve training efficiency. Experimental results reported at the end of the paper confirm this.

12.
A novel multistage feedforward network is proposed for efficient solving of difficult classification tasks. The standard Radial Basis Functions (RBF) architecture is modified in order to alleviate two potential drawbacks, namely the curse of dimensionality and the limited discriminatory capacity of the linear output layer. The first goal is accomplished by feeding the hidden layer output to the input of a module performing Principal Component Analysis (PCA). The second one is met by substituting the simple linear combiner in the standard architecture by a Multilayer Perceptron (MLP). Simulation results for the 2-spirals problem and Peterson-Barney vowel classification are reported, showing high classification accuracy using fewer parameters than existing solutions.
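A rough sketch of the described pipeline, assuming Gaussian hidden units with centres drawn from the training data and using scikit-learn's PCA and MLPClassifier in place of the paper's exact components; the toy data and all parameter values are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def rbf_layer(X, centers, gamma=1.0):
    """Hidden-layer activations: one Gaussian bump per centre."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

# Hypothetical toy data; centres are a random subset of the training points.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
centers = X[rng.choice(len(X), 20, replace=False)]

H = rbf_layer(X, centers)                 # RBF stage
Z = PCA(n_components=5).fit_transform(H)  # PCA stage against the curse of dimensionality
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(Z, y)  # MLP replaces the linear combiner
```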

13.
This paper investigates the split-complex back-propagation algorithm with momentum and penalty for training complex-valued neural networks. Here the momentum term is used to accelerate the convergence of the algorithm and the penalty term is used to control the magnitude of the network weights. Sufficient conditions on the learning rate, the momentum factor, the penalty coefficient, and the activation functions are proposed to establish the theoretical results. We theoretically prove the boundedness of the network weights during training, which is usually assumed as a precondition for convergence analysis in the literature. The monotonicity of the error function and the convergence of the algorithm are also guaranteed.
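The two ingredients can be illustrated as follows: a split-complex activation applies a real nonlinearity separately to the real and imaginary parts, and the weight update adds a momentum term and an L2-style penalty. This is an illustrative form under those assumptions, not the paper's derived update.

```python
import numpy as np

def split_tanh(z):
    """Split-complex activation: tanh applied to real and imaginary parts separately."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def penalized_momentum_step(w, grad, w_prev, lr=0.01, mu=0.5, lam=1e-4):
    """Gradient step with a momentum term and a penalty on the weight magnitude
    (illustrative; not the paper's exact split-complex gradient)."""
    return w - lr * (grad + lam * w) + mu * (w - w_prev)
```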

14.
Feedforward neural network structures have been considered extensively in the literature, and in a significant volume of research and development studies a hyperbolic-tangent type of neuronal nonlinearity has been utilized. This paper dwells on the widely used neuronal activation functions as well as two new ones composed of sines and cosines, and a sinc function characterizing the firing of a neuron. The viewpoint here is to consider the hidden layer(s) as transforming blocks composed of nonlinear basis functions, which may assume different forms. The paper considers eight different differentiable activation functions and utilizes the Levenberg-Marquardt algorithm for parameter tuning. The studies carried out have a guiding quality based on empirical results on several training data sets.
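Illustrative forms of two of the less common nonlinearities mentioned (a sine/cosine composition and a sinc function); the exact parameterisations used in the paper may differ.

```python
import numpy as np

def sin_cos_activation(x):
    """A nonlinearity composed of sines and cosines (illustrative form)."""
    return np.sin(x) + np.cos(x)

def sinc_activation(x):
    """sin(x)/x, with the removable singularity at 0 handled by np.sinc."""
    return np.sinc(x / np.pi)   # np.sinc(t) = sin(pi*t)/(pi*t)
```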

15.
Feedforward neural networks (FNN) have been proposed to solve complex problems in pattern recognition, classification and function approximation. Despite the general success of learning methods for FNN, such as the backpropagation (BP) algorithm and second-order algorithms, the long learning time required for convergence remains a problem to be overcome. In this paper, we propose a new hybrid algorithm for an FNN that combines unsupervised training of the hidden neurons (Kohonen algorithm) with supervised training of the output neurons (gradient descent method). Simulation results show the effectiveness of the proposed algorithm compared with other well-known learning methods.
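A compact sketch of the hybrid idea: a competitive (Kohonen-style) stage positions the hidden units, then a gradient-descent stage fits the output weights. Neighbourhood functions, learning-rate schedules, the Gaussian hidden units, and all names here are simplifying assumptions rather than the paper's exact procedure.

```python
import numpy as np

def train_hybrid(X, y, n_hidden=10, epochs=50, lr_som=0.1, lr_out=0.05):
    """Unsupervised stage: competitive updates of hidden-unit centres.
    Supervised stage: gradient descent on squared error for output weights."""
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), n_hidden, replace=False)].copy()
    for _ in range(epochs):                                   # Kohonen-style stage
        for x in X:
            k = np.argmin(((centers - x) ** 2).sum(axis=1))   # winning unit
            centers[k] += lr_som * (x - centers[k])
    H = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1))  # hidden activations
    w = np.zeros(n_hidden)
    for _ in range(epochs):                                   # gradient-descent stage
        err = H @ w - y
        w -= lr_out * H.T @ err / len(X)
    return centers, w
```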

16.
A New Algorithm for Feedforward Neural Networks and Its Convergence
Starting from a general feedforward neural network model, a system of nonlinear equations in the weights is constructed and a new neuron-level algorithm, different from the traditional BP algorithm, is presented. The convergence of the algorithm is proved theoretically, thereby avoiding the limitations of the BP algorithm.

17.
The role of activation functions in feedforward artificial neural networks has not been investigated to the desired extent. The commonly used sigmoidal functions appear as discrete points in the sigmoidal functional space. This makes comparison difficult. Moreover, these functions can be interpreted as the (suitably scaled) integral of some probability density function (generally taken to be symmetric/bell shaped). Two parameterization methods are proposed that allow us to construct classes of sigmoidal functions based on any given sigmoidal function. The suitability of the members of the proposed class is investigated. It is demonstrated that all members of the proposed class(es) satisfy the requirements to act as an activation function in feedforward artificial neural networks.
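One simple way to generate a one-parameter family from a base sigmoid is to raise it to a power p > 0, which preserves monotonicity and the limits 0 and 1. This is only an illustrative construction and not necessarily either of the paper's two parameterization methods.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_power_family(x, p=1.0):
    """Members of an illustrative sigmoidal family: sigma(x)**p with p > 0."""
    return sigmoid(x) ** p
```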

18.
In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of the hidden nodes (the input weights and biases of additive nodes, or the centers and impact factors of RBF nodes) are randomly selected, and the output weights are analytically determined from the sequentially arriving data. The algorithm uses the ideas of the ELM of Huang et al., developed for batch learning, which has been shown to be extremely fast with better generalization performance than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be chosen manually. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems drawn from regression, classification and time series prediction. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
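A minimal numpy sketch of the OS-ELM idea, assuming sigmoid additive hidden nodes and an invertible H^T H in the initialisation chunk; the class and method names are hypothetical. Random hidden-node parameters are fixed once, and the output weights are updated by recursive least squares as chunks arrive.

```python
import numpy as np

def elm_hidden(X, W, b):
    """Random additive hidden nodes with a sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

class OSELM:
    """Batch initialisation followed by recursive least-squares updates
    of the output weights for each new data chunk (illustrative sketch)."""
    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_inputs, n_hidden))  # random input weights
        self.b = rng.standard_normal(n_hidden)               # random biases

    def init_batch(self, X, T):
        H = elm_hidden(X, self.W, self.b)
        self.P = np.linalg.inv(H.T @ H)       # assumes H.T @ H is invertible
        self.beta = self.P @ H.T @ T
        return self

    def partial_fit(self, X, T):
        H = elm_hidden(X, self.W, self.b)
        PHt = self.P @ H.T
        K = np.linalg.inv(np.eye(len(X)) + H @ PHt)
        self.P -= PHt @ K @ H @ self.P                      # covariance update
        self.beta += self.P @ H.T @ (T - H @ self.beta)      # output-weight update
        return self

    def predict(self, X):
        return elm_hidden(X, self.W, self.b) @ self.beta
```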

19.
An Improved Algorithm for BP Neural Networks
The traditional BP algorithm for neural networks suffers from slow convergence and local minima. The main reason for the slow convergence is that it uses only first-order information about the performance function; the recursive least-squares algorithm uses second-order information, but it requires the inverse of the autocorrelation matrix of the input signal, which is computationally expensive and hard to implement. This paper proposes a gradient recursive BP algorithm based on the least-squares criterion that implements BP with an improved gradient and does not need the autocorrelation matrix of the input signal. Simulations demonstrate the effectiveness of the algorithm.

20.
Feedforward neural networks use supervised learning: the network weights and thresholds are modified according to a function of the difference between the actual and desired outputs (the error function), and training is repeated until the error function reaches a minimum. This paper discusses the structural form of the error function and gives a necessary and sufficient condition for the error function to attain its minimum at the conditional logarithmic mean; this condition in effect specifies the structural form of the error function. Further analysis shows that the obtained structural form generalizes existing results and possesses a degree of noise immunity. The structural form under which the error function attains its minimum at the α-quantile is then discussed, a conclusion of wider significance. These results provide a good foundation for further research on artificial neural networks.
