Similar Documents
19 similar documents found (search time: 187 ms)
1.
Rational multilayer feedforward neural networks   (Cited by: 1; self-citations: 0, others: 1)
A mathematical model of rational (rational-function) multilayer feedforward neural networks is proposed, together with a learning algorithm for such networks. In terms of computational complexity, the learning algorithm is of the same order as the conventional backpropagation algorithm for multilayer networks. Results on function-approximation examples show that rational multilayer neural networks are effective for solving conventional problems.

2.
A neural network with any connection scheme can always be reduced to a network with cross-layer (skip) connections. Building on the conventional multilayer feedforward algorithm, this paper introduces the concept of a completely fully-connected neural network and presents a multilayer feedforward algorithm based on skip connections. By analyzing the error function of multilayer feedforward networks, it is proved theoretically that, compared with networks without skip connections, feedforward networks with skip connections can approach the ideal state with a simpler structure. Finally, the XOR problem is solved with a single hidden neuron.
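The closing claim can be illustrated concretely. The sketch below (an illustrative hand-picked construction, not necessarily the weights used in the paper) solves XOR with a single threshold hidden unit plus direct input-to-output skip connections:

```python
def step(z):
    # Heaviside threshold activation
    return 1.0 if z > 0 else 0.0

def xor_skip(x1, x2):
    """XOR with one hidden neuron plus direct input-to-output (skip)
    connections; weights are chosen by hand for illustration."""
    h = step(x1 + x2 - 1.5)                    # hidden unit computes AND
    return int(step(x1 + x2 - 2.0 * h - 0.5))  # inputs also feed the output directly

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_skip(a, b))          # prints 0, 1, 1, 0
```

Without the skip connections, the output unit would see only the single hidden value, and no threshold network with one hidden neuron could realize XOR.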

3.
A polynomial-function recurrent neural network model and its applications   (Cited by: 2; self-citations: 1, others: 2)
Zhou Yongquan, Chinese Journal of Computers, 2003, 26(9): 1196-1200
Exploiting the fact that recurrent neural networks contain both feedforward and feedback paths, this paper sets the activation functions of the hidden-layer neurons to a sequence of adjustable polynomial functions, yielding a new polynomial-function recurrent neural network model. The model retains the characteristics of conventional recurrent networks while offering stronger function-approximation capability. For recursive computation problems, a learning algorithm for the model is proposed and applied to the approximate factorization of multivariate polynomials, where it shows clear advantages. Worked examples indicate that the algorithm is highly effective, converges quickly, and achieves high accuracy, making it well suited to recursive computation. The proposed model and learning algorithm also provide useful guidance for approximate symbolic computation in algebra.

4.
To improve the approximation capability of neural networks, a novel quantum-inspired neural network model is proposed by introducing quantum rotation gates into an ordinary BP network. In this model the hidden layer consists of quantum neurons, each carrying a set of quantum rotation gates used to update the hidden layer's quantum weights, while the input and output layers consist of ordinary neurons. A learning algorithm for the model is designed based on error backpropagation. Experimental results on pattern recognition and function approximation verify the effectiveness of the proposed model and algorithm.
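A minimal sketch of the quantum rotation gate at the heart of such models (the function names and update form here are illustrative assumptions, not the paper's exact formulation): the gate rotates a qubit-style weight (cos φ, sin φ) by an angle increment while preserving its norm, so the weight always remains a valid amplitude pair.

```python
import numpy as np

def rotation_gate(theta):
    # 2x2 quantum rotation gate R(theta)
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def rotate_weight(w, delta):
    """Update a qubit-style weight (cos φ, sin φ) by applying R(Δθ).
    The rotation is norm-preserving, so amplitudes stay valid."""
    return rotation_gate(delta) @ w

w = np.array([1.0, 0.0])           # φ = 0
w = rotate_weight(w, np.pi / 2)    # rotate to φ = π/2, giving ≈ (0, 1)
print(np.round(w, 6))
```

In a quantum-inspired network, the learning rule adjusts the rotation angles Δθ rather than the weight components directly.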

5.
Time series prediction based on RBF neural networks   (Cited by: 17; self-citations: 0, others: 17)
The application of feedforward neural networks to time series prediction is well established, and several models have been proposed, such as the multilayer perceptron (MLP), error backpropagation (BP), and radial basis function (RBF) networks. Compared with other feedforward networks, RBF networks learn quickly and have strong function-approximation capability, which makes them well suited to time series prediction.
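As a hedged illustration of why RBF networks learn quickly (the lag embedding, center placement, and kernel width below are arbitrary choices, not taken from the abstract): with fixed Gaussian centers, one-step-ahead prediction reduces to a linear least-squares problem for the output weights, solvable in closed form.

```python
import numpy as np

def rbf_features(X, centers, width):
    # Gaussian basis: exp(-||x - c||^2 / (2 width^2)) for every center c
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

t = np.linspace(0, 4 * np.pi, 200)
s = np.sin(t)

# Two-lag embedding: predict s[k+1] from (s[k-1], s[k])
X = np.column_stack([s[:-2], s[1:-1]])
y = s[2:]

centers = X[::10]                             # centers taken from the data
Phi = rbf_features(X, centers, width=0.5)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # closed-form output weights

err = np.max(np.abs(Phi @ w - y))
print("max training error:", err)
```

No iterative backpropagation is needed for the output layer, which is the main source of the RBF network's speed advantage.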

6.
Gradient algorithms for reinforcement learning with neural networks   (Cited by: 11; self-citations: 1, others: 11)
Xu Xin, He Hangen, Chinese Journal of Computers, 2003, 26(2): 227-233
For Markov decision problems with continuous state spaces and discrete action spaces, a new gradient-descent reinforcement learning algorithm is proposed that uses a multilayer feedforward neural network for value function approximation. The algorithm adopts an approximately greedy, continuously differentiable Boltzmann action-selection policy and approximates the optimal value function of the Markov decision process by minimizing the sum of squared Bellman residuals under a non-stationary action policy. The convergence of the algorithm and the performance of the resulting near-optimal policy are analyzed theoretically, and simulations on the Mountain-Car learning control problem further verify the algorithm's learning efficiency and generalization performance.
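The Boltzmann action-selection policy mentioned above can be sketched as follows (the temperature values are illustrative assumptions). It is smooth in the Q-values, which is what makes gradient-based analysis possible, and it approaches the greedy policy as the temperature drops:

```python
import numpy as np

def boltzmann_policy(q_values, temperature):
    """Approximately greedy, continuously differentiable action selection:
    P(a) ∝ exp(Q(a) / T); lower T approaches the greedy policy."""
    z = np.asarray(q_values, dtype=float) / temperature
    z -= z.max()                  # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

q = [1.0, 2.0, 0.5]
print(np.round(boltzmann_policy(q, temperature=1.0), 3))
print(np.round(boltzmann_policy(q, temperature=0.1), 3))  # near-greedy
```

Unlike ε-greedy selection, this policy has well-defined derivatives with respect to the Q-values, so it can appear inside a differentiable performance index.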

7.
This paper uses Young's inequality, a property of convex conjugate functions, to construct an optimization objective for feedforward neural networks. With the weights fixed, the objective is convex in the hidden-layer outputs; with the hidden-layer outputs fixed, it is convex in the weights. The objective therefore has no local minima and can be optimized quickly, greatly improving the learning efficiency of feedforward networks. Simulations show that, compared with conventional algorithms such as error backpropagation (BP), BP with a momentum factor, and existing layer-wise optimization algorithms, the new algorithm converges faster and achieves lower learning error. The fast algorithm has been applied to ore-body prediction with good results.

8.
The em algorithm for multilayer stochastic neural networks   (Cited by: 3; self-citations: 1, others: 2)
This paper discusses a learning algorithm for stochastic neural networks formulated in the framework of differential manifolds, known as the em algorithm. For multilayer stochastic neural network models, the dually flat manifold structure is analyzed from the viewpoint of differential geometry, and the implementation and acceleration of the em algorithm for multilayer feedforward stochastic networks are described.

9.
Function approximation theory and learning algorithms for feedforward algebraic neural networks   (Cited by: 12; self-citations: 0, others: 12)
This paper generalizes the MP neuron model by defining polynomial algebraic neurons and polynomial algebraic neural networks, incorporating polynomial algebra into algebraic neural networks. The function-approximation capability of feedforward polynomial algebraic networks and its theoretical basis are analyzed, and a class of four-layer feedforward polynomial algebraic network models with two inputs and one output is designed; networks built from this model can approximate a given bivariate polynomial to a prescribed accuracy. A global learning algorithm for polynomial algebraic network function approximation in the p-adic sense is given, in which no local minima occur during learning. Examples show that the algorithm is effective…

10.
A hidden-node rectification algorithm for neural network training   (Cited by: 3; self-citations: 0, others: 3)
The error backpropagation algorithm is widely used to train multilayer feedforward neural networks, but its convergence problem remains unsolved, which often leaves the trained network with poor generalization. This paper studies the problem and, based on the neural network mapping theorem, proposes a hidden-node rectification (HNR) algorithm for training networks that approximate single-output functions. This is useful in neural identification, since most industrial plants are single-output. The convergence and generalization of the HNR algorithm are studied theoretically, and simulation experiments together with an application to modeling a catalytic reforming process show that, under certain conditions, the algorithm has very…

11.
A fractional process neural network and its applications   (Cited by: 3; self-citations: 0, others: 3)
To address pattern classification and system modeling for complex time-varying signals with singular values, a fractional process neural network is proposed. The model is built on the approximation properties of rational functions for complex process signals and the nonlinear transformation mechanism of process neural networks for time-varying information. Its basic information-processing unit consists of two process neurons forming a dual pair, logically constituting one fractional process neuron; it extends artificial neural networks in both structure and information-processing mechanism. The continuity and functional approximation capability of the network are analyzed, and a learning algorithm based on expansion over an orthogonal function basis is given. Experimental results show that, for samples of time-varying functions with singular values, the fractional process neural network has better learning and generalization properties than BP networks and ordinary process neural networks; the numbers of hidden layers and nodes can be greatly reduced, while the learning properties of the algorithm remain the same as those of conventional BP.

12.
In this paper, we propose the approximate transformable technique, which includes direct and indirect transformation, to obtain Chebyshev-Polynomials-Based (CPB) unified model neural networks for feedforward/recurrent neural networks via Chebyshev polynomial approximation. Based on this technique, we derive the relationship between single-layer neural networks and multilayer perceptron neural networks. It is shown that the CPB unified model neural networks can be represented as functional link networks based on Chebyshev polynomials, and these networks use the recursive least squares method with forgetting factor as the learning algorithm. It turns out that the CPB unified model neural networks not only have the same universal approximation capability, but also learn faster than conventional feedforward/recurrent neural networks. Furthermore, we derive the condition under which the unified model generated by Chebyshev polynomials is optimal in the sense of least-squares error approximation in the single-variable case. Computer simulations show that the proposed method does have the capability of a universal approximator in some function-approximation tasks, with considerable reduction in learning time.
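A rough sketch of the functional-link idea (using plain batch least squares rather than the paper's recursive least squares with forgetting factor; the target function is an arbitrary example): the input is expanded in Chebyshev polynomials and only linear output weights are learned.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_features(x, order):
    # Evaluate T_0..T_order at x: the functional-link expansion
    return np.column_stack(
        [C.chebval(x, np.eye(order + 1)[k]) for k in range(order + 1)]
    )

# Single-layer "functional link" fit of a smooth target on [-1, 1]
x = np.linspace(-1, 1, 100)
y = np.exp(x) * np.sin(2 * x)

Phi = cheb_features(x, order=8)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # least-squares output weights
err = np.max(np.abs(Phi @ w - y))
print("max approximation error:", err)
```

Because the nonlinearity is pushed into the fixed expansion, learning is a linear problem, which is why such networks can train much faster than multilayer ones.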

13.
A study on the effectiveness of extreme learning machine   (Cited by: 7; self-citations: 0, others: 7)
Extreme learning machine (ELM), proposed by Huang et al., has been shown to be a promising learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). Nevertheless, because of the random choice of input weights and biases, the ELM algorithm sometimes leaves the hidden-layer output matrix H of the SLFN without full column rank, which lowers the effectiveness of ELM. This paper discusses the effectiveness of ELM and proposes an improved algorithm, called EELM, that properly selects the input weights and biases before calculating the output weights, ensuring the full column rank of H in theory. This improves to some extent the learning rate (testing accuracy, prediction accuracy, learning time) and the robustness of the networks. Experimental results on both benchmark function approximation and real-world classification and regression problems show the good performance of EELM.
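A minimal sketch of the basic ELM scheme the paper builds on (the weight ranges, hidden-layer size, and target function are illustrative assumptions; this is the original random-weight ELM, not the proposed EELM selection step):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=100):
    """Basic ELM: random input weights and biases, sigmoid hidden layer,
    output weights from the Moore-Penrose pseudoinverse of H."""
    W = rng.uniform(-10, 10, size=(X.shape[1], n_hidden))
    b = rng.uniform(-10, 10, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    beta = np.linalg.pinv(H) @ y          # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

X = np.linspace(-1, 1, 200)[:, None]
y = np.sin(np.pi * X[:, 0])
W, b, beta = elm_train(X, y)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
print("max training error:", err)
```

The rank issue the paper targets arises exactly at the `pinv(H)` step: if an unlucky random draw makes H rank-deficient, the pseudoinverse solution degrades, which motivates EELM's deliberate selection of W and b.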

14.
In this paper, a new learning algorithm that encodes a priori information into feedforward neural networks is proposed for function approximation problems. The algorithm considers two kinds of constraints derived from a priori information about the problem: architectural constraints and connection-weight constraints. On one hand, the activation functions of the hidden neurons are specific polynomial functions; on the other hand, the connection-weight constraints are obtained from the first-order derivative of the approximated function. Theoretical justification and experimental results show that the new algorithm has better generalization performance and a faster convergence rate than other algorithms.

15.
The concepts of subspace information quantity (SIQ) and the subspace information quantity criterion (SIQC) are proposed theoretically. On this basis, the theory of feedforward neural network design under this criterion is developed, including the hidden-layer information quantity (HLIQ) and existence and approximation theorems, giving guidance for choosing the number of hidden neurons, the weight vector set, and the hidden-layer activation functions. A feasible suboptimal network design algorithm based on this theory is then proposed. Finally, the network performance indices and the factors affecting them are analyzed in detail. The theory and methods overcome various drawbacks of traditional learning algorithms and enrich the theoretical basis of feedforward network design, offering considerable value for both theory and practice; concrete examples verify their feasibility and advantages.

16.
This paper presents a wavelet-based recurrent fuzzy neural network (WRFNN) for prediction and identification of nonlinear dynamic systems. The proposed WRFNN model combines the traditional Takagi-Sugeno-Kang (TSK) fuzzy model and the wavelet neural networks (WNN). This paper adopts the nonorthogonal and compactly supported functions as wavelet neural network bases. Temporal relations embedded in the network are caused by adding some feedback connections representing the memory units into the second layer of the feedforward wavelet-based fuzzy neural networks (WFNN). An online learning algorithm, which consists of structure learning and parameter learning, is also presented. The structure learning depends on the degree measure to obtain the number of fuzzy rules and wavelet functions. Meanwhile, the parameter learning is based on the gradient descent method for adjusting the shape of the membership function and the connection weights of WNN. Finally, computer simulations have demonstrated that the proposed WRFNN model requires fewer adjustable parameters and obtains a smaller rms error than other methods.  相似文献   

17.
An image registration algorithm based on Zernike moments and multistage feedforward neural networks is proposed. Low-order Zernike moments characterize the global geometric features of an image, and a multistage feedforward neural network learns the rotation, scaling, and translation parameters of the affine transformation the image has undergone; a second-stage feedforward network is added on top of the first stage to improve the accuracy of the parameter estimates. Simulation results show that, compared with a neural network algorithm based on DCT coefficients, this algorithm estimates rotation, scaling, and translation more accurately and is more robust to noise.
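As a hedged illustration of the Zernike-moment feature (a discrete approximation on a pixel grid; the normalization and the sampled image are illustrative assumptions): the magnitude of a low-order moment is invariant under image rotation, which is what makes these moments useful global geometric features for registration.

```python
import numpy as np

def zernike_moment(img, n, m):
    """Low-order Zernike moment Z_{n,m} of an image sampled on [-1,1]^2.
    Only the radial polynomials R_{1,1}(ρ)=ρ and R_{2,0}(ρ)=2ρ²−1 are provided."""
    N = img.shape[0]
    x = np.linspace(-1, 1, N)
    X, Y = np.meshgrid(x, x)
    rho = np.hypot(X, Y)
    theta = np.arctan2(Y, X)
    mask = rho <= 1.0                      # integrate over the unit disk only
    R = {(1, 1): rho, (2, 0): 2 * rho**2 - 1}[(n, abs(m))]
    V = R * np.exp(-1j * m * theta)        # Zernike basis function
    dA = (2.0 / (N - 1)) ** 2              # pixel area element
    return (n + 1) / np.pi * np.sum(img[mask] * V[mask]) * dA

rng = np.random.default_rng(1)
img = rng.random((64, 64))
z = zernike_moment(img, 1, 1)
z_rot = zernike_moment(np.rot90(img), 1, 1)    # rotate the image by 90°
print(abs(abs(z) - abs(z_rot)))                # magnitude is rotation-invariant
```

Rotating the image only multiplies Z_{n,m} by a unit phase factor e^{-imα}, so |Z_{n,m}| is unchanged; the rotation angle itself must be recovered separately, which is the role of the neural network stages in the abstract.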

18.
This paper investigates new learning algorithms (LF I and LF II) based on Lyapunov functions for the training of feedforward neural networks. Such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, in which the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, is introduced with the aim of avoiding local minima; the modification also improves convergence speed in some cases. Conditions for achieving the global minimum with this kind of algorithm are studied in detail. The performance of the proposed algorithms is compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function-approximation problems: XOR, 3-bit parity, and the 8-3 encoder, in terms of the number of learning iterations and the computational time required for convergence. The proposed algorithms (LF I and II) are found to converge much faster than the other two algorithms to attain the same accuracy. Finally, a comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, these investigations help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.

19.
Research and implementation of an intrusion detection system based on ensemble neural networks   (Cited by: 1; self-citations: 8, others: 1)
To address the low detection efficiency of traditional intrusion detection models and their difficulty in detecting unknown intrusions, ensemble learning is studied and an ensemble neural network intrusion detection model using a genetic algorithm is proposed; the model's working principles and the main functions of each module are described. The model uses a genetic algorithm to find trained neural networks that differ substantially from one another and combines them into an ensemble. Experiments show that the ensemble achieves a higher detection rate than the best single network. Because the model uses machine learning, the system can adapt dynamically to its environment: it not only recognizes known intrusions well but can also identify unknown intrusion behavior, making intrusion detection intelligent.
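Why combining diverse detectors helps can be sketched with a simple majority vote (simulated detectors with independent errors; this illustrates the ensemble principle only, not the paper's GA-based member selection):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three simulated detectors, each 80% accurate with independent errors
truth = rng.integers(0, 2, size=1000)
detectors = np.array([np.where(rng.random(1000) < 0.8, truth, 1 - truth)
                      for _ in range(3)])

ensemble = (detectors.sum(axis=0) >= 2).astype(int)   # majority of 3

single_acc = (detectors[0] == truth).mean()
ens_acc = (ensemble == truth).mean()
print("single:", single_acc, "ensemble:", ens_acc)
```

For three independent 80%-accurate detectors, the majority vote is correct with probability 0.8³ + 3·0.8²·0.2 ≈ 0.896; the benefit vanishes if the members make the same mistakes, which is why the paper's genetic algorithm searches specifically for networks that differ from one another.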


