Similar Documents
20 similar documents found (search time: 703 ms)
1.
A new structure-adaptive radial basis function (RBF) neural network model is proposed. In this model, a self-organizing map (SOM) neural network serves as the clustering network: using an unsupervised learning algorithm, it self-organizes the input samples into classes and passes the class centers and their associated weight vectors to the RBF network, where they serve as the centers of the radial basis functions and the corresponding weight vectors, respectively. The RBF network serves as the base network: Gaussian functions realize the nonlinear mapping from the input layer to the hidden layer, while the output-layer weights are trained with a supervised learning algorithm, thus realizing the nonlinear mapping from the input layer to the output layer. Simulations on a letter data set show that the network performs well.
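A minimal sketch of the two-stage idea described in this abstract, assuming a tiny 1-D SOM, a common Gaussian kernel width, and a least-squares fit for the supervised output stage (the paper's exact training rules are not reproduced here):

```python
# Unsupervised stage: a small SOM clusters the inputs; its codebook vectors
# become the Gaussian centers of an RBF layer. Supervised stage: only the
# output weights are fitted. All sizes and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                            # toy inputs
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(float)    # toy targets

# --- 1-D SOM with 10 units ---
centers = rng.normal(size=(10, 2))
for t in range(2000):
    x = X[rng.integers(len(X))]
    winner = np.argmin(((centers - x) ** 2).sum(axis=1))
    lr = 0.5 * (1 - t / 2000)                            # decaying rate
    for j in range(10):                                  # neighborhood update
        h = np.exp(-((j - winner) ** 2) / 2.0)
        centers[j] += lr * h * (x - centers[j])

# --- Gaussian RBF features, least-squares output weights ---
sigma = 1.0                                              # assumed common width
G = np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * sigma ** 2))
w, *_ = np.linalg.lstsq(G, y, rcond=None)
print("training MSE:", np.mean((G @ w - y) ** 2))
```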

2.
Median radial basis function neural network   Cited by 3 (0 self-citations, 3 by others)
Radial basis functions (RBFs) consist of a two-layer neural network, where each hidden unit implements a kernel function. Each kernel is associated with an activation region from the input space, and its output is fed to an output unit. To find the parameters of a neural network that embeds this structure, we consider two different statistical approaches. The first approach uses classical estimation in the learning stage and is based on the learning vector quantization algorithm and its second-order statistics extension. After presenting this approach, we introduce the median radial basis function (MRBF) algorithm, based on robust estimation of the hidden unit parameters. The proposed algorithm employs the marginal median for kernel location estimation and the median of the absolute deviations for the scale parameter estimation. A histogram-based fast implementation is provided for the MRBF algorithm. The theoretical performance of the two training algorithms is comparatively evaluated when estimating the network weights. The network is applied to pattern classification problems and to optical flow segmentation.
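The robust estimators named in this abstract are easy to illustrate. Below is a hedged sketch, on assumed toy data with injected outliers, of how the marginal median and the median of absolute deviations (MAD) locate and scale a kernel more robustly than classical moment estimates:

```python
# One cluster with gross outliers: the marginal median stays near the true
# kernel location while the mean is pulled away; MAD gives a robust scale.
import numpy as np

rng = np.random.default_rng(1)
cluster = rng.normal(loc=[2.0, -1.0], scale=0.5, size=(100, 2))
cluster[:5] = [50.0, 50.0]                    # inject gross outliers

mean_center = cluster.mean(axis=0)            # classical estimate, distorted
median_center = np.median(cluster, axis=0)    # marginal median, robust
mad_scale = np.median(np.abs(cluster - median_center), axis=0)

print("mean:  ", mean_center)                 # pulled toward the outliers
print("median:", median_center)               # close to the true (2, -1)
print("MAD scale:", mad_scale)
```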

3.
A learning algorithm combining weight initialization with activation-function adjustment   Cited by 2 (0 self-citations, 2 by others)
A neural network learning algorithm is proposed that combines a weight initialization method based on independent component analysis (ICA) with dynamic adjustment of the slopes of the sigmoid activation functions. The method uses ICA to extract salient feature information from the input data to initialize the input-to-hidden-layer weights, and initializes the hidden-to-output-layer weights so that the network outputs lie in the active region of the activation function. During learning, the slope of the activation function of each hidden and output unit is then adjusted automatically. Computer simulations on standard benchmark problems verify the effectiveness of the proposed method; the experimental results show that it effectively accelerates the training of multilayer feedforward neural networks.
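A hedged sketch of the initialization step this abstract describes, using scikit-learn's FastICA as a stand-in ICA implementation; the layer sizes, the variance-scaling heuristic for keeping pre-activations in the sigmoid's active region, and the data are all illustrative assumptions:

```python
# ICA extracts salient directions from the inputs; those directions seed
# the input-to-hidden weight matrix before training begins.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8))                 # toy input data

n_hidden = 6
ica = FastICA(n_components=n_hidden, random_state=0)
ica.fit(X)
W_in = ica.components_.copy()                 # (n_hidden, n_inputs) init

# keep hidden units in the sigmoid's active region by scaling each row so
# pre-activations have roughly unit variance (an assumed heuristic)
pre = X @ W_in.T
W_in /= pre.std(axis=0, keepdims=True).T
print("initial hidden weight matrix:", W_in.shape)
```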

4.

5.
A combined optimization algorithm for BP neural networks   Cited by 5 (1 self-citation, 4 by others)
To address the slow convergence of BP neural networks and their tendency to become trapped in local minima, an improved BP algorithm is proposed that combines adaptive learning-rate adjustment with dynamic adjustment of the sigmoid activation functions. The algorithm ties the learning rate to the error function and then automatically adjusts the slope of the activation function of each hidden and output unit. In simulation examples, the improved algorithm is compared with the standard BP algorithm, the momentum method, and the adaptive-learning-rate method to verify its effectiveness. The experimental results show that the jointly optimized BP algorithm effectively accelerates network convergence and has strong generalization ability.
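A minimal sketch of the two mechanisms this abstract combines, reduced to a single weight and a single sigmoid unit rather than a full BP network; the growth/decay factors for the error-linked learning rate and the slope-update rule are assumed constants, not the paper's:

```python
# The learning rate grows while the error falls and shrinks when it rises,
# and the sigmoid slope lam is adapted by gradient descent with the weight.
import numpy as np

def sigmoid(z, lam):
    # sigmoid with an adjustable slope parameter lam
    return 1.0 / (1.0 + np.exp(-lam * z))

w, lam = 3.0, 1.0           # weight and activation slope
lr, prev_err = 0.5, np.inf  # error-linked learning rate
x, target = 1.0, 0.2
for step in range(300):
    z = w * x
    out = sigmoid(z, lam)
    err = 0.5 * (out - target) ** 2
    lr = min(lr * 1.05, 2.0) if err < prev_err else lr * 0.7
    prev_err = err
    g = (out - target) * out * (1 - out)   # shared gradient factor
    w -= lr * g * lam * x                  # weight update
    lam -= lr * g * z                      # slope update
print(f"output {sigmoid(w * x, lam):.3f} vs target {target}")
```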

6.
A constructive theory of B-spline neural networks   Cited by 11 (0 self-citations, 11 by others)
The paper first discusses the properties of B-spline basis functions and, on that basis, uses a constructive method to prove theoretically that B-spline neural networks can approximate any continuous real function defined on a compact interval to arbitrary accuracy. Finally, a constructive algorithm is given which, subject to a given error requirement, builds a nearly minimal set of B-spline basis functions.
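A hedged sketch of the constructive scheme this abstract outlines, assuming degree-1 (hat) B-spline bases built with NumPy and doubling the basis count until an assumed uniform error tolerance is met on a compact interval:

```python
# Keep adding hat-function B-spline bases until a least-squares fit of a
# continuous target on [0, 2] meets the (assumed) max-error tolerance.
import numpy as np

def hat_basis(x, knots):
    # column j is the piecewise-linear hat centered at knots[j]
    return np.column_stack([np.interp(x, knots, np.eye(len(knots))[j])
                            for j in range(len(knots))])

f = lambda t: np.sin(3 * t) + 0.3 * t         # continuous target function
x = np.linspace(0.0, 2.0, 400)

n = 4
while True:
    knots = np.linspace(0.0, 2.0, n)
    B = hat_basis(x, knots)
    c, *_ = np.linalg.lstsq(B, f(x), rcond=None)
    err = np.max(np.abs(B @ c - f(x)))
    if err < 1e-2 or n > 256:                 # assumed tolerance 1e-2
        break
    n *= 2                                    # constructive growth step
print(f"{n} basis functions, max error {err:.4f}")
```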

7.
Recent advances in FPGA technology have permitted the implementation of neurocomputational models, making FPGAs an interesting alternative to standard PCs for speeding up the computations involved by exploiting their intrinsic parallelism. In this work, we analyse and compare the FPGA implementation of two neural network learning algorithms: the standard and well-known Back-Propagation algorithm, and C-Mantec, a constructive neural network algorithm that generates compact one-hidden-layer architectures with good predictive capabilities. One of the main differences between the two algorithms is that while Back-Propagation needs a predefined architecture, C-Mantec constructs its network while learning the input patterns. Several aspects of the FPGA implementation of both algorithms are analyzed, focusing on features such as the logic and memory resources needed, transfer function implementation, and computation time. The advantages and disadvantages of both methods with respect to their hardware implementations are discussed.

8.
According to conventional neural network theories, single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes are universal approximators when all the parameters of the networks are allowed to be adjustable. However, as observed in most neural network implementations, tuning all the parameters may make learning complicated and inefficient, and it may be difficult to train networks with nondifferentiable activation functions such as threshold networks. Unlike conventional neural network theories, this paper proves by an incremental constructive method that, in order to let SLFNs work as universal approximators, one may simply choose hidden nodes at random and then only adjust the output weights linking the hidden layer and the output layer. In such SLFN implementations, the activation functions for additive nodes can be any bounded nonconstant piecewise continuous functions g: R → R, and the activation functions for RBF nodes can be any integrable piecewise continuous functions g: R → R with ∫_R g(x) dx ≠ 0. The proposed incremental method is efficient not only for SLFNs with continuous (including nondifferentiable) activation functions but also for SLFNs with piecewise continuous (such as threshold) activation functions. Compared to other popular methods, such a network is fully automatic, and users need not intervene in the learning process by manually tuning control parameters.
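The central claim of this abstract, that hidden nodes may be chosen at random and only the output weights fitted, is straightforward to sketch. The sizes, the sigmoid choice, and the least-squares solve below are illustrative assumptions:

```python
# ELM-style SLFN: random, never-tuned hidden-node parameters; only the
# hidden-to-output weights beta are learned, here by least squares.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(300, 4))
y = np.sin(X.sum(axis=1))                      # toy regression target

n_hidden = 60
A = rng.normal(size=(4, n_hidden))             # random input weights
b = rng.normal(size=n_hidden)                  # random biases
H = 1.0 / (1.0 + np.exp(-(X @ A + b)))         # hidden-layer output matrix
beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # only these are learned
print("training RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))
```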

9.
To reduce redundant connections in neural networks and the unnecessary computational cost they incur, a quantum immune clonal algorithm is applied to the optimization of neural networks, optimizing the network structure by generating weights with a given sparsity. The algorithm effectively removes redundant connections and hidden-layer nodes while improving the network's learning efficiency, function approximation accuracy, and generalization ability. It has been applied to the outdoor relic security system of the Emperor Qinshihuang's Mausoleum Site Museum (秦始皇帝陵博物院); in practical tests, the algorithm improved the target classification probability and reduced the false alarm rate.
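The quantum immune clonal algorithm itself is too involved to sketch here, but the structural effect this abstract describes can be illustrated: once sparse weights exist, zero-weight connections and all-zero hidden nodes are dropped. The toy matrix and the thresholding stand-in for the evolved sparsity below are assumptions:

```python
# Pruning a network given sparse weights: count surviving connections and
# remove hidden nodes whose entire weight row is zero.
import numpy as np

rng = np.random.default_rng(7)
W = rng.normal(size=(6, 8))                    # hidden-node weight rows
W[np.abs(W) < 1.0] = 0.0                       # stand-in for evolved sparsity

keep = np.any(W != 0.0, axis=1)                # drop all-zero hidden nodes
W_pruned = W[keep]
print(f"connections kept: {np.count_nonzero(W)} of {W.size}, "
      f"hidden nodes kept: {keep.sum()} of {len(keep)}")
```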

10.
A polynomial-function recurrent neural network model and its applications   Cited by 2 (1 self-citation, 2 by others)
周永权 (Zhou Yongquan), 《计算机学报》 (Chinese Journal of Computers), 2003, 26(9): 1196-1200
Exploiting the fact that a recurrent neural network contains both feedforward and feedback paths, the activation functions of the hidden-layer neurons are set to a sequence of adjustable polynomial functions, and a new polynomial-function recurrent neural network model is proposed. The model not only has the characteristics of a conventional recurrent neural network but also offers stronger function approximation capability. For recursive computation problems, a learning algorithm for the polynomial-function recurrent neural network is proposed, and the model is applied to the approximate factorization of multivariate polynomials, where the learning algorithm shows clear advantages. Worked examples show that the algorithm is highly effective, converges quickly, achieves high accuracy, and is applicable to recursive computation problems. The proposed model and learning algorithm also provide important guidance for approximate symbolic computation in algebra.

11.

12.
13.
To address the slow convergence and low convergence accuracy of conventional neural networks, as well as their poor generalization when used for pattern recognition, a quantum wavelet neural network model is proposed that combines quantum neural networks with wavelet theory. In this model, the hidden-layer quantum neurons use a linear superposition of wavelet basis functions, termed a multilevel wavelet activation function, as their activation function; the hidden neurons can thus represent more states and magnitudes while the convergence accuracy and speed of the network are improved. A learning algorithm for the network is given, and an application to waveform recognition for breakout prediction in continuous casting verifies the effectiveness of the model and the learning algorithm.
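A hedged sketch of the "multilevel wavelet activation" this abstract describes: a single hidden unit's activation is a superposition of shifted copies of one wavelet basis function. The Mexican-hat wavelet, the shifts, and the equal weighting are assumptions:

```python
# One neuron's activation as an equal-weight superposition of shifted
# Mexican-hat wavelets, so the unit responds at several levels.
import numpy as np

def mexican_hat(t):
    return (1 - t ** 2) * np.exp(-t ** 2 / 2)

def multilevel_activation(x, shifts=(-2.0, 0.0, 2.0)):
    # assumed shift levels with equal weights
    return sum(mexican_hat(x - s) for s in shifts) / len(shifts)

x = np.linspace(-5, 5, 9)
print(np.round(multilevel_activation(x), 3))
```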

14.
The purpose of this paper is to propose a compound cosine function neural network with a continuous learning algorithm for the velocity and orientation-angle tracking control of a nonholonomic mobile robot with nonlinear disturbances. Two neural network (NN) controllers embedded in the closed-loop control system offer simple continuous learning and rapid convergence without requiring the dynamics information of the mobile robot, thereby realizing its adaptive control. The neuron function of the hidden layer in the three-layer feedforward network structure combines a cosine function with a unipolar sigmoid function. The developed neural network controllers have a simple algorithm and fast learning convergence because only the weights between the hidden-layer nodes and the output nodes are adjusted, while the weights between the input layer and the hidden layer are fixed at one, i.e., constant, and are not adjusted. The main advantages of this control system are therefore its real-time control capability and its robustness. Simulation experiments on the nonholonomic mobile robot with nonlinear disturbances, considered as dynamics uncertainty and external disturbances, show that the proposed NN control system has real-time control capability, better robustness, and higher control precision. The compound cosine function neural network provides a new way to solve tracking control problems for mobile robots.
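The hidden-neuron function this abstract builds on combines a cosine with a unipolar sigmoid; since the exact combination is not given in the abstract, the additive form below is an assumption for illustration:

```python
# Assumed compound neuron: cosine term plus unipolar sigmoid term.
import numpy as np

def compound_neuron(x):
    return np.cos(x) + 1.0 / (1.0 + np.exp(-x))   # assumed combination

x = np.linspace(-3, 3, 7)
print(np.round(compound_neuron(x), 3))
```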

15.
16.
Compared with radial basis function (RBF) neural networks, extreme learning machines (ELM) train faster and generalize better, and the affinity propagation (AP) clustering algorithm can determine the number of clusters automatically. This paper therefore proposes a multi-label learning model, ML-AP-RBF-RELM, that combines AP clustering, multi-label RBF (ML-RBF), and regularized ELM (RELM). In this model, the input layer performs its mapping with ML-RBF, and the AP clustering algorithm automatically determines the number of clusters for each label class, from which the number of hidden nodes is computed. The centers of the hidden nodes' RBF functions are then determined by K-means clustering using the per-label cluster counts. Finally, the hidden-to-output connection weights are solved quickly by RELM. Experiments show that ML-AP-RBF-RELM performs well.
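A hedged sketch of the pipeline this abstract combines, using scikit-learn's AffinityPropagation and KMeans, with a regularized least-squares solve standing in for the RELM step; the toy data, kernel width sigma, and regularization constant C are assumptions, and the multi-label bookkeeping is collapsed to a single multi-class case:

```python
# AP picks the cluster count, K-means gives RBF centers, and a regularized
# least-squares solve yields the hidden-to-output weights (RELM step).
import numpy as np
from sklearn.cluster import AffinityPropagation, KMeans

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 0.3, size=(40, 2)) for m in (-2, 0, 2)])
Y = np.repeat(np.eye(3), 40, axis=0)           # one-hot labels

k = len(AffinityPropagation(random_state=0).fit(X).cluster_centers_)
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_

sigma, C = 1.0, 100.0
H = np.exp(-((X[:, None] - centers[None]) ** 2).sum(-1) / (2 * sigma ** 2))
beta = np.linalg.solve(H.T @ H + np.eye(k) / C, H.T @ Y)   # RELM solve
acc = (np.argmax(H @ beta, 1) == np.argmax(Y, 1)).mean()
print(f"{k} hidden nodes, training accuracy {acc:.2f}")
```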

17.
In the proposed work, two types of artificial neural networks are constructed by exploiting the well-known advantages and valuable features of wavelets and of sigmoidal activation functions. Two neurons are derived by adding and by multiplying the outputs of a wavelet and a sigmoidal activation function. Used in a feed-forward single-hidden-layer network, these neurons yield the summation wavelet neural network (SWNN) and the multiplication wavelet neural network (MWNN). An algorithm is introduced for determining the structure of the proposed networks. The approximation properties of SWNN and MWNN are evaluated with different wavelet functions. Using the above networks in the consequent part of a neuro-fuzzy model yields the summation wavelet neuro-fuzzy (SWNF) and multiplication wavelet neuro-fuzzy (MWNF) models. Different types of wavelet functions are tested with the proposed networks and fuzzy models on four different dynamical examples. Convergence of the learning process is guaranteed by an adaptive learning rate and by a stability analysis using a Lyapunov function.
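The two neuron types this abstract derives are simple to sketch: the outputs of a wavelet and a sigmoid are either added (SWNN) or multiplied (MWNN). The Mexican-hat wavelet below is an assumed choice:

```python
# SWNN neuron = wavelet(x) + sigmoid(x); MWNN neuron = wavelet(x) * sigmoid(x).
import numpy as np

wavelet = lambda t: (1 - t ** 2) * np.exp(-t ** 2 / 2)   # assumed Mexican hat
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def swnn_neuron(x):
    return wavelet(x) + sigmoid(x)     # summation neuron

def mwnn_neuron(x):
    return wavelet(x) * sigmoid(x)     # multiplication neuron

x = np.linspace(-2, 2, 5)
print(np.round(swnn_neuron(x), 3), np.round(mwnn_neuron(x), 3))
```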

18.
For the modeling and simulation of real systems whose inputs and outputs are both time-varying functions or processes, a feedback process neural network model with time-varying inputs and outputs is proposed. The first hidden layer of the model performs spatially weighted aggregation and an activation operation on the time-varying signals from the input layer, and feeds its output back to the input layer while also passing it to the second hidden layer; the second hidden layer performs spatially weighted aggregation, temporal cumulative aggregation, and an activation operation on its time-varying inputs, and passes its output to the output layer. A corresponding learning algorithm is given, and an example verifies the effectiveness of the model and its learning algorithm.
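A rough sketch, under many assumptions, of the data flow this abstract describes: time-varying inputs are sampled on a grid, the first hidden layer applies spatially weighted aggregation plus an activation and feeds back to the input layer, and the second hidden layer adds a temporal aggregation before the output. The layer sizes, tanh activations, single feedback pass, and time-average aggregation are all illustrative choices, not the paper's exact model:

```python
# Forward pass of an assumed feedback process neural network on sampled
# time-varying input signals.
import numpy as np

rng = np.random.default_rng(6)
T = np.linspace(0.0, 1.0, 50)
U = np.vstack([np.sin(2 * np.pi * T), np.cos(2 * np.pi * T)])  # 2 input signals

W1 = rng.normal(size=(3, 2))              # input -> hidden-1 spatial weights
Wf = 0.1 * rng.normal(size=(2, 3))        # hidden-1 -> input feedback weights
W2 = rng.normal(size=(4, 3))              # hidden-1 -> hidden-2 spatial weights
w_out = rng.normal(size=4)                # hidden-2 -> output weights

H1 = np.tanh(W1 @ U)                      # spatial aggregation + activation
U_fb = U + Wf @ H1                        # one feedback pass to the input layer
H1 = np.tanh(W1 @ U_fb)                   # re-excite hidden layer 1
H2 = np.tanh((W2 @ H1).mean(axis=1))      # temporal aggregation (time average)
print("scalar output:", w_out @ H2)
```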

19.
A new learning algorithm is proposed for training a single-hidden-layer feedforward neural network. In each epoch, the connection weights are updated by simultaneous perturbation. Tunneling based on the perturbation technique is applied to escape local minima. The proposed technique is shown to give better convergence results on the selected problems, namely a neuro-controller, XOR, L-T character recognition, two spirals, a simple interaction function, a harmonic function, and a complicated interaction function.
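A hedged sketch of the simultaneous-perturbation update this abstract uses: every weight is perturbed at once by a random ±c vector, and two error evaluations drive an SPSA-style update. The quadratic stand-in for the network error and all constants are assumptions; the tunneling step for escaping local minima is not sketched:

```python
# SPSA-style simultaneous perturbation: one random +/-1 direction per step,
# two error evaluations, and an update of all weights at once.
import numpy as np

rng = np.random.default_rng(5)
error = lambda w: np.sum((w - 1.0) ** 2)       # stand-in for network error

w = rng.normal(size=10)
a, c = 0.1, 0.1                                # assumed gains
for epoch in range(200):
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    g = (error(w + c * delta) - error(w - c * delta)) / (2 * c) * delta
    w -= a * g                                 # simultaneous weight update
print("final error:", error(w))
```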

20.
In this paper, a constructive one-hidden-layer network is introduced in which each hidden unit employs for its activation a polynomial function that differs from those of the other units. Specifically, both structure-level and function-level adaptation methodologies are utilized in constructing the network. The function-level adaptation scheme ensures that the "growing" or constructive network has a different activation function for each neuron, so that the network may capture the underlying input-output map more effectively. The activation functions considered are orthonormal Hermite polynomials. Extensive simulations show that the proposed network yields improved performance compared to networks having identical sigmoidal activation functions.
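A hedged sketch of the function-level adaptation this abstract describes: each newly added hidden unit receives the next orthonormal Hermite polynomial (times the Gaussian weight, per the standard Hermite-function normalization) as its activation, so no two units share an activation function. Everything beyond that standard normalization is illustrative:

```python
# Orthonormal Hermite activations: unit n gets h_n(x) = H_n(x) e^{-x^2/2}
# / sqrt(2^n n! sqrt(pi)), evaluated via numpy's Hermite module.
import numpy as np
from math import factorial, pi, sqrt

def hermite_activation(n, x):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                             # select H_n
    Hn = np.polynomial.hermite.hermval(x, coeffs)
    norm = sqrt(2 ** n * factorial(n) * sqrt(pi))
    return Hn * np.exp(-x ** 2 / 2) / norm      # orthonormal Hermite function

x = np.linspace(-2, 2, 5)
for n in range(3):                              # activations of units 0..2
    print(n, np.round(hermite_activation(n, x), 3))
```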
