Similar Literature
20 similar documents found (search time: 156 ms)
1.
In traditional neural network ensembles, the component networks are highly correlated with one another, which degrades the ensemble's generalization ability. To address this, a negative correlation learning algorithm is proposed for training the ensemble, increasing the diversity among the component networks and thereby improving generalization. The negative-correlation-trained ensemble is applied to tongue diagnosis in traditional Chinese medicine, with simulations on the diagnosis of liver disease syndromes. Experimental results show that the ensemble trained with negative correlation learning improves generalization more effectively than either a single component network or a conventionally trained ensemble, indicating that the negative-correlation ensemble approach is both feasible and effective.
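As a reference point for the negative correlation learning (NCL) idea used above, the sketch below computes the per-member error signal of the standard NCL cost; the penalty weight, member outputs, and target are illustrative values, not taken from the paper.

```python
# Minimal sketch of the NCL error signal for a regression ensemble.
# Per-member cost: E_i = 0.5*(f_i - y)^2 + lambda * p_i,
# with penalty p_i = (f_i - f_bar) * sum_{j != i} (f_j - f_bar).
import numpy as np

def ncl_error_signals(outputs, target, lambda_=0.5):
    """outputs: (M,) predictions of the M ensemble members for one sample.
    Returns dE_i/df_i, treating the other members' outputs as constants
    (the usual NCL convention), which gives (f_i - y) - lambda*(f_i - f_bar)."""
    f_bar = outputs.mean()
    return (outputs - target) - lambda_ * (outputs - f_bar)

outs = np.array([0.8, 1.2, 0.9])     # hypothetical member outputs
print(ncl_error_signals(outs, target=1.0))
```

A larger lambda_ pushes each member further away from the ensemble mean, which is how the diversity among sub-networks is increased.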

2.
巩文科  李心广  赵洁 《计算机工程》2007,33(8):152-153,156
To address the high false-positive rate and insensitivity to novel attack methods in current intrusion detection, an intrusion detection method based on a neural network ensemble is proposed. The ensemble is trained with negative correlation learning, and system calls encoded with a tf×idf scheme serve as the input. Experimental results show that, compared with a single neural network, the ensemble compensates for the weaknesses of single-network detection, maintaining a low false-positive rate while preserving a high detection rate.
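A minimal sketch of a tf×idf-style encoding of system-call traces of the kind described above; the vocabulary, traces, and smoothing are illustrative choices, not the paper's exact preprocessing.

```python
# Encode each system-call trace as a tf-idf vector usable as network input.
import numpy as np

def tf_idf(traces, vocab):
    """traces: list of system-call name lists; returns an (N, |vocab|) matrix."""
    n = len(traces)
    tf = np.zeros((n, len(vocab)))
    for i, tr in enumerate(traces):
        for call in tr:
            if call in vocab:
                tf[i, vocab[call]] += 1
        tf[i] /= max(len(tr), 1)              # term frequency within the trace
    df = (tf > 0).sum(axis=0)                 # number of traces containing each call
    idf = np.log((n + 1) / (df + 1)) + 1      # smoothed inverse document frequency
    return tf * idf

vocab = {"open": 0, "read": 1, "write": 2, "execve": 3}
traces = [["open", "read", "read"], ["execve", "write", "open"]]
print(tf_idf(traces, vocab))
```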

3.
Dynamical Analysis of the BP Learning Algorithm for Neural Networks
The relationship between the BP learning algorithm for neural networks and differential dynamical systems is studied. It is shown that, under certain conditions, the BP iteration is equivalent to the Euler method applied to the corresponding differential dynamical system, and that the two agree in the asymptotic behavior of their solutions. Existence and uniqueness theorems for the solutions of the BP learning algorithm and of the corresponding dynamical system are given, together with a stability theorem for the zero solution of the dynamical system. It is proved theoretically that, under certain conditions, neural network learning is asymptotically equivalent to the numerical solution of a differential dynamical system, so numerical methods for dynamical systems can be used to solve the learning problem. Finally, an example of training a BP network with the improved Euler method is presented.
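The correspondence stated above can be seen on a toy error surface: a BP (gradient-descent) update with learning rate eta is exactly an explicit Euler step of size h = eta on the gradient-flow ODE dw/dt = -grad E(w). The error function and constants below are illustrative.

```python
# BP weight update vs. explicit Euler step on dw/dt = -grad E(w), E(w) = (w - 1)^2.
def grad_E(w):
    return 2.0 * (w - 1.0)

def bp_step(w, eta):          # standard gradient-descent (BP) update
    return w - eta * grad_E(w)

def euler_step(w, h):         # explicit Euler step for the gradient-flow ODE
    return w + h * (-grad_E(w))

w_bp, w_euler = 5.0, 5.0
for _ in range(20):
    w_bp, w_euler = bp_step(w_bp, 0.1), euler_step(w_euler, 0.1)
print(w_bp, w_euler)          # identical trajectories whenever eta == h
```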

4.
An Improved BP Network Learning Algorithm
To overcome the slow convergence, low learning accuracy, and susceptibility to local minima of the original BP algorithm, an improved algorithm based on variable-learning-rate BP is proposed. It uses separate learning rates for the hidden and output layers and adopts cross-entropy as the performance function, improving learning accuracy and training speed; the implementation formulas of the improved algorithm are derived mathematically. The improved algorithm is applied to a parity-discrimination problem in simulation. Compared with other similar methods, it greatly reduces the number of training iterations, shortens training time, and improves training accuracy, verifying its effectiveness.
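A minimal sketch of the two ingredients named above, distinct learning rates for the hidden and output layers and a cross-entropy cost, shown on XOR (the smallest parity problem). The layer sizes, rates, and iteration count are illustrative choices, not the paper's derived formulas.

```python
# Two-layer BP network on XOR with per-layer learning rates and cross-entropy loss.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)            # parity of the two inputs

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
eta_hidden, eta_out = 0.5, 0.1                        # distinct per-layer rates

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    d_out = p - y                    # cross-entropy + sigmoid gives this simple delta
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= eta_out * h.T @ d_out;     b2 -= eta_out * d_out.sum(0)
    W1 -= eta_hidden * X.T @ d_hid;  b1 -= eta_hidden * d_hid.sum(0)

print(np.round(p.ravel(), 3))        # predictions should move toward [0, 1, 1, 0]
```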

5.
This work studies replacing the neural network learning algorithm with numerical methods for differential equations, namely linear multistep methods. It points out that, under certain conditions, the BP learning problem is asymptotically equivalent to solving a corresponding differential system, so numerical methods for differential dynamical systems can also be used for network learning. A Milne method and a combined BP-Milne algorithm, as well as a Hamming method and a combined BP-Hamming algorithm, are given for training neural networks. Experiments on three problems — a nine-point two-class pattern problem, random pattern recognition, and recognition of sedimentary microfacies patterns in petroleum geology — show that training neural networks with numerical methods for differential dynamical systems is feasible.
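A minimal sketch of the "replace plain Euler/BP with a linear multistep update" idea on the gradient-flow ODE dw/dt = -grad E(w). A two-step Adams-Bashforth scheme is used here as a simple stand-in for the Milne- and Hamming-style methods named above; the toy error surface and step size are illustrative.

```python
# Two-step Adams-Bashforth update replacing the one-step Euler/BP update.
def neg_grad(w):                      # right-hand side f(w) = -grad E(w), E(w) = (w - 1)^2
    return -2.0 * (w - 1.0)

h = 0.05
w_prev = 5.0
w_curr = w_prev + h * neg_grad(w_prev)            # one Euler step to start the multistep scheme
for _ in range(100):
    # w_{n+1} = w_n + h * (3/2 f(w_n) - 1/2 f(w_{n-1}))
    w_next = w_curr + h * (1.5 * neg_grad(w_curr) - 0.5 * neg_grad(w_prev))
    w_prev, w_curr = w_curr, w_next

print(round(w_curr, 6))                           # converges toward the minimum at w = 1
```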

6.
Speaker-Independent Speech Recognition Based on a Fast Neural Network Algorithm
An improved fast neural network algorithm for speech recognition is proposed: an error-segmented learning algorithm with dynamic, unequal step sizes. The step size is treated as a function of the error and of the node outputs, and each weight is adjusted dynamically with its own step size. The algorithm is applied to a speaker-independent speech recognition system based on a feedforward neural network model. Experiments show that it trains more than ten times faster than the traditional BP algorithm, and the resulting recognition network achieves a high recognition rate.
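A minimal sketch of an error-dependent, "unequal step" learning-rate rule of the kind described above: the step used for each update is chosen from the current error magnitude. The thresholds, rates, and one-parameter toy problem are illustrative, not the paper's segmentation.

```python
# Error-segmented step size: larger errors get larger steps, small errors get careful steps.
def step_size(err):
    e = abs(err)
    if e > 0.5:
        return 0.8           # large error: take large steps
    if e > 0.1:
        return 0.3
    return 0.05              # near convergence: small steps

w, target = 0.0, 2.0
for _ in range(50):
    err = w - target         # toy one-parameter "network output" error
    w -= step_size(err) * err
print(round(w, 4))           # approaches the target value 2.0
```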

7.
Research on an Improved BP Neural Network Based on the Expected Training Count of Samples
BP is one of the most widely used neural network algorithms, but the standard BP algorithm is prone to local minima and converges slowly. Many improvements have been proposed for these problems, such as variable step sizes and the addition of a momentum term. An improved BP algorithm based on the expected training count of samples is proposed here. Simulation experiments show that it clearly speeds up BP network learning; it is also simple and general, and can be combined with other methods to further accelerate convergence.
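For reference, the momentum term mentioned above as a classic BP improvement blends the previous weight change into the current one. The sketch below shows that update on a toy error surface (it illustrates the momentum technique only, not the paper's sample-expected-training-count method); the rates are illustrative.

```python
# Gradient descent with a momentum term on E(w) = (w - 3)^2.
def grad(w):
    return 2.0 * (w - 3.0)

w, velocity = 0.0, 0.0
eta, alpha = 0.1, 0.9                 # learning rate and momentum coefficient
for _ in range(100):
    velocity = alpha * velocity - eta * grad(w)   # momentum-smoothed weight change
    w += velocity
print(round(w, 4))                    # approaches the minimum at w = 3
```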

8.
Based on optimization theory, an improved quasi-Newton algorithm built on a new quasi-Newton equation is proposed for training BP neural networks. The improved algorithm uses a new family of Hessian-correction equations, giving it global convergence and local superlinear convergence. Combining the improved quasi-Newton algorithm with the training of BP network weights yields a new weight-training algorithm. Compared with traditional quasi-Newton algorithms for weight learning, networks trained with the improved algorithm converge noticeably faster; the algorithm effectively overcomes the slow convergence of BP networks and significantly improves both training speed and learning accuracy.
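A minimal sketch of quasi-Newton weight training in general: the network's weight vector is optimized with a standard BFGS routine instead of plain gradient-descent BP. SciPy's off-the-shelf BFGS is used as a stand-in; the abstract's algorithm relies on a modified secant (quasi-Newton) equation, which this sketch does not reproduce. The toy data and network shape are illustrative.

```python
# Quasi-Newton (BFGS) training of a tiny feedforward network's weight vector.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)     # toy binary targets

def unpack(theta):
    W1, b1 = theta[:15].reshape(3, 5), theta[15:20]
    W2, b2 = theta[20:25].reshape(5, 1), theta[25:26]
    return W1, b1, W2, b2

def loss(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return float(np.mean((p.ravel() - y) ** 2))             # network training MSE

theta0 = rng.normal(0, 0.5, size=26)
res = minimize(loss, theta0, method="BFGS")                  # quasi-Newton optimization
print(res.fun)                                               # final training MSE
```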

9.
蔡自兴  孙国荣  李枚毅 《计算机应用》2005,25(10):2387-2389
Multi-instance neural networks are networks designed for multi-instance learning problems, but because they contain non-differentiable functions, backpropagation training must rely on approximations, so their prediction accuracy is limited. To improve prediction accuracy, an improved genetic algorithm for optimizing the parameters of multi-instance neural networks is constructed, using a backpropagation-based local search operator, a crowding operation, and adaptive operator probabilities to speed up convergence and prevent premature convergence. Analysis and comparison of experimental results on widely used benchmark data sets confirm that the improved genetic algorithm clearly improves the prediction accuracy of multi-instance neural networks while converging faster than other algorithms.
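A minimal sketch of evolving network weight vectors with a genetic algorithm whose mutation probability adapts to the population's fitness; the crowding operator and the BP-based local search named above are not reproduced, and the toy data, network shape, and GA settings are illustrative.

```python
# Genetic algorithm over weight vectors of a tiny 2-4-1 network (elitist, adaptive mutation).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)          # toy labels

def fitness(w):                                    # classification accuracy of the encoded network
    W1, b1, W2 = w[:8].reshape(2, 4), w[8:12], w[12:16]
    p = np.tanh(X @ W1 + b1) @ W2
    return np.mean((p > 0) == y)

pop = rng.normal(0, 1, size=(20, 16))
for _ in range(50):
    fit = np.array([fitness(ind) for ind in pop])
    order = np.argsort(-fit)
    pop, fit = pop[order], fit[order]              # sort best-first (elitism)
    children = []
    for _ in range(10):                            # keep the 10 best, breed 10 more
        a, b = pop[rng.integers(0, 10, size=2)]
        mask = rng.random(16) < 0.5                # uniform crossover
        child = np.where(mask, a, b)
        p_mut = 0.05 + 0.3 * (1.0 - fit.mean())    # mutation rate adapts to population fitness
        child = child + (rng.random(16) < p_mut) * rng.normal(0, 0.5, 16)
        children.append(child)
    pop = np.vstack([pop[:10], children])
print(fitness(pop[0]))                             # accuracy of the best retained individual
```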

10.
Reinforcement learning is an important approach to adaptive problems and is widely used for learning control with continuous states, but it suffers from low efficiency and slow convergence. Building on backpropagation (BP) neural networks and combining them with eligibility traces, an algorithm is proposed that achieves multi-step updates in the reinforcement learning process. It solves the problem of back-propagating the output layer's local gradient to the hidden nodes, enabling fast updates of the hidden-layer weights, and an algorithmic description is provided. An improved residual method is also proposed, in which the weights of each layer are combined by a linearly optimized weighting during training, obtaining both the learning speed of gradient descent and the convergence properties of the residual-gradient method; applying it to the hidden-layer weight updates improves the convergence of the value function. The algorithm is verified and analyzed in a simulation of an inverted-pendulum balancing system. The results show that, after a short period of learning, the method successfully controls the pendulum and significantly improves learning efficiency.
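A minimal sketch of the multi-step update mechanism referred to above: TD(lambda), where eligibility traces accumulate the value function's gradients so each reward updates many earlier weights at once. A 5-state random walk with a linear (one-weight-per-state) value function stands in for the BP-network critic and inverted pendulum of the paper; all constants are illustrative.

```python
# TD(lambda) with eligibility traces on a small random-walk task.
import numpy as np

rng = np.random.default_rng(0)
n_states, alpha, gamma, lam = 5, 0.1, 1.0, 0.8
w = np.zeros(n_states)                        # value weights (one per state)

def features(s):
    x = np.zeros(n_states); x[s] = 1.0        # one-hot state features
    return x

for _ in range(200):                          # episodes
    s, trace = 2, np.zeros(n_states)          # start in the middle state
    while 0 <= s < n_states:
        s_next = s + (1 if rng.random() < 0.5 else -1)
        r = 1.0 if s_next == n_states else 0.0           # reward only at the right end
        v = w @ features(s)
        v_next = 0.0 if not 0 <= s_next < n_states else w @ features(s_next)
        delta = r + gamma * v_next - v                   # TD error
        trace = gamma * lam * trace + features(s)        # eligibility trace update
        w += alpha * delta * trace                       # multi-step credit assignment
        s = s_next

print(np.round(w, 2))                         # values rise toward the rewarding end
```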

11.
Ke  Minlong  Fernanda L.  Xin   《Neurocomputing》2009,72(13-15):2796
Negative correlation learning (NCL) is a successful approach to constructing neural network ensembles. In batch learning mode, NCL outperforms many other ensemble learning approaches. Recently, NCL has also been shown to be a potentially powerful approach to incremental learning, although its advantages have not yet been fully exploited. In this paper, we propose a selective NCL (SNCL) algorithm for incremental learning. Concretely, every time a new training data set is presented, the previously trained neural network ensemble is cloned. Then the cloned ensemble is trained on the new data set. After that, the new ensemble is combined with the previous ensemble and a selection process is applied to prune the whole ensemble to a fixed size. This paper is an extended version of our preliminary paper on SNCL. Compared to the previous work, this paper presents a deeper investigation into SNCL, considering different objective functions for the selection process and comparing SNCL to other NCL-based incremental learning algorithms on two more real-world bioinformatics data sets. Experimental results demonstrate the advantage of SNCL. Further, comparisons between SNCL and other existing incremental learning algorithms, such as Learn++ and ARTMAP, are also presented.
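A minimal skeleton of the clone / train / combine / prune cycle described in the abstract. The ensemble member (a ridge-regularized linear model), the placeholder training step, and the selection score (error on the new batch) are simplified stand-ins, not the paper's NCL training or objective functions.

```python
# Skeleton of a selective-NCL-style incremental update: clone, train, merge, prune.
import copy
import numpy as np

class TinyNet:
    """Stand-in ensemble member exposing fit/predict; the real members are neural nets."""
    def fit(self, X, y):
        A = X.T @ X + 1e-2 * np.eye(X.shape[1])
        self.w = np.linalg.solve(A, X.T @ y)
        return self
    def predict(self, X):
        return X @ self.w

def ncl_train(ensemble, X, y):
    """Placeholder for negative-correlation training of all members on (X, y)."""
    for net in ensemble:
        net.fit(X, y)
    return ensemble

def sncl_update(ensemble, X_new, y_new, max_size):
    cloned = copy.deepcopy(ensemble)                 # 1) clone the previous ensemble
    cloned = ncl_train(cloned, X_new, y_new)         # 2) train the clone on the new data
    merged = ensemble + cloned                       # 3) combine old and new members
    # 4) selection: prune back to max_size, keeping the lowest-error members
    errors = [np.mean((net.predict(X_new) - y_new) ** 2) for net in merged]
    keep = np.argsort(errors)[:max_size]
    return [merged[i] for i in keep]

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))
y1, y2 = X1 @ [1.0, 0.0, -1.0], X2 @ [0.5, 1.0, -0.5]    # two successive data batches
ens = ncl_train([TinyNet() for _ in range(3)], X1, y1)
ens = sncl_update(ens, X2, y2, max_size=3)               # incremental update on the new batch
print(len(ens))
```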

12.
Negative correlation learning (NCL) is a neural network ensemble learning algorithm that introduces a correlation penalty term to the cost function of each individual network so that each neural network minimizes its mean square error (MSE) together with the correlation of the ensemble. This paper analyzes NCL and reveals that the training of NCL (when $\lambda = 1$) corresponds to training the entire ensemble as a single learning machine that only minimizes the MSE without regularization. This analysis explains the reason why NCL is prone to overfitting the noise in the training set. This paper also demonstrates that tuning the correlation parameter $\lambda$ in NCL by cross validation cannot overcome the overfitting problem. The paper analyzes this problem and proposes the regularized negative correlation learning (RNCL) algorithm, which incorporates an additional regularization term for the whole ensemble. RNCL decomposes the ensemble's training objectives, including MSE and regularization, into a set of sub-objectives, and each sub-objective is implemented by an individual neural network. In this paper, we also provide a Bayesian interpretation for RNCL and an automatic algorithm to optimize regularization parameters based on Bayesian inference. The RNCL formulation is applicable to any nonlinear estimator minimizing the MSE. The experiments on synthetic as well as real-world data sets demonstrate that RNCL achieves better performance than NCL, especially when the noise level is nontrivial in the data set.
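A minimal numeric check of the claim above: with $\lambda = 1$, the per-member NCL gradient $(f_i - y) - (f_i - \bar f)$ collapses to $(\bar f - y)$, i.e. (up to the constant factor $M$) the gradient of the ensemble's own MSE, so the penalty no longer regularizes anything. The member outputs and target below are illustrative.

```python
# With lambda = 1, per-member NCL gradients equal M times the ensemble-MSE gradient.
import numpy as np

outputs = np.array([0.7, 1.3, 0.4])        # hypothetical member outputs
target, M = 1.0, len(outputs)
f_bar = outputs.mean()

ncl_grad = (outputs - target) - 1.0 * (outputs - f_bar)        # lambda = 1
ensemble_mse_grad = (f_bar - target) / M * np.ones(M)          # d/df_i of 0.5*(f_bar - y)^2

print(ncl_grad, M * ensemble_mse_grad)     # identical vectors
```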

13.
Negative Correlation Learning (NCL) has been successfully applied to construct neural network ensembles. It encourages the neural networks that compose the ensemble to be different from each other and, at the same time, accurate. The difference among the neural networks that compose an ensemble is a desirable feature for incremental learning, since some of the neural networks may adapt faster and better to new data than the others. So, NCL is a potentially powerful approach to incremental learning. With this in mind, this paper presents an analysis of NCL, aiming at determining its weak and strong points for incremental learning. The analysis shows that it is possible to use NCL to overcome catastrophic forgetting, an important problem related to incremental learning. However, when catastrophic forgetting is very low, the ensemble gains no advantage from using more than one of its neural networks to learn new data, and the test error is high. When all the neural networks are used to learn new data, some of them can indeed adapt better than the others, but a higher catastrophic forgetting is obtained. In this way, it is important to find a trade-off between overcoming catastrophic forgetting and using an entire ensemble to learn new data. The NCL results are comparable with other approaches designed specifically for incremental learning. Thus, the study presented in this work reveals encouraging results with negative correlation in incremental learning, showing that NCL is a promising approach to incremental learning.

14.
This paper presents a new algorithm for designing neural network ensembles for classification problems with noise. The idea behind this new algorithm is to encourage different individual networks in an ensemble to learn different parts or aspects of the training data so that the whole ensemble can learn the whole training data better. Negatively correlated neural networks are trained with a novel correlation penalty term in the error function to encourage such specialization. In our algorithm, individual networks are trained simultaneously rather than independently or sequentially. This provides an opportunity for different networks to interact with each other and to specialize. Experiments on two real-world problems demonstrate that the new algorithm can produce neural network ensembles with good generalization ability. This work was presented, in part, at the Third International Symposium on Artificial Life and Robotics, Oita, Japan, January 19–21, 1998.

15.
This paper presents a new cooperative ensemble learning system (CELS) for designing neural network ensembles. The idea behind CELS is to encourage different individual networks in an ensemble to learn different parts or aspects of the training data so that the ensemble can learn the whole training data better. In CELS, the individual networks are trained simultaneously rather than independently or sequentially. This provides an opportunity for the individual networks to interact with each other and to specialize. CELS can create negatively correlated neural networks using a correlation penalty term in the error function to encourage such specialization. This paper analyzes CELS in terms of the bias-variance-covariance tradeoff. CELS has also been tested on the Mackey-Glass time series prediction problem and the Australian credit card assessment problem. The experimental results show that CELS can produce neural network ensembles with good generalization ability.

16.
Combining accurate neural networks (NNs) in an ensemble with negative error correlation greatly improves the generalization ability. Mixture of experts (ME) is a popular combining method that employs a special error function for the simultaneous training of NN experts so as to produce negatively correlated experts. Although ME can produce negatively correlated experts, it does not include a control parameter, as the negative correlation learning (NCL) method does, to adjust this correlation explicitly. In this study, an approach is proposed to introduce this advantage of NCL into the training algorithm of ME, i.e., the mixture of negatively correlated experts (MNCE). In the proposed method, the capability of NCL's control parameter is incorporated into the error function of ME, which enables its training algorithm to establish a better balance in the bias-variance-covariance trade-off and thus improves the generalization ability. The proposed hybrid ensemble method, MNCE, is compared with its constituent methods, ME and NCL, in solving several benchmark problems. The experimental results show that the proposed ensemble method significantly improves performance over the original ensemble methods.

17.
Negative Correlation Learning (NCL) is a popular combining method that employs a special error function for the simultaneous training of base neural network (NN) experts. In this article, we propose an improved version of the NCL method in which the capability of a gating network, the combining part of the Mixture of Experts method, is used to combine the base NNs in the NCL ensemble. The special error function of the NCL method encourages each NN expert to learn different parts or aspects of the training data, so the local competence of the experts should be considered in the combining approach. The gating network provides a way to support this needed functionality for combining the NCL experts, so the proposed method is called Gated NCL. The improved ensemble method is compared with approaches previously used for combining NCL experts, including the winner-take-all (WTA) and average (AVG) combining techniques, in solving several classification problems from the UCI machine learning repository. The experimental results show that the proposed ensemble method significantly improves performance over the previous combining approaches.
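A minimal sketch of the gated combination described above, in contrast to simple averaging or winner-take-all: a softmax gate over a linear map of the input produces input-dependent weights for the already-trained experts. The gate parameters and expert outputs are illustrative, and the gate's own training is omitted.

```python
# Input-dependent gating of expert outputs via a softmax gate.
import numpy as np

def gated_combine(x, expert_outputs, Wg, bg):
    """expert_outputs: (M, n_classes) outputs of the M experts for input x."""
    scores = Wg @ x + bg                  # one gate score per expert
    g = np.exp(scores - scores.max())
    g /= g.sum()                          # softmax gate weights
    return g @ expert_outputs             # weighted combination of expert outputs

x = np.array([0.2, -1.0])
experts = np.array([[0.9, 0.1], [0.4, 0.6], [0.7, 0.3]])   # 3 experts, 2 classes
Wg, bg = np.full((3, 2), 0.1), np.zeros(3)
print(gated_combine(x, experts, Wg, bg))
```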

18.
Evolutionary ensembles with negative correlation learning
Based on negative correlation learning and evolutionary learning, this paper presents evolutionary ensembles with negative correlation learning (EENCL) to address the issues of automatically determining the number of individual neural networks (NNs) in an ensemble and of exploiting the interaction between individual NN design and combination. The idea of EENCL is to encourage different individual NNs in the ensemble to learn different parts or aspects of the training data so that the ensemble can learn the entire training data better. The cooperation and specialization among different individual NNs are considered during individual NN design. This provides an opportunity for different NNs to interact with each other and to specialize. Experiments on two real-world problems demonstrate that EENCL can produce NN ensembles with good generalization ability.

19.
The relationships among the generalization error of a neural network ensemble, the generalization errors of its individual networks, and the diversity among the individual networks are analyzed, and an active learning method for the individual networks is proposed. The individual networks are trained simultaneously and interactively, satisfying both the accuracy requirement and the diversity requirement. In addition, a selective ensemble method for the individual networks is given: a bias term is added to each individual network, increasing the number of candidate networks and reducing the ensemble's generalization error. Theoretical analysis and experimental results show that this training method and selective ensemble method can build effective neural network ensemble systems.

20.
Neural-Based Learning Classifier Systems
UCS is a supervised learning classifier system that was introduced in 2003 for classification in data mining tasks. The representation of a rule in UCS as a univariate classification rule is straightforward for a human to understand. However, the system may require a large number of rules to cover the input space. Artificial neural networks (NNs), on the other hand, normally provide a more compact representation. However, it is not a straightforward task to understand the network. In this paper, we propose a novel way to incorporate NNs into UCS. The approach offers a good compromise between compactness, expressiveness, and accuracy. By using a simple artificial NN as the classifier's action, we obtain a more compact population size, better generalization, and the same or better accuracy while maintaining a reasonable level of expressiveness. We also apply negative correlation learning (NCL) during the training of the resultant NN ensemble. NCL is shown to improve the generalization of the ensemble.
