Similar Documents
19 similar documents found.
1.
In conventional neural network ensembles the member networks are strongly correlated, which hurts the ensemble's generalization ability. To address this, a negative correlation learning algorithm is proposed for training the ensemble, increasing the diversity among member networks and thereby improving generalization. The NCL-trained ensemble is applied to tongue diagnosis in traditional Chinese medicine and simulated on the diagnosis of liver-disease syndromes. Experimental results show that the NCL-based ensemble improves generalization more effectively than either a single member network or a conventional ensemble, indicating that the negative-correlation ensemble approach is feasible and effective.
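Most of the entries below build on the same penalty term, generally credited to Liu and Yao's negative correlation learning. As a reference point (the notation is mine, not quoted from any abstract), the error of ensemble member i is typically written as

    E_i = \frac{1}{N}\sum_{n=1}^{N}\Big[ \tfrac{1}{2}\big(F_i(n)-d(n)\big)^2 + \lambda\, p_i(n) \Big],
    \qquad
    p_i(n) = \big(F_i(n)-\bar F(n)\big)\sum_{j\neq i}\big(F_j(n)-\bar F(n)\big),

where \bar F(n) is the ensemble mean output, d(n) the target, and \lambda \in [0,1] trades individual accuracy against diversity. Since the member deviations sum to zero, p_i(n) = -(F_i(n)-\bar F(n))^2, so minimizing E_i fits the target while pushing each member away from the ensemble mean.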

2.
Building on earlier results on neural network ensembles, a new negative correlation learning method is proposed. The method is easy to implement and computationally cheap, effectively removes the collinearity problem that arises during learning, and reduces the ensemble error. Evaluation on test cases shows that it effectively lowers the ensemble prediction error and achieves good ensemble performance.
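Neither abstract spells out the training loop; a minimal numpy sketch of joint NCL training, assuming linear members and the standard per-member gradient (F_i − d) − λ(F_i − F̄), could look like this (the toy data and all hyperparameters are illustrative only):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                             # toy inputs
    y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)   # toy targets

    M, lam, lr = 5, 0.5, 0.01                  # members, penalty strength, step size
    W = rng.normal(scale=0.1, size=(M, 5))     # one linear member per row

    for epoch in range(2000):
        F = X @ W.T                            # member outputs, shape (200, M)
        F_bar = F.mean(axis=1, keepdims=True)  # ensemble output
        # NCL gradient per member: (F_i - y) - lam * (F_i - F_bar)
        g = (F - y[:, None]) - lam * (F - F_bar)
        W -= lr * g.T @ X / len(X)

    print("ensemble MSE:", float(np.mean((F.mean(axis=1) - y) ** 2)))

All members are updated simultaneously against the current ensemble mean, which is what distinguishes NCL from training the members independently.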

3.
Win32 PE viruses are numerous and highly destructive. This paper proposes a virus detection method based on a neural network ensemble trained with negative correlation learning. Candidate feature strings are obtained by n-gram statistics over the virus code, and the conditional information entropy of each feature string is computed to select the features used as training samples. Experimental results show that the ensemble overcomes the weaknesses of traditional signature matching, which cannot recognize new viruses and is easily defeated by virus writers, achieving a high detection rate on Win32 PE viruses while keeping the false-positive rate low.
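The entropy criterion is not specified beyond the abstract; one plausible reading, sketched below, scores each byte n-gram by information gain, i.e. the drop in class entropy when the corpus is split on the feature's presence. The sample format and the top-k cut-off are assumptions:

    import math
    from collections import Counter

    def ngrams(data, n=2):
        return {data[i:i + n] for i in range(len(data) - n + 1)}

    def entropy(pos, neg):
        total = pos + neg
        h = 0.0
        for c in (pos, neg):
            if c:
                p = c / total
                h -= p * math.log2(p)
        return h

    def select_features(samples, labels, n=2, k=100):
        """samples: list of byte strings; labels: 1 = virus, 0 = benign."""
        pos_total = sum(labels)
        neg_total = len(labels) - pos_total
        base = entropy(pos_total, neg_total)          # class entropy before split
        pos_df, neg_df = Counter(), Counter()
        for data, label in zip(samples, labels):
            for g in ngrams(data, n):
                (pos_df if label else neg_df)[g] += 1
        scores = {}
        total = len(labels)
        for g in set(pos_df) | set(neg_df):
            p, q = pos_df[g], neg_df[g]               # samples containing g
            rp, rq = pos_total - p, neg_total - q     # samples without g
            cond = ((p + q) * entropy(p, q) + (rp + rq) * entropy(rp, rq)) / total
            scores[g] = base - cond                   # information gain of g
        return sorted(scores, key=scores.get, reverse=True)[:k]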

4.
This paper proposes a WiFi indoor localization model based on deep negative correlation learning. Under the negative-correlation constraint the individual learners acquire different representations, which effectively reduces overfitting and greatly improves generalization. The model applies negative correlation learning to both a denoising autoencoder and a regression predictor, and uses deep learning to adapt well to RSSI signals that drift with environment and time, improving localization performance over long time intervals. With negative correlation learning the initial mean localization error drops from 1.57 m to 0.77 m, and the mean error after 60 days is only 0.89 m, an increase of just 0.12 m, confirming that negative correlation learning weakens the effect of environmental change on localization.

5.
巩文科, 李心广, 赵洁. 《计算机工程》, 2007, 33(8): 152-153, 156
To address the high false-positive rate of current intrusion detection systems and their insensitivity to novel attack methods, an intrusion detection method based on a neural network ensemble is proposed. The ensemble is trained with negative correlation learning, with system calls encoded by a tf×idf scheme as input. Experimental results show that, compared with a single neural network, the ensemble compensates for the single network's weaknesses on the detection data, keeping the false-positive rate low while maintaining a high detection rate.
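As a rough illustration of the tf×idf encoding of system-call traces the abstract mentions (the trace format and call names are assumptions, not taken from the paper):

    import math
    from collections import Counter

    def tfidf_encode(traces, vocab=None):
        """traces: list of system-call sequences, e.g. [["open","read","close"], ...]."""
        vocab = vocab or sorted({c for t in traces for c in t})
        n = len(traces)
        df = Counter(c for t in traces for c in set(t))   # document frequency
        idf = {c: math.log(n / df[c]) for c in vocab if df[c]}
        vectors = []
        for t in traces:
            tf = Counter(t)
            vectors.append([tf[c] / len(t) * idf.get(c, 0.0) for c in vocab])
        return vectors, vocab

    vecs, vocab = tfidf_encode([["open", "read", "read", "close"],
                                ["open", "write", "close"]])

Each trace becomes a fixed-length vector over the call vocabulary, which is what the ensemble would consume as input.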

6.
Hierarchical learning for soccer robots based on artificial neural networks
This work studies the application of artificial neural networks to robot soccer. It presents a hierarchical learning model in which soccer robots use BP networks to learn both basic actions and behavior decisions, and discusses a number of improvements to the BP algorithm. Combining the BP network with a production system, a hybrid action selector is proposed, and experiments are carried out with the results reported.
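A hedged sketch of such a hybrid action selector, with hand-written production rules taking precedence and the learned network as fallback; the rule representation and the net.predict interface are assumptions, not the paper's design:

    def select_action(state, rules, net):
        # production system: first matching rule wins
        # example rule (assumed format): (lambda s: s["ball_dist"] < 0.3, "kick")
        for condition, action in rules:
            if condition(state):
                return action
        scores = net.predict(state)        # BP network fallback: learned policy,
        return max(scores, key=scores.get) # pick the highest-scoring action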

7.
A cooperative construction algorithm for heterogeneous neural network ensembles
A cooperative construction algorithm for heterogeneous neural network ensembles (HNNECC) is proposed. Evolutionary programming first evolves network topology and connection weights simultaneously to generate several heterogeneous, well-optimized networks, which are then combined. During ensemble construction, cooperation among the networks keeps them negatively correlated, raising the accuracy of individual members while increasing the diversity between them. An analysis based on statistical learning theory indicates good generalization performance. Experiments on four data sets show improvements of 17% to 85% over a single network, and the method also outperforms traditional fixed-structure ensembles such as Bagging.
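The paper's exact fitness is not given in the abstract; a sketch of a fitness that rewards accuracy while penalizing correlation with the rest of the population, reusing the NCL penalty above, might be (the weighting and the -(...) convention are assumptions):

    import numpy as np

    def fitness(preds_i, targets, preds_rest, lam=0.5):
        """preds_i: candidate's outputs, shape (n,); preds_rest: (k, n) outputs of the others."""
        mse = np.mean((preds_i - targets) ** 2)
        ens_mean = np.vstack([preds_rest, preds_i]).mean(axis=0)
        # NCL penalty p_i, averaged over samples (negative when the candidate diverges)
        corr_pen = np.mean((preds_i - ens_mean) * (preds_rest - ens_mean).sum(axis=0))
        return -(mse + lam * corr_pen)     # higher fitness = accurate and diverse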

8.
9.
An adaptive prediction method based on online learning of artificial neural networks
After analyzing and comparing traditional prediction methods, this paper studies prediction based on the nonlinear mapping of artificial neural networks. For a class of slowly developing faults, an adaptive prediction algorithm with single-sample online learning is proposed and applied to fault prediction and diagnosis of diesel engines.
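A minimal sketch of the single-sample online adaptation the abstract describes: each new observation triggers one gradient step, so the predictor tracks slowly drifting behaviour. A linear model stands in for the neural network, and the learning rate is illustrative:

    import numpy as np

    class OnlinePredictor:
        def __init__(self, dim, lr=0.05):
            self.w = np.zeros(dim)
            self.lr = lr

        def predict(self, x):
            return float(self.w @ x)

        def update(self, x, y):
            # one sample, one step: w += lr * residual * x
            self.w += self.lr * (y - self.w @ x) * x

    # usage: p = OnlinePredictor(dim=4); call p.update(x, y) after each new sample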

10.
A fuzzy learning method for artificial neural networks based on a fuzzy entropy criterion is proposed.
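The abstract gives no formula; for context, fuzzy entropy criteria of this kind usually build on the classical De Luca and Termini measure of a fuzzy set A with memberships \mu_A(x_i):

    H(A) = -\frac{1}{n}\sum_{i=1}^{n}\Big[ \mu_A(x_i)\ln\mu_A(x_i) + \big(1-\mu_A(x_i)\big)\ln\big(1-\mu_A(x_i)\big) \Big]

with the convention 0\ln 0 = 0; H is maximal at \mu = 0.5 (maximal fuzziness) and zero for crisp sets.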

11.
A rule learning algorithm based on neural network ensembles
Combining neural network ensembles with rule learning, a rule learning algorithm based on neural network ensembles is proposed. The algorithm uses the ensemble as the front end of rule learning: the ensemble generates the data set from which the rules are subsequently learned. Experiments on the UCI machine learning repository show that the algorithm produces rules with very strong generalization ability.
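A hedged sketch of the two-stage idea: the ensemble acts as an oracle that labels data, and a symbolic learner is fit to those labels to yield rules. scikit-learn stands in for both stages here; the original paper's learners and parameters are not specified in the abstract:

    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    def ensemble_to_rules(X_train, y_train, X_unlabeled):
        ensemble = BaggingClassifier(n_estimators=10).fit(X_train, y_train)
        pseudo_labels = ensemble.predict(X_unlabeled)   # ensemble as labeling oracle
        tree = DecisionTreeClassifier(max_depth=4).fit(X_unlabeled, pseudo_labels)
        return export_text(tree)                        # human-readable rules

Fitting the rule learner to ensemble-generated labels, rather than the raw training labels, is what lets the extracted rules inherit the ensemble's generalization.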

12.
A parallel learning method for neural network ensembles
This paper analyzes how the generalization error of the member networks, and the diversity among them, affect the generalization error of the ensemble, and proposes a parallel learning method for neural network ensembles. A parallel training procedure for the member networks is given that satisfies both each network's own accuracy requirement and its diversity requirement with respect to the other members, together with a parallel method for determining the members' combination weights. Experimental results show that the proposed member training and combination methods build effective ensemble systems.

13.
刘青, 周鹏. 《计算机工程》, 2005, 31(3): 189-191
DNA microarray technology makes it possible to observe the expression levels of thousands of genes simultaneously, and the analysis of such data has become a focus of bioinformatics research. Considering the high dimensionality, small sample size and nonlinearity of microarray gene expression data, a classification method for gene expression data is designed and implemented; experiments on the colon data set show improved generalization.

14.
孟蜀锴, 莫玉龙. 《计算机工程》, 2004, 30(2): 36-37, 63
A negative-image algorithm for gray-scale images based on cellular neural networks is proposed. Exploiting the high-speed parallelism of cellular neural networks, a single-layer cellular neural network template is designed for producing negatives of gray-scale images, providing an effective algorithm for applying cellular neural networks to image processing. Experiments confirm the algorithm's effectiveness for gray-scale image negatives.
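The operation such a template computes is plain gray-level inversion; as a point of reference, this is the target mapping, not the cellular-network template itself:

    import numpy as np

    def negative(img: np.ndarray) -> np.ndarray:
        """img: uint8 gray-scale image; returns its photographic negative."""
        return 255 - img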

15.
This letter presents a novel cooperative neural network ensemble learning method based on Negative Correlation learning. It enables easy integration of various network models and reduces communication bandwidth significantly for effective parallel speedup. Comparison with the best Negative Correlation learning method reported demonstrates comparable performance at significantly reduced communication overhead.

16.
Ke, Minlong, Fernanda L., Xin. 《Neurocomputing》, 2009, 72(13-15): 2796
Negative correlation learning (NCL) is a successful approach to constructing neural network ensembles. In batch learning mode, NCL outperforms many other ensemble learning approaches. Recently, NCL has also been shown to be a potentially powerful approach to incremental learning, while the advantages of NCL have not yet been fully exploited. In this paper, we propose a selective NCL (SNCL) algorithm for incremental learning. Concretely, every time a new training data set is presented, the previously trained neural network ensemble is cloned. Then the cloned ensemble is trained on the new data set. After that, the new ensemble is combined with the previous ensemble and a selection process is applied to prune the whole ensemble to a fixed size. This paper is an extended version of our preliminary paper on SNCL. Compared to the previous work, this paper presents a deeper investigation into SNCL, considering different objective functions for the selection process and comparing SNCL to other NCL-based incremental learning algorithms on two more real world bioinformatics data sets. Experimental results demonstrate the advantage of SNCL. Further, comparisons between SNCL and other existing incremental learning algorithms, such as Learn++ and ARTMAP, are also presented.
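A hedged sketch of one SNCL step as the abstract describes it; train_ncl, error_on, and the greedy pruning criterion are assumed interfaces, not the paper's exact procedure:

    import copy

    def sncl_step(ensemble, new_data, max_size, train_ncl, error_on):
        """ensemble: list of trained member networks."""
        cloned = copy.deepcopy(ensemble)    # clone the previous ensemble
        train_ncl(cloned, new_data)         # adapt the clone to the new batch
        pool = ensemble + cloned            # combine old and new members
        # selection: greedily keep the members with the lowest error on new_data
        pool.sort(key=lambda net: error_on(net, new_data))
        return pool[:max_size]              # prune back to a fixed size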

17.
Both theoretical and experimental studies have shown that combining accurate neural networks (NNs) in the ensemble with negative error correlation greatly improves their generalization abilities. Negative correlation learning (NCL) and mixture of experts (ME), two popular combining methods, each employ different special error functions for the simultaneous training of NNs to produce negatively correlated NNs. In this paper, we review the properties of the NCL and ME methods, discussing their advantages and disadvantages. Characterization of both methods showed that they have different but complementary features, so if a hybrid system can be designed to include features of both NCL and ME, it may be better than each of its basis approaches. In this study, two approaches are proposed to combine the features of both methods in order to solve the weaknesses of one method with the strength of the other method, i.e., gated-NCL (G-NCL) and mixture of negatively correlated experts (MNCE). In the first approach, G-NCL, a dynamic combiner of ME is used to combine the outputs of base experts in the NCL method. The suggested combiner method provides an efficient tool to evaluate and combine the NCL experts by the weights estimated dynamically from the inputs based on the different competences of each expert regarding different parts of the problem. In the second approach, MNCE, the capability of a control parameter for NCL is incorporated in the error function of ME, which enables the training algorithm of ME to efficiently adjust the measure of negative correlation between the experts. This control parameter can be regarded as a regularization term added to the error function of ME to establish better balance in bias–variance–covariance trade-offs and thus improve the generalization ability. The two proposed hybrid ensemble methods, G-NCL and MNCE, are compared with their constituent methods, ME and NCL, in solving several benchmark problems. The experimental results show that our proposed methods preserve the advantages and alleviate the disadvantages of their basis approaches, offering significantly improved performance over the original methods.

18.
BP neural networks are widely used in nonlinear systems today, but as supervised learners they require suitable empirical training data to operate, which is inconvenient or unrealistic for many systems. This paper therefore proposes a reinforcement-learning BP algorithm based on a neural network ensemble: a reinforcement learning framework gives the system the ability to learn on its own, while the network ensemble preprocesses the initial data and improves generalization. The approach has achieved good results in practical applications.

19.
Negative Correlation Learning (NCL) has been successfully applied to construct neural network ensembles. It encourages the neural networks that compose the ensemble to be different from each other and, at the same time, accurate. The difference among the neural networks that compose an ensemble is a desirable feature for incremental learning, since some of the networks may be able to adapt faster and better to new data than the others. NCL is therefore a potentially powerful approach to incremental learning. With this in mind, this paper presents an analysis of NCL, aiming at determining its strong and weak points for incremental learning. The analysis shows that it is possible to use NCL to overcome catastrophic forgetting, an important problem in incremental learning. However, when catastrophic forgetting is very low, no advantage is taken of using more than one neural network of the ensemble to learn new data, and the test error is high. When all the neural networks are used to learn new data, some of them can indeed adapt better than the others, but higher catastrophic forgetting results. It is therefore important to find a trade-off between overcoming catastrophic forgetting and using the entire ensemble to learn new data. The NCL results are comparable with those of approaches designed specifically for incremental learning. The study presented in this work thus reveals encouraging results for negative correlation in incremental learning, showing that NCL is a promising approach.
