Similar Documents
20 similar documents found (search time: 26 ms)
1.
This paper builds a mathematical model for network performance evaluation using the BP neural network algorithm, taking the individual performance indicators as inputs and the overall network performance as the output. Based on the least-squares principle, gradient search is used to minimize the mean squared error between the network's actual outputs and the expected outputs. Experiments show that the model achieves good identification accuracy.
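To make the training loop concrete, here is a minimal NumPy sketch of a one-hidden-layer BP network trained by gradient descent on the MSE criterion described above; the data, network size, and learning rate are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Synthetic data: 5 performance indicators as inputs, one overall score as output.
rng = np.random.default_rng(0)
X = rng.random((100, 5))
y = (X @ np.array([0.3, 0.1, 0.25, 0.2, 0.15]))[:, None]   # hypothetical target score in [0, 1]

# One hidden layer of sigmoid units, trained by gradient descent on the MSE.
W1 = rng.normal(0.0, 0.5, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(5000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1.0 - out)     # backpropagated MSE gradients
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

mse = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2)
print("final training MSE:", float(mse))
```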

2.
孙鹤立  孙玉柱  张晓云 《计算机应用》2020,40(11):3101-3106
In research on event-based social networks (EBSNs), predicting social event participation from event descriptions is a difficult problem. Related work is very limited, and the difficulty stems mainly from the subjectivity of evaluating event descriptions and the limitations of language modeling algorithms. To address these issues, the concepts of successful events, similar events, and event similarity are first defined, and social data collected from the Meetup platform is extracted on the basis of these concepts; prediction methods based on lasso regression, Convolutional Neural Networks (CNN), and gated recurrent neural networks (GRNN) are then designed. In the experiments, part of the extracted data is used to train the three models and the remaining data is used for prediction. The results show that, compared with events lacking descriptions, events processed by the lasso regression model improve prediction accuracy by 2.35% to 3.8% across different classifiers, and events processed by the GRNN model improve it by 4.5% to 8.9%, while the results of the CNN model are unsatisfactory. This demonstrates that event descriptions can improve event participation prediction, and that the GRNN model achieves the highest prediction accuracy of the three.

3.
朱帮助 《计算机科学》2008,35(3):132-133
To address the shortcomings of existing neural network ensemble methods in input attributes, combination schemes, and ensemble form, a selective neural network ensemble model based on feature extraction, NsNNEIPCABag, is proposed. The model generates several training subsets with the Bagging algorithm; uses improved principal component analysis (IPCA) to extract principal components as inputs for training the individual networks; applies IPCA to select a linearly independent subset of the individual networks; and uses a neural network to combine the selected individual networks nonlinearly. To verify its effectiveness, the model is applied to time series forecasting, and the results show that the proposed method generalizes better than other popular ensemble methods.
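A rough sketch of the pipeline shape (not the published NsNNEIPCABag code): scikit-learn's ordinary PCA stands in for the improved IPCA, a simple correlation-based pruning stands in for the IPCA-based selection of linearly independent members, and a small MLP performs the nonlinear combination.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=300)

# 1) Bagging: each member gets a bootstrap subset and its own PCA feature extractor.
members = []
for m in range(10):
    idx = rng.integers(0, len(X), len(X))
    pca = PCA(n_components=5).fit(X[idx])                      # stand-in for IPCA
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=m)
    net.fit(pca.transform(X[idx]), y[idx])
    members.append((pca, net))

# 2) Prune members whose predictions are nearly collinear with an already-kept one
#    (a simplification of the paper's IPCA-based selection of independent members).
P = np.column_stack([net.predict(pca.transform(X)) for pca, net in members])
keep = []
for j in range(P.shape[1]):
    if all(abs(np.corrcoef(P[:, j], P[:, k])[0, 1]) < 0.98 for k in keep):
        keep.append(j)

# 3) Nonlinear combination: a small network maps the kept members' outputs to the target.
combiner = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(P[:, keep], y)
print("members kept:", keep,
      "train MSE:", float(np.mean((combiner.predict(P[:, keep]) - y) ** 2)))
```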

4.
Sparse structure learning for Bayesian networks has become a current research focus because it both simplifies the network structure and preserves the important information in the original network. This paper first discusses the need for sparse structure learning of Bayesian networks and the definition of Bayesian network sparsity, and on this basis surveys existing research approaches to sparse structure learning. It then reviews general Bayesian network structure learning methods and analyzes their problems in high-dimensional settings, observing that score-based methods are usually suitable for sparse structure learning; it therefore focuses on the objective functions and optimization algorithms used for sparse Bayesian network structure learning. Finally, some future research directions are discussed.

5.
Optimal ensemble construction via meta-evolutionary ensembles   (total citations: 1; self-citations: 0; citations by others: 1)
In this paper, we propose a meta-evolutionary approach to improve on the performance of individual classifiers. In the proposed system, individual classifiers evolve, competing to correctly classify test points, and are given extra rewards for getting difficult points right. Ensembles consisting of multiple classifiers also compete for member classifiers, and are rewarded based on their predictive performance. In this way we aim to build small-sized optimal ensembles rather than form large-sized ensembles of individually-optimized classifiers. Experimental results on 15 data sets suggest that our algorithms can generate ensembles that are more effective than single classifiers and traditional ensemble methods.

6.
金杉  金志刚 《计算机应用》2015,35(5):1499-1504
Fire loss prediction based on back-propagation (BP) neural networks and on classical probability theory and its derivatives suffers from complex system structure, reliance on unstable detection data, and a tendency to fall into local minima. To address this, a regional fire data inference and prediction algorithm based on an adaptive fuzzy Generalized Regression Neural Network (GRNN) is proposed. At the input layer, an improved fuzzy C-means clustering algorithm corrects the weights of the initial data, reducing the influence of noise and outliers and improving the approximation accuracy of the predictions. An adaptive function is introduced to optimize the GRNN, adjusting the spread and step size of the iterative convergence to find the global optimum, which alleviates premature convergence and improves search efficiency. Experimental results show that, working from confirmed fire loss data, the algorithm avoids the dependence on unstable detection data and exhibits good generalization and nonlinear approximation ability.
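For orientation, a GRNN is essentially Nadaraya-Watson kernel regression; the sketch below is a basic weighted GRNN predictor, with `sample_weight` as a placeholder for the cluster-derived weight correction described above (the improved fuzzy C-means step and the adaptive spread optimization are not reproduced).

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5, sample_weight=None):
    """Basic GRNN (Nadaraya-Watson kernel regression) with optional per-sample weights."""
    if sample_weight is None:
        sample_weight = np.ones(len(X_train))
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)   # squared distances
    k = np.exp(-d2 / (2.0 * sigma ** 2)) * sample_weight              # pattern-layer kernels
    return (k @ y_train) / np.clip(k.sum(axis=1), 1e-12, None)        # normalized weighted mean

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, (200, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=200)
X_query = np.linspace(-2, 2, 5)[:, None]
print(grnn_predict(X, y, X_query, sigma=0.3))
```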

7.
Convolutional Neural Networks have dominated the field of computer vision for the last ten years, exhibiting extremely powerful feature extraction capabilities and outstanding classification performance. The main strategy to prolong this trend in the state-of-the-art literature relies on further upscaling networks in size. However, costs increase rapidly while performance improvements may be marginal. Our main hypothesis is that adding additional sources of information can help to increase performance and that this approach is more cost-effective than building bigger networks, which involve higher training time, larger parametrisation space and higher computational resources requirements. In this paper, an ensemble method for accurate image classification is proposed, fusing automatically detected features through a Convolutional Neural Network and a set of manually defined statistical indicators. Through a combination of the predictions of a CNN and a secondary classifier trained on statistical features, a better classification performance can be achieved cheaply. We test five different CNN architectures and multiple learning algorithms in a diverse number of datasets to validate our proposal. According to the results, the inclusion of additional indicators and an ensemble classification approach help to increase the performance in all datasets. Both code and datasets are publicly available via GitHub at: https://github.com/jahuerta92/cnn-prob-ensemble.
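A minimal, hedged sketch of the fusion idea on a toy dataset (the authors' actual code is at the GitHub link above): a small MLP stands in for the CNN branch, a random forest is trained on a few hand-crafted per-image statistics, and the two branches' class probabilities are averaged.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Branch 1: learned features (a small MLP stands in here for the paper's CNN).
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X_tr, y_tr)

# Branch 2: a secondary classifier trained on hand-crafted statistical indicators per image.
def stats(X):
    return np.column_stack([X.mean(1), X.std(1), X.max(1), X.min(1), (X > 8).mean(1)])

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(stats(X_tr), y_tr)

# Fusion: average the class-probability vectors of the two branches.
proba = 0.5 * net.predict_proba(X_te) + 0.5 * rf.predict_proba(stats(X_te))
print("fused accuracy:", float((proba.argmax(axis=1) == y_te).mean()))
```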

8.
With the popularity of online games, players are paying increasing attention to game intelligence. This paper builds an automatic level-completion model based on the BP algorithm for artificial neural networks: different in-game environments feed the input neurons, and after continued learning in the hidden layer the model outputs the correct path. Experiments show that the model can solve game levels effectively, and it is compared with a genetic algorithm and a scenario-based method.

9.
Combining the Levenberg-Marquardt (L-M) algorithm with the filled function method, a hybrid global optimization algorithm, GOBP (Global Optimization BP), is proposed for training feedforward networks. The fast-converging L-M algorithm is first used to reach a local minimum; the filled function method then escapes it and reaches a lower local minimum, and repeating this process yields the global optimum. Experiments show that the algorithm converges quickly, avoids local convergence, and performs stably.
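The alternation of a fast local phase and an escape phase can be sketched as below. This is only an illustration of the loop structure, not the paper's GOBP algorithm: SciPy's MINPACK Levenberg-Marquardt least-squares solver serves as the local trainer of a tiny feedforward network, and the filled-function escape step is replaced by simple perturbed restarts.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (100, 1))
y = np.sin(3 * X[:, 0])

def residuals(w, X, y):
    # Tiny 1-4-1 feedforward network; w packs both weight layers and biases.
    W1, b1, W2, b2 = w[:4].reshape(1, 4), w[4:8], w[8:12].reshape(4, 1), w[12]
    h = np.tanh(X @ W1 + b1)
    return (h @ W2)[:, 0] + b2 - y

best_w, best_cost = None, np.inf
w = rng.normal(0, 0.5, 13)
for outer in range(5):
    # Fast local phase: Levenberg-Marquardt from the current start point.
    res = least_squares(residuals, w, args=(X, y), method="lm")
    if res.cost < best_cost:
        best_w, best_cost = res.x, res.cost
    # Escape phase: the paper uses a filled function here; this sketch simply
    # restarts from a perturbation of the best point found so far.
    w = best_w + rng.normal(0, 0.5, 13)

print("best 0.5*SSE after hybrid loop:", float(best_cost))
```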

10.
A novel genetic algorithm based on neural network basis functions   (total citations: 3; self-citations: 0; citations by others: 3)
尹志杰 《计算机仿真》2004,21(12):114-116
This paper proposes a novel genetic optimization method. Neuron basis functions described by a parametric model serve as the genes: the selection operator is derived from the correlation between each neuron's output sequence and the network training target, as well as from the correlations among the neurons' output sequences, while crossover and mutation operators are designed according to the characteristics of the different parameters. A parametric model of the basis functions yields the initial genome of the genetic algorithm, parameter genomes are then built from the initial genome, and appropriate crossover and mutation operators are applied to each parameter genome. The resulting algorithm distributes the output error more evenly, greatly improves the network's output accuracy, simplifies its structure, performs well in signal tracking and nonlinear system approximation, and improves the network's real-time learning ability.

11.
Neural network ensembles: evaluation of aggregation algorithms   (total citations: 1; self-citations: 0; citations by others: 1)
Ensembles of artificial neural networks show improved generalization capabilities that outperform those of single networks. However, for aggregation to be effective, the individual networks must be as accurate and diverse as possible. An important problem is, then, how to tune the aggregate members in order to have an optimal compromise between these two conflicting conditions. We present here an extensive evaluation of several algorithms for ensemble construction, including new proposals and comparing them with standard methods in the literature. We also discuss a potential problem with sequential aggregation algorithms: the non-frequent but damaging selection through their heuristics of particularly bad ensemble members. We introduce modified algorithms that cope with this problem by allowing individual weighting of aggregate members. Our algorithms and their weighted modifications are favorably tested against other methods in the literature, producing a sensible improvement in performance on most of the standard statistical databases used as benchmarks.

12.
In open environments, data streams are generated at high speed, are unbounded in volume, and exhibit concept drift. In data stream classification tasks, producing large amounts of training data by manual labeling is expensive and impractical. Data streams that contain few labeled samples, many unlabeled samples, and concept drift pose a great challenge for machine learning. However, existing research focuses mainly on supervised data stream classification, and semi-supervised classification of drifting data streams has not yet received sufficient attention....

13.
Surface and normal ensembles for surface reconstruction   (total citations: 1; self-citations: 0; citations by others: 1)
The majority of the existing techniques for surface reconstruction and the closely related problem of normal reconstruction are deterministic. Their main advantages are the speed and, given a reasonably good initial input, the high quality of the reconstructed surfaces. Nevertheless, their deterministic nature may hinder them from effectively handling incomplete data with noise and outliers. An ensemble is a statistical technique which can improve the performance of deterministic algorithms by putting them into a statistics based probabilistic setting. In this paper, we study the suitability of ensembles in normal and surface reconstruction. We experimented with a widely used normal reconstruction technique [Hoppe H, DeRose T, Duchamp T, McDonald J, Stuetzle W. Surface reconstruction from unorganized points. Computer Graphics 1992;71-8] and Multi-level Partitions of Unity implicits for surface reconstruction [Ohtake Y, Belyaev A, Alexa M, Turk G, Seidel H-P. Multi-level partition of unity implicits. ACM Transactions on Graphics 2003;22(3):463-70], showing that normal and surface ensembles can successfully be combined to handle noisy point sets.

14.
Ke  Minlong  Fernanda L.  Xin   《Neurocomputing》2009,72(13-15):2796
Negative correlation learning (NCL) is a successful approach to constructing neural network ensembles. In batch learning mode, NCL outperforms many other ensemble learning approaches. Recently, NCL has also been shown to be a potentially powerful approach to incremental learning, while the advantages of NCL have not yet been fully exploited. In this paper, we propose a selective NCL (SNCL) algorithm for incremental learning. Concretely, every time a new training data set is presented, the previously trained neural network ensemble is cloned. Then the cloned ensemble is trained on the new data set. After that, the new ensemble is combined with the previous ensemble and a selection process is applied to prune the whole ensemble to a fixed size. This paper is an extended version of our preliminary paper on SNCL. Compared to the previous work, this paper presents a deeper investigation into SNCL, considering different objective functions for the selection process and comparing SNCL to other NCL-based incremental learning algorithms on two more real world bioinformatics data sets. Experimental results demonstrate the advantage of SNCL. Further, comparisons between SNCL and other existing incremental learning algorithms, such as Learn++ and ARTMAP, are also presented.
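The incremental loop described above (clone, train on the new set, combine, prune to a fixed size) could be sketched as follows; this is a rough approximation in which the pruning objective is plain accuracy on the new batch rather than the negative-correlation-based objectives studied in the paper, and the data stream is synthetic.

```python
import copy
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
def make_batch(n=200):                      # stream of (X, y) batches
    X = rng.uniform(-2, 2, (n, 2))
    return X, np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

MAX_SIZE = 6
X0, y0 = make_batch()
ensemble = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000, random_state=i).fit(X0, y0)
            for i in range(3)]

for step in range(3):                       # each newly presented training set
    Xn, yn = make_batch()
    clones = [copy.deepcopy(m) for m in ensemble]
    for m in clones:                        # train the cloned ensemble on the new data
        for _ in range(50):
            m.partial_fit(Xn, yn)
    candidates = ensemble + clones          # combine the old and the cloned ensembles
    # Selection/pruning: keep the MAX_SIZE members with the lowest error on the new batch.
    errs = [np.mean((m.predict(Xn) - yn) ** 2) for m in candidates]
    ensemble = [candidates[i] for i in np.argsort(errs)[:MAX_SIZE]]

pred = np.mean([m.predict(Xn) for m in ensemble], axis=0)   # simple-average ensemble output
print("ensemble MSE on latest batch:", float(np.mean((pred - yn) ** 2)))
```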

15.
A neural network ensemble based on rough set reducts is proposed to decrease the computational complexity of conventional ensemble feature selection algorithms. First, a dynamic reduction technique combining a genetic algorithm with resampling is adopted to obtain reducts with good generalization ability. Second, multiple BP neural networks based on different reducts are built as base classifiers. Following the idea of selective ensembles, the neural network ensemble with the best generalization ability is found by search strategies. Finally, classification is performed by combining the predictions of the component networks through voting. The method has been verified in experiments on remote sensing image classification and five UCI datasets. Compared with conventional ensemble feature selection algorithms, it costs less time, has lower computational complexity, and achieves satisfactory classification accuracy.
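A hedged sketch of the overall shape: random feature subsets stand in for the rough-set reducts (the GA-plus-resampling dynamic reduction is not reproduced), base networks are selected on a held-out validation split, and the selected networks are combined by majority voting.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
X_fit, X_val, y_fit, y_val = train_test_split(X_tr, y_tr, test_size=0.3, random_state=0)

# Random feature subsets stand in for rough-set reducts; each trains one base network.
rng = np.random.default_rng(5)
subsets = [rng.choice(X.shape[1], size=6, replace=False) for _ in range(7)]
nets = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=i).fit(X_fit[:, s], y_fit)
        for i, s in enumerate(subsets)]

# Selective ensemble: keep base networks that reach at least the mean validation accuracy.
val_acc = np.array([net.score(X_val[:, s], y_val) for net, s in zip(nets, subsets)])
keep = [i for i, a in enumerate(val_acc) if a >= val_acc.mean()]

# Combine the selected networks' predictions by majority voting.
votes = np.array([nets[i].predict(X_te[:, subsets[i]]) for i in keep])
majority = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
print("kept members:", keep, "test accuracy:", float((majority == y_te).mean()))
```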

16.
张枭山  罗强 《计算机科学》2015,42(Z11):63-66
When faced with the imbalanced data classification problems that are widespread in practice, most traditional classification algorithms assume a balanced class distribution, so their results are biased toward the majority class and are unsatisfactory. To address this, an improved AdaBoost classification algorithm based on clustering-ensemble undersampling is proposed. The algorithm first performs clustering ensemble, then draws a proportion of majority-class samples from each cluster according to sample weights and combines them with all minority-class samples to form a balanced data set. Within the AdaBoost framework, misclassifications of the majority and minority classes receive different weight adjustments, and the base classifiers with better performance are selectively combined. Experimental results show that the algorithm has clear advantages for imbalanced data classification.
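A simplified sketch of the sampling-plus-boosting idea: k-means clustering of the majority class followed by proportional under-sampling from every cluster, then a standard AdaBoost on the rebalanced set. The paper's clustering-ensemble step, sample-weight-driven sampling, cost-sensitive weight adjustment, and selective combination of base classifiers are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Imbalanced two-class data (roughly 9:1).
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Cluster the majority class and under-sample a fixed fraction from every cluster,
# so the reduced majority set still covers all regions; keep every minority sample.
maj, mino = X_tr[y_tr == 0], X_tr[y_tr == 1]
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(maj)
rng = np.random.default_rng(0)
picked = np.concatenate([
    rng.choice(np.where(km.labels_ == c)[0],
               size=max(1, int(0.15 * (km.labels_ == c).sum())), replace=False)
    for c in range(5)
])
X_bal = np.vstack([maj[picked], mino])
y_bal = np.concatenate([np.zeros(len(picked)), np.ones(len(mino))])

clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_bal, y_bal)
minority_recall = ((clf.predict(X_te) == 1) & (y_te == 1)).sum() / (y_te == 1).sum()
print("recall on the minority class:", float(minority_recall))
```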

17.
秦益  杨波 《计算机应用研究》2005,22(11):110-113
SDSI name certificates are handled with the certificate graph used for role-based certificate systems. Based on this certificate graph, a distributed search algorithm for SDSI name certificates is presented and its soundness and completeness are proven, showing that when certificates are stored appropriately, the algorithm can search for and find the relevant certificates and form certificate chains. Compared with existing bottom-up certificate search algorithms, it better suits the distributed storage of SDSI name certificates. As a goal-directed search algorithm, it offers efficiency and flexibility that bottom-up algorithms lack, making it better suited to the Internet, where massive numbers of certificates are stored in a distributed fashion.

18.
Neural network parameter optimization based on an improved genetic algorithm   (total citations: 2; self-citations: 0; citations by others: 2)
To overcome the shortcomings of the standard genetic algorithm, this paper proposes an improved genetic algorithm that combines the standard genetic algorithm with the BP algorithm, retaining both the global search of the standard genetic algorithm and the accurate local search of the BP network. It is applied to training the neural network controller of a ship autopilot and achieves satisfactory results.

19.
To get a better prediction of costs, schedule, and the risks of a software project, it is necessary to have a more accurate prediction of its development effort. Among the main prediction techniques are those based on mathematical models, such as statistical regressions or machine learning (ML). Studies applying ML models to development effort prediction have mainly drawn their conclusions under the following weaknesses: (1) using an accuracy criterion which leads to asymmetry, (2) applying a validation method that causes conclusion instability by randomly selecting the samples for training and testing the models, (3) omitting the explanation of how the parameters for the neural networks were determined, (4) generating conclusions from models that were not trained and tested on mutually exclusive data sets, (5) omitting an analysis of the dependence, variance and normality of data for selecting the suitable statistical test for comparing the accuracies among models, and (6) reporting results without showing a statistically significant difference. In this study, these six issues are addressed when comparing the prediction accuracy of a radial Basis Function Neural Network (RBFNN) with that of a statistical regression (the model most frequently compared with ML models), of a feedforward multilayer perceptron (MLP, the most commonly used in the effort prediction of software projects), and of a general regression neural network (GRNN, an RBFNN variant). The hypothesis tested is the following: the accuracy of effort prediction for RBFNN is statistically better than the accuracy obtained from a simple linear regression (SLR), MLP and GRNN when adjusted function points data, obtained from software projects, is used as the independent variable. Samples obtained from the International Software Benchmarking Standards Group (ISBSG) Release 11 related to new and enhanced projects were used. The models were trained and tested using a leave-one-out cross-validation method. The criteria for evaluating the models were based on absolute residuals and a Friedman statistical test. The results showed that there was a statistically significant difference in the accuracy among the four models for new projects, but not for enhanced projects. Regarding new projects, the accuracy for RBFNN was better than for an SLR at the 99% confidence level, whereas the MLP and GRNN were better than an SLR at the 90% confidence level.
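As a rough sketch of the validation protocol (leave-one-out cross-validation, absolute residuals, Friedman test) on synthetic data: scikit-learn models stand in for the study's SLR and MLP, the RBFNN and GRNN are omitted (sklearn has no such regressors), and a naive mean predictor supplies the third related sample that the Friedman test requires.

```python
import numpy as np
from scipy.stats import friedmanchisquare
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic "adjusted function points -> effort" data; the study itself uses ISBSG Release 11.
rng = np.random.default_rng(6)
fp = rng.uniform(50, 1000, 60)[:, None]
effort = 8.0 * fp[:, 0] + rng.normal(0, 300, 60)

models = {
    "SLR": LinearRegression(),
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)),
}
abs_res = {name: [] for name in models}
abs_res["mean-baseline"] = []
for train, test in LeaveOneOut().split(fp):
    for name, model in models.items():
        model.fit(fp[train], effort[train])
        abs_res[name].append(abs(model.predict(fp[test])[0] - effort[test][0]))
    abs_res["mean-baseline"].append(abs(effort[train].mean() - effort[test][0]))

stat, p = friedmanchisquare(*abs_res.values())
print({k: round(float(np.mean(v)), 1) for k, v in abs_res.items()}, "Friedman p =", p)
```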

20.
Generalization is an important technique for protecting privacy in data dissemination. In the framework of generalization, ℓ-diversity is a strong notion of privacy. However, since existing ℓ-diversity measures are defined in terms of the most specific (rather than general) sensitive attribute (SA) values, algorithms based on these measures can have narrow eligible ranges for data that has a heavily skewed distribution of SA values and produce anonymous data that has a low utility. In this paper, we propose a new ℓ-diversity measure called the functional (τ, ℓ)-diversity, which extends ℓ-diversity by using a simple function to constrain frequencies of base SA values that are induced by general SA values. As a result, algorithms based on (τ, ℓ)-diversity may generalize SA values, thus are much less constrained by skew SA distributions. We show that (τ, ℓ)-diversity is more flexible and elaborate than existing ℓ-diversity measures. We present an efficient heuristic algorithm that uses a novel order of quasi-identifier (QI) values to achieve (τ, ℓ)-diversity. We compare our algorithm with two state-of-the-art algorithms that are based on existing ℓ-diversity measures. Our preliminary experimental results indicate that our algorithm not only provides a stronger privacy protection but also results in better utility of anonymous data.
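For orientation, the sketch below checks only plain distinct ℓ-diversity per QI equivalence class; the functional (τ, ℓ)-diversity proposed in the paper additionally constrains the frequencies of base SA values induced by generalized SA values, which is not reproduced here, and the toy table is illustrative.

```python
def distinct_l_diverse(groups, l):
    """Plain distinct l-diversity: every QI equivalence class must contain at
    least l distinct sensitive-attribute (SA) values."""
    return all(len(set(sa_values)) >= l for sa_values in groups)

# Toy anonymized table: each list holds the SA values of one QI equivalence class.
groups = [["flu", "cold", "hiv"], ["flu", "flu", "cold"], ["cancer", "hiv", "flu"]]
print(distinct_l_diverse(groups, l=2))   # True: every class has >= 2 distinct SA values
print(distinct_l_diverse(groups, l=3))   # False: the second class has only 2
```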
