Similar Literature
19 similar documents found
1.
A New Two-Layer Multi-Classifier Combination Algorithm   Cited by: 1 (self-citations: 1, others: 0)
Multi-classifier combination is an effective way to solve complex pattern recognition problems. This paper proposes a new two-layer multi-classifier combination algorithm: first, several diverse fusion schemes are constructed from the primary and secondary features of the objects to be classified; these fusion schemes are then combined in a final decision stage. Experimental results show that the algorithm achieves a high recognition rate on complex classification problems.
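The abstract gives no implementation details, so the sketch below only illustrates the two-layer structure it describes: several first-layer fusion schemes, each averaging the posteriors of its own classifiers, feed a second-layer majority vote. The mean rule and the voting rule here are illustrative assumptions, not the paper's operators.

```python
import numpy as np
from collections import Counter

def fuse_scheme(posteriors):
    """First layer: average the posterior vectors produced by one scheme's
    classifiers (mean rule) and take the arg-max class."""
    return int(np.argmax(np.mean(posteriors, axis=0)))

def two_layer_decision(scheme_posteriors):
    """Second layer: majority vote over the first-layer scheme decisions."""
    votes = [fuse_scheme(p) for p in scheme_posteriors]
    return Counter(votes).most_common(1)[0][0]

# Toy posteriors for one sample: 3 schemes x 2 classifiers x 3 classes.
scheme_posteriors = [
    [[0.6, 0.3, 0.1], [0.5, 0.4, 0.1]],   # scheme built on primary features
    [[0.2, 0.7, 0.1], [0.4, 0.5, 0.1]],   # scheme built on secondary features
    [[0.5, 0.2, 0.3], [0.6, 0.1, 0.3]],   # mixed-feature scheme
]
print(two_layer_decision(scheme_posteriors))  # -> 0
```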

2.
Application of AdaBoost-Based Combined Classifiers in Remote Sensing Image Classification   Cited by: 2 (self-citations: 0, others: 2)
Using AdaBoost, a classic ensemble algorithm, multiple weak classifiers (neural network classifiers) are combined, and a hybrid-discriminant multi-classifier synthesis rule is introduced to improve the classification accuracy of difficult classes and thereby the overall accuracy. Taking ASTER imagery of the Tianjin area as an example, the AdaBoost-based combined classification algorithm is described and applied to land-use classification of the region. The results show that the combined classifier effectively improves on a single classifier, raising the overall accuracy from 81.13% to 93.32%. The experiments indicate that AdaBoost-based combined classification is an effective new method for remote sensing image classification.
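As a reference point for the combination step described above, here is a minimal sketch of the standard discrete AdaBoost weighted vote, using decision stumps instead of the paper's neural-network weak learners; the hybrid discriminant synthesis rule is not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_train(X, y, rounds=10):
    """Discrete AdaBoost for labels in {-1, +1} with stump weak learners."""
    n = len(y)
    w = np.full(n, 1.0 / n)                # sample weights
    learners, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)
        if err >= 0.5:                     # weak learner no better than chance
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        w *= np.exp(alpha * (pred != y))   # up-weight misclassified samples
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(learners, alphas, X):
    # Weighted vote of the weak learners' {-1, +1} predictions.
    votes = sum(a * l.predict(X) for l, a in zip(learners, alphas))
    return np.sign(votes)

X = np.random.default_rng(0).normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
learners, alphas = adaboost_train(X, y)
print((adaboost_predict(learners, alphas, X) == y).mean())
```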

3.
This paper proposes a new image texture segmentation method based on multi-feature, multi-classifier fusion. The method combines the high discriminability of Log-Gabor filtering for regular textures with the stability of the DCT in texture recognition: the two sets of filter features are each clustered with fuzzy c-means to obtain fuzzy membership matrices, and a multi-class support vector machine is introduced to fuse these high-dimensional, strongly nonlinear soft classification results. Experiments show that, compared with traditional single-feature, single-classifier methods, this multi-feature, multi-classifier fusion method offers higher accuracy and better robustness to interference.

4.
Existing multi-classifier systems use fixed combination operators and therefore adapt poorly. This work introduces the flexibility ideas of universal logic into multi-classifier systems and builds a universal combination rule from the universal combination operation model. The rule's parameters are estimated with a genetic algorithm, which makes it well suited to parallel-structured multi-classifier systems. Classification experiments on time-series data sets show that the universal combination rule outperforms fixed combination rules such as the product, mean, median, max, min, and voting rules.
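The fixed baseline rules listed in the abstract are standard and can be sketched directly; only the universal (parametric) combination rule itself, whose form the abstract does not give, is omitted.

```python
import numpy as np

def combine(P, rule="mean"):
    """Apply a fixed combination rule to P, an (n_classifiers x n_classes)
    matrix of posterior estimates for one sample, and return the decision."""
    rules = {
        "product": P.prod(axis=0),
        "mean":    P.mean(axis=0),
        "median":  np.median(P, axis=0),
        "max":     P.max(axis=0),
        "min":     P.min(axis=0),
        # majority vote: count each classifier's arg-max decision
        "vote":    np.bincount(P.argmax(axis=1), minlength=P.shape[1]),
    }
    return int(np.argmax(rules[rule]))

P = np.array([[0.7, 0.2, 0.1],
              [0.4, 0.5, 0.1],
              [0.6, 0.3, 0.1]])
for r in ("product", "mean", "median", "max", "min", "vote"):
    print(r, combine(P, r))
```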

5.
杜晓旭  钱沄涛 《计算机工程》2005,31(22):164-166
In many applications, combining multiple classifiers reduces the classification error rate. Based on this idea, this paper proposes a new face recognition algorithm, the boosted probabilistic reasoning model. The classification task is divided among multiple sub-classifiers, each focusing on samples that are hard to classify, and these sub-classifiers are then combined into a single strong classifier. Experimental results show that the algorithm's recognition rate improves on the original probabilistic reasoning model by 1.8%.

6.
3D Face Age Recognition Based on a Multi-Classifier Fusion Algorithm   Cited by: 2 (self-citations: 0, others: 2)
To improve the accuracy of age estimation for probe face images in face recognition, a 3D face age recognition algorithm based on multi-classifier fusion is proposed. First, facial texture information is used to map the 2D image onto a standard 3D model. Building on Bayesian decision theory, the theoretical framework of Kittler's multi-classifier fusion algorithm and its combination rules are studied, discussed, and improved in detail; the improved combination rules are then applied to fuse several individual recognition classifiers in order to classify target faces of unknown age and estimate their age. Experimental results show that the algorithm estimates the age of target faces effectively and reduces the estimation error.

7.
梁小寒  陈慧萍 《计算机工程与设计》2011,32(4):1319-1321,1325
To achieve higher classification accuracy and efficiency, a new class-based associative classification algorithm, CACA, is proposed. The method uses a strategy-based notion of class to prune the search space of frequent patterns; it designs an ordered rule tree (OR-Tree) to store rules and their information and redefines a compact set so that the constructed classifier is compact and unique; and it synchronizes rule generation with classifier construction to shrink the rule mining space and speed up rule generation. Experimental results show that CACA achieves higher accuracy and efficiency among associative classification methods.

8.
To handle communication signals that are non-stationary and vary widely in signal-to-noise ratio (SNR), the cyclostationarity of modulated signals is exploited to extract five feature parameters that are insensitive to SNR and modulation parameters but sensitive to modulation type. To improve classification performance, a combined classifier structure built from several different neural networks is designed, with a fusion rule based on weighted voting over the output vectors. Simulations show that at low SNR the combined neural network classifier achieves a higher recognition rate than a single neural network classifier.

9.
A Combination Model for Multiple Text Classifiers Based on Boosting and Belief Functions   Cited by: 2 (self-citations: 0, others: 2)
Text classification has been studied extensively, producing many results: a variety of classifiers have been designed and quite high accuracies achieved. However, single-classifier text classification has drawbacks, such as the model's sensitivity to the training samples, and a single classifier's accuracy is hard to improve much further. Using multiple classifiers to raise accuracy is therefore a very active research area. This paper proposes combining multiple text classifiers using belief function theory, which has developed in recent years on top of traditional probability and statistics. Concretely, belief functions are used to synthesize the individual classification results into a final decision. Experiments show that belief-function-based information synthesis is more principled than existing methods and also improves accuracy.
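The abstract does not specify the belief structure used, so the sketch below shows only the core operation, Dempster's rule of combination, for mass functions whose focal elements are the singleton classes plus the whole frame (an "ignorance" term). How the paper maps classifier outputs to masses is not reproduced.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Combine two mass vectors over mutually exclusive classes.
    The last entry of each vector is the mass on the whole frame (ignorance)."""
    k = len(m1) - 1                       # number of singleton classes
    m = np.zeros(k + 1)
    for i in range(k):
        # agreement on class i, or one source committing while the other is ignorant
        m[i] = m1[i] * m2[i] + m1[i] * m2[k] + m1[k] * m2[i]
    m[k] = m1[k] * m2[k]                  # both sources ignorant
    conflict = 1.0 - m.sum()              # mass assigned to contradictory pairs
    return m / (1.0 - conflict)           # Dempster normalization

# Two classifiers' masses over 3 classes plus an "ignorance" term.
m_a = np.array([0.6, 0.2, 0.1, 0.1])
m_b = np.array([0.5, 0.1, 0.2, 0.2])
print(dempster_combine(m_a, m_b))
```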

10.
k-nearest-neighbor (kNN) classification is a simple and effective nonparametric algorithm, but it has drawbacks: its parameters must be set manually; because no explicit classification model is built, it requires large storage and classifies slowly; and it is vulnerable to the curse of dimensionality. To address these drawbacks, an efficient new nearest-neighbor method is proposed and two new nearest-neighbor classifiers are constructed. The new method uses an optimized set of cluster prototypes produced by K-means clustering as the classification model, reducing storage while improving classification efficiency; three class-overlap analysis strategies and a fuzzy benchmark measure are introduced to mitigate the effects of the curse of dimensionality. On top of this model-learning method, a new kNN classifier and a new classifier combining naive Bayes are proposed, and all parameters involved can be determined automatically. Experiments on synthetic and real data sets show that the new classifiers offer good classification efficiency and accuracy.
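A minimal sketch of the prototype idea alone, assuming per-class K-means centroids and a nearest-prototype decision; the paper's class-overlap analysis strategies and fuzzy measure are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_prototypes(X, y, per_class=3):
    """Replace the training set with per-class K-means centroids."""
    protos, labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=per_class, n_init=10, random_state=0)
        km.fit(X[y == c])
        protos.append(km.cluster_centers_)
        labels += [c] * per_class
    return np.vstack(protos), np.array(labels)

def predict(protos, labels, X):
    # Nearest-prototype rule: the label of the closest centroid.
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return labels[d.argmin(axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
protos, labels = build_prototypes(X, y)
print((predict(protos, labels, X) == y).mean())
```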

11.
Classifier selection is an effective way to design a multi-classifier system: from a given set of candidate classifiers, it picks the subset whose ensemble performance is best. Most existing classifier selection methods use random search driven by ensemble accuracy, but the enormous search complexity limits their use in larger systems. This paper proposes a new selection criterion, IWCECR, and a heuristic search algorithm based on it. In handwritten digit recognition experiments selecting subsets from 20 candidate classifiers, the method shows high search efficiency, and its subset ensemble performance is second only to exhaustive search.
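IWCECR is not defined in the abstract, so the sketch below substitutes plain validation accuracy of a plurality-vote ensemble as the selection criterion; only the greedy forward-search skeleton is meant to illustrate this kind of heuristic.

```python
import numpy as np

def vote_accuracy(preds, y):
    """Accuracy of the plurality vote of the selected classifiers' predictions."""
    n_classes = int(max(preds.max(), y.max())) + 1
    counts = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return (counts.argmax(axis=0) == y).mean()

def greedy_select(all_preds, y, max_size=5):
    """Forward selection: repeatedly add the candidate that most improves
    the ensemble's validation accuracy; stop when nothing improves it."""
    selected, best_acc = [], 0.0
    candidates = list(range(len(all_preds)))
    while candidates and len(selected) < max_size:
        scored = [(vote_accuracy(all_preds[selected + [c]], y), c)
                  for c in candidates]
        acc, best = max(scored)
        if acc <= best_acc:
            break
        selected.append(best)
        candidates.remove(best)
        best_acc = acc
    return selected, best_acc

# Toy setup: 20 candidate classifiers' predictions on 100 validation samples,
# each roughly 70% accurate.
rng = np.random.default_rng(1)
y = rng.integers(0, 10, 100)
all_preds = np.array([np.where(rng.random(100) < 0.7, y,
                               rng.integers(0, 10, 100)) for _ in range(20)])
print(greedy_select(all_preds, y))
```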

12.
Reducing SVM classification time using multiple mirror classifiers   Cited by: 3 (self-citations: 0, others: 3)
We propose an approach that uses mirror point pairs and a multiple classifier system to reduce the classification time of a support vector machine (SVM). Decisions made with multiple simple classifiers formed from mirror pairs are integrated to approximate the classification rule of a single SVM. A coarse-to-fine approach is developed for selecting a given number of member classifiers: a clustering method, derived from the similarities between classifiers, performs the coarse selection, and a greedy strategy then performs the fine selection. The selected member classifiers are further refined by finding a weighted combination with a perceptron. Experimental results show that our approach can successfully speed up SVM decisions while maintaining comparable classification accuracy.

13.
A multi-classifier ensemble is applied to land-cover classification of multispectral remote sensing data from the "Beijing-1" small satellite. First, a set of classifiers is built: minimum distance, maximum likelihood, support vector machine (SVM), BP neural network, RBF neural network, and decision tree classifiers each perform land-cover classification. Then ensemble methods such as Bagging, Boosting, voting, evidence theory, and the fuzzy integral are used to produce a final classification that synthesizes the different classifiers' outputs. Experiments show that the multi-classifier ensemble effectively improves the accuracy of "Beijing-1" land-cover classification and has broad application prospects.

14.
In this paper we investigate the combination of four machine learning methods for text categorization using Dempster's rule of combination. These methods include Support Vector Machine (SVM), k-Nearest Neighbor (kNN), a kNN model-based approach (kNNM), and Rocchio. We first present a general representation of the outputs of different classifiers, modeling each output as a piece of evidence using a novel evidence structure called the focal element triplet. We then investigate an effective method for combining pieces of evidence derived from classifiers generated by 10-fold cross-validation. Finally, we evaluate our methods on the 20-newsgroup and Reuters-21578 benchmark data sets and compare them with majority voting for combining multiple classifiers, as well as with previously published results. Our experimental results show that the best combined classifier improves on the individual classifiers and that Dempster's rule of combination outperforms majority voting in combining multiple classifiers.

15.
In classifier combination, the relative values of the beliefs assigned to different hypotheses matter more than accurate estimation of the combined belief function representing the joint observation space. Because of this, the independence requirement in Dempster's rule should be examined from the classifier combination point of view. This study investigates whether there is a set of dependent classifiers that provides better combined accuracy than independent classifiers when Dempster's rule of combination is used. The analysis, carried out for three different representations of statistical evidence, shows that combining dependent classifiers using Dempster's rule may provide much better combined accuracies than combining independent classifiers.

16.
The ensembling of classifiers tends to improve predictive accuracy. To obtain an ensemble of N classifiers, one typically needs to run N learning processes. In this paper we introduce and explore Model Jittering Ensembling, where one single model is perturbed to obtain variants that can be used as an ensemble. Our base classifiers are sets of classification association rules. The two jittering ensembling methods we propose are Iterative Reordering Ensembling (IRE) and Post Bagging (PB). Both methods start by learning one rule set over a single run and then produce multiple rule sets without relearning. Empirical results on 36 data sets are positive and show that both strategies tend to reduce error with respect to the single-model association rule classifier. A bias-variance analysis reveals that while both IRE and PB reduce the variance component of the error, IRE is particularly effective in reducing the bias component. We show that Model Jittering Ensembling can provide a very good speed-up with respect to multiple-model learning ensembling. We also compare Model Jittering with various state-of-the-art classifiers in terms of predictive accuracy and computational efficiency.

17.
《Knowledge》2006,19(6):438-444
One major goal of data mining is to understand data. Rule-based methods are better than other methods at making mining results comprehensible. However, current rule-based classifiers use a small number of rules and a default prediction to build a concise predictive model, which reduces their explanatory ability. In this paper, we propose using multiple and negative target rules to improve the explanatory ability of rule-based classifiers. We show experimentally that this understandability does not come at the cost of accuracy.

18.
An ensemble of multiple classifiers is widely considered an effective technique for improving the accuracy and stability of a single classifier. This paper proposes a framework of sparse ensembles and presents new linear weighted combination methods for them. A sparse ensemble combines the outputs of multiple classifiers using a sparse weight vector. When the classifiers provide continuous outputs, solving for the sparse weight vector can be formulated as a linear programming (LP) problem that exploits the hinge loss and/or the 1-norm regularization, both of which are sparsity-inducing techniques in machine learning. Only classifiers with nonzero weight coefficients take part in the ensemble. In these LP-based methods, the ensemble training error is minimized while the weight vector is controlled, which can be thought of as implementing the structural risk minimization principle and naturally explains the good performance of these methods. Promising experimental results on UCI data sets and radar high-resolution range profile data are presented.
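Under the simplifying assumption of nonnegative weights (so the 1-norm is linear), the LP described above can be sketched with scipy.optimize.linprog; the variable vector stacks the classifier weights w and the hinge slacks xi. This is a construction consistent with the abstract, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_ensemble_weights(F, y, lam=0.1):
    """F: (n_samples, n_classifiers) continuous outputs; y: labels in {-1, +1}.
    Minimize mean hinge loss + lam * ||w||_1  s.t.  y_i * F_i.w >= 1 - xi_i."""
    n, m = F.shape
    # Variables: [w_1..w_m, xi_1..xi_n]; objective lam*sum(w) + (1/n)*sum(xi).
    c = np.concatenate([np.full(m, lam), np.full(n, 1.0 / n)])
    # Hinge constraints rewritten as  -y_i * F_i.w - xi_i <= -1.
    A = np.hstack([-(y[:, None] * F), -np.eye(n)])
    b = -np.ones(n)
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (m + n))
    w = res.x[:m]
    w[w < 1e-8] = 0.0   # only classifiers with nonzero weight join the ensemble
    return w

rng = np.random.default_rng(0)
y = rng.choice([-1, 1], 200)
# Five classifier outputs: two informative, three pure noise.
F = np.column_stack([y * 0.9 + rng.normal(0, 0.3, 200),
                     y * 0.8 + rng.normal(0, 0.5, 200),
                     rng.normal(0, 1, (200, 3))])
print(sparse_ensemble_weights(F, y))   # noise columns get (near-)zero weight
```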

19.
Generalized rules for combination and joint training of classifiers   Cited by: 1 (self-citations: 0, others: 1)
Classifier combination has repeatedly been shown to provide significant improvements in performance for a wide range of classification tasks. In this paper, we focus on the problem of combining probability distributions generated by different classifiers. Specifically, we present a set of new combination rules that generalize the most commonly used combination functions, such as the mean, product, min, and max operations. These new rules have continuous and differentiable forms and can thus be used not only for combining independently trained classifiers but also as objective functions in a joint classifier training scheme. We evaluate both schemes by applying them to the combination of phone classifiers in a speech recognition system. We find a significant performance improvement over previously used combination schemes when jointly training and combining multiple systems using a generalization of the product rule.
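One continuous, differentiable family that recovers the mean, product (geometric mean), min, and max rules as special or limiting cases is the generalized (power) mean; whether this matches the paper's exact parameterization is an assumption of this sketch.

```python
import numpy as np

def power_mean_combine(P, alpha):
    """P: (n_classifiers, n_classes) posteriors.  alpha=1 -> mean rule;
    alpha -> 0 -> geometric mean (product rule up to a power);
    alpha -> +inf -> max rule; alpha -> -inf -> min rule."""
    P = np.clip(P, 1e-12, None)          # guard the log and negative powers
    if abs(alpha) < 1e-9:                # limit case: geometric mean
        return np.exp(np.mean(np.log(P), axis=0))
    return np.mean(P ** alpha, axis=0) ** (1.0 / alpha)

P = np.array([[0.7, 0.2, 0.1],
              [0.4, 0.5, 0.1]])
for a in (-50, 0, 1, 50):                # ~min, geometric, mean, ~max
    print(a, np.argmax(power_mean_combine(P, a)))
```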
