Similar Documents
20 similar documents found (search time: 590 ms)
1.
It has been widely accepted that classification accuracy can be improved by combining the outputs of multiple classifiers. However, how to combine multiple classifiers with various (potentially conflicting) decisions is still an open problem. A rich collection of classifier combination procedures, many of which are heuristic in nature, has been developed for this goal. In this brief, we describe a dynamic approach to combining classifiers that have expertise in different regions of the input space. To this end, we use local classifier accuracy estimates to weight classifier outputs. Specifically, we estimate the local recognition accuracies of the classifiers near a query sample by utilizing its nearest neighbors, and then use these estimates to find the best classifier weights for labeling the query. The problem is formulated as a convex quadratic optimization problem, which returns optimal nonnegative classifier weights with respect to the chosen objective function; these weights ensure that the locally most accurate classifiers are weighted most heavily when labeling the query sample. Experimental results on several data sets indicate that the proposed weighting scheme outperforms other popular classifier combination schemes, particularly on problems with complex decision boundaries. Hence, local classification-accuracy-based combination techniques are well suited for decision making when the classifiers are trained on different regions of the input space.
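The weighting idea above can be sketched in a few lines: estimate each classifier's local accuracy on the k validation samples nearest to the query, then turn those accuracies into weights. Note this replaces the paper's convex quadratic program with a simple accuracy normalization, and all names are illustrative.

```python
import math

def local_accuracy_weights(query, val_X, val_y, classifier_preds, k=3):
    """Weight classifiers by their accuracy on the k validation samples
    nearest to the query (a simplified, normalized-accuracy stand-in for
    the paper's convex QP; names and data layout are illustrative)."""
    # distances from the query to every validation sample
    dists = [math.dist(query, x) for x in val_X]
    nearest = sorted(range(len(val_X)), key=lambda i: dists[i])[:k]
    weights = []
    for preds in classifier_preds:  # one prediction list per classifier
        acc = sum(preds[i] == val_y[i] for i in nearest) / k
        weights.append(acc)
    total = sum(weights)
    # fall back to uniform weights if no classifier is locally correct
    return [w / total for w in weights] if total else [1 / len(weights)] * len(weights)
```

A classifier that is perfect in the query's neighborhood receives all the weight, even if it is weaker elsewhere in the input space.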

2.
Due to the wide variety of fusion techniques available for combining multiple classifiers into a more accurate classifier, a number of good studies have been devoted to determining in which situations some fusion methods should be preferred over others. However, the sample-size behavior of the various fusion methods has hitherto received little attention in the multiple-classifier-systems literature. The main contribution of this paper is thus to investigate the effect of training sample size on their relative performance and to gain more insight into the conditions for the superiority of some combination rules. A large experiment is conducted to study the performance of some fixed and trainable combination rules for one- and two-level classifier fusion at different training sample sizes. The experimental results yield the following conclusions: when implementing one-level fusion to combine homogeneous or heterogeneous base classifiers, fixed rules outperform trainable ones in nearly all cases, with the single exception of merging heterogeneous classifiers at large sample sizes. Moreover, the best classification for any considered sample size is generally achieved by a second level of combination (namely, using one fusion rule to further combine a set of ensemble classifiers, each of which is itself constructed by fusing base classifiers). Under these circumstances, adopting different types of fusion rules (fixed or trainable) as the combiners for the two levels of fusion appears appropriate.

3.
Generalized rules for combination and joint training of classifiers
Classifier combination has repeatedly been shown to provide significant improvements in performance for a wide range of classification tasks. In this paper, we focus on the problem of combining probability distributions generated by different classifiers. Specifically, we present a set of new combination rules that generalize the most commonly used combination functions, such as the mean, product, min, and max operations. These new rules have continuous and differentiable forms, and can thus not only be used for combination of independently trained classifiers, but also as objective functions in a joint classifier training scheme. We evaluate both of these schemes by applying them to the combination of phone classifiers in a speech recognition system. We find a significant performance improvement over previously used combination schemes when jointly training and combining multiple systems using a generalization of the product rule.
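One continuous, differentiable family with the properties described above is the power (generalized) mean, which recovers the arithmetic mean, the geometric mean (a product-like rule), min, and max as special or limiting cases of its parameter p. This is a sketch of the general idea, not the paper's exact rules.

```python
def generalized_mean(probs, p):
    """Power-mean combiner over classifier scores for one class:
    p=1 gives the mean, p->0 the geometric mean (product rule up to a
    root), p->-inf approaches min, p->+inf approaches max."""
    n = len(probs)
    if p == 0:  # limit case: geometric mean
        prod = 1.0
        for x in probs:
            prod *= x
        return prod ** (1.0 / n)
    return (sum(x ** p for x in probs) / n) ** (1.0 / p)
```

Because the combiner is differentiable in both the scores and p, it can also serve as part of a joint training objective, which is the second use the abstract describes.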

4.
When combining outputs from multiple classifiers, many combination rules are available. Although easy to implement, fixed combination rules are optimal only in restrictive conditions. We discuss and evaluate their performance when the optimality conditions are not fulfilled. Fixed combination rules are then compared with trainable combination rules on real data in the context of face-based identity verification. The face images are classified by combining the outputs of five different face verification experts. It is demonstrated that a reduction in the error rates of up to 50% over the best single expert is achieved on the XM2VTS database, using either fixed or trainable combination rules.

5.
In this paper, a theoretical and experimental analysis of linear combiners for multiple classifier systems is presented. Although linear combiners are the most frequently used combining rules, many important issues related to their operation for pattern classification tasks lack a theoretical basis. After a critical review of the framework developed in works by Turner and Ghosh [1996], [1999] on which our analysis is based, we focus on the simplest and most widely used implementation of linear combiners, which consists of assigning a nonnegative weight to each individual classifier. Moreover, we consider the ideal performance of this combining rule, i.e., that achievable when the optimal values of the weights are used. We do not consider the problem of weights estimation, which has been addressed in the literature. Our theoretical analysis shows how the performance of linear combiners, in terms of misclassification probability, depends on the performance of individual classifiers, and on the correlation between their outputs. In particular, we evaluate the ideal performance improvement that can be achieved using the weighted average over the simple average combining rule and investigate in what way it depends on the individual classifiers. Experimental results on real data sets show that the behavior of linear combiners agrees with the predictions of our analytical model. Finally, we discuss the contribution to the state of the art and the practical relevance of our theoretical and experimental analysis of linear combiners for multiple classifier systems.
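A minimal sketch of the linear combiner under discussion: with weights omitted it reduces to the simple average, and with nonnegative weights it becomes the weighted average whose ideal performance the paper analyses.

```python
def combine(outputs, weights=None):
    """Linear combiner over per-classifier score vectors.
    outputs: one score vector per classifier (one entry per class).
    weights=None gives the simple average; nonnegative weights summing
    to 1 give the weighted-average rule (a sketch, not the paper's
    weight-estimation procedure)."""
    n = len(outputs)
    if weights is None:
        weights = [1.0 / n] * n
    n_classes = len(outputs[0])
    return [sum(w * o[c] for w, o in zip(weights, outputs))
            for c in range(n_classes)]
```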

6.
An ensemble of multiple classifiers is widely considered an effective technique for improving the accuracy and stability of a single classifier. This paper proposes a framework of sparse ensembles and presents new linear weighted combination methods for them. A sparse ensemble combines the outputs of multiple classifiers using a sparse weight vector. When the continuous outputs of multiple classifiers are available, the problem of solving for the sparse weight vector can be formulated as a linear programming (LP) problem in which the hinge loss and/or the 1-norm regularization are exploited; both are sparsity-inducing techniques used in machine learning. Only classifiers with nonzero weight coefficients take part in the ensemble. In these LP-based methods, the ensemble training error is minimized while the weight vector is controlled, which can be viewed as implementing the structural risk minimization principle and naturally explains the good performance of these methods. Promising experimental results on UCI data sets and radar high-resolution range profile data are presented.

7.
Segmentation using an ensemble of classifiers (or committee machine) combines multiple classifiers' results to increase performance over that of single classifiers. In this paper, we propose new concepts for combining rules. They are based on (1) the uncertainties of the individual classifiers, (2) combining the results of existing combining rules, (3) combining local class probabilities with the existing segmentation probabilities at each individual segmentation, and (4) using uncertainty-based weights for the weighted majority rule. The results show that the proposed local-statistics-aware combining rules can reduce the effect of noise in the individual segmentation results and consequently improve the performance of the final (combined) segmentation. Combining existing combining rules and using the proposed uncertainty-based weights can further improve performance.
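Idea (4) above, uncertainty-based weights for the weighted majority rule, can be sketched as follows. The entropy-complement weighting used here is an illustrative choice, not necessarily the paper's exact formula.

```python
import math

def entropy(p):
    """Shannon entropy of a class-probability vector (in bits)."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def uncertainty_weighted_vote(prob_maps):
    """Weighted majority rule: each classifier's weight is the complement
    of its normalized output entropy, so confident (low-uncertainty)
    classifiers count more. Returns the winning class index."""
    n_classes = len(prob_maps[0])
    max_h = math.log2(n_classes)
    weights = [1.0 - entropy(p) / max_h for p in prob_maps]
    votes = [0.0] * n_classes
    for w, p in zip(weights, prob_maps):
        votes[max(range(n_classes), key=lambda c: p[c])] += w
    return votes.index(max(votes))
```

A single confident classifier can outvote several near-uncertain ones, which is exactly the noise-damping behavior the abstract claims for uncertainty-aware rules.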

8.
A robust training algorithm for a class of single-hidden layer feedforward neural networks (SLFNs) with linear nodes and an input tapped-delay-line memory is developed in this paper. It is seen that, in order to remove the effects of the input disturbances and reduce both the structural and empirical risks of the SLFN, the input weights of the SLFN are assigned such that the hidden layer of the SLFN performs as a pre-processor, and the output weights are then trained to minimize the weighted sum of the output error squares as well as the weighted sum of the output weight squares. The performance of an SLFN-based signal classifier trained with the proposed robust algorithm is studied in the simulation section to show the effectiveness and efficiency of the new scheme.

9.
Confidence Transformation for Combining Classifiers
This paper investigates a number of confidence transformation methods for measurement-level combination of classifiers. Each confidence transformation method is the combination of a scaling function and an activation function. The activation functions correspond to different types of confidence: likelihood (exponential), log-likelihood, sigmoid, and the evidence combination of sigmoid measures. The sigmoid and evidence measures serve as approximations to class probabilities. The scaling functions are derived by Gaussian density modeling, logistic regression with variable inputs, etc. We test the confidence transformation methods in handwritten digit recognition by combining variable sets of classifiers: neural classifiers only, distance classifiers only, strong classifiers, and mixed strong/weak classifiers. The results show that confidence transformation is effective in improving combination performance in all settings, whereas normalizing the class probabilities to unit sum proves detrimental. Among the scaling functions, the Gaussian method and logistic regression perform well in most cases. Regarding the confidence types, the sigmoid and evidence measures perform well in most cases, with the evidence measure generally outperforming the sigmoid measure. We also show that the confidence transformation methods are highly robust to the validation sample size used in parameter estimation.
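A sigmoid confidence transformation of the kind discussed above can be sketched as a linear scaling followed by the sigmoid activation. The scaling parameters (a, b) are placeholders that would normally come from logistic regression or Gaussian density modeling, and, in line with the paper's finding, the outputs are deliberately not normalized to unit sum.

```python
import math

def sigmoid_confidence(scores, a=1.0, b=0.0):
    """Map raw classifier scores to per-class confidences via a linear
    scaling (a, b are placeholder parameters) and a sigmoid activation.
    The confidences are NOT normalized to sum to 1, since unit-sum
    normalization was found detrimental to combination performance."""
    return [1.0 / (1.0 + math.exp(-(a * s + b))) for s in scores]
```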

10.
Dynamic Selection and Circulating Combination of Classifiers
To address the low efficiency of optimal-subset selection and the inflexibility of combination methods in the design of multiple classifier systems, a dynamic selection and circulating combination (DSCC) method is proposed. Exploiting the complementarity among different classifier models, the method dynamically selects a combination of classifiers with high recognition rates for the target, so that the number of classifiers taking part in the ensemble adapts to the complexity of the recognition target, and carries out circulating combination according to a confidence measure. In handwritten digit recognition experiments, the proposed method proves more flexible and efficient, and achieves a higher recognition rate, than other commonly used classifier selection methods.

11.
Using neural network ensembles for bankruptcy prediction and credit scoring
Bankruptcy prediction and credit scoring have long been regarded as critical topics and have been studied extensively in the accounting and finance literature. Artificial intelligence and machine learning techniques have been used to solve these financial decision-making problems. The multilayer perceptron (MLP) network trained by the back-propagation learning algorithm is the most widely used technique for financial decision-making problems and is usually superior to traditional statistical models. Recent studies suggest that combining multiple classifiers (classifier ensembles) should outperform single classifiers. However, the performance of multiple classifiers in bankruptcy prediction and credit scoring is not fully understood. In this paper, we investigate the performance of a single classifier as the baseline and compare it with multiple classifiers and diversified multiple classifiers, using neural networks on three datasets. Compared with the single-classifier benchmark in terms of average prediction accuracy, the multiple classifiers perform better on only one of the three datasets. The diversified multiple classifiers, trained with not only different classifier parameters but also different sets of training data, perform worse on all datasets. For the Type I and Type II errors, however, there is no clear winner. We suggest that it is better to consider all three classifier architectures when making financial decisions.

12.
Minimum-Cost-Based Dynamic Combination of Multiple Classifiers
This paper proposes a dynamic classifier combination method based on a minimum-cost criterion. Unlike conventional approaches, the dynamic combination selects, for each sample and according to "performance-prediction features", the most suitable group of classifiers to combine. The selection minimizes the sum of the misclassification cost and the time cost; by changing the definition of the cost function, different trade-offs between recognition rate and recognition speed can easily be achieved. Two dynamic combination methods are proposed, and their application to online handwritten Chinese character recognition is described. In the experiments, three classifiers were combined dynamically, yielding seven candidate combinations. Under the predefined cost, the dynamic combination method is compared with seven fixed combination schemes. The experimental results demonstrate the high flexibility and practicality of dynamic combination and its ability to improve overall system performance.
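The minimum-cost selection criterion can be sketched as follows, assuming each candidate combination comes with a predicted error rate and a time estimate for the current sample; the cost coefficients and the data layout are illustrative, not the paper's.

```python
def min_cost_subset(subsets, err_cost=10.0, time_cost=1.0):
    """Pick the classifier combination minimizing a total cost that
    trades off predicted error rate against recognition time.
    subsets: maps a combination name to its predicted (error_rate, time)
    for the current sample (illustrative layout)."""
    def cost(item):
        err, t = item[1]
        return err_cost * err + time_cost * t
    return min(subsets.items(), key=cost)[0]
```

Raising err_cost relative to time_cost shifts the choice toward larger, more accurate (but slower) combinations, which is exactly the recognition-rate/speed trade-off the abstract describes.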

13.
Confidence-Based Dynamic Combination of Multiple Classifiers for Handwritten Digit Recognition
张丽  杨静宇  娄震 《计算机工程》2003,29(16):103-105
Multiple classifier combination exploits the complementarity among different classifiers and different features to raise the recognition rate of the combined classifier. In traditional combination methods, the role each classifier plays in the combination is fixed, whereas in practice the credibility of each classifier's result varies from one test sample to another. Based on classifier confidence theory, this paper introduces per-class confidence measures and uses the confidence information of the test sample itself to realize dynamic classifier combination, which is then applied to handwritten digit recognition. The method also allows new classifiers to be added to the combination without affecting existing data.

14.
Existing multiple classifier systems use fixed combination operators and therefore adapt poorly. This paper introduces the flexibility of universal logic into multiple classifier systems and establishes a universal combination rule based on the universal combination operation model. The rule's parameters are estimated with a genetic algorithm, and the rule is well suited to multiple classifier systems with a parallel structure. Classification experiments on time-series datasets show that the universal combination rule outperforms fixed combination rules such as the product, mean, median, max, min, and voting rules.

15.
A group of classifiers is dynamically selected using a genetic-algorithm-based global optimization technique and combined with a combination rule suited to the application, thereby exploiting the strengths and complementarity of different classifiers to improve classification performance. Experimental results show that introducing a genetic algorithm into the design of a multiple classifier combination system yields classification performance clearly superior to that of traditional single-classifier methods.

16.
Combining multiple classifiers is a problem frequently encountered in pattern recognition applications. This paper presents a new method for combining multiple classifiers, controlled by a back-propagation neural network: an unlabeled pattern is fed into each individual classifier and, at the same time, into the neural network, which decides which two classifiers act as champion and runner-up; a random number generator then decides the final winner between the two. The method is applied to handwritten digit recognition, and experiments show that the performance of the individual classifiers can be improved considerably.

17.
Terrain Classification Based on Fusion of Visible and Infrared Data
顾迎节  金忠 《计算机工程》2013,39(2):187-191
To address the poor terrain classification achieved with a single sensor, a terrain classification method based on the fusion of visible and infrared data is proposed. Features are extracted separately from the visible and infrared images, and nearest-neighbor and minimum-distance classifiers are used to estimate posterior probabilities. The posterior probabilities from different features and different classifiers are combined with weights: the feature weights are obtained from divergence computations, the classifier weights are determined experimentally, and the Mahalanobis distance replaces the Euclidean distance in the minimum-distance posterior estimates. Experimental results show that the method achieves recognition rates of 99.33% on cement roads and 96.67% on sand roads, both higher than comparable methods.

18.
赵玉娟  刘擎超 《计算机工程》2012,38(21):171-174
In machine learning, weighted classifier combination attains low classification accuracy on small-sample datasets. To address this, a multiple-classifier weighted combination method based on a mixed distance metric is proposed. The Euclidean, Manhattan, and Chebyshev distances are combined into a mixed distance-metric weighting scheme, and a weighted voting rule is used to combine the outputs of the classifiers. Experimental results show that the method is robust and achieves high classification accuracy.
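The mixed distance metric can be sketched as a weighted blend of the three named distances; the equal weights below are an illustrative default, not necessarily the weighting the paper uses.

```python
def mixed_distance(x, y, w=(1 / 3, 1 / 3, 1 / 3)):
    """Weighted blend of the Euclidean, Manhattan, and Chebyshev
    distances between two feature vectors (equal weights are an
    illustrative default)."""
    diffs = [abs(a - b) for a, b in zip(x, y)]
    euclid = sum(d * d for d in diffs) ** 0.5
    manhattan = sum(diffs)
    chebyshev = max(diffs)
    return w[0] * euclid + w[1] * manhattan + w[2] * chebyshev
```

Blending metrics hedges against any single metric's bias: Manhattan emphasizes accumulated per-dimension differences, Chebyshev the single worst dimension, and Euclidean sits between them.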

19.
In a multiple classifier ensemble, the base classifiers differ in effectiveness; assigning them identical weights prevents each base classifier from contributing according to its merit. This paper therefore proposes BCPSO, a multiple-classifier weighted combination method based on an extension of particle swarm optimization (PSO). Independent sub-classifiers are generated with the random subspace method, and their outputs are combined by a weighted voting rule. Experimental results show that the method is effective and feasible and achieves high classification accuracy.

20.
Combining multiple classifiers is an effective way to solve complex pattern recognition problems. A key issue in classifier combination is how to obtain a reliable estimate of each classifier's performance. Previously proposed methods judge the reliability of a decision using knowledge acquired by each classifier during the training phase; these methods require large amounts of storage and ignore the fact that a classifier's performance changes during classification as the quality of the input samples varies. This paper proposes a dynamic classifier combination method that estimates classifier reliability directly from the classifiers' output information. Experimental results show that, compared with traditional combination methods, it is an effective approach.
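Estimating reliability directly from the outputs can be sketched, for example, as the margin between a classifier's two largest scores, used as a per-sample weight. This margin measure is one plausible reading of the approach, not the paper's stated formula, and it requires no stored training-phase statistics.

```python
def output_reliability(scores):
    """Per-sample reliability of a classifier, taken as the margin
    between its two largest output scores (an illustrative measure
    computed from the output vector alone)."""
    top = sorted(scores, reverse=True)
    return top[0] - top[1]

def dynamic_combine(all_scores):
    """Weight each classifier's output vector by its per-sample
    reliability and return the winning class index."""
    n_classes = len(all_scores[0])
    fused = [0.0] * n_classes
    for scores in all_scores:
        w = output_reliability(scores)
        for c in range(n_classes):
            fused[c] += w * scores[c]
    return fused.index(max(fused))
```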
