Similar Literature
 20 similar documents found (search time: 312 ms)
1.
When combining outputs from multiple classifiers, many combination rules are available. Although easy to implement, fixed combination rules are optimal only in restrictive conditions. We discuss and evaluate their performance when the optimality conditions are not fulfilled. Fixed combination rules are then compared with trainable combination rules on real data in the context of face-based identity verification. The face images are classified by combining the outputs of five different face verification experts. It is demonstrated that a reduction in the error rates of up to 50% over the best single expert is achieved on the XM2VTS database, using either fixed or trainable combination rules.
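The fixed combination rules named above (sum, product, max, min, median over the experts' posterior estimates) can be sketched in a few lines. This is an illustrative implementation only, not the paper's code; the function name and toy data are assumptions:

```python
import numpy as np

def fuse_fixed(posteriors, rule="sum"):
    """Combine per-expert posterior matrices with a fixed rule.

    posteriors: shape (n_experts, n_samples, n_classes), each slice a
    posterior estimate P(class | sample) from one expert.
    Returns predicted class indices of shape (n_samples,).
    """
    rules = {
        "sum": lambda p: p.sum(axis=0),
        "product": lambda p: p.prod(axis=0),
        "max": lambda p: p.max(axis=0),
        "min": lambda p: p.min(axis=0),
        "median": lambda p: np.median(p, axis=0),
    }
    scores = rules[rule](np.asarray(posteriors, dtype=float))
    return scores.argmax(axis=1)

# Two experts, one sample, two classes: both lean toward class 1.
p = [[[0.4, 0.6]], [[0.3, 0.7]]]
print(fuse_fixed(p, "sum"))      # -> [1]
print(fuse_fixed(p, "product"))  # -> [1]
```

All fixed rules agree on this toy example; they diverge mainly when experts conflict or when posterior estimates are noisy, which is where the optimality conditions discussed in the abstract matter.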

2.
This paper presents a new approach to combining decisions from face and fingerprint classifiers for multi-modal biometrics by exploiting the individual classifier space on the basis of the class-specific information available in it. We exploit prior knowledge by training the face classifier using response vectors on a validation set, enhancing class separability (using parametric and nonparametric Linear Discriminant Analysis) in the classifier output space and thereby improving the performance of the face classifier. The fingerprint classifier often does not provide this information because of the high sensitivity of the available minutiae points, which produces partial matches across subjects. The enhanced face and fingerprint classifiers are combined using a sum rule. We also propose a generalized algorithm for multiple classifier combination (MCC) based on our approach. Experimental results show the superiority of the proposed method over other existing fusion techniques, such as the sum, product, max, and min rules, decision templates, and Dempster–Shafer theory.

3.
This paper investigates the effects of confidence transformation when combining multiple classifiers with various combination rules. The combination methods were tested in handwritten digit recognition by combining varying classifier sets. The classifier outputs are transformed to confidence measures by combining three scaling functions (global normalization, Gaussian density modeling, and logistic regression) and three confidence types (linear, sigmoid, and evidence). The combination rules include fixed rules (sum-rule, product-rule, median-rule, etc.) and trained rules (linear discriminants and weighted combination with various parameter estimation techniques). The experimental results show that confidence transformation benefits the combination performance of both fixed and trained rules. Trained rules mostly outperform fixed rules, especially when the classifier set contains weak classifiers. Among the trained rules, the support vector machine with linear kernel (linear SVM) performs best, while weighted combination with optimized weights performs comparably well. I have also attempted the joint optimization of confidence parameters and combination weights, but its performance was inferior to that of the cascaded confidence transformation and combination. This indicates that the cascaded strategy is a sound way to combine multiple classifiers.
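The scaling step of confidence transformation can be illustrated with two of the simpler variants (global normalization to [0, 1] and a sigmoid squashing). This is only a sketch: the paper's versions additionally fit parameters, such as the logistic slope, on validation data, and the function name here is an assumption:

```python
import numpy as np

def to_confidence(scores, kind="sigmoid"):
    """Map raw classifier output scores to comparable confidence
    values before combination (sketch of the scaling step only)."""
    s = np.asarray(scores, dtype=float)
    if kind == "linear":          # global normalization to [0, 1]
        span = s.max() - s.min()
        return (s - s.min()) / span if span else np.full_like(s, 0.5)
    if kind == "sigmoid":         # squash scores through a logistic
        return 1.0 / (1.0 + np.exp(-s))
    raise ValueError(kind)

raw = [-2.0, 0.0, 2.0]
print(np.round(to_confidence(raw, "linear"), 3))   # values 0, 0.5, 1
print(np.round(to_confidence(raw, "sigmoid"), 3))  # roughly 0.119, 0.5, 0.881
```

Putting all experts' outputs on a common [0, 1] scale like this is what makes fixed rules such as the sum rule meaningful across heterogeneous classifiers.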

4.
A perception system for pedestrian detection in urban scenarios using information from a LIDAR and a single camera is presented. Two sensor fusion architectures are described, a centralized and a decentralized one. In the former, the fusion process occurs at the feature level, i.e., features from LIDAR and vision spaces are combined in a single vector for posterior classification using a single classifier. In the latter, two classifiers are employed, one per sensor‐feature space, which were offline selected based on information theory and fused by a trainable fusion method applied over the likelihoods provided by the component classifiers. The proposed schemes for sensor combination, and more specifically the trainable fusion method, lead to enhanced detection performance and, in addition, maintenance of false‐alarms under tolerable values in comparison with single‐based classifiers. Experimental results highlight the performance and effectiveness of the proposed pedestrian detection system and the related sensor data combination strategies. © 2009 Wiley Periodicals, Inc.

5.

Repeat buyer prediction is crucial for e-commerce companies to enhance their customer services and product sales. In particular, being aware of which factors or rules drive repeat purchases is as significant as knowing the outcomes of predictions in the business field. Therefore, an interpretable model with excellent prediction performance is required. Many classifiers, such as the multilayer perceptron, have exceptional predictive abilities but lack model interpretability. Tree-based models possess interpretability; however, their predictive performances usually cannot achieve high levels. Based on these observations, we design an approach to balance the predictive and interpretable performance of a decision tree with model distillation and heterogeneous classifier fusion. Specifically, we first train multiple heterogeneous classifiers and integrate them through diverse combination operators. Then, classifier combination plays the role of teacher model. Subsequently, soft targets are obtained from the teacher and guide training of the decision tree. A real-world repeat buyer prediction dataset is utilized in this paper, and we adopt features with respect to three aspects: users, merchants, and user–merchant pairs. Our experimental results show that the accuracy and AUC of the decision tree are both improved, and we provide model interpretations of three aspects.


6.
Dynamic weighting ensemble classifiers based on cross-validation
Ensembles of classifiers constitute one of the main current directions in machine learning and data mining. It is generally accepted that ensemble methods can be divided into static and dynamic ones. Dynamic ensemble methods use different classifiers for different samples and may therefore achieve better generalization than static ensemble methods. However, most dynamic approaches based on the KNN rule set aside an additional part of the training samples for estimating the “local classification performance” of each base classifier. When the number of training samples is insufficient, this lowers the accuracy of the training model and makes the estimates of the base classifiers' local performance unreliable, further hurting the integrated performance. This paper presents a new dynamic ensemble model that introduces cross-validation into the evaluation of local performance and then dynamically assigns a weight to each component classifier. Experimental results on 10 UCI data sets demonstrate that when the training set is not large enough, the proposed method achieves better performance than some dynamic ensemble methods as well as some classical static ensemble approaches.
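The core idea, reusing cross-validated correctness flags to estimate each base classifier's accuracy near a query instead of setting aside a separate validation set, can be sketched as follows. The function name, the toy data, and the choice of Euclidean nearest neighbors are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def dynamic_weights(X_train, cv_correct, x_query, k=3):
    """Weight each base classifier by its held-out accuracy on the
    k training samples nearest the query.

    cv_correct: 0/1 cross-validated correctness flags,
    shape (n_classifiers, n_train); no extra data is held out.
    """
    d = np.linalg.norm(X_train - x_query, axis=1)
    nn = np.argsort(d)[:k]                  # k nearest training samples
    w = cv_correct[:, nn].mean(axis=1)      # local accuracy per classifier
    total = w.sum()
    if total == 0:                          # fall back to uniform weights
        return np.full(len(cv_correct), 1 / len(cv_correct))
    return w / total

X_train = np.array([[0.0], [1.0], [2.0], [10.0]])
cv_correct = np.array([[1, 1, 1, 0],    # classifier A right on the first three
                       [0, 0, 1, 1]])   # classifier B right on the last two
w = dynamic_weights(X_train, cv_correct, np.array([0.5]))
print(np.round(w, 2))  # -> [0.75 0.25]
```

Near the query 0.5, classifier A was right on all three nearest training samples and B on only one, so A receives three times B's weight.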

7.
Trimmed bagging
Bagging has been found to be successful in increasing the predictive performance of unstable classifiers. Bagging draws bootstrap samples from the training sample, applies the classifier to each bootstrap sample, and then averages over all obtained classification rules. The idea of trimmed bagging is to exclude the bootstrapped classification rules that yield the highest error rates, as estimated by the out-of-bag error rate, and to aggregate over the remaining ones. In this note we explore the potential benefits of trimmed bagging. On the basis of numerical experiments, we conclude that trimmed bagging performs comparably to standard bagging when applied to unstable classifiers such as decision trees, but yields better results when applied to more stable base classifiers, such as support vector machines.
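The trimmed-bagging procedure can be sketched as follows. The nearest-centroid base learner, function names, and toy data are stand-ins chosen for brevity; the note's actual experiments used decision trees and support vector machines:

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_fit(X, y):
    """Tiny stand-in base learner: one centroid per class."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def centroid_predict(model, X):
    classes, centroids = model
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[dists.argmin(axis=1)]

def trimmed_bagging(X, y, n_boot=20, trim=0.25):
    """Fit one model per bootstrap sample, rank models by their
    out-of-bag error, and keep only the best (1 - trim) fraction."""
    n = len(X)
    models, oob_errors = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)         # bootstrap draw
        oob = np.setdiff1d(np.arange(n), idx)    # out-of-bag samples
        model = centroid_fit(X[idx], y[idx])
        err = (centroid_predict(model, X[oob]) != y[oob]).mean() if len(oob) else 0.0
        models.append(model)
        oob_errors.append(err)
    keep = np.argsort(oob_errors)[: max(1, int(n_boot * (1 - trim)))]
    return [models[i] for i in keep]

def majority_vote(models, X):
    votes = np.stack([centroid_predict(m, X) for m in models])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
y = np.array([0, 0, 0, 1, 1, 1])
kept = trimmed_bagging(X, y)
print(len(kept), majority_vote(kept, X))  # 15 kept models; vote should match y
```

Trimming discards the occasional degenerate bootstrap model (e.g. one whose sample contained a single class), which is exactly the mechanism the note credits for the gains with stable base classifiers.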

8.
This paper describes a performance evaluation study in which some efficient classifiers are tested in handwritten digit recognition. The evaluated classifiers include a statistical classifier (modified quadratic discriminant function, MQDF), three neural classifiers, and an LVQ (learning vector quantization) classifier. They are efficient in that high accuracies can be achieved at moderate memory space and computation cost. The performance is measured in terms of classification accuracy, sensitivity to training sample size, ambiguity rejection, and outlier resistance. The outlier resistance of the neural classifiers is enhanced by training with synthesized outlier data. The classifiers are tested on a large data set extracted from NIST SD19. The results show that the test accuracies of the evaluated classifiers are comparable to or higher than those of the nearest neighbor (1-NN) rule and regularized discriminant analysis (RDA). Neural classifiers are more susceptible to small sample sizes than MQDF, although they yield higher accuracies on large sample sizes. Among the neural classifiers, the polynomial classifier (PC) gives the highest accuracy and performs best in ambiguity rejection. On the other hand, MQDF is superior in outlier rejection even though it is not trained with outlier data. The results indicate that pattern classifiers have complementary advantages and should be appropriately combined to achieve higher performance. Received: July 18, 2001 / Accepted: September 28, 2001

9.
Non-parametric classification procedures based on a certainty measure and the nearest neighbour rule for motor unit potential (MUP) classification during electromyographic (EMG) signal decomposition were explored. A diversity-based classifier fusion approach is developed and evaluated to achieve improved classification performance. The developed system allows the construction of a set of non-parametric base classifiers and then automatically chooses, from the pool of base classifiers, subsets of classifiers to form candidate classifier ensembles. The system selects the classifier ensemble members by exploiting a diversity measure for selecting classifier teams. The kappa statistic is used as the diversity measure to estimate the level of agreement between base classifier outputs, i.e., to measure the degree of decision similarity between the base classifiers. The pool of base classifiers consists of two kinds of classifiers: adaptive certainty-based classifiers (ACCs) and adaptive fuzzy k-NN classifiers (AFNNCs), which utilize different types of features. Once the patterns are assigned to their classes by the classifier fusion system, firing pattern consistency statistics for each class are calculated to detect classification errors in an adaptive fashion. The performance of the developed system was evaluated using real and simulated EMG signals and was compared with the performance of the constituent base classifiers and that of the fixed ensemble containing the full set of base classifiers. Across the EMG signal data sets used, the diversity-based classifier fusion approach had better average classification performance overall, especially in terms of reducing classification errors.
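The kappa statistic used above as a diversity measure can be computed directly from two classifiers' label outputs: it is chance-corrected agreement, so a low value means the pair disagrees more than chance would predict, i.e., it is diverse. This sketch assumes plain label vectors and is not the authors' implementation:

```python
import numpy as np

def kappa_agreement(pred_a, pred_b):
    """Cohen's kappa between two classifiers' label outputs."""
    a, b = np.asarray(pred_a), np.asarray(pred_b)
    labels = np.unique(np.concatenate([a, b]))
    po = (a == b).mean()                                         # observed agreement
    pe = sum((a == c).mean() * (b == c).mean() for c in labels)  # chance agreement
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

a = [0, 0, 1, 1, 1, 0]
b = [0, 1, 1, 1, 0, 0]
print(round(kappa_agreement(a, b), 3))  # -> 0.333
```

An ensemble-selection step like the one described would then prefer pairs with low kappa when forming candidate classifier teams.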

10.
We consider the trainable fusion rule design problem when the expert classifiers provide crisp outputs and the behavior space knowledge method is used to fuse the local experts' decisions. If the training set is used to design both the experts and the fusion rule, the experts' outputs become too self-assured. In small-sample situations, "optimistically biased" experts' outputs bluff the fusion rule designer. If the experts differ in complexity and in classification performance, the experts' boasting effect can severely degrade the performance of a multiple classifier system. Theoretically based and experimental procedures are suggested to reduce the experts' boasting effect.

11.
Introducing the idea of ensemble learning into incremental learning can significantly improve learning performance. Most recent research on ensemble-based incremental learning combines multiple homogeneous classifiers by weighted voting and does not adequately address the stability-plasticity dilemma of incremental learning. To address this, an incremental learning algorithm based on an ensemble of heterogeneous classifiers is proposed. During training, to make the model more stable, the algorithm trains multiple base classifiers on new data and adds them to the heterogeneous ensemble, while a locality-sensitive hash table stores data sketches for the later lookup of a test sample's nearest neighbors. To adapt to constantly changing data, the voting weights of the base classifiers in the ensemble are also updated with newly acquired data. When predicting the class of a test sample, the data in the locality-sensitive hash table that are similar to the test sample serve as a bridge for computing each base classifier's dynamic weight for that sample; the voting weights and dynamic weights of the multiple base classifiers are then combined to determine the class of the test sample. Comparative experiments demonstrate that the proposed incremental algorithm has relatively high stability and generalization ability.

12.
Ant colony optimization (ACO) algorithms have been successfully applied to data classification, where they aim to discover a list of classification rules. However, owing to the essentially random search in ACO algorithms, the lists of classification rules constructed by ACO-based classification algorithms are not fixed and may differ markedly even when the same training set is used. Those differences are generally ignored, so beneficial information cannot be extracted from the differing rule lists, which may lower the predictive accuracy. To overcome this shortcoming, this paper proposes a novel classification rule discovery algorithm based on ACO, named AntMinermbc, in which a new model of multiple rule sets is presented to produce multiple lists of rules. Multiple base classifiers are built in AntMinermbc, and each base classifier is expected to remedy the weaknesses of the others, which can improve the predictive accuracy by exploiting the useful information from the various base classifiers. A new heuristic function for ACO is also designed in our algorithm, which considers both correlation and coverage in order to avoid deceptively high accuracy. The performance of our algorithm is studied experimentally on 19 publicly available data sets and compared to several state-of-the-art classification approaches. The experimental results show that the predictive accuracy obtained by our algorithm is statistically higher than that of the compared targets.

13.
《Information Fusion》2005,6(1):21-36
In the context of Multiple Classifier Systems, diversity among base classifiers is known to be a necessary condition for improvement in ensemble performance. In this paper the ability of several pair-wise diversity measures to predict generalisation error is compared. A new pair-wise measure, which is computed between pairs of patterns rather than pairs of classifiers, is also proposed for two-class problems. It is shown experimentally that the proposed measure is well correlated with base classifier test error as base classifier complexity is systematically varied. However, correlation with unity-weighted sum and vote is shown to be weaker, demonstrating the difficulty in choosing base classifier complexity for optimal fusion. An alternative strategy based on weighted combination is also investigated and shown to be less sensitive to number of training epochs.

14.
Random forest is an ensemble classifier technique that offers better prediction and classification performance than single classifiers such as decision trees, but it also has some problems: the inherent randomness of the random forest causes fluctuation in its prediction results, and the large sample size and high dimensionality of the original dataset increase the training time of the random forest ensemble. To address these problems, an optimized random forest model is proposed: the dataset is preprocessed and reduced in dimension with PCA, introducing the cumulative contribution rate, and a selected optimal threshold is used for the final classification of the prediction results, improving the model's training speed, prediction accuracy, and stability. Experiments demonstrate that the method achieves superior predictive performance.
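The PCA step driven by a cumulative contribution rate (cumulative explained-variance ratio) can be sketched as follows; the 95% threshold, function name, and toy data are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def pca_reduce(X, threshold=0.95):
    """Project X onto the fewest principal components whose cumulative
    explained-variance ratio (the 'cumulative contribution rate')
    reaches `threshold`."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = (S ** 2) / (S ** 2).sum()            # per-component contribution
    k = int(np.searchsorted(np.cumsum(ratio), threshold)) + 1
    return Xc @ Vt[:k].T, k

rng = np.random.default_rng(0)
X = 0.01 * rng.normal(size=(200, 10))  # ten features of near-zero variance...
X[:, 0] += rng.normal(size=200)        # ...with real variance in only
X[:, 1] += rng.normal(size=200)        #    the first two
Z, k = pca_reduce(X)
print(k)  # -> 2: two components already carry >= 95% of the variance
```

The reduced matrix `Z` would then be fed to the random forest, shrinking training time while retaining most of the variance.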

15.
Breast cancer is the most commonly occurring form of cancer in women. While mammography is the standard modality for diagnosis, thermal imaging provides an interesting alternative as it can identify tumors of smaller size and hence lead to earlier detection. In this paper, we present an approach to analysing breast thermograms based on image features and a hybrid multiple classifier system. The employed image features provide indications of asymmetry between left and right breast regions that are encountered when a tumor is locally recruiting blood vessels on one side, leading to a change in the captured temperature distribution. The presented multiple classifier system is based on a hybridisation of three computational intelligence techniques: neural networks or support vector machines as base classifiers, a neural fuser to combine the individual classifiers, and a fuzzy measure for assessing the diversity of the ensemble and removal of individual classifiers from the ensemble. In addition, we address the problem of class imbalance that often occurs in medical data analysis, by training base classifiers on balanced object subspaces. Our experimental evaluation, on a large dataset of about 150 breast thermograms, convincingly shows our approach not only to provide excellent classification accuracy and sensitivity but also to outperform both canonical classification approaches as well as other classifier ensembles designed for imbalanced datasets.

16.
Many studies have shown that rule-based classifiers perform well in classifying categorical and sparse high-dimensional databases. However, a fundamental limitation with many rule-based classifiers is that they find the rules by employing various heuristic methods to prune the search space and select the rules based on the sequential database covering paradigm. As a result, the final set of rules that they use may not be the globally best rules for some instances in the training database. To make matters worse, these algorithms fail to fully exploit some more effective search-space pruning methods that would allow them to scale to large databases. In this paper, we present a new classifier, HARMONY, which directly mines the final set of classification rules. HARMONY uses an instance-centric rule-generation approach and can ensure that, for each training instance, one of the highest-confidence rules covering this instance is included in the final rule set, which helps improve the overall accuracy of the classifier. By introducing several novel search strategies and pruning methods into the rule discovery process, HARMONY also has high efficiency and good scalability. Our thorough performance study on some large text and categorical databases has shown that HARMONY outperforms many well-known classifiers in terms of both accuracy and computational efficiency and scales well with regard to database size.

17.
An efficient nearest-neighbor classifier algorithm based on pre-classification
The nearest-neighbor classifier algorithm in this paper pre-classifies test samples with a combination of multiple classifiers and, based on the pre-classification results, regenerates new training and test sample sets. The new test samples are classified with a nearest-neighbor classifier, and the recognition results are combined with the pre-classification results for accuracy testing. Experimental results on the ORL face database show that the algorithm has a clear advantage in recognizing small-sample data.

18.
Various methods of reducing correlation between classifiers in a multiple classifier framework have been attempted. Here we propose a recursive partitioning technique for analysing feature space of multiple classifier decisions. Spectral summation of individual pattern components in intermediate feature space enables each training pattern to be rated according to its contribution to separability, measured as k-monotonic constraints. A constructive algorithm sequentially extracts maximally separable subsets of patterns, from which is derived an inconsistently classified set (ICS). Leaving out random subsets of ICS patterns from individual (base) classifier training sets is shown to improve performance of the combined classifiers. For experiments reported here on artificial and real data, the constituent classifiers are identical single hidden layer MLPs with fixed parameters.

19.
It has been widely accepted that classification accuracy can be improved by combining the outputs of multiple classifiers. However, how to combine multiple classifiers with various (potentially conflicting) decisions is still an open problem. A rich collection of classifier combination procedures, many of which are heuristic in nature, has been developed for this goal. In this brief, we describe a dynamic approach to combining classifiers that have expertise in different regions of the input space. To this end, we use local classifier accuracy estimates to weight classifier outputs. Specifically, we estimate the local recognition accuracies of the classifiers near a query sample by utilizing its nearest neighbors, and then use these estimates to find the best weights of classifiers to label the query. The problem is formulated as a convex quadratic optimization problem, which returns optimal nonnegative classifier weights with respect to the chosen objective function; the weights ensure that locally most accurate classifiers are weighted more heavily for labeling the query sample. Experimental results on several data sets indicate that the proposed weighting scheme outperforms other popular classifier combination schemes, particularly on problems with complex decision boundaries. Hence, the results indicate that local classification-accuracy-based combination techniques are well suited for decision making when the classifiers are trained by focusing on different regions of the input space.

20.
Enlarging the feature space of the base tree classifiers in a decision forest by means of informative features extracted from an additional predictive model is advantageous for classification tasks. In this paper, we have empirically examined the performance of this type of decision forest with three different base tree classifier models: (1) the full decision tree, (2) the eight-node decision tree, and (3) the two-node decision tree (or decision stump). The hybrid decision forest with each of these base classifiers is trained on nine differently sized resampled training sets. We have examined the performance of all these ensembles from different points of view: we have studied the bias-variance decomposition of the misclassification error of the ensembles, and then investigated the amount of dependence and degree of uncertainty among the base classifiers of these ensembles using information-theoretic measures. The experiment was designed to find out: (1) the optimal training set size for each base classifier and (2) which base classifier is optimal for this kind of decision forest. In the final comparison, we have checked whether the subsampled version of the decision forest outperforms the bootstrapped version. All the experiments were conducted with 20 benchmark datasets from the UCI machine learning repository. The overall results clearly point out that with careful selection of the base classifier and training sample size, the hybrid decision forest can be an efficient tool for real-world classification tasks.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)    京ICP备09084417号-23
