Similar Documents
19 similar documents found
1.
To improve the classification performance of SVM on imbalanced data, a split-ensemble classification algorithm for imbalanced data is proposed. The algorithm uses clustering to partition the majority-class samples into multiple subsets according to the ratio between the classes; each subset is merged with the minority class to form a training subset. A classifier is learned from each training subset, and the resulting classifiers are combined with the WE ensemble method to obtain the final classifier, thereby improving classification performance on imbalanced data. Experimental results on UCI datasets demonstrate the effectiveness of the algorithm, particularly its classification performance on minority-class samples.
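The majority-splitting step this entry describes can be sketched as follows; a random partition stands in for the paper's clustering step, and all names and toy ids are illustrative:

```python
import math
import random

def split_into_balanced_subsets(majority, minority, seed=0):
    # Partition the majority class into ceil(|majority| / |minority|) chunks
    # and pair each chunk with the full minority class.
    rng = random.Random(seed)
    shuffled = majority[:]
    rng.shuffle(shuffled)
    k = math.ceil(len(majority) / len(minority))
    chunks = [shuffled[i::k] for i in range(k)]
    return [chunk + minority for chunk in chunks]

majority = list(range(100))        # toy ids for 100 majority samples
minority = list(range(100, 120))   # toy ids for 20 minority samples
train_sets = split_into_balanced_subsets(majority, minority)
# 5 balanced training sets of 40 samples each (20 majority + 20 minority)
```

Each resulting subset is class-balanced, so an ordinary classifier can be trained on each before the ensemble vote.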

2.
In the big-data era, imbalanced-data classification arises frequently in practical applications. Taking binary classification as an example, traditional classifiers struggle to learn the intrinsic structure of the minority class and tend to misclassify minority samples. An effective remedy is to introduce a cost-sensitive mechanism into traditional methods, assigning minority samples a higher misclassification cost to improve their prediction accuracy. Such methods, however, treat all samples within a class equally, even though different samples of the same class may contribute differently to training. To make the cost-sensitive mechanism more effective, a sample-adaptive cost-sensitive strategy assigns different weights to different samples. First, the local class distribution around each sample is examined to judge how close it lies to the boundary between the two classes; then, following boundary distribution theory (samples closer to the decision surface have greater influence on its position), samples closer to the class boundary receive higher weights. In the experiments, the sample-adaptive cost-sensitive strategy was applied to LDM and compared in a series of experiments on standard datasets, verifying its effectiveness for imbalanced-data classification.
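The sample-adaptive weighting idea can be illustrated with a simple proxy for boundary proximity: the fraction of opposite-class points among a sample's k nearest neighbours. The 1-D toy data and the weight formula `1 + opposite / k` are assumptions for illustration, not the paper's exact scheme:

```python
def boundary_weights(xs, ys, k=2):
    # Weight each sample by how many of its k nearest neighbours belong to
    # the opposite class: more opposite-class neighbours means the sample is
    # presumed closer to the class boundary, so it gets a higher cost weight.
    weights = []
    for i, (x, y) in enumerate(zip(xs, ys)):
        order = sorted((j for j in range(len(xs)) if j != i),
                       key=lambda j: abs(xs[j] - x))
        opposite = sum(1 for j in order[:k] if ys[j] != y)
        weights.append(1.0 + opposite / k)
    return weights

xs = [0.0, 0.2, 0.45, 0.55, 0.8, 1.0]   # 1-D toy features
ys = [0,   0,   0,    1,    1,   1]
w = boundary_weights(xs, ys)
# the two borderline samples (0.45 and 0.55) receive the largest weights
```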

3.
An ensemble classification algorithm for imbalanced datasets   Cited by: 1 (self-citations: 0, other citations: 1)
Wu Guangchao, Chen Qigang. 《计算机工程与设计》 (Computer Engineering and Design), 2007, 28(23): 5687-5689, 5761
To improve minority-class performance, an ensemble classifier algorithm based on data preprocessing is studied. Tomek links are used to preprocess the dataset; the majority-class samples of the new dataset are split into multiple subsets according to the imbalance ratio, and each subset is merged with the minority-class samples to form a new subset. A least-squares support vector machine (LS-SVM) is trained on each new subset, and the trained sub-classifiers are combined into a classification system; the class of a new test sample is decided by a vote of this system. Experimental results show that the algorithm outperforms both LS-SVM with oversampling and LS-SVM with undersampling in classification performance on the majority and minority classes alike.
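The Tomek-link preprocessing mentioned above can be sketched in a few lines: a pair of opposite-class samples forms a Tomek link when each is the other's nearest neighbour, marking borderline or noisy points. The 1-D toy data are illustrative:

```python
def tomek_links(xs, ys):
    # A Tomek link is a pair of opposite-class samples that are each other's
    # nearest neighbour; such pairs sit on the boundary or are noise.
    def nearest(i):
        return min((j for j in range(len(xs)) if j != i),
                   key=lambda j: abs(xs[j] - xs[i]))
    links = []
    for i in range(len(xs)):
        j = nearest(i)
        if ys[i] != ys[j] and nearest(j) == i and i < j:
            links.append((i, j))
    return links

xs = [0.0, 0.2, 0.48, 0.52, 0.9, 1.1]   # 1-D toy features
ys = [0,   0,   0,    1,    1,   1]
links = tomek_links(xs, ys)   # only the borderline pair (2, 3) qualifies
```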

4.
As an alternative framework to deep neural networks, the broad learning system (BLS) offers fast, adaptive model-structure selection and online incremental learning, and is regarded as a highly promising technique in knowledge discovery and data engineering. Traditional BLS is mainly applied to pattern classification tasks with balanced data distributions and equal misclassification costs, yet most real-world data are imbalanced, as in network intrusion detection, medical diagnosis, and credit-card fraud detection. To address this, a data distribution-based cost-sensitive BLS (DDbCs-BLS) is proposed for classification tasks with uneven data distributions and unequal misclassification costs. DDbCs-BLS searches for the optimal classification boundary of a cost-sensitive BLS classifier while fully accounting for the statistical distribution of the data, ensuring that minority-class information is not lost and thereby improving the pattern classification performance of BLS on all kinds of datasets. Extensive validation and comparison experiments on a variety of public datasets (both balanced and imbalanced) show that DDbCs-BLS effectively locates the optimal position of the classification boundary and achieves better classification performance on balanced and imbalanced datasets alike.

5.
To address the low prediction accuracy of traditional classifiers on the minority class of imbalanced datasets, an imbalanced-data classification algorithm based on undersampling and cost sensitivity, USCBoost, is proposed. Before each base classifier is trained in an AdaBoost iteration, the majority-class samples are sorted by weight in descending order, and a number of majority samples comparable to the size of the minority class is selected according to sample weight; the weights of the sampled majority samples are then normalized and combined with the minority-class samples to form …
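The weight-ranked undersampling step described for USCBoost might look like the following sketch; the function name and toy weights are illustrative assumptions:

```python
def select_majority(majority_weights, n_minority):
    # Sort majority samples by current boosting weight, largest first,
    # keep as many as there are minority samples, and renormalise.
    order = sorted(range(len(majority_weights)),
                   key=lambda i: majority_weights[i], reverse=True)
    kept = order[:n_minority]
    total = sum(majority_weights[i] for i in kept)
    return kept, {i: majority_weights[i] / total for i in kept}

weights = [0.05, 0.30, 0.10, 0.25, 0.20, 0.10]   # toy boosting weights
kept, norm = select_majority(weights, n_minority=3)
# kept -> the three highest-weight majority samples; norm sums to 1
```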

6.
When classifying imbalanced datasets, undersampling methods do not fully account for how changes in the data distribution affect classification results. To address this, an improved undersampling method based on cluster fusion and redundancy removal is proposed. A clustering algorithm obtains the cluster centers of the high-density regions of the majority class, partitioning the majority samples into subsets; redundant majority samples are then removed by computing a similarity-redundancy coefficient for each subset, achieving the undersampling. After undersampling 15 datasets with different imbalance ratios, a cost-sensitive mixed-attribute multi-decision-tree model is used for classification. Experimental results show that, without reducing overall classification accuracy on imbalanced datasets, the method improves the minority-class true positive rate and the G-mean of the prediction model.
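The G-mean reported here is the geometric mean of the per-class recalls, a standard metric for imbalanced classification that a trivial majority-class predictor cannot inflate; a minimal sketch:

```python
import math

def g_mean(y_true, y_pred):
    # Geometric mean of minority-class recall (sensitivity) and
    # majority-class recall (specificity) for binary labels {0, 1}.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    return math.sqrt((tp / pos) * (tn / neg))

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # 4 minority, 6 majority
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
score = g_mean(y_true, y_pred)   # sqrt(3/4 * 4/6) = sqrt(0.5)
```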

7.
To remedy the poor performance of traditional classifiers on imbalanced data and to improve classification performance, especially on minority-class samples, an imbalanced-data classification algorithm based on BSMOTE and reverse undersampling is proposed. The algorithm uses BSMOTE for oversampling, artificially increasing the number of minority-class samples; it then preferentially removes redundant and noisy samples and applies reverse undersampling to invert the ratio of minority to majority samples. Repeating this sampling produces multiple training sets, and Bagging is used to ensemble the classifiers obtained on them, improving the utilization of effective information. Experiments show that, compared with several existing algorithms, the algorithm improves not only minority-class performance but also overall classification accuracy.
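The SMOTE-style interpolation underlying BSMOTE places each synthetic minority sample on the segment between a minority sample and one of its minority-class neighbours. A minimal sketch with a fixed interpolation factor (real SMOTE draws it uniformly from [0, 1]):

```python
def smote_interpolate(x, neighbor, alpha):
    # Place a synthetic sample on the segment between a minority sample and
    # one of its minority-class neighbours: x + alpha * (neighbor - x).
    return [xi + alpha * (ni - xi) for xi, ni in zip(x, neighbor)]

x = [1.0, 2.0]            # a minority sample (toy 2-D features)
neighbor = [3.0, 4.0]     # one of its minority-class nearest neighbours
synthetic = smote_interpolate(x, neighbor, alpha=0.5)   # midpoint [2.0, 3.0]
```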

8.
Real-world classification data are often imbalanced. Traditional classifiers mostly favor the majority class and neglect the minority class, degrading classification performance. To address this, an adaptive synthetic sampling method for imbalanced data based on a variational Bayesian-optimized optimal Gaussian mixture model (VBoGMM) is proposed. VBoGMM automatically prunes down to the true number of Gaussian components, achieving an optimal distribution estimate for arbitrary data; based on the obtained distribution, the minority-class samples are adaptively and synthetically oversampled, and the Tomek-link criterion is used to clean the sampled data, yielding a relatively balanced dataset for subsequent classifier learning. Extensive validation and comparison experiments on multiple public imbalanced datasets show that the method balances the samples while preserving the spatial distribution of the majority and minority classes, and thus effectively improves the classification performance of traditional models on imbalanced datasets.

9.
Classification is a research focus in pattern recognition. Most classical classifiers assume by default that datasets are balanced, yet real-world datasets often exhibit class imbalance: the number of samples in the normal/majority class far exceeds that in the abnormal/minority class. Left untreated, this causes classifiers to neglect the minority class and favor the majority class, degrading results. To address imbalanced distributions, this paper proposes a synthetic sampling algorithm incorporating spectral clustering. Spectral clustering is first used to analyze the distribution of the minority-class samples of the imbalanced dataset; based on this distribution information, the minority class is oversampled to obtain relatively balanced samples for training the classification model. Extensive experiments on multiple imbalanced datasets show that the method effectively resolves the imbalance and improves the classifier's accuracy on minority-class samples.

10.
Feature selection is an important preprocessing step in machine learning and data mining, and feature selection for class-imbalanced data is an active research topic in machine learning and pattern recognition. Most traditional feature-selection-based classification algorithms pursue high accuracy and assume that the data carry no misclassification cost, or equal costs. In real applications, different misclassifications often incur different costs. To obtain the feature subset with minimal misclassification cost, this paper proposes a cost-sensitive feature selection algorithm based on sample-neighborhood preservation. Its core idea is to introduce sample neighborhoods into an existing cost-sensitive feature-selection framework. Experimental results on eight real datasets demonstrate the algorithm's superiority.

11.
An ensemble-based active learning algorithm for imbalanced-data classification   Cited by: 1 (self-citations: 0, other citations: 1)
At present, one of the main ways to handle class-imbalanced data is preprocessing: balance the data and then train with traditional methods. The main preprocessing methods are oversampling and undersampling, but each has its drawbacks. The Split-Boost Active Learning algorithm SBAL is proposed: the majority-class set is split into multiple subsets according to the imbalance ratio, each subset is merged with the minority-class set, a sub-classifier is trained on each with AdaBoost, and the sub-classifiers are then ensembled into an overall classifier; based on the Query-by-Committee (QBC) active learning algorithm, informative samples are actively selected for training, largely avoiding the drawbacks of adding or removing samples. Experiments show that the proposed algorithm achieves higher classification accuracy on imbalanced data.

12.
Class imbalance is among the most persistent complications that may confront the traditional supervised learning task in real-world applications. The problem occurs, in the binary case, when the number of instances in one class significantly outnumbers the number of instances in the other. This situation is a handicap when trying to identify the minority class, as learning algorithms are not usually adapted to such characteristics.

The approaches to dealing with imbalanced datasets fall into two major categories: data sampling and algorithmic modification. Cost-sensitive learning solutions, which incorporate both the data-level and algorithm-level approaches, assume higher misclassification costs for samples in the minority class and seek to minimize high-cost errors. Nevertheless, there is no exhaustive comparison of these models that could help determine the most appropriate one under different scenarios.

The main objective of this work is to analyze the performance of data-level proposals against algorithm-level proposals, focusing on cost-sensitive models, and versus a hybrid procedure that combines the two approaches. We show, by means of a statistical comparative analysis, that no single approach can be highlighted among the rest. This leads to a discussion of the data-intrinsic characteristics of the imbalanced classification problem, which helps open new paths toward improving current models, focusing mainly on class overlap and dataset shift in imbalanced classification.
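The cost-sensitive solutions discussed in this entry can be illustrated by the Bayes-optimal decision rule under asymmetric misclassification costs: predict the minority (positive) class whenever the expected cost of a false negative exceeds that of a false positive, which lowers the probability threshold from 0.5 to C_FP / (C_FP + C_FN). A minimal sketch:

```python
def cost_sensitive_predict(p_pos, c_fp, c_fn):
    # Predict positive when the expected cost of a false negative exceeds
    # that of a false positive: p * c_fn > (1 - p) * c_fp, i.e. the usual
    # 0.5 probability threshold drops to c_fp / (c_fp + c_fn).
    return 1 if p_pos > c_fp / (c_fp + c_fn) else 0

# With a false negative 9x costlier than a false positive, the threshold
# falls to 0.1, so even a low-confidence positive (p = 0.2) is flagged.
pred = cost_sensitive_predict(p_pos=0.2, c_fp=1.0, c_fn=9.0)   # -> 1
```

Under equal costs the same sample would be predicted negative, which is exactly the majority-class bias the cost-sensitive approaches aim to correct.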

13.
Class imbalance limits the performance of most learning algorithms, since they cannot cope with large differences between the number of samples in each class, resulting in low predictive accuracy over the minority class. In this respect, several papers have proposed algorithms aimed at achieving more balanced performance. However, balancing the recognition accuracies for each class very often harms global accuracy: in these cases the accuracy over the minority class increases while the accuracy over the majority class decreases. This paper proposes an approach to overcome this limitation: for each classification act, it chooses between the output of a classifier trained on the original skewed distribution and the output of a classifier trained with a learning method that addresses the curse of imbalanced data. The choice is driven by a parameter whose value maximizes, on a validation set, two objective functions: the global accuracy and the accuracies for each class. A series of experiments on ten public datasets with different proportions between the majority and minority classes shows that the proposed approach provides more balanced recognition accuracies than classifiers trained with traditional learning methods for imbalanced data, as well as larger global accuracy than classifiers trained on the original skewed distribution.

14.
In many remote-sensing projects, one is usually interested in a small number of the land-cover classes present in a study area, not in all the land-cover classes that make up the landscape. Previous studies in supervised classification of satellite images have tackled the specific class mapping problem by isolating the classes of interest, combining all other classes into one large class (usually called others), and developing a binary classifier to discriminate the class of interest from the others. Here, this is called the focused approach. Its strength is to decompose the original multi-class supervised classification problem into a binary one, focusing the process on the discrimination of the class of interest. Previous studies have shown that this method discriminates the classes of interest more accurately than the standard multi-class supervised approach. However, it may be susceptible to data imbalance in the training set, since the classes of interest are often a small part of it. As a result, the classification may be biased towards the largest classes and thus be sub-optimal for discriminating the classes of interest. This study presents a way to minimize the effects of data imbalance in specific class mapping using cost-sensitive learning, in which errors committed on the minority class are treated as costlier than errors committed on the majority class. Cost-sensitive approaches are typically implemented by weighting training data points according to their importance to the analysis. By changing the weight of individual data points, weight can be shifted from the larger classes to the smaller ones, balancing the dataset.
To illustrate the use of the cost-sensitive approach for mapping specific classes of interest, a series of experiments with a weighted support vector machine classifier and Landsat Thematic Mapper data was conducted to discriminate two types of mangrove forest (high mangrove and low mangrove) in the Saloum estuary, Senegal, a United Nations Educational, Scientific and Cultural Organisation World Heritage site. Results suggest an increase in overall classification accuracy with the cost-sensitive method (97.3%) over the standard multi-class approach (94.3%) and the focused approach (91.0%). In particular, the cost-sensitive method yielded higher sensitivity and specificity on the discrimination of the classes of interest than the standard multi-class and focused approaches.
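The instance weighting described above can be sketched with the common "balanced" heuristic, weight(c) = n_samples / (n_classes * n_c); the class names and counts below are hypothetical, not the study's actual training set:

```python
def balanced_class_weights(labels):
    # The common "balanced" heuristic: weight(c) = n / (n_classes * n_c),
    # which shifts weight from the large classes to the small ones.
    n = len(labels)
    classes = sorted(set(labels))
    return {c: n / (len(classes) * labels.count(c)) for c in classes}

# Hypothetical label counts for illustration only.
labels = ["mangrove_high"] * 10 + ["mangrove_low"] * 40 + ["others"] * 150
w = balanced_class_weights(labels)
# the rare class gets ~6.7x unit weight, the dominant class ~0.44x
```

Each training point then enters the classifier's loss with its class weight, so errors on the small classes of interest cost more than errors on the large others class.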

15.
Identifying temporal variations in mental workload level (MWL) is crucial for enhancing the safety of human–machine system operations, especially under cognitive overload or inattention of the human operator. This paper proposes a cost-sensitive majority-weighted minority oversampling strategy to address the imbalanced MWL data classification problem, considering both inter-class and intra-class imbalance. For the former, an imbalance ratio is defined to determine the number of synthetic samples in the minority class; the latter is addressed by assigning different weights to borderline samples in the minority class based on distance and density measures of the sample distribution. Furthermore, a multi-label classifier is designed based on an ensemble of binary classifiers. Analysis of 21 imbalanced UCI multi-class datasets showed that the proposed approach copes effectively with the imbalanced classification problem on several performance metrics, including the geometric mean (G-mean) and average accuracy (ACC). Moreover, the approach was applied to the EEG data of eight experimental participants subject to fluctuating levels of mental workload. Comparative results showed that the proposed method provides a competitive alternative to several existing imbalanced learning algorithms and significantly outperforms the basic/referential method that ignores the imbalanced nature of the dataset.

16.
Concept drift is a difficult problem in data-stream learning, and class imbalance in data streams also severely affects classification performance. For the joint problem of concept drift and class imbalance, an incremental weighted ensemble for imbalance learning (IWEIL) is proposed, introducing an online update mechanism into chunk-based ensemble methods and combining resampling with a forgetting mechanism. Built on an ensemble framework, the method uses a variable-size-window forgetting mechanism to assess each base classifier's performance on the most recent instances within the window and to compute its weight; as new instances arrive one by one, each base classifier in IWEIL and its weight are updated online. Meanwhile, an improved adaptive nearest-neighbor SMOTE generates new minority-class instances conforming to the new concept, addressing class imbalance in the stream. Experiments on synthetic and real datasets show that, compared with the DWMIL algorithm, IWEIL improves G-mean and recall by 5.77% and 6.28% respectively on the HyperPlane dataset, and by 3.25% and 6.47% on the Electricity dataset. Finally, IWEIL also performs well on Android application detection.

17.
Imbalanced-data classification is a hot topic in machine learning research. To address the low minority-class recognition rate of traditional classification algorithms on imbalanced data, this paper proposes an improved AdaBoost classification algorithm based on clustering. The algorithm first performs clustering-based undersampling: K-means clustering is applied to the majority-class samples and the cluster centroids are extracted; centroids equal in number to the minority-class samples are combined with all minority samples to form a new balanced training set. To avoid the drop in classification accuracy that results when a scarce minority class makes the training set too small, the synthetic minority oversampling technique is combined with the clustering-based undersampling. Then, borrowing the idea of cost-sensitive learning, the classification error function of the AdaBoost base classifier is modified to assign asymmetric misclassification losses to the different classes. Experimental results show that the algorithm makes the model's training samples highly representative and improves minority-class accuracy while maintaining overall classification performance.

18.
To address the limited accuracy and robustness of traditional sampling, where undersampling easily discards important sample information and oversampling easily introduces redundant information, this paper takes the imbalanced Pima-Indians dataset from the UCI public repository as an example and, jointly considering the relationships among the between-class distance, within-class distance, and imbalance degree of the positive and negative classes, proposes a new oversampling method based on sample characteristics. The original dataset is first partitioned into distance bands; an improved sample-characteristic-based adaptive variable-neighborhood SMOTE algorithm then synthesizes new samples among the minority-class samples of each band. The method is extended to five other imbalanced UCI datasets. Experimental verification with an SVM classifier shows that, on all six imbalanced datasets, the new oversampling SVM algorithm clearly improves classification accuracy for both the minority and majority classes compared with existing sampling methods, and is more robust.

19.
Most research on imbalanced datasets focuses on pure dataset reconstruction or pure cost-sensitive learning. Given that imbalanced class distributions and unequal misclassification costs often occur together, this paper proposes a cost-sensitive learning algorithm based on hybrid resampling that aims to minimize misclassification cost. The algorithm organically fuses the two types of solution: sample-space reconstruction first brings the two classes of the original dataset roughly into balance, and a cost-sensitive learning algorithm is then applied for classification, improving minority-class accuracy while effectively reducing total misclassification cost. Experimental results verify that the algorithm outperforms traditional algorithms on imbalanced-class problems.
