Similar Documents
20 similar documents found (search time: 31 ms)
1.
Feature selection chooses, from the full feature set, a subset whose features are strongly relevant to the class label while being minimally redundant with one another; this improves both the computational efficiency and the generalization ability of a classifier, and hence its classification accuracy. Mutual-information-based criteria for feature relevance and redundancy face three practical problems: (1) estimating variable probabilities is difficult, which in turn makes computing feature entropies difficult; (2) mutual information is biased toward features with many distinct values; (3) cumulative-sum measures of the redundancy between a candidate feature and the selected subset tend to fail in high dimensions. To address these problems, a feature evaluation criterion based on maximal normalized fuzzy mutual information is proposed: entropy, conditional entropy, and joint entropy are computed from fuzzy equivalence relations; the cumulative-sum measure is replaced by maximal joint mutual information; feature importance is evaluated by normalized joint mutual information; and a forward greedy search feature selection algorithm is built on this criterion. Experiments on UCI machine learning benchmark data sets show that the algorithm selects feature subsets that are effective for the classification task and markedly improves classification accuracy.
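For reference, the mutual-information relevance measure these criteria build on can be sketched for the discrete case as follows (a minimal pure-Python illustration of plain MI from co-occurrence counts, not the paper's fuzzy-equivalence-relation variant):

```python
from collections import Counter
from math import log2

def entropy(xs):
    """Shannon entropy (in bits) of a discrete sample, from empirical counts."""
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X; Y) = H(X) + H(Y) - H(X, Y), estimated from co-occurrence counts."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# A feature identical to the class carries 1 bit; an independent one carries 0.
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # → 1.0
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # → 0.0
```

Problem (1) above is visible even here: the empirical counts stand in for the true probabilities, and the estimate degrades as the number of distinct values grows relative to the sample size.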

2.
刘海燕  王超  牛军钰 《计算机工程》2012,38(14):135-137
To address the problem that traditional feature selection algorithms focus only on feature-class relevance or only on feature redundancy, a feature selection algorithm based on conditional mutual information is proposed. The algorithm clusters features using the basic idea of k-means and selects from each cluster the feature with the highest class relevance, thereby removing irrelevant and redundant features. Experiments on five data sets show that the algorithm's classification performance is superior to that of traditional feature selection algorithms.

3.
Feature selection removes irrelevant features from the original data set and selects a good feature subset, which avoids the curse of dimensionality and improves the performance of learning algorithms. The DCSF (dynamic change of selected features and class) algorithm considers only the dynamically changing information between the selected features and the class during selection, ignoring the interaction between candidate features and selected features. To address this, a Dynamic Relevance-based Feature Selection (DRFS) algorithm is proposed. It uses conditional mutual information to measure the conditional relevance between selected features and the class, and interaction information to measure the synergy between candidate and selected features, thereby selecting relevant features and removing redundant ones to obtain a good feature subset. Simulation experiments show that, compared with existing algorithms, the proposed algorithm effectively improves classification accuracy.

4.
Unsupervised Feature Selection Based on Mutual Information   Cited by: 5 (self-citations: 0, by others: 5)
In data analysis, feature selection can reduce feature redundancy, improve the interpretability of results, and uncover structure hidden in high-dimensional data. An unsupervised mutual-information-based feature selection method (UFS-MI) is proposed. UFS-MI evaluates feature importance with UmRMR (unsupervised minimum-redundancy maximum-relevance), a criterion that considers both relevance and redundancy: relevance is measured by the mutual information between a feature and the latent class variable, and redundancy by the mutual information between features. UFS-MI applies to both numerical and non-numerical features. Its effectiveness is proved theoretically, and experiments show that UFS-MI achieves performance comparable to or better than that of traditional feature selection methods.
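The minimum-redundancy maximum-relevance idea behind UmRMR can be sketched, for the supervised discrete case, as a greedy loop (a hedged illustration with hypothetical toy data; the paper's unsupervised variant replaces the observed class labels with a latent class variable):

```python
from collections import Counter
from math import log2

def entropy(xs):
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def mi(xs, ys):
    """I(X; Y) from empirical counts."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def mrmr(features, y, k):
    """Greedy mRMR: at each step pick the feature maximizing relevance
    I(f; y) minus mean redundancy with the already-selected features."""
    selected, candidates = [], list(features)
    while candidates and len(selected) < k:
        def score(name):
            rel = mi(features[name], y)
            red = (sum(mi(features[name], features[s]) for s in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy data: f1 is strongly relevant, f2 duplicates f1, f3 is weakly relevant
# but independent of f1 -- mRMR prefers f3 over the redundant duplicate.
y = [0, 0, 0, 1, 1, 1]
features = {"f1": [0, 0, 1, 1, 1, 1],
            "f2": [0, 0, 1, 1, 1, 1],
            "f3": [0, 1, 0, 1, 0, 1]}
print(mrmr(features, y, 2))  # → ['f1', 'f3']
```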

5.
Feature selection plays an important role in pattern recognition and machine learning. Information-theoretic feature selection aims to preserve the relevance between features and class labels while eliminating irrelevant and redundant features. Previous feature selection methods have offered various definitions of feature relevancy, but they ignore the relationship between candidate-feature relevancy and selected-feature relevancy. To fill this gap, we propose a feature selection method named Feature Selection based on Weighted Relevancy (WRFS). In WRFS, we introduce two weight coefficients that use mutual information and joint mutual information to balance the importance of the two kinds of feature relevancy terms. To evaluate the classification performance of our method, WRFS is compared to three competing feature selection methods and three state-of-the-art methods with two different classifiers on 18 benchmark data sets. The experimental results indicate that WRFS outperforms the other baselines in terms of classification accuracy, AUC and F1 score.

6.
A new feature selection algorithm for categorical data is proposed. A new definition of an evaluation function that directly evaluates feature selection on categorical data is given, and the formulas for the information content of categorical data, conditional mutual information, and inter-feature dependency are reconstructed. On this basis, a mutual-information-based Maximal Relevance and Less Redundancy (MRLR) feature selection algorithm is proposed. MRLR considers not only the relevance between features and class labels but also the redundancy among features. Extensive simulations show that, for categorical data, MRLR obtains feature subsets that are less redundant and more representative, with good efficiency and stability.

7.
Jin-Jie  Yun-Ze  Xiao-Ming   《Neurocomputing》2008,71(7-9):1656-1668
A parameterless feature ranking approach is presented for feature selection in the pattern classification task. Compared with Battiti's mutual information feature selection (MIFS) and Kwak and Choi's MIFS-U method, the proposed method derives an estimate of the conditional MI between the candidate feature fi and the output class C given the subset of selected features S, i.e. I(C; fi | S), without any preset parameter such as the β in MIFS and MIFS-U. This completely avoids the intractable problem of choosing an appropriate value for β to trade off relevance to the output classes against redundancy with the already-selected features. Furthermore, a modified greedy feature selection algorithm, the second-order MI feature selection approach (SOMIFS), is proposed. Experimental results on both synthetic and benchmark data sets demonstrate the superiority of SOMIFS.

8.
Common segmentation methods for Uyghur text produce many semantically abstract or even ambiguous word features, making it hard for learning algorithms to discover structure hidden in high-dimensional data. An unsupervised segmentation method, dme-TS, and an unsupervised feature selection method, UMRMR-UFS, are proposed. dme-TS automatically acquires word bi-grams and contextual information from a large raw corpus, and linearly combines the t-test difference between adjacent words, their mutual information, and the adjacency-pair entropy of bi-word contexts into a single statistic (dme) that evaluates the binding strength between words, thereby segmenting text into a feature set of semantically concrete, independent linguistic units. UMRMR-UFS evaluates the importance of each feature with an unsupervised minimum-redundancy maximum-relevance criterion (UMRMR) and moves the most important features into the feature subset in turn. Experiments show that dme-TS effectively controls the size of the original feature set and improves the quality of the feature terms, and learning algorithms perform best when text is represented by the output of UMRMR-UFS.

9.
To address the problem that many irrelevant and redundant features can degrade classifier performance, a feature selection algorithm based on approximate Markov blankets and dynamic mutual information is proposed. The algorithm uses mutual information as the measure of feature relevance, dynamically re-estimates mutual information on the as-yet unrecognized samples, and accurately removes redundant features via the approximate Markov blanket principle, yielding a feature subset far smaller than the original feature set. Simulations verify the algorithm's effectiveness. With a support vector machine as the classifier, experiments on public UCI data sets compare the algorithm with DMIFS and ReliefF. The results show that the selected subset, though far smaller than the original feature set, achieves classification results better than or close to those of the full feature set.

10.
张逸石  陈传波 《计算机科学》2011,38(12):200-205
An optimal feature selection algorithm based on minimal loss of joint mutual information is proposed. The algorithm first searches for an indiscernible feature subset of the full feature set via a dynamic incremental strategy, then filters out redundant features under the minimal-conditional-mutual-information principle while guaranteeing the smallest loss of joint mutual information at every step, yielding a near-optimal feature subset. To address the efficiency bottleneck that existing conditional-independence tests based on conditional mutual information face in high-dimensional feature spaces, a fast method for estimating conditional mutual information is given and used in the implementation. Classification experiments show that the proposed algorithm outperforms classical feature selection algorithms, and efficiency experiments show that the fast estimation method has a significant advantage in running time.

11.
Mutual information (MI) is used in feature selection to evaluate two key properties of optimal features: the relevance of a feature to the class variable and the redundancy among similar features. Conditional mutual information (CMI), i.e., the MI between the candidate feature and the class variable conditioned on the features already selected, is a natural extension of MI but has so far not been applied due to the complications of estimating it for high-dimensional distributions. We propose a nearest-neighbor estimate of CMI, appropriate for high-dimensional variables, and build an iterative scheme for sequential feature selection with a termination criterion, called CMINN. We show that CMINN is equivalent to MI-filter feature selection methods, such as mRMR and MaxiMin, in the presence of solely single-feature effects, and more appropriate for combined feature effects. We compare CMINN to mRMR and MaxiMin on simulated datasets involving combined effects and confirm the superiority of CMINN in selecting the correct features (indicated also by the termination criterion) and giving the best classification accuracy. Application to ten benchmark databases shows that CMINN obtains the same or higher classification accuracy than mRMR and MaxiMin with a smaller cardinality of the selected feature subset.
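For reference, the conditional mutual information I(C; f | S) that CMINN estimates can be written via joint entropies; the sketch below shows the discrete case on the classic XOR example of a combined feature effect (the paper's actual contribution is a nearest-neighbor estimator for continuous, high-dimensional variables, which this pure-Python illustration does not implement):

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (in bits) from empirical counts."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def conditional_mi(c, f, s):
    """I(C; f | S) = H(C,S) + H(f,S) - H(C,f,S) - H(S) for discrete samples."""
    return (entropy(list(zip(c, s))) + entropy(list(zip(f, s)))
            - entropy(list(zip(c, f, s))) - entropy(s))

# XOR: each input alone is independent of the class, but given the other
# input the candidate becomes fully informative -- the combined feature
# effect that plain single-feature MI filters miss.
f1 = [0, 0, 1, 1]; f2 = [0, 1, 0, 1]; c = [0, 1, 1, 0]  # c = f1 XOR f2
print(conditional_mi(c, f2, f1))  # → 1.0 bit, although I(c; f2) alone is 0
```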

12.
Feature selection is used to choose a subset of relevant features for effective classification of data. In high-dimensional data classification, the performance of a classifier often depends on the feature subset used. In this paper, we introduce a greedy feature selection method using mutual information. The method combines feature-feature and feature-class mutual information to find an optimal subset of features that minimizes redundancy and maximizes relevance. The effectiveness of the selected feature subset is evaluated using multiple classifiers on multiple datasets. On twelve real-life datasets of varied dimensionality and number of instances, our method performs significantly well in terms of both classification accuracy and execution time when compared with several competing feature selection techniques.

13.
The traditional Laplacian score feature selection algorithm applies only to single-label learning and cannot be used directly for multi-label learning. A Laplacian score feature selection algorithm for multi-label tasks is therefore proposed. First, the sample similarity matrix is rebuilt by considering the correlations in which samples are jointly associated, or jointly unassociated, across the whole label space. Then relevance and redundancy assessment between features is introduced into the Laplacian score, and a forward greedy search strategy evaluates in turn the joint contribution of each candidate feature with the already-selected features, as the measure of feature importance. Finally, experiments are conducted on five evaluation metrics and six multi-label data sets. The results show that, compared with MDDM (multi-label dimensionality reduction via dependence maximization), MLNB (multi-label feature selection based on naive Bayes), and PMU (multi-label feature selection based on multivariate mutual information), the proposed algorithm achieves the best classification performance and is significantly better in up to 65% of the comparisons.

14.
彭晓冰  朱玉全 《计算机科学》2018,45(12):182-186
Feature-weighted support vector machines do not consider correlations among features, so the resulting redundancy interferes with and degrades the final classification. To solve this problem, a feature weighting algorithm based on intra-feature correlation and mutual information is proposed and applied to support vector machines. The algorithm introduces the correlation coefficient between features as a measure of redundancy, computes a penalty factor from it, and adjusts the weights of the feature-weighted SVM accordingly, so that the weights reflect each feature's contribution to classification as faithfully as possible. Comparisons on several data sets and against several algorithms show that the new algorithm has better robustness and generalization ability.

15.
Feature selection plays an important role in the accuracy and generalization of classifiers. Current multi-label feature selection algorithms mainly apply the maximum-relevance minimum-redundancy criterion over the entire feature set without considering expert features, so they run slowly and have high complexity. In practice, experts can often decide the overall prediction directly from a few key features; exploiting this information can reduce the computation time of feature selection and even improve classifier performance. A conditional-mutual-information multi-label feature selection algorithm based on expert features is therefore proposed. Expert features are first combined with the remaining features; conditional mutual information is then used to order the features from strong to weak relevance to the label set; finally, the feature space is partitioned into subspaces to remove highly redundant features. Comparative experiments on seven multi-label data sets show that the algorithm has advantages over other feature selection algorithms, and statistical hypothesis testing and stability analysis further confirm its effectiveness and rationality.

16.
The curse of high dimensionality in text classification requires efficient feature selection (FS) methods to improve classification accuracy and reduce learning time. Existing filter-based FS methods evaluate each feature independently of related ones, which can lead to selecting many redundant features, especially in high-dimensional datasets, increasing learning time and reducing classification performance. Information-theory-based methods instead maximize a feature's dependency on the class variable while minimizing its redundancy with all selected features, which gradually becomes impractical as the feature space grows. To overcome the time complexity of information-theoretic methods while still addressing redundancy, we propose in this article a new feature selection method for text classification, termed correlation-based redundancy removal, which minimizes redundancy using subsets of features with close mutual information scores, without sequentially scanning the already-selected features. The idea is that there is no need to assess the redundancy between a dominant feature carrying high classification information and an irrelevant feature carrying low classification information (or vice versa), since the two are implicitly weakly correlated. Tested on seven datasets using both traditional classifiers (naive Bayes and support vector machines) and deep learning models (long short-term memory and convolutional neural networks), our method demonstrates strong performance, reducing redundancy and improving classification compared to ten competitive baselines.
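The score-bucket idea can be sketched as follows (a hedged illustration with hypothetical threshold parameters, not the paper's exact procedure: redundancy is only checked between features whose MI-with-class scores are close, so a dominant feature is never compared with a far weaker one):

```python
from collections import Counter
from math import log2

def entropy(xs):
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def mi(xs, ys):
    """I(X; Y) from empirical counts."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def select_non_redundant(features, y, score_gap=0.05, redundancy_cap=0.5):
    """Rank features by I(f; y); within runs of near-equal scores, drop a
    feature if it is too redundant with an already-kept one in the same run."""
    ranked = sorted(features, key=lambda f: mi(features[f], y), reverse=True)
    kept, run = [], []  # `run` holds kept features with a similar score
    prev_score = None
    for f in ranked:
        s = mi(features[f], y)
        if prev_score is None or prev_score - s > score_gap:
            run = []  # scores drifted apart: start a new comparison bucket
        if all(mi(features[f], features[g]) < redundancy_cap for g in run):
            kept.append(f)
            run.append(f)
        prev_score = s
    return kept

y = [0, 0, 0, 1, 1, 1]
features = {"a": [0, 0, 0, 1, 1, 1],   # perfect predictor
            "b": [0, 0, 0, 1, 1, 1],   # duplicate of a: same bucket, dropped
            "c": [0, 1, 0, 1, 0, 1]}   # weak, far-off score: never compared to a
print(select_non_redundant(features, y))  # → ['a', 'c']
```

Because the redundancy check runs only within a bucket, the quadratic scan over all previously selected features is avoided, which is the time-complexity point the abstract makes.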

17.
Feature selection is an effective technique for handling high-dimensional data. To address the shortcomings of traditional methods, a minimum-redundancy maximum-separation evaluation criterion combining the F-score with mutual information is proposed, which gives the selected features better classification and prediction ability. Two search strategies, binary cuckoo search and quadratic programming, are used to search for the optimal feature subset, and their accuracy and computational cost are analyzed and compared. Finally, experiments on UCI data sets demonstrate the effectiveness of the proposed approach.

18.
李欣倩  杨哲  任佳 《测控技术》2022,41(2):36-40
Based on the feature conditional-independence assumption of naive Bayes, an improved naive Bayes algorithm with dual feature selection based on mutual information and hierarchical clustering is proposed. Irrelevant features are first removed by mutual information; the remaining features are then hierarchically clustered by Euclidean distance, with the number of clusters determined by particle swarm optimization; finally, the feature in each cluster with the highest mutual information with the class is merged into the feature subset, and classification accuracy is obtained with naive Bayes. Experimental results show that the algorithm effectively reduces the correlation among features and improves classification performance.

19.
Classification problems are ubiquitous in modern industrial production. Using feature selection to screen useful information before classification can effectively improve both classification efficiency and accuracy. The minimal-redundancy maximal-relevance (mRMR) algorithm, which maximizes feature-class relevance while minimizing inter-feature redundancy, selects feature subsets effectively, but it suffers from large bias in feature importance in its middle and later stages and cannot directly output a feature subset. To address this, a feature selection algorithm combining the discernibility matrix of neighborhood rough sets with the mRMR principle is proposed. Following the maximal-relevance minimal-redundancy principle, feature importance is defined via neighborhood entropy and neighborhood mutual information, to better handle mixed data types. A dynamic discernibility set is defined on the discernibility matrix; its dynamic evolution effectively removes redundant attributes, narrows the search space, and refines the feature subset, and the discernibility matrix determines the iteration stopping condition. SVM, J48, KNN, and MLP are used as classifiers to evaluate the algorithm. Experiments on public data sets show that, compared with existing algorithms, the proposed algorithm improves average classification accuracy by about 2% and effectively shortens feature selection time on data sets with many features. The algorithm inherits the advantages of the discernibility matrix and mRMR and handles feature selection effectively.

20.
In this paper, we introduce a novel feature selection method based on a hybrid (filter-wrapper) model. The filter stage uses the mutual information criterion to select the candidate feature set without requiring a user-defined parameter. Subsequently, to reduce the computational cost and avoid the local maxima of wrapper search, the wrapper stage searches the space of a super-reduct selected from the candidate feature set, and finally chooses the feature set that best suits the learning algorithm. The efficiency and effectiveness of our technique are demonstrated through extensive comparison with other representative methods; our approach shows excellent performance, both in classification accuracy and in the number of features selected.
