Similar Documents
1.
Feature selection removes irrelevant features from the original dataset and selects a good feature subset, which avoids the curse of dimensionality and improves the performance of learning algorithms. The dynamic change between selected features and classes (DCSF) algorithm considers only the dynamically changing information between the selected features and the class during feature selection, while ignoring the interaction between candidate features and selected features. To address this problem, a dynamic relevance-based feature selection (DRFS) algorithm is proposed. The algorithm uses conditional mutual information to measure the conditional relevance between selected features and the class, and uses interaction information to measure the synergy between candidate and selected features, thereby selecting relevant features and removing redundant ones to obtain a high-quality feature subset. Simulation experiments show that, compared with existing algorithms, the proposed algorithm effectively improves the classification accuracy of feature selection.
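As an illustration only (not the authors' code), the sketch below shows one way such a criterion could be computed for discrete features: a candidate's score combines its mutual information with the class and the interaction information I(f; s; Y) = I(f; Y | s) - I(f; Y) accumulated over the already-selected features. All function names are illustrative.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def cond_mutual_info(x, y, z):
    """I(x; y | z) for discrete vectors, averaged over the values of z."""
    total = 0.0
    for v in np.unique(z):
        mask = (z == v)
        total += mask.mean() * mutual_info_score(x[mask], y[mask])
    return total

def drfs_like_score(f, selected, X, y):
    """Relevance of candidate f plus its synergy with the selected set (sketch)."""
    rel = mutual_info_score(X[:, f], y)
    synergy = sum(cond_mutual_info(X[:, f], y, X[:, s]) - rel for s in selected)
    return rel + synergy

def greedy_select(X, y, k):
    """Forward selection of k features using the score above."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda f: drfs_like_score(f, selected, X, y))
        selected.append(best)
        remaining.remove(best)
    return selected
```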

2.
唐小川  邱曦伟  罗亮 《计算机应用》2018,38(7):1857-1861
To address feature selection in text classification, a feature selection algorithm that considers interactions between features, Max-Interaction, is proposed. First, a feature selection model for text classification is built on information theory via joint mutual information (JMI). Second, the assumptions of existing feature selection algorithms are relaxed and feature selection is recast as an interaction optimization problem. Third, a max-min strategy is used to avoid overestimating high-order interactions. Finally, a text classification feature selection algorithm based on forward search and high-order interactions is proposed. Experimental results show that Max-Interaction improves the average classification accuracy by 5.5% over Interaction Weight Feature Selection (IWFS) and by 6% over the chi-square statistic (Chi-square), and its classification accuracy exceeds that of the compared methods in 93% of the experiments; Max-Interaction therefore exploits feature interactions effectively to improve text classification feature selection.
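A hedged sketch of the max-min idea described above, assuming discrete features: each candidate is scored by the minimum joint relevance I(f, s; Y) over the already-selected features s, so high-order interactions are not overestimated. The names and the forward-search loop are illustrative.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def joint_mutual_info(f, s, y):
    """I((f, s); y): the two discrete features are paired into one joint label."""
    joint = [f"{a}|{b}" for a, b in zip(f, s)]
    return mutual_info_score(joint, y)

def max_min_score(candidate, selected, X, y):
    """Minimum joint relevance of the candidate with each selected feature."""
    if not selected:
        return mutual_info_score(X[:, candidate], y)
    return min(joint_mutual_info(X[:, candidate], X[:, s], y) for s in selected)

def forward_search(X, y, k):
    """Pick k features, each maximizing the max-min criterion above."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda f: max_min_score(f, selected, X, y))
        selected.append(best)
        remaining.remove(best)
    return selected
```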

3.
Feature selection is a very important preprocessing step in machine learning, and neighborhood mutual information is an effective measure that directly handles both continuous and discrete features. However, feature selection methods based on neighborhood mutual information generally adopt a greedy heuristic strategy, so the quality of the resulting feature subset is hard to guarantee. Based on the idea of three-way decisions, a three-way neighborhood mutual information feature selection method (NMI-TWD) is proposed. It expands three potential candidate feature subsets while keeping them diverse from one another, so as to obtain higher-quality feature subsets. Ensemble learning over the three diverse feature subsets then builds a three-way collaborative decision model that further improves classification performance. Experiments on UCI data show that the new method yields better feature selection results and classification performance than other methods, demonstrating its effectiveness.

4.
This paper proposes a novel criterion for estimating the redundancy information of selected feature sets in multi-dimensional pattern classification. An appropriate feature selection process typically maximizes the relevance of features to each class and minimizes the redundancy between selected features. Unlike the relevance information, which can be measured by mutual information, the redundancy information is difficult to estimate because its dynamic range varies with the characteristics of features and classes. By utilizing a conceptual diagram of the relationship between candidate features, selected features, and class variables, this paper proposes a new criterion to accurately compute the amount of redundancy. Specifically, the redundancy term is estimated by the conditional mutual information between selected and candidate features given each class variable, which does not need the cumbersome normalization process required by conventional algorithms. The proposed algorithm is implemented in a speech/music discrimination system to evaluate classification performance. Experimental results obtained by varying the number of selected features verify that the proposed method achieves higher classification accuracy than conventional algorithms.

5.
Feature selection is an important filtering method for data analysis, pattern classification, data mining, and so on. Feature selection reduces the number of features by removing irrelevant and redundant data. In this paper, we propose a hybrid filter–wrapper feature subset selection algorithm called the maximum Spearman minimum covariance cuckoo search (MSMCCS). First, based on Spearman and covariance, a filter algorithm is proposed called maximum Spearman minimum covariance (MSMC). Second, three parameters are proposed in MSMC to adjust the weights of the correlation and redundancy, improve the relevance of feature subsets, and reduce the redundancy. Third, in the improved cuckoo search algorithm, a weighted combination strategy is used to select candidate feature subsets, a crossover mutation concept is used to adjust the candidate feature subsets, and finally, the filtered features are selected into optimal feature subsets. Therefore, the MSMCCS combines the efficiency of filters with the greater accuracy of wrappers. Experimental results on eight common data sets from the University of California at Irvine Machine Learning Repository showed that the MSMCCS algorithm had better classification accuracy than the seven wrapper methods, the one filter method, and the two hybrid methods. Furthermore, the proposed algorithm achieved preferable performance on the Wilcoxon signed-rank test and the sensitivity–specificity test.
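As a rough illustration of the MSMC filter term (not the paper's implementation, and assuming numerically encoded labels), a candidate could be rewarded for its absolute Spearman correlation with the class and penalized for its average covariance with the already-selected features; the weights alpha and beta stand in for the tunable parameters mentioned above.

```python
import numpy as np
from scipy.stats import spearmanr

def msmc_like_score(candidate, selected, X, y, alpha=1.0, beta=1.0):
    """Relevance (Spearman with the class) minus redundancy (covariance)."""
    rho, _ = spearmanr(X[:, candidate], y)
    relevance = abs(rho)
    if not selected:
        return alpha * relevance
    redundancy = np.mean([abs(np.cov(X[:, candidate], X[:, s])[0, 1])
                          for s in selected])
    return alpha * relevance - beta * redundancy
```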

6.
Machine learning tasks in open and dynamic environments face feature spaces that are both high-dimensional and dynamic. Existing online streaming feature selection algorithms generally consider only feature relevance and redundancy while ignoring feature interaction. Interacting features are those that appear irrelevant or only weakly relevant to the label when examined individually, yet become strongly relevant when combined with other features. On this basis, an online streaming feature selection algorithm based on neighborhood information interaction is proposed. The algorithm has two stages, online interactive feature selection and online redundant feature elimination: it directly computes the strength of interaction between a newly arriving feature and the entire selected feature subset, and it removes redundant features with a pairwise comparison mechanism. Experimental results on 10 datasets demonstrate the effectiveness of the proposed algorithm.

7.
Considering the characteristics of the data monitored by intrusion detection systems (IDS), this paper studies feature selection algorithms suited to IDS and proposes a classification-based, multi-pass fuzzy iterative feature selection algorithm. The algorithm comprises three steps: searching for feature subsets in the attribute space, evaluating each candidate subset, and classification; a corresponding search algorithm and evaluation function are designed. Redundant features are removed from the feature set through repeated iterations, yielding a feature set of high accuracy, and fuzzy logic is used to obtain value ranges matching the required accuracy. Because it operates purely on the data, it analyzes the data more objectively than algorithms that depend on domain knowledge. The proposed algorithm is tested experimentally, and the results are compared with feature visualizations produced by a visualization tool. The results show that the algorithm achieves good feature selection performance on IDS datasets.

8.
This paper presents a feature selection method for data classification, which combines a model-based variable selection technique and a fast two-stage subset selection algorithm. The relationship between a specified (and complete) set of candidate features and the class label is modeled using a non-linear full regression model which is linear-in-the-parameters. The performance of a sub-model, measured by the sum of squared errors (SSE), is used to score the informativeness of the subset of features involved in the sub-model. The two-stage subset selection algorithm approaches a solution sub-model with the SSE being locally minimized. The features involved in the solution sub-model are selected as inputs to support vector machines (SVMs) for classification. The memory requirement of this algorithm is independent of the number of training patterns. This property makes the method suitable for applications executed on mobile devices where physical RAM is very limited. An application was developed for activity recognition, which implements the proposed feature selection algorithm and an SVM training procedure. Experiments are carried out with the application running on a PDA for human activity recognition using accelerometer data. A comparison with an information gain-based feature selection method demonstrates the effectiveness and efficiency of the proposed algorithm.
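A simplified sketch of the two ingredients described above, with an ordinary linear least-squares model standing in for the paper's non-linear, linear-in-the-parameters model and numerically encoded class labels assumed: subsets are scored by the SSE of the fitted model, and the winning subset is handed to an SVM.

```python
import numpy as np
from sklearn.svm import SVC

def sse_of_subset(X, y, subset):
    """Sum of squared errors of a least-squares fit on the chosen columns."""
    A = np.column_stack([X[:, subset], np.ones(len(X))])   # features plus bias
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = y - A @ coef
    return float(residual @ residual)

def forward_select_by_sse(X, y, k):
    """Greedy forward selection minimizing the SSE score."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = min(remaining, key=lambda f: sse_of_subset(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# The chosen columns would then feed the classifier, e.g. SVC().fit(X[:, selected], y).
```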

9.
This correspondence presents a novel hybrid wrapper and filter feature selection algorithm for a classification problem using a memetic framework. It incorporates a filter ranking method in the traditional genetic algorithm to improve classification performance and accelerate the search in identifying the core feature subsets. Particularly, the method adds or deletes a feature from a candidate feature subset based on the univariate feature ranking information. This empirical study on commonly used data sets from the University of California, Irvine repository and microarray data sets shows that the proposed method outperforms existing methods in terms of classification accuracy, number of selected features, and computational efficiency. Furthermore, we investigate several major issues of memetic algorithms (MA) to identify a good balance between local search and genetic search so as to maximize search quality and efficiency in the hybrid filter and wrapper MA.

10.
Since feature spaces often contain latently correlated features, a feature selection algorithm combining spectral clustering and neighborhood mutual information is proposed: spectral clustering explores the correlations among features, while neighborhood mutual information seeks the maximally relevant feature subset. First, neighborhood mutual information removes features irrelevant to the label. Then spectral clustering groups the features into clusters so that features within a cluster are strongly correlated while features in different clusters are strongly dissimilar. Next, from each feature cluster, a subset of features strongly relevant to the class label yet weakly redundant with the rest of the cluster is selected based on neighborhood mutual information. Finally, all the selected subsets are combined into the final feature selection result. Experiments with two base classifiers show that the algorithm achieves high classification performance with a small, reasonable number of features.
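A simplified sketch of the cluster-then-pick idea, with mutual_info_classif standing in for neighborhood mutual information and absolute correlation used as the feature-feature similarity; constant features are assumed to have been removed beforehand, and all names are illustrative.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.feature_selection import mutual_info_classif

def cluster_and_pick(X, y, n_clusters):
    """Group features by spectral clustering, keep the most relevant one per group."""
    similarity = np.abs(np.corrcoef(X, rowvar=False))        # feature-by-feature similarity
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(similarity)
    relevance = mutual_info_classif(X, y)                     # stand-in for neighborhood MI
    picked = [int(np.argmax(np.where(labels == c, relevance, -np.inf)))
              for c in range(n_clusters)]
    return sorted(picked)
```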

11.
To obtain a better feature subset from text and eliminate interfering and redundant features, a hybrid feature optimization algorithm combining a filter method with a swarm intelligence method is proposed. First, the information gain of each feature term is computed and the better features form a pre-selected feature set; the sine cosine algorithm then searches this pre-selected set to obtain the refined feature set. To better balance global exploration and local exploitation in the sine cosine algorithm, an adaptive inertia weight is added; to evaluate feature subsets more precisely, a fitness function weighting the number of features and the accuracy is introduced, and a new position-update mechanism is proposed. Experimental results with KNN and Bayes classifiers show that, compared with other feature selection algorithms and with the unimproved algorithm, the proposed method improves classification accuracy.
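As shown below, such a weighted fitness could be sketched as a combination of cross-validated accuracy and the fraction of features dropped; the weight value and the KNN evaluator are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def subset_fitness(mask, X, y, w=0.9):
    """mask is a boolean vector over features; higher fitness is better."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=5).mean()
    reduction = 1.0 - mask.sum() / mask.size   # reward dropping features
    return w * acc + (1.0 - w) * reduction
```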

12.
A novel ensemble of classifiers for microarray data classification
Yuehui  Yaou   《Applied Soft Computing》2008,8(4):1664-1669
Microarray data are often extremely asymmetric in dimensionality, with thousands or even tens of thousands of genes and only a few hundred samples. Such extreme asymmetry between the dimensionality of genes and samples presents several challenges to conventional clustering and classification methods. In this paper, a novel ensemble method is proposed. First, in order to extract useful features and reduce dimensionality, different feature selection methods such as correlation analysis and the Fisher ratio are used to form different feature subsets. Then a pool of candidate base classifiers is generated to learn subsets resampled from the different feature subsets with the PSO (particle swarm optimization) algorithm. Finally, appropriate classifiers are selected to construct the classification committee using EDAs (estimation of distribution algorithms). Experiments show that the proposed method produces the best recognition rates on four benchmark databases.

13.
This correspondence presents a novel hybrid wrapper and filter feature selection algorithm for a classification problem using a memetic framework. It incorporates a filter ranking method in the traditional genetic algorithm to improve classification performance and accelerate the search in identifying the core feature subsets. Particularly, the method adds or deletes a feature from a candidate feature subset based on the univariate feature ranking information. This empirical study on commonly used data sets from the University of California, Irvine repository and microarray data sets shows that the proposed method outperforms existing methods in terms of classification accuracy, number of selected features, and computational efficiency. Furthermore, we investigate several major issues of memetic algorithm (MA) to identify a good balance between local search and genetic search so as to maximize search quality and efficiency in the hybrid filter and wrapper MA.

14.
李欣倩  杨哲  任佳 《测控技术》2022,41(2):36-40
Motivated by the conditional independence assumption of the naive Bayes algorithm, an improved naive Bayes algorithm based on dual feature selection with mutual information and hierarchical clustering is proposed. Irrelevant features are first removed with mutual information; the remaining features are then hierarchically clustered by Euclidean distance, with the number of clusters determined by particle swarm optimization; finally, the feature with the highest mutual information with the class in each cluster is merged into the feature subset, and classification accuracy is obtained with the naive Bayes algorithm. Experimental results show that the algorithm effectively reduces the correlation among features and improves classification performance.
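A minimal sketch of the described pipeline with assumed thresholds: features weakly relevant by mutual information are dropped, the rest are grouped by Euclidean (Ward) agglomerative clustering, the most relevant feature per cluster is kept, and naive Bayes is trained on the result. The cluster count is fixed here rather than tuned by particle swarm optimization as in the paper, and the names are illustrative.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_selection import mutual_info_classif
from sklearn.naive_bayes import GaussianNB

def mi_cluster_nb(X, y, n_clusters=10, mi_threshold=0.01):
    mi = mutual_info_classif(X, y)
    keep = np.where(mi > mi_threshold)[0]                       # drop weakly relevant features
    groups = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X[:, keep].T)
    chosen = [keep[np.argmax(np.where(groups == c, mi[keep], -np.inf))]
              for c in range(n_clusters)]                       # best feature per cluster
    model = GaussianNB().fit(X[:, chosen], y)
    return chosen, model
```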

15.
In this paper, we introduce a novel feature selection method based on a hybrid (filter-wrapper) model. We develop a feature selection method using the mutual information criterion that does not require a user-defined parameter for selecting the candidate feature set. Subsequently, to reduce the computational cost and avoid getting trapped in local maxima during the wrapper search, the wrapper approach searches the space of a superreduct selected from the candidate feature set. Finally, the wrapper approach selects the feature set that best suits the learning algorithm. The efficiency and effectiveness of our technique are demonstrated through extensive comparison with other representative methods. Our approach shows excellent performance, both in classification accuracy and in the number of features selected.

16.
The feature relevance measures commonly used in current feature selection methods can effectively assess the relevance between two features, but they treat features in isolation and ignore the influence of other features on that relevance. Considering the relationships among all features as a whole, this paper proposes using sparse representation coefficients to assess feature relevance. Unlike existing relevance measures, sparse representation coefficients reveal a feature's relevance to the target under the influence of all the other features, reflecting the mutual influence among features. To verify the effectiveness of sparse representation coefficients for assessing feature relevance, the classification ability of feature sets selected by ReliefF and by feature selection methods using, respectively, sparse representation coefficients, symmetric uncertainty, and the Pearson correlation coefficient as the relevance measure is compared on typical high-dimensional, small-sample data. Experimental results show that the feature sets selected by the proposed method have higher and more stable classification ability.
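One plausible reading of this idea, sketched with a Lasso as the sparse representation solver (an assumption, not necessarily the authors' formulation) and a numerically encoded target: the target is regressed on all features jointly with an L1 penalty, so each coefficient reflects a feature's relevance in the presence of every other feature.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

def sparse_coefficient_ranking(X, y, alpha=0.01):
    """Rank features by the magnitude of their sparse representation coefficients."""
    Xs = StandardScaler().fit_transform(X)                 # put features on the same scale
    coef = Lasso(alpha=alpha, max_iter=10000).fit(Xs, y).coef_
    return np.argsort(-np.abs(coef))                       # most relevant features first
```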

17.
A new local search based hybrid genetic algorithm for feature selection
This paper presents a new hybrid genetic algorithm (HGA) for feature selection (FS), called HGAFS. The vital aspect of this algorithm is the selection of a salient feature subset of reduced size. HGAFS incorporates a new local search operation that is devised and embedded in the HGA to fine-tune the search during the FS process. The local search technique works on the basis of the distinct and informative nature of input features, computed from their correlation information. The aim is to guide the search process so that newly generated offspring can be adjusted by the less correlated (distinct) features capturing both general and special characteristics of a given dataset. Thus, the proposed HGAFS achieves reduced redundancy of information among the selected features. On the other hand, HGAFS emphasizes selecting a subset of salient features of reduced size using a subset-size determination scheme. We have tested HGAFS on 11 real-world classification datasets with dimensions varying from 8 to 7129. The performance of HGAFS has been compared with the results of ten other well-known existing FS algorithms. It is found that HGAFS consistently performs better at selecting subsets of salient features, resulting in better classification accuracies.

18.
When class and feature distributions are imbalanced, the classification performance of the traditional information gain algorithm drops sharply. To address this shortcoming, an information gain-based text feature selection method (TDpIU) is proposed. First, features are selected per class to reduce the impact of dataset imbalance on feature selection. Second, information gain weights are computed using feature occurrence probabilities to reduce the interference of low-frequency terms. Finally, the dispersion of each feature's information gain across classes is analyzed to filter out relatively redundant features among high-frequency terms, and the selected features are further refined using information gain differences, yielding a uniform and precise feature subset. Comparative experiments show that the selected features have better classification performance.
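As an illustrative sketch (the names and the one-vs-rest treatment are assumptions), per-class information gain for a binary term-occurrence matrix can be computed as follows, so that imbalanced class and term distributions influence each class's score separately.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def info_gain(term, labels):
    """Information gain of a binary term vector with respect to a 0/1 label vector."""
    h_y = entropy(np.bincount(labels) / len(labels))
    h_cond = 0.0
    for v in (0, 1):
        mask = term == v
        if mask.any():
            h_cond += mask.mean() * entropy(np.bincount(labels[mask]) / mask.sum())
    return h_y - h_cond

def per_class_info_gain(X_binary, y):
    """Information gain of every term against each class, treated one-vs-rest."""
    classes = np.unique(y)
    return np.array([[info_gain(X_binary[:, j], (y == c).astype(int))
                      for j in range(X_binary.shape[1])] for c in classes])
```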

19.
杨柳  李云 《计算机应用》2021,41(12):3521-3526
K-anonymity algorithms use generalization, suppression, and similar operations to make data satisfy the K-anonymity condition; since they hide features while weighing both data privacy and classification performance, they can be viewed as a special form of feature selection, namely K-anonymous feature selection. K-anonymous feature selection methods combine the characteristics of K-anonymity and feature selection and use multiple evaluation criteria to choose a K-anonymous feature subset. Filter-based K-anonymous feature selection can hardly search all candidate feature subsets satisfying the K-anonymity condition and thus cannot guarantee that the resulting subset has optimal classification performance, while wrapper-based feature selection is computationally expensive. Therefore, combining the characteristics of filter-based feature ranking and wrapper-based feature selection, the forward search strategy of an existing method is improved and a hybrid K-anonymous feature selection algorithm is designed, which uses classification performance as the evaluation criterion to choose the K-anonymous feature subset with the best classification performance. Experiments on several public datasets show that the proposed algorithm can outperform existing algorithms in classification performance with less information loss.

20.
In classification problems, a large number of features are typically used to describe the problem's instances. However, not all of these features are useful for classification. Feature selection is usually an important pre-processing step to overcome the "curse of dimensionality". Feature selection aims to choose a small number of features that achieve similar or better classification performance than using all features. This paper presents a particle swarm optimization (PSO)-based multi-objective feature selection approach to evolving a set of non-dominated feature subsets with high classification performance. The proposed algorithm uses local search techniques to improve the Pareto front and is compared with a pure multi-objective PSO algorithm, three well-known evolutionary multi-objective algorithms, and a current state-of-the-art PSO-based multi-objective feature selection approach. Their performances are examined on 12 benchmark datasets. The experimental results show that in most cases, the proposed multi-objective algorithm generates better Pareto fronts than all other methods.
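A small illustrative sketch of the Pareto-front bookkeeping behind such a multi-objective search: a candidate subset stays on the front only if no other candidate is at least as good on both objectives (classification error and number of features) and strictly better on one.

```python
def dominates(a, b):
    """a and b are (error_rate, n_features) tuples; lower is better for both objectives."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(candidates):
    """Return the candidates forming the Pareto front."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# Example: non_dominated([(0.10, 12), (0.12, 5), (0.10, 20)]) keeps the first two subsets.
```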
