Similar Documents
1.
Normalized Mutual Information Feature Selection
A filter method of feature selection based on mutual information, called normalized mutual information feature selection (NMIFS), is presented. NMIFS is an enhancement over Battiti's MIFS, MIFS-U, and mRMR methods. The average normalized mutual information is proposed as a measure of redundancy among features. NMIFS outperformed MIFS, MIFS-U, and mRMR on several artificial and benchmark data sets without requiring a user-defined parameter. In addition, NMIFS is combined with a genetic algorithm to form a hybrid filter/wrapper method called GAMIFS. GAMIFS includes an initialization procedure and a mutation operator based on NMIFS that speed up the convergence of the genetic algorithm, and it overcomes the limitation of incremental search algorithms, which are unable to find dependencies between groups of features.
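A minimal sketch of a greedy NMIFS-style selection step of the kind described above, assuming discrete features so that entropies and mutual information can be estimated from frequency counts; the score (relevance minus the average normalized redundancy, with mutual information normalized by the smaller entropy) follows the abstract's description, while function names are illustrative rather than the authors' code.

import numpy as np
from sklearn.metrics import mutual_info_score

def entropy(x):
    # Shannon entropy of a discrete variable from empirical frequencies
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def normalized_mi(a, b):
    # Mutual information normalized by the smaller of the two entropies
    denom = min(entropy(a), entropy(b))
    return mutual_info_score(a, b) / denom if denom > 0 else 0.0

def nmifs_select(X, y, k):
    """Greedy selection: relevance I(f;C) minus mean normalized MI to the selected set."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        best, best_score = None, -np.inf
        for f in remaining:
            relevance = mutual_info_score(X[:, f], y)
            redundancy = (np.mean([normalized_mi(X[:, f], X[:, s]) for s in selected])
                          if selected else 0.0)
            if relevance - redundancy > best_score:
                best, best_score = f, relevance - redundancy
        selected.append(best)
        remaining.remove(best)
    return selected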

2.
Measures of relevance between features play an important role in classification and regression analysis. Mutual information has proved an effective measure for decision tree construction and feature selection. However, computing the relevance between numerical features with mutual information is limited by the difficulty of estimating probability density functions in high-dimensional spaces. In this work, we generalize Shannon's information entropy to neighborhood information entropy and propose a measure of neighborhood mutual information. The new measure is shown to be a natural extension of classical mutual information that reduces to the classical one when features are discrete; thus it can also be used to compute the relevance between discrete variables. In addition, the new measure introduces a parameter delta to control the granularity of the data analysis. Numerical experiments show that neighborhood mutual information produces nearly the same outputs as mutual information; unlike mutual information, however, no discretization is required when computing relevance with the proposed measure. We combine the proposed measure with four classes of evaluation strategies used for feature selection. Finally, the proposed algorithms are tested on several benchmark data sets. The results show that the neighborhood mutual information based algorithms outperform some classical ones.
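An illustrative sketch of the neighborhood-based quantities described above, assuming numeric features scaled to comparable ranges: the delta-neighborhood entropy is computed from the fraction of samples falling within distance delta of each sample, and the neighborhood mutual information is formed in the usual H(X)+H(Y)-H(X,Y) way. The function names and the Euclidean distance are assumptions, not the authors' implementation.

import numpy as np
from scipy.spatial.distance import cdist

def neighborhood_entropy(X, delta):
    """delta-neighborhood entropy: -(1/n) * sum_i log(|N_delta(x_i)| / n)."""
    X = np.asarray(X, dtype=float).reshape(len(X), -1)   # accept 1-D or 2-D input
    counts = (cdist(X, X) <= delta).sum(axis=1)          # neighborhood sizes (including the sample itself)
    return -np.mean(np.log(counts / len(X)))

def neighborhood_mutual_info(x, y, delta):
    """NMI(x; y) = NH(x) + NH(y) - NH([x, y]) under the same delta."""
    xy = np.column_stack([x, y])
    return (neighborhood_entropy(x, delta) + neighborhood_entropy(y, delta)
            - neighborhood_entropy(xy, delta))

# Example: relevance of one numeric feature to a numerically coded class variable
rng = np.random.default_rng(0)
f = rng.normal(size=200)
c = (f + 0.3 * rng.normal(size=200) > 0).astype(float)
print(neighborhood_mutual_info(f, c, delta=0.2))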

3.
Jin-Jie, Yun-Ze, Xiao-Ming. Neurocomputing, 2008, 71(7-9): 1656-1668
A parameterless feature ranking approach is presented for feature selection in the pattern classification task. Compared with Battiti's mutual information feature selection (MIFS) and Kwak and Choi's MIFS-U methods, the proposed method derives an estimate of the conditional MI between the candidate feature fi and the output class C given the subset of selected features S, i.e. I(C;fi|S), without any preset parameter such as the β in the MIFS and MIFS-U methods. The intractable problem of how to choose an appropriate value of β to trade off the relevance to the output classes against the redundancy with the already-selected features is thus avoided completely. Furthermore, a modified greedy feature selection algorithm called the second-order MI feature selection approach (SOMIFS) is proposed. Experimental results on both synthetic and benchmark data sets demonstrate the superiority of SOMIFS.
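As a rough illustration of parameter-free conditional-MI scoring of this kind (not the paper's exact estimator), the sketch below approximates I(C; fi | S) by averaging the pairwise conditional mutual information I(C; fi | fs) over the already-selected features fs, assuming discrete variables; the helper names are hypothetical.

import numpy as np
from sklearn.metrics import mutual_info_score

def conditional_mi(c, f, s):
    """I(C; f | s) for discrete variables: sum_v p(s=v) * I(C; f | s=v)."""
    values, counts = np.unique(s, return_counts=True)
    weights = counts / counts.sum()
    return sum(w * mutual_info_score(c[s == v], f[s == v])
               for v, w in zip(values, weights))

def score_candidate(X, y, fi, selected):
    """Approximate I(C; fi | S) by the mean of I(C; fi | fs) over selected features."""
    if not selected:
        return mutual_info_score(y, X[:, fi])
    return np.mean([conditional_mi(y, X[:, fi], X[:, s]) for s in selected])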

4.
A regularized mutual information feature selection method based on quadratic Renyi entropy is proposed. The method estimates mutual information efficiently and thus greatly reduces computational complexity. The regularized mutual information feature selection method is further combined with an embedded method to form a two-stage feature selection algorithm that can find a more discriminative feature subset. Experiments compare the efficiency and classification accuracy of the method with other mutual-information-based feature selection algorithms; the results show that the method effectively reduces computational complexity.
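For context, quadratic Renyi entropy H2(X) = -log ∫ p(x)² dx admits a well-known Parzen-window estimator that avoids explicit density estimation, which is presumably the kind of efficient estimate the abstract refers to. The sketch below is a generic information-theoretic-learning style estimator with a Gaussian kernel, not the authors' implementation; the kernel width sigma is an assumed tuning parameter.

import numpy as np

def quadratic_renyi_entropy(x, sigma=0.5):
    """Parzen estimate: H2(X) = -log( (1/N^2) * sum_ij G(x_i - x_j; 2*sigma^2) )."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    diff = x - x.T                                   # pairwise differences
    var = 2.0 * sigma ** 2                           # variance of the pairwise kernel
    kernel = np.exp(-diff ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    return -np.log(kernel.mean())

# Example: entropy estimate for a standard normal sample
rng = np.random.default_rng(0)
print(quadratic_renyi_entropy(rng.normal(size=500)))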

5.
To address the problem that large numbers of irrelevant and redundant features may degrade classifier performance, a feature selection algorithm based on an approximate Markov blanket and dynamic mutual information is proposed. The algorithm uses mutual information as the measure of feature relevance, estimates mutual information dynamically on the as-yet unrecognized samples, and applies the approximate Markov blanket principle to remove redundant features accurately, yielding a feature subset far smaller than the original feature set. Simulation experiments demonstrate the effectiveness of the algorithm. Using a support vector machine as the classifier, experiments were conducted on public UCI data sets and compared with the DMIFS and ReliefF algorithms. The results show that, with a feature subset far smaller than the original one, the selected features achieve classification results better than or close to those of the original feature set.
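A minimal sketch of an approximate-Markov-blanket redundancy test in the spirit described above (the paper's exact criterion may differ): a candidate feature is treated as redundant if some already-kept, more relevant feature is at least as strongly related to it as it is to the class. Discrete features and the use of plain mutual information are assumptions for illustration.

from sklearn.metrics import mutual_info_score

def remove_redundant(X, y, ranked):
    """ranked: feature indices sorted by decreasing I(f; C).
    Keep a feature unless a more relevant, already-kept feature forms an
    approximate Markov blanket for it."""
    kept = []
    for fj in ranked:
        rel_j = mutual_info_score(X[:, fj], y)
        blanketed = any(mutual_info_score(X[:, fi], X[:, fj]) >= rel_j for fi in kept)
        if not blanketed:
            kept.append(fj)
    return kept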

6.
牟琦, 毕孝儒, 厍向阳. 《计算机工程》2011, 37(14): 103-105
Irrelevant and redundant attributes in high-dimensional network data tend to slow down network intrusion detection by classification algorithms and lower the detection rate. To address this, a network intrusion feature selection method based on a genetic quantum-behaved particle swarm optimization (GQPSO) algorithm is proposed. The method combines the selection and mutation strategies of the genetic algorithm with QPSO to form the GQPSO algorithm, and uses the normalized mutual information between network data attributes as its fitness function to guide attribute reduction, achieving an optimized selection of the network intrusion feature subset. Simulation experiments on the KDD CUP 1999 data set show that, compared with the QPSO and PSO algorithms, the method reduces network data features more effectively and improves both the speed and the detection rate of network intrusion detection by classification algorithms.

7.
Feature selection removes irrelevant features from the original data set and selects a good feature subset, which avoids the curse of dimensionality and improves the performance of learning algorithms. The dynamic change of selected features with the class (DCSF) algorithm considers only the dynamically changing information between the selected features and the class during feature selection and ignores the interaction between candidate features and selected features. To address this, a dynamic-relevance-based feature selection (DRFS) algorithm is proposed. The algorithm uses conditional mutual information to measure the conditional relevance between selected features and the class, and uses interaction information to measure the synergy contributed by candidate and selected features, so that relevant features are selected and redundant ones are removed to obtain a good feature subset. Simulation experiments show that, compared with existing algorithms, the proposed algorithm effectively improves the classification accuracy of feature selection.

8.
Feature selection plays an important role in the classification accuracy and generalization performance of classifiers. Current multi-label feature selection algorithms mainly apply the maximum-relevance minimum-redundancy criterion over the entire feature set and do not consider expert features, so their running time and complexity are high. In practice, an expert can decide the overall prediction direction directly from a few key features; extracting and focusing on this information reduces the computation time of feature selection and may even improve classifier performance. Based on this idea, a multi-label feature selection algorithm using conditional mutual information with expert features is proposed. Expert features are first combined with the remaining features; conditional mutual information is then used to produce a feature sequence ordered from strong to weak relevance to the label set; finally, features with high redundancy are removed by partitioning subspaces. Comparative experiments on seven multi-label data sets show that the algorithm has advantages over other feature selection algorithms, and statistical hypothesis tests and stability analysis further confirm its effectiveness and soundness.

9.
This paper studies a new feature selection method for data classification that efficiently combines the discriminative capability of features with the ridge regression model. It first captures the global structure of the training data with linear discriminant analysis, which assists in identifying discriminative features. The ridge regression model is then employed to assess the feature representation and the discrimination information, yielding the representative coefficient matrix. The importance of each feature can be calculated from this coefficient matrix. Finally, the selected feature subset is applied to a linear support vector machine for data classification. To validate the efficiency, experiments are conducted on twenty benchmark data sets. The results show that the proposed approach performs much better than state-of-the-art feature selection algorithms on the classification evaluation indicator, and that it has competitive computational cost compared with existing feature selection algorithms.
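A compact sketch of the ridge-regression side of such a pipeline, assuming one-hot class indicators as regression targets and feature importance taken as the row norm of the coefficient matrix; the LDA-based global-structure step and the paper's exact weighting are omitted, and lam is an assumed regularization parameter.

import numpy as np

def ridge_feature_importance(X, y, lam=1.0):
    """W = (X^T X + lam*I)^{-1} X^T Y with one-hot Y; importance = row-wise L2 norm of W."""
    y = np.asarray(y)
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)       # one-hot indicator matrix
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)  # representative coefficient matrix
    return np.linalg.norm(W, axis=1)                          # one score per feature

# Usage idea: scores = ridge_feature_importance(X_train, y_train)
#             top_k = np.argsort(scores)[::-1][:k]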

10.
A Spatial Feature Selection Method Based on the Maximum Entropy Principle
Feature selection is widely used in pattern recognition, data mining, and related fields. When spatial data are involved, however, traditional feature selection methods do not account well for the spatial characteristics of the data, which degrades the quality of the selected features. Starting from the characteristics of spatial data, a feature selection method called MEFS (maximum entropy feature selection) is proposed. Based on the maximum entropy principle, MEFS applies mutual information and the Z-test in a two-step procedure for spatial feature selection: the first step selects spatial predicates, and the second step selects the relevant attribute set corresponding to each spatial predicate. Experiments compare MEFS with the RELIEF method, and an MEFS-based classifier with the decision tree algorithm ID3. The results show that MEFS not only saves feature extraction and classification time but also greatly improves classification quality.

11.
Multi-label feature selection is one of the main approaches to coping with the curse of dimensionality; it reduces feature dimensionality while improving learning efficiency and classification performance. To address the problems that current feature selection algorithms do not consider the relationships among labels and that the measurement range of information content is biased, an improved multi-label feature selection algorithm based on label relationships is proposed. Symmetric uncertainty is first introduced to normalize the information measures; normalized mutual information is then used as the relevance measure, from which label importance weights are defined and applied to the label-related terms in the dependency and redundancy measures. A feature scoring function is then proposed as the feature importance criterion, and the highest-scoring features are selected in turn to form the optimal feature subset. Experimental results show that, after extracting a more precise low-dimensional feature subset, the algorithm not only effectively improves the performance of multi-label learning algorithms for entity information mining but also improves the efficiency of multi-label learning algorithms based on discrete features.
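The normalization step mentioned above is commonly written as symmetric uncertainty, SU(X;Y) = 2·I(X;Y) / (H(X)+H(Y)), which maps the information measure into [0, 1]. A minimal sketch for discrete variables follows; the label-weighting and scoring function of the algorithm itself are not reproduced here.

import numpy as np
from sklearn.metrics import mutual_info_score

def entropy(x):
    # Shannon entropy of a discrete variable from empirical frequencies
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def symmetric_uncertainty(x, y):
    """SU(X;Y) = 2 * I(X;Y) / (H(X) + H(Y)), bounded in [0, 1]."""
    denom = entropy(x) + entropy(y)
    return 2.0 * mutual_info_score(x, y) / denom if denom > 0 else 0.0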

12.
Feature Selection Based on Information-Theoretic Criteria for Classification Analysis
Feature selection aims to reduce the dimensionality of patterns for classificatory analysis by selecting the most informative features rather than irrelevant and/or redundant ones. In this study, two novel information-theoretic measures for feature ranking are presented: one is an improved formula for estimating the conditional mutual information between the candidate feature fi and the target class C given the subset of selected features S, i.e., I(C;fi|S), under the assumption that the information of features is distributed uniformly; the other is a mutual information (MI) based constructive criterion that can capture both irrelevant and redundant input features under arbitrary distributions of feature information. With these two measures, two new feature selection algorithms are proposed, called the quadratic MI-based feature selection (QMIFS) approach and the MI-based constructive criterion (MICC) approach, in which no parameter such as the β in Battiti's MIFS and Kwak and Choi's MIFS-U methods needs to be preset. Thus, the intractable problem of how to choose an appropriate value for β to trade off the relevance to the target classes against the redundancy with the already-selected features is avoided completely. Experimental results demonstrate the good performance of QMIFS and MICC on both synthetic and benchmark data sets.

13.
Structural features of information networks, as a major factor influencing the formation and evolution of relationships, play an important role in relationship classification and inference. Existing relationship classification and inference algorithms do not handle network structural features satisfactorily. To address this, drawing on the definition of mutual information, a relationship classification and inference algorithm based on mutual information feature selection is proposed. Similarity indices such as CN, AA, and Katz are defined to fully extract both local and global (semi-global) network structural features, and maximum likelihood estimation based on a density-ratio function is used to compute the approximate mutual information between features. This density-ratio formulation effectively handles the search for a globally optimal solution in feature selection and screens out more discriminative features. Experiments on several real information network data sets show that, for both classical classification algorithms and recently proposed learning-based relationship classification algorithms, the versions with the mutual information feature selection step outperform the baselines on Accuracy, AUC, Precision, and other evaluation metrics.

14.
Feature selection plays an important role in data mining and pattern recognition, especially for large-scale data. Over the past years, various metrics have been proposed to measure the relevance between different features. Since mutual information is nonlinear and can effectively represent the dependencies of features, it is one of the most widely used measures in feature selection. For this reason, many promising feature selection algorithms based on mutual information with different parameters have been developed. In this paper, a general criterion function for mutual information in feature selectors is first introduced, which unifies most of the information measures used in previous algorithms. In traditional selectors, mutual information is estimated on the whole sampling space; this, however, cannot exactly represent the relevance among features. To cope with this problem, the second contribution of this paper is a new feature selection algorithm based on dynamic mutual information, which is estimated only on unlabeled instances. To verify the effectiveness of the method, experiments are carried out on sixteen UCI data sets using four typical classifiers. The experimental results indicate that the algorithm achieves better results than the other methods in most cases.
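A rough sketch of the dynamic-mutual-information idea as described above, with relevance re-estimated only on the instances that remain unrecognized after each selection. Discrete feature values are assumed, and the rule used here for marking instances as recognized (dropping feature-value groups that are pure in the class label) is an assumption for illustration, not necessarily the paper's rule.

import numpy as np
from sklearn.metrics import mutual_info_score

def dynamic_mi_select(X, y, k):
    """Greedy selection where MI is re-estimated on the still-unrecognized instances."""
    remaining_idx = np.arange(len(y))               # instances not yet recognized
    selected, candidates = [], list(range(X.shape[1]))
    while len(selected) < k and candidates and len(remaining_idx) > 0:
        scores = {f: mutual_info_score(X[remaining_idx, f], y[remaining_idx])
                  for f in candidates}
        best = max(scores, key=scores.get)
        selected.append(best)
        candidates.remove(best)
        # Drop instances whose value of the chosen feature already determines the class
        keep = []
        for v in np.unique(X[remaining_idx, best]):
            grp = remaining_idx[X[remaining_idx, best] == v]
            if len(np.unique(y[grp])) > 1:          # class still ambiguous in this group
                keep.extend(grp.tolist())
        remaining_idx = np.array(keep, dtype=int)
    return selected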

15.
To address the curse of dimensionality and overfitting that arise in feature selection for high-dimensional, small-sample data, a hybrid filter/wrapper feature selection method (ReFS-AGA) is proposed. The method combines the ReliefF algorithm with normalized mutual information to evaluate feature relevance and quickly screen important features. It then applies an improved adaptive genetic algorithm that introduces an optimal-preservation strategy to balance feature diversity; with the objectives of minimizing the number of features and maximizing classification accuracy, a new evaluation function is designed that uses the feature count as a regulating term, so that the optimal feature subset is obtained efficiently during iterative evolution. The reduced feature subsets were classified with different classification algorithms on gene expression data. Experimental results show that the method effectively eliminates irrelevant features and improves the efficiency of feature selection; compared with the ReliefF algorithm and the two-stage feature selection algorithm mRMR-GA, it achieves the smallest feature subset dimension while raising the average classification accuracy by 11.18 and 4.04 percentage points, respectively.
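One plausible form of an evaluation function that rewards accuracy while penalizing subset size, as the abstract describes; the actual function and weighting used in ReFS-AGA are not given here, and alpha is a hypothetical trade-off parameter.

def fitness(accuracy, n_selected, n_total, alpha=0.9):
    """Maximize accuracy, minimize feature count; alpha trades the two objectives off."""
    return alpha * accuracy + (1.0 - alpha) * (1.0 - n_selected / n_total)

# Example: a 92%-accurate subset using 20 of 2000 genes
print(fitness(0.92, 20, 2000))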

16.
Building on linear mutual information feature extraction methods, a fast and efficient nonlinear feature extraction method is proposed by studying the linear invariance of the mutual information gradient in kernel space. The method uses a fast quadratic-entropy algorithm for mutual information and a gradient-ascent search strategy to extract discriminative nonlinear higher-order statistics, and it avoids the eigenvalue decomposition required by traditional nonlinear feature extraction, effectively reducing the amount of computation. Projection and classification experiments on UCT data show that the method clearly outperforms traditional algorithms in both the separability of the projection space and algorithmic time complexity.

17.
To overcome the shortcomings of the traditional clonal selection algorithm, a new clonal selection algorithm based on spherical crossover is proposed. In each iteration, the mutation probability of every antibody is computed dynamically, the antibody population is dynamically divided into a memory unit and a general antibody unit according to antibody affinity, and the population is adjusted through spherical crossover, which accelerates the global search of the algorithm. Examples verify the effectiveness and feasibility of the proposed algorithm.

18.
With the advent of technology in various scientific fields, high-dimensional data are becoming abundant. A general approach to tackling the resulting challenges is to reduce data dimensionality through feature selection. Traditional feature selection approaches concentrate on selecting relevant features and discarding irrelevant or redundant ones, but most of them neglect feature interactions. In addition, some datasets have imbalanced classes, which may result in biases towards the majority class. The main goal of this paper is to propose a novel feature selection method based on interaction information (II) to provide higher-level interaction analysis and improve the search procedure in the feature space. To this end, an evolutionary feature subset selection algorithm based on interaction information is proposed, which consists of three stages. In the first stage, candidate features and candidate feature pairs are identified using traditional feature weighting approaches such as symmetric uncertainty (SU) and bivariate interaction information. In the second stage, candidate feature subsets are formed and evaluated using multivariate interaction information. Finally, the best candidate feature subsets are selected using dominant/dominated relationships. The proposed algorithm is compared with other feature selection algorithms, including mRMR, WJMI, IWFS, IGFS, DCSF, K_OFSD, WFLNS, Information Gain, and ReliefF, in terms of the number of selected features, classification accuracy, F-measure, and algorithm stability using three different classifiers, namely KNN, NB, and CART. The results confirm the improvement in classification accuracy and the robustness of the proposed method in comparison with the other approaches.
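The bivariate interaction information used in the first stage can be written as II(X; Y; C) = I(X, Y; C) − I(X; C) − I(Y; C), which is positive when two features are synergistic with respect to the class and negative when they are redundant. A minimal discrete-variable sketch follows; the multivariate extension and the dominance-based selection are omitted, and the helper names are illustrative.

import numpy as np
from sklearn.metrics import mutual_info_score

def joint_codes(x, y):
    # Encode each (x, y) pair as a single discrete symbol
    _, codes = np.unique(np.column_stack([x, y]), axis=0, return_inverse=True)
    return codes.ravel()

def interaction_information(x, y, c):
    """II(X;Y;C) = I(X,Y;C) - I(X;C) - I(Y;C) for discrete variables."""
    return (mutual_info_score(joint_codes(x, y), c)
            - mutual_info_score(x, c)
            - mutual_info_score(y, c))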

19.
Feature selection is an important stage in email filtering: the quality of the features affects not only classification accuracy but also the cost of classifier training and classification. The effects of the commonly used CHI selection, mutual information (MI), information gain (IG), and SVM feature selection algorithms in spam filtering are compared. To address the drawback that these methods only rank features without removing redundancy among them, a hybrid email feature selection method is proposed that removes redundancy using the conditional probabilities between feature terms and their class discrimination ability. Experimental results show that the method works well and improves email classification accuracy.

20.
This paper proposes a novel criterion for estimating the redundancy information of selected feature sets in multi-dimensional pattern classification. An appropriate feature selection process typically maximizes the relevancy of features to each class and minimizes the redundancy between selected features. Unlike the relevancy information, which can be measured by mutual information, the redundancy information is difficult to estimate because its dynamic range varies with the characteristics of the features and classes. By utilizing the conceptual diagram of the relationship between candidate features, selected features, and class variables, this paper proposes a new criterion to accurately compute the amount of redundancy. Specifically, the redundancy term is estimated by the conditional mutual information between selected and candidate features given each class variable, which does not require the cumbersome normalization process of the conventional algorithm. The proposed algorithm is implemented in a speech/music discrimination system to evaluate classification performance. Experimental results obtained by varying the number of selected features verify that the proposed method achieves higher classification accuracy than conventional algorithms.
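The redundancy term described above, conditional mutual information between a selected feature and a candidate feature given the class, can be estimated for discrete variables by conditioning on each class value. The sketch below is a generic estimator of that quantity, not the paper's implementation.

import numpy as np
from sklearn.metrics import mutual_info_score

def redundancy_given_class(f_selected, f_candidate, c):
    """I(f_selected; f_candidate | C) = sum_k p(C=k) * I(f_selected; f_candidate | C=k)."""
    classes, counts = np.unique(c, return_counts=True)
    weights = counts / counts.sum()
    return sum(w * mutual_info_score(f_selected[c == k], f_candidate[c == k])
               for k, w in zip(classes, weights))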
