Similar Documents
 Found 18 similar documents (search time: 156 ms)
1.
A High-Performance Feature Selection Method Based on Class-Discriminating Capability   (Cited by 15; self-citations: 0; other citations: 15)
Feature selection plays an important role in text classification. Feature selection methods such as document frequency (DF), information gain (IG), and mutual information (MI) are widely used in text classification. Previous experimental results show that IG is one of the most effective feature selection algorithms, DF is slightly worse, and MI is comparatively poor. To date, the performance of feature selection functions for text classification has been evaluated only experimentally, i.e., entirely empirically. Since feature selection means choosing the subset of features with the strongest class-discriminating capability, this paper states two basic constraints that a feature selection function must satisfy and proposes a general method for constructing high-performance feature selection functions. Following this method, a new feature selection function, KG (knowledge gain), is constructed. Analysis shows that both IG and KG fully satisfy the construction method. Experiments on three corpora, Reuters-21578, OHSUMED, and News Group, show that IG and KG perform best, and on two of the corpora KG even outperforms IG. This validates the effectiveness of the proposed construction method and also provides a theoretical criterion for evaluating high-performance feature selection algorithms.
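The IG score the abstract refers to can be made concrete with a toy sketch (the KG function itself is not specified in this abstract, so only the standard IG formula is shown; the corpus and names below are illustrative):

```python
import math
from collections import Counter

def information_gain(docs, labels, term):
    """IG(t) = H(C) - [P(t) H(C|t) + P(not t) H(C|not t)]."""
    def entropy(lbls):
        n = len(lbls)
        if n == 0:
            return 0.0
        return -sum((c / n) * math.log2(c / n) for c in Counter(lbls).values())
    with_t = [l for d, l in zip(docs, labels) if term in d]
    without_t = [l for d, l in zip(docs, labels) if term not in d]
    n = len(labels)
    return (entropy(labels)
            - (len(with_t) / n) * entropy(with_t)
            - (len(without_t) / n) * entropy(without_t))

# toy corpus: each document is a set of terms
docs = [{"ball", "goal"}, {"ball", "team"}, {"stock", "market"}, {"stock", "price"}]
labels = ["sport", "sport", "finance", "finance"]
print(information_gain(docs, labels, "ball"))    # 1.0: "ball" separates the classes perfectly
print(information_gain(docs, labels, "market"))  # lower: "market" occurs in only one document
```

A term that perfectly predicts the class removes all class entropy, which is why "ball" attains the maximum score here.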

2.
Feature selection plays an important role in text classification. Feature selection methods such as document frequency (DF), information gain (IG), and mutual information (MI) are widely used in text classification. Previous experimental results show that IG, which is grounded in Shannon's information theory, is one of the most effective feature selection algorithms. Based instead on rough set theory, this paper proposes a new feature selection method, the KG algorithm. Following the rough-set view that knowledge is the ability to classify objects, the method quantifies knowledge, introduces the concept of knowledge gain, and derives a feature selection method based on it. Classification experiments on two common corpora, OHSUMED and NewsGroup, show that KG outperforms IG, especially when the feature space is reduced to low dimensionality, indicating that the KG algorithm performs well.

3.
An Improved Feature Selection Method for Text Classification   (Cited by 1; self-citations: 0; other citations: 1)
The high dimensionality of the feature space is one of the main obstacles in text classification, and feature selection is an effective dimensionality-reduction technique. Existing feature selection functions include document frequency (DF), information gain (IG), and mutual information (MI). Based on the basic constraints on features and the design steps for high-performance feature selection methods, an improved feature selection method, SIG, is proposed. While preserving classification performance, it increases the preference given to medium- and low-frequency features. Experiments on the Reuters-21578 corpus show that the method achieves good classification results while making more effective use of medium- and low-frequency features with strong discriminating power.

4.
A Comparison of Several Typical Feature Selection Methods for Chinese Web Page Classification   (Cited by 31; self-citations: 2; other citations: 31)
The CHI, IG, DF, and MI feature selection methods are compared on Chinese web pages. The main experimental findings are: (1) CHI, IG, and DF clearly outperform MI; (2) CHI, IG, and DF perform roughly equally, and all can filter out more than 85% of the feature terms; (3) DF has the advantages of simplicity and high quality, and can be used in place of CHI and IG; (4) the results of evaluating feature selection methods on ordinary English text and on Chinese web pages are consistent.
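Two of the compared scores are easy to illustrate. Below is a minimal sketch of DF and the χ² (CHI) statistic for a single term and class (toy corpus; the function names are ours, not the paper's):

```python
def df_score(docs, term):
    """Document frequency: the number of documents containing the term."""
    return sum(1 for d in docs if term in d)

def chi2_score(docs, labels, term, cls):
    """Chi-square statistic between term presence and class membership,
    computed from the 2x2 contingency table (a, b, c, d)."""
    n = len(docs)
    a = sum(1 for d, l in zip(docs, labels) if term in d and l == cls)      # term present, in class
    b = sum(1 for d, l in zip(docs, labels) if term in d and l != cls)      # term present, not in class
    c = sum(1 for d, l in zip(docs, labels) if term not in d and l == cls)  # term absent, in class
    d_ = n - a - b - c                                                      # term absent, not in class
    denom = (a + c) * (b + d_) * (a + b) * (c + d_)
    return n * (a * d_ - c * b) ** 2 / denom if denom else 0.0

docs = [{"ball"}, {"ball", "team"}, {"stock"}, {"stock", "price"}]
labels = ["sport", "sport", "finance", "finance"]
print(df_score(docs, "ball"))                     # 2
print(chi2_score(docs, labels, "ball", "sport"))  # 4.0: perfectly associated with "sport"
```

DF needs no class labels at all, which is the source of the simplicity advantage noted in finding (3).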

5.
A Comparison of Feature Selection Methods for Chinese Text Classification   (Cited by 1; self-citations: 0; other citations: 1)
In automatic text classification systems, feature selection is an effective dimensionality-reduction technique. The feature selection methods used in Chinese text classification were tested one by one experimentally, with the aim of identifying the better ones. The experiments show that, among all methods tested, the CHI statistic achieved the best classification performance, followed by information gain (IG); cross entropy (CE) and weight of evidence for text (WE) also achieved good results, while mutual information (MI) performed worse.

6.
This paper studies seven feature selection methods: document frequency (DF), information gain (IG), mutual information (MI), the χ² statistic (CHI), expected cross entropy, odds ratio, and weight of evidence for text. Noting that DF over-relies on high-frequency terms while MI, IG, and CHI over-rely on low-frequency terms, two combined feature selection methods, DF-MI and DF-IG, were tested, and a new feature selection method, DFR, was proposed based on the characteristics of DF. The combined methods and DFR were tested with a KNN classifier. The results show that DFR clearly improves classification over DF-MI and DF-IG, and that the combined methods greatly improve classifier performance over the individual methods.

8.
A Comparative Study of Feature Extraction Methods in Chinese Text Classification   (Cited by 99; self-citations: 9; other citations: 99)
This paper compares the effect of feature selection methods on Chinese text classification, examining four methods: document frequency (DF), information gain (IG), mutual information (MI), and the χ² statistic (CHI). Two classifiers, support vector machines (SVM) and KNN, are used to assess the effectiveness of the different extraction methods. The results show that feature extraction methods that perform well in English text classification (IG, MI, and CHI) are not suitable for Chinese text classification without modification. The paper analyzes the causes of this difference theoretically and discusses possible corrections, including using a very large training corpus and using combined feature extraction methods. Finally, the effectiveness of the combined methods is verified experimentally.

9.
An important problem in Chinese sentiment analysis is sentiment-polarity classification, and sentiment feature selection is the prerequisite and foundation of machine-learning-based polarity classification; its role is to reduce the dimensionality of the feature set by removing irrelevant or redundant features. A hybrid sentiment feature selection method combining the Lasso algorithm with filter-based feature selection is proposed: the Lasso penalized-regression algorithm first screens the original feature set to obtain a sentiment-classification feature subset with low redundancy; filter methods such as CHI, MI, and IG are then applied to the subset to weight the dependence between candidate feature terms and text categories, and candidate feature terms with low relevance are removed. Finally, the proposed method is compared with DF, MI, IG, and CHI on an SVM classifier with a Gaussian kernel across different numbers of feature terms. Experiments on a microblog short-text corpus show that the algorithm is both effective and efficient; when the dimensionality of the feature subset is smaller than the number of samples, the hybrid method improves to some extent on the feature selection results of DF, MI, IG, and CHI; and a comparison of accuracy and recall shows that Lasso-MI is more effective than MI and the other filter methods.
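The filter stage of the described hybrid can be sketched as a pointwise-MI ranking over a pre-screened candidate list (a sketch only: the add-one smoothing and the function names are our assumptions, and the Lasso screening step is not reproduced here):

```python
import math

def mi_score(docs, labels, term, cls):
    """Pointwise MI(t, c) ~ log P(t, c) / (P(t) P(c)), add-one smoothed
    so unseen (term, class) pairs get a finite negative score."""
    n = len(docs)
    n_tc = sum(1 for d, l in zip(docs, labels) if term in d and l == cls)
    n_t = sum(1 for d in docs if term in d)
    n_c = labels.count(cls)
    return math.log2((n_tc * n + 1) / (n_t * n_c + 1))

def mi_filter(docs, labels, candidates, top_n):
    """Rank the pre-screened candidate terms by their best per-class MI
    and keep the top_n terms."""
    classes = set(labels)
    best = {t: max(mi_score(docs, labels, t, c) for c in classes) for t in candidates}
    return sorted(best, key=best.get, reverse=True)[:top_n]

docs = [{"ball"}, {"ball", "team"}, {"stock"}, {"stock", "price"}]
labels = ["sport", "sport", "finance", "finance"]
print(mi_filter(docs, labels, ["ball", "team", "stock", "price"], 2))  # ['ball', 'stock']
```

In the full hybrid, `candidates` would be the subset surviving the Lasso screening rather than a hand-written list.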

10.
A Bayes-Inference-Based Feature Selection Method for Binary Text Classification   (Cited by 7; self-citations: 0; other citations: 7)
For the feature selection problem in binary text classification, this paper proposes an evaluation-function algorithm based on Bayesian inference to replace the commonly used algorithms whose evaluation functions are IG or MI. It also proposes a quantifiable way to determine the feature selection dimensionality by expressing confidence as the cumulative contribution rate of the evaluation-function values. Comparative experiments show that the new method is simple, efficient, and practical; the algorithm is a valuable reference not only for text classification but also for feature selection in other binary classification problems.
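The cumulative-contribution-rate idea for fixing the number of selected features can be sketched as follows (a minimal illustration of that one idea, not the paper's Bayesian evaluation function):

```python
def select_dim(scores, confidence=0.9):
    """Return the smallest N such that the top-N evaluation scores
    account for `confidence` of the total score mass."""
    vals = sorted(scores, reverse=True)
    total = sum(vals)
    acc = 0.0
    for i, v in enumerate(vals, 1):
        acc += v
        if acc / total >= confidence:
            return i
    return len(vals)

# the two largest scores carry 8/10 = 0.8 of the mass, so N = 2 suffices
print(select_dim([5, 3, 1, 0.5, 0.5], confidence=0.8))  # 2
```

This makes the selected dimensionality a function of a single confidence threshold instead of a hand-tuned feature count.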

11.
An Optimized k-NN Text Classification Algorithm   (Cited by 1; self-citations: 0; other citations: 1)
k-NN is one of the classic text classification algorithms and is especially strong at handling concept drift, but its slow running speed is a serious drawback, so it usually relies on feature selection for dimensionality reduction in order to avoid the curse of dimensionality and improve efficiency. Feature selection, however, causes information loss and other problems that hurt the overall performance of the classification system. Starting from the sparsity of text vectors, this paper makes several optimizations to the traditional k-NN algorithm. The optimized algorithm simplifies the Euclidean-distance classification model, greatly reducing computational cost and yielding a qualitative improvement in running efficiency. It also discards the feature selection preprocessing step, completely avoiding the problems feature selection causes, and its classification performance far exceeds that of ordinary k-NN. Experiments show that the optimized algorithm performs excellently in both accuracy and efficiency, injecting new vitality into traditional k-NN and allowing it to play a larger role in problems such as concept drift.
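One common way to exploit sparsity in k-NN, in the spirit described above (a generic sketch, not the paper's exact optimization), is to compute similarities over sparse term-weight dictionaries so only shared terms contribute:

```python
from collections import Counter

def sparse_knn_predict(train, labels, query, k=3):
    """k-NN over sparse term-weight dicts. The dot product iterates over
    the smaller dict, so the cost per pair is O(min nonzeros) rather
    than O(vocabulary size)."""
    def dot(a, b):
        if len(b) < len(a):
            a, b = b, a
        return sum(w * b.get(t, 0.0) for t, w in a.items())
    sims = sorted(((dot(query, v), l) for v, l in zip(train, labels)), reverse=True)
    return Counter(l for _, l in sims[:k]).most_common(1)[0][0]

train = [{"ball": 1.0, "goal": 0.5}, {"team": 1.0, "ball": 0.3},
         {"stock": 1.0}, {"price": 0.8, "stock": 0.6}]
labels = ["sport", "sport", "finance", "finance"]
print(sparse_knn_predict(train, labels, {"stock": 1.0, "price": 0.2}, k=3))  # finance
```

Since typical text vectors have only a handful of nonzero weights out of tens of thousands of dimensions, this kind of sparse similarity computation is where most of the speedup comes from.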

12.
Text classification (TC) is a crucial task in this century of high-volume text datasets, and feature selection (FS) is one of the most important stages in TC studies. Numerous feature selection methods have been recommended for TC in the literature. In the TC domain, filter-based FS methods are commonly used to select a more informative feature subset. Each method orders the features using a scoring system based on its own algorithm, and classification is then carried out with the top-N features. However, each method's feature order is distinct from the others': a method assigns high scores to the properties its algorithm treats as critical, but does not necessarily assign low scores to features that are unimportant. In this paper, we propose a novel filter-based FS method, the brilliant probabilistic feature selector (BPFS), to assign fair scores and select informative features. While selecting distinctive features, BPFS also aims to favor sparse features by assigning them higher scores than common features. Extensive experiments with three effective classifiers, decision trees (DT), support vector machines (SVM), and multinomial Naive Bayes (MNB), on four widely used datasets with different characteristics, Reuters-21578, 20Newsgroup, Enron1, and Polarity, demonstrate the success of the BPFS method. Feature dimensions of 20, 50, 100, 200, 500, and 1000 were used. The experimental results on these benchmark datasets show that BPFS is more successful than well-known and recent FS methods in terms of Micro-F1 and Macro-F1 scores.

13.
A new local search based hybrid genetic algorithm for feature selection   (Cited by 2; self-citations: 0; other citations: 2)
This paper presents a new hybrid genetic algorithm (HGA) for feature selection (FS), called HGAFS. The vital aspect of this algorithm is the selection of a salient feature subset of reduced size. HGAFS incorporates a new local search operation, devised and embedded in the HGA, to fine-tune the search in the FS process. The local search technique works on the basis of the distinct and informative nature of the input features, computed from their correlation information. The aim is to guide the search process so that newly generated offspring can be adjusted toward the less correlated (distinct) features covering both the general and special characteristics of a given dataset. The proposed HGAFS thus reduces the redundancy of information among the selected features. In addition, HGAFS emphasizes selecting a subset of salient features of reduced size using a subset-size determination scheme. We tested HGAFS on 11 real-world classification datasets with dimensions varying from 8 to 7129, and compared its performance with the results of ten other well-known FS algorithms. HGAFS consistently performs better at selecting subsets of salient features, with resulting better classification accuracies.

14.
The curse of high dimensionality in text classification is a worrisome problem that requires efficient and optimal feature selection (FS) methods to improve classification accuracy and reduce learning time. Existing filter-based FS methods evaluate features independently of related ones, which can lead to selecting many redundant features, especially in high-dimensional datasets, resulting in longer learning time and worse classification performance. Information theory-based methods instead aim to maximize each feature's dependency with the class variable while minimizing its redundancy with respect to all already-selected features, which gradually becomes impractical as the feature space grows. To overcome the time complexity of information theory-based methods while still addressing redundancy, this article proposes a new feature selection method for text classification, termed correlation-based redundancy removal, which minimizes redundancy using subsets of features with close mutual information scores, without sequentially scanning the already-selected features. The idea is that it is unnecessary to assess the redundancy between a dominant feature carrying high classification information and an irrelevant feature carrying low classification information (and vice versa), since the two are implicitly weakly correlated. Tested on seven datasets with both traditional classifiers (Naive Bayes and support vector machines) and deep learning models (long short-term memory and convolutional neural networks), our method demonstrated strong performance, reducing redundancy and improving classification compared to ten competing methods.
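The key observation, that features with very different information scores are implicitly weakly correlated, suggests a selection loop of the following shape (our sketch under assumed thresholds and toy data, not the authors' exact procedure):

```python
def remove_redundant(scores, corr, gap=0.05, corr_thresh=0.9):
    """Greedy selection in descending score order. A candidate is checked
    for redundancy only against kept features whose scores lie within
    `gap`: features with very different class information are treated as
    implicitly weakly correlated and never compared."""
    kept = []
    for f in sorted(scores, key=scores.get, reverse=True):
        if all(abs(scores[f] - scores[g]) > gap or corr(f, g) < corr_thresh
               for g in kept):
            kept.append(f)
    return kept

# toy data: "buy" and "purchase" are near-synonyms with close MI scores
scores = {"w_buy": 0.90, "w_purchase": 0.88, "w_goal": 0.50}
corr_table = {frozenset(("w_buy", "w_purchase")): 0.95}
corr = lambda f, g: corr_table.get(frozenset((f, g)), 0.1)
print(remove_redundant(scores, corr))  # ['w_buy', 'w_goal']
```

Restricting the pairwise check to score-adjacent features is what breaks the quadratic scan over all previously selected features that classic information-theoretic selectors require.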

15.
Gao Liyuan, Ni Yousheng. Computer Engineering, 2006, 32(24): 141-143
This paper introduces the concepts of private information retrieval (PIR) and trusted computing, from which it derives the notion of PIR based on trusted computing; it surveys several existing models and their performance, and proposes a new model to further improve PIR performance. The new model reduces the time complexity of the secure processor (SC) reading and writing the database from O(N^{3/2}) to O(cN), where c is a constant greater than 1.

16.
Feature Selection (FS) or Attribute Reduction techniques are employed for dimensionality reduction and aim to select a subset of the original features of a data set which are rich in the most useful information. The benefits of employing FS techniques include improved data visualization and transparency, a reduction in training and utilization times and potentially, improved prediction performance. Many approaches based on rough set theory up to now, have employed the dependency function, which is based on lower approximations as an evaluation step in the FS process. However, by examining only that information which is considered to be certain and ignoring the boundary region, or region of uncertainty, much useful information is lost. This paper examines a rough set FS technique which uses the information gathered from both the lower approximation dependency value and a distance metric which considers the number of objects in the boundary region and the distance of those objects from the lower approximation. The use of this measure in rough set feature selection can result in smaller subset sizes than those obtained using the dependency function alone. This demonstrates that there is much valuable information to be extracted from the boundary region. Experimental results are presented for both crisp and real-valued data and compared with two other FS techniques in terms of subset size, runtimes, and classification accuracy.  相似文献   
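The lower-approximation dependency function the abstract builds on can be sketched as follows; the boundary-region distance metric itself is the paper's contribution and is not reproduced here (a minimal sketch; the function names are ours):

```python
from collections import defaultdict

def lower_approximation(universe, key, target):
    """Union of the equivalence classes (objects identical under `key`)
    that are wholly contained in the target concept."""
    blocks = defaultdict(set)
    for obj in universe:
        blocks[key(obj)].add(obj)
    lower = set()
    for block in blocks.values():
        if block <= target:
            lower |= block
    return lower

def dependency(universe, key, target):
    """Simplified rough-set dependency of a single target concept:
    gamma = |lower approximation| / |universe|."""
    return len(lower_approximation(universe, key, target)) / len(universe)

# toy universe: objects 0-5, attribute = parity, target concept = the evens
universe = set(range(6))
evens = {0, 2, 4}
print(dependency(universe, lambda x: x % 2, evens))  # 0.5: the even block is certain
print(dependency(universe, lambda x: 0, evens))      # 0.0: everything falls in the boundary region
```

The second call illustrates the abstract's point: the dependency function alone scores a fully uncertain attribute set as zero, discarding whatever partial information the boundary region still carries.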

17.
This research evaluated physicians' agreement about patients' diagnoses and nurses' ability to detect patient change using traditional charts (TC) and a work domain analysis-based paper prototype (PP) and also sought to determine whether differences persisted when the PP was represented as an electronic prototype (EP). Nurses' change detection improved using the PP and EP compared to TC (PP vs TC, t(df=6) = 1.94, p < 0.03; EP vs TC, t(df=6) = 3.14, p < 0.01) and detection was better using the EP compared with the PP (t(df=6) = 5.96, p < 0.001). Physicians were more likely to agree about failed physiological systems using the EP compared with the PP (t(df=10) = 3.14, p < 0.01), but agreement about patient diagnoses was higher using the PP compared with the EP (t(df=10) = 2.23; p < 0.02). These results are attributed to information grouping around physiological functions and the direct association of cause-and-effect relations in clinical information design.  相似文献   

18.
With the popularization and growing use of the Internet, e-commerce has become a clear trend, and network security has drawn increasing attention; providing means to monitor network information in real time is therefore of great importance. This paper builds a network information filtering system (NIFS) with a feedback mechanism using the vector space model, the TC3 classification algorithm, and the Rocchio feedback model, and discusses in detail the theory and implementation of NIFS in terms of the filtering system architecture, network information capture, and the formation and reconstruction of user interest profiles.
