Similar Documents
20 similar documents found (search time: 703 ms)
1.
Automatic keyword extraction is an important research direction in text mining, natural language processing and information retrieval. Keyword extraction enables us to represent text documents in a condensed way. The compact representation of documents can be helpful in several applications, such as automatic indexing, automatic summarization, automatic classification, clustering and filtering. For instance, text classification is a domain with a high-dimensional feature space challenge. Hence, extracting the most important/relevant words about the content of the document and using these keywords as the features can be extremely useful. In this regard, this study examines the predictive performance of five statistical keyword extraction methods (most frequent measure based keyword extraction, term frequency-inverse sentence frequency based keyword extraction, co-occurrence statistical information based keyword extraction, eccentricity-based keyword extraction and the TextRank algorithm) on classification algorithms and ensemble methods for scientific text document classification (categorization). In the study, a comprehensive comparison of base learning algorithms (Naïve Bayes, support vector machines, logistic regression and Random Forest) with five widely utilized ensemble methods (AdaBoost, Bagging, Dagging, Random Subspace and Majority Voting) is conducted. To the best of our knowledge, this is the first empirical analysis that evaluates the effectiveness of statistical keyword extraction methods in conjunction with ensemble learning algorithms. The classification schemes are compared in terms of classification accuracy, F-measure and area under curve values. To validate the empirical analysis, a two-way ANOVA test is employed. The experimental analysis indicates that the Bagging ensemble of Random Forest with the most-frequent based keyword extraction method yields promising results for text classification.
For ACM document collection, the highest average predictive performance (93.80%) is obtained with the utilization of the most frequent based keyword extraction method with Bagging ensemble of Random Forest algorithm. In general, Bagging and Random Subspace ensembles of Random Forest yield promising results. The empirical analysis indicates that the utilization of keyword-based representation of text documents in conjunction with ensemble learning can enhance the predictive performance and scalability of text classification schemes, which is of practical importance in the application fields of text classification.
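The pipeline this abstract describes, extracting the most frequent terms as keywords and then using the keyword set as a condensed document representation, can be sketched in a few lines. This is a minimal illustration only; the stopword list, tokenizer, and toy text are invented for the example, not taken from the paper.

```python
from collections import Counter
import re

STOPWORDS = frozenset({"the", "a", "of", "and", "is", "to"})  # toy list

def most_frequent_keywords(text, k=3):
    """Most-frequent-measure keyword extraction: rank terms by raw frequency."""
    tokens = [t for t in re.findall(r"[a-z]+", text.lower())
              if t not in STOPWORDS]
    return [w for w, _ in Counter(tokens).most_common(k)]

def keyword_features(doc_keywords, vocabulary):
    """Represent a document as a binary vector over a keyword vocabulary."""
    kws = set(doc_keywords)
    return [1 if w in kws else 0 for w in vocabulary]

doc = ("text classification maps text documents into categories; "
       "keyword extraction keeps only the most relevant terms of the text")
kws = most_frequent_keywords(doc, k=2)
vocab = ["text", "classification", "keyword"]
vec = keyword_features(kws, vocab)
```

The resulting vectors would then be fed to the base learners and ensembles the study compares (e.g., Bagging over Random Forest).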

2.
In text sentiment analysis, a review carries semantic information at different granularities: document level, sentence level, and word level, and different words and sentences contribute differently to sentiment classification. Methods that model an entire review directly are too coarse-grained and also ignore the user who expresses the sentiment and the product being reviewed. To address this problem, a hierarchical neural network model with multiple attention mechanisms is proposed. The model acquires semantic information at the word, sentence, and document levels, and introduces user- and product-based attention at the sentence and document levels to compute the importance of different words and sentences. Experiments on three public datasets show that the multi-attention hierarchical neural network significantly outperforms other models.
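The user/product attention described above reduces to scoring each unit (a word or sentence vector) against a query vector and softmax-weighting the results into a pooled representation. A minimal numpy sketch follows; the vectors are toy values, not the paper's learned parameters.

```python
import numpy as np

def attention_pool(H, q):
    """Score each row of H against query q, softmax the scores, pool."""
    s = H @ q                      # raw importance scores
    a = np.exp(s - s.max())        # numerically stable softmax
    a /= a.sum()
    return a @ H, a                # weighted sum and attention weights

# Toy word vectors for one sentence and a hypothetical user/product query.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.9, 0.1]])
u = np.array([1.0, 0.0])
sent_vec, weights = attention_pool(H, u)
```

Words aligned with the user/product query (rows 0 and 2 here) receive higher weights, which is the mechanism the model uses at both the sentence and document levels.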

3.
Web Text Classification and Blocking-Reduction Strategies   (total citations: 1; self-citations: 0; citations by others: 1)
In Web mining, classifying Web documents by content is a crucial step. A common approach to Web document classification is hierarchical classification, which assigns documents top-down to the appropriate category in a category tree. However, hierarchical classifiers often wrongly reject documents at upper-level classifiers of the tree, a phenomenon known as blocking. To address this, a classifier-centered blocking factor is adopted to measure the degree of blocking, and two new hierarchical classification methods, one based on reduced thresholds and one based on restricted voting, are introduced to mitigate the erroneous blocking of documents in Web document classification.
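The threshold-reduction idea can be illustrated with a toy top-down classifier tree: lowering the acceptance threshold at internal nodes lets a document that would otherwise be blocked continue down the tree. The tree, score functions, and thresholds below are hypothetical stand-ins for trained per-node classifiers.

```python
def classify_top_down(doc, node, threshold):
    """Route a document down the category tree; None means it was blocked
    (rejected by an internal classifier before reaching a leaf)."""
    if not node.get("children"):        # leaf category reached
        return node["name"]
    for child in node["children"]:
        if child["score"](doc) >= threshold:
            return classify_top_down(doc, child, threshold)
    return None                          # blocked at this level

# Toy tree; the lambdas stand in for per-node classifier confidence scores.
tree = {"name": "root", "children": [
    {"name": "sports", "score": lambda d: 0.55, "children": []},
    {"name": "tech",   "score": lambda d: 0.40, "children": []},
]}

blocked  = classify_top_down("some doc", tree, threshold=0.6)  # strict: rejected
accepted = classify_top_down("some doc", tree, threshold=0.5)  # reduced threshold
```

With the stricter threshold the document is blocked at the root; reducing the threshold lets it reach the `sports` leaf, which is exactly the trade-off the paper's reduced-threshold method manages.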

4.
Nonnegative matrix factorization (NMF) is a data analysis technique used in a great variety of applications such as text mining, image processing, hyperspectral data analysis, computational biology, and clustering. In this letter, we consider two well-known algorithms designed to solve NMF problems: the multiplicative updates of Lee and Seung and the hierarchical alternating least squares of Cichocki et al. We propose a simple way to significantly accelerate these schemes, based on a careful analysis of the computational cost needed at each iteration, while preserving their convergence properties. This acceleration technique can also be applied to other algorithms, which we illustrate on the projected gradient method of Lin. The efficiency of the accelerated algorithms is empirically demonstrated on image and text data sets and compares favorably with a state-of-the-art alternating nonnegative least squares algorithm.
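For reference, the multiplicative updates of Lee and Seung alternate two closed-form nonnegative rescalings of the factors. A minimal numpy version is sketched below, without the acceleration the letter proposes; the test matrix and hyperparameters are invented for the example.

```python
import numpy as np

def nmf_mu(V, r, iters=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

V = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],
              [0.0, 0.0, 3.0]])                # rank-2 nonnegative matrix
W, H = nmf_mu(V, r=2)
err = np.linalg.norm(V - W @ H)
```

Because the updates only multiply by nonnegative ratios, nonnegativity of `W` and `H` is preserved at every iteration; the acceleration studied in the letter reuses the expensive matrix products across several inner updates.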

5.
6.
Text classification, the core problem of text mining, has become an important topic in natural language processing. Short-text classification, with its sparsity, real-time nature, and irregular writing, has become one of the pressing problems of text classification. In certain scenarios short texts contain a great deal of implicit semantics, which makes mining latent semantic features from limited text a challenge. Existing approaches to short-text classification mainly use traditional machine learning or deep learning algorithms, but building such models is complex and labor-intensive, and their efficiency is low. Moreover, short texts contain little useful information and are highly colloquial, which demands strong feature-learning ability from the model. To address these problems, this paper proposes the KAeRCNN model, which extends the TextRCNN model with knowledge awareness and a dual attention mechanism. Knowledge awareness comprises knowledge-graph entity linking and knowledge-graph embedding, which introduce external knowledge to obtain semantic features, while the dual attention mechanism improves the efficiency with which the model extracts useful information from short texts. Experimental results show that the KAeRCNN model significantly outperforms traditional machine learning algorithms in classification accuracy, F1 score, and practical effectiveness. We validated the performance and adaptability of the algorithm: accuracy reaches 95.54% and F1 reaches 0.901; compared with four traditional machine learning algorithms, accuracy improves by about 14% on average and F1 by about 13%. Compared with TextRCNN, the KAeRCNN model improves accuracy by about 3%. Comparative experiments against deep learning algorithms also show that our model performs well on short-text classification in other domains. Both theory and experiments demonstrate that the proposed KAeRCNN model is more effective for short-text classification.

7.
A Text Classification Method Based on TF-IDF and Cosine Similarity   (total citations: 1; self-citations: 0; citations by others: 1)
Text classification is a basic task of text processing. The arrival of the big-data era confronts text classification with new challenges. Researchers have proposed many classification algorithms for different settings, such as KNN, naive Bayes, support vector machines, and a series of improved algorithms. The performance of these algorithms depends on a fixed dataset, and they lack the ability to learn on their own. This paper proposes a new text classification method consisting of three steps: extract category keywords based on TF-IDF; classify a text by the similarity between the category keywords and the keywords of the text to be classified; and update the category keywords during classification to improve classifier performance. Simulation results show that the accuracy of the proposed method is considerably higher than that of commonly used methods, reaching 90% on the experimental dataset and up to 95% when the amount of text data is large. On first use the algorithm needs some training samples and training time, but classification time drops to one tenth of that of other algorithms. The method has a self-learning module that automatically updates the category keywords from classification experience, maintaining classifier accuracy and giving it strong practical applicability.
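The first two steps (TF-IDF category keywords, then cosine matching) can be sketched as follows. Here each category's training texts are concatenated into one pseudo-document, the tokens are invented examples, and the paper's third step (updating the category keywords during classification) is omitted for brevity.

```python
import math
from collections import Counter

def tfidf(doc_tokens, docs_tokens):
    """TF-IDF weights for one token list against a small corpus of token lists."""
    n = len(docs_tokens)
    tf = Counter(doc_tokens)
    return {w: (c / len(doc_tokens)) *
               math.log(n / sum(1 for d in docs_tokens if w in d))
            for w, c in tf.items()}

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    num = sum(v * b.get(w, 0.0) for w, v in a.items())
    den = (math.sqrt(sum(v * v for v in a.values())) *
           math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

# Hypothetical training tokens, one concatenated pseudo-document per category.
corpus = {
    "sports": ["ball", "goal", "team", "match", "team", "goal"],
    "tech":   ["cpu", "gpu", "chip", "chip", "memory", "cpu"],
}
docs_tokens = list(corpus.values())
cat_vecs = {c: tfidf(toks, docs_tokens) for c, toks in corpus.items()}

def classify(tokens):
    """Assign the category whose keyword vector is most similar to the text."""
    q = dict(Counter(tokens))
    return max(cat_vecs, key=lambda c: cosine(q, cat_vecs[c]))
```

The self-learning step would merge the keywords of confidently classified texts back into `cat_vecs`, which is what lets the paper's classifier adapt without retraining.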

8.
Chinese Sentiment Classification on Imbalanced Data   (total citations: 2; self-citations: 0; citations by others: 2)
In recent years, sentiment classification has advanced markedly in natural language processing research. However, most existing studies assume that there are as many positive training samples as negative ones, whereas in practice the distribution of positive and negative data is usually imbalanced. We collect Chinese review texts from four product domains and find that positive samples far outnumber negative ones. For Chinese sentiment classification on imbalanced data, we propose an ensemble learning framework based on under-sampling and multiple classification algorithms. Experiments in the four domains show that our method significantly improves classification performance and clearly outperforms several mainstream imbalanced-classification methods.
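The under-sampling ensemble can be illustrated in miniature: repeatedly draw balanced subsamples of the majority class, train one learner per subsample, and combine by majority vote. The deliberately simple 1-D nearest-mean learner and the toy data below are invented stand-ins for the paper's multiple classification algorithms.

```python
import random

def balanced_subsample(pos, neg, rng):
    """Undersample: draw equal numbers from the minority and majority classes."""
    k = min(len(pos), len(neg))
    return rng.sample(pos, k), rng.sample(neg, k)

def train_nearest_mean(pos, neg):
    """A deliberately simple 1-D learner standing in for a real classifier."""
    mp, mn = sum(pos) / len(pos), sum(neg) / len(neg)
    return lambda x: 1 if abs(x - mp) < abs(x - mn) else 0

def ensemble_predict(models, x):
    """Majority vote over the members trained on different subsamples."""
    return 1 if sum(m(x) for m in models) * 2 >= len(models) else 0

rng = random.Random(42)
pos = [0.20, 0.30, 0.25]                        # minority class
neg = [0.80, 0.90, 0.70, 0.85, 0.95, 0.75]      # majority class
models = [train_nearest_mean(*balanced_subsample(pos, neg, rng))
          for _ in range(5)]
```

Each member sees a balanced view of the data, so the vote is not dominated by the majority class, which is the point of the framework.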

9.
Sentiment polarity detection is one of the most popular tasks related to Opinion Mining. Many papers have been presented describing one of the two main approaches used to solve this problem. On the one hand, a supervised methodology uses machine learning algorithms when training data exist. On the other hand, an unsupervised method based on a semantic orientation is applied when linguistic resources are available. However, few studies combine the two approaches. In this paper we propose the use of meta-classifiers that combine supervised and unsupervised learning in order to develop a polarity classification system. We have used a Spanish corpus of film reviews along with its parallel corpus translated into English. Firstly, we generate two individual models using these two corpora and applying machine learning algorithms. Secondly, we integrate SentiWordNet into the English corpus, generating a new unsupervised model. Finally, the three systems are combined using a meta-classifier that allows us to apply several combination algorithms such as voting system or stacking. The results obtained outperform those obtained using the systems individually and show that this approach could be considered a good strategy for polarity classification when we work with parallel corpora.
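The voting variant of the meta-classifier can be sketched directly: two supervised models and one lexicon-based model each emit a polarity, and the meta-level takes the majority. The stubbed model outputs, word lists, and review are invented; the lexicon scorer is a SentiWordNet-style stand-in, and stacking (training a classifier on these outputs) is omitted.

```python
def lexicon_polarity(tokens, pos_words, neg_words):
    """Unsupervised semantic-orientation scorer: positive when positive cues
    at least balance negative ones."""
    score = sum((t in pos_words) - (t in neg_words) for t in tokens)
    return 1 if score >= 0 else 0

def majority_vote(predictions):
    """Meta-classifier combination by simple voting."""
    return 1 if sum(predictions) * 2 > len(predictions) else 0

pos_words = {"great", "excellent"}
neg_words = {"boring", "awful"}
review = ["an", "excellent", "but", "slightly", "boring", "film"]

votes = [1,                                               # supervised model, original corpus (stub)
         0,                                               # supervised model, translated corpus (stub)
         lexicon_polarity(review, pos_words, neg_words)]  # unsupervised model
label = majority_vote(votes)
```

The unsupervised vote breaks the tie between the two supervised stubs here, which mirrors how combining heterogeneous models can outperform each one individually.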

10.
Sentiment analysis is a text mining task that determines the polarity of a given text, i.e., its positiveness or negativeness. Recently, it has received a lot of attention given the interest in opinion mining in micro-blogging platforms. These new forms of textual expressions present new challenges to analyze text because of the use of slang, orthographic and grammatical errors, among others. Along with these challenges, a practical sentiment classifier should be able to handle efficiently large workloads. The aim of this research is to identify in a large set of combinations which text transformations (lemmatization, stemming, entity removal, among others), tokenizers (e.g., word n-grams), and token-weighting schemes make the most impact on the accuracy of a classifier (Support Vector Machine) trained on two Spanish datasets. The methodology used is to exhaustively analyze all combinations of text transformations and their respective parameters to find out what common characteristics the best performing classifiers have. Furthermore, we introduce a novel approach based on the combination of word-based n-grams and character-based q-grams. The results show that this novel combination of words and characters produces a classifier that outperforms the traditional word-based combination by 11.17% and 5.62% on the INEGI and TASS’15 datasets, respectively.
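The combined representation the abstract highlights (word n-grams plus character q-grams) can be sketched as a single sparse bag of features. The particular choices below (word 1-/2-grams, character 3-grams, the `c:` prefix to keep the two feature spaces apart) are illustrative assumptions, not the paper's exact configuration.

```python
from collections import Counter

def word_ngrams(text, n):
    """Contiguous word n-grams of a whitespace-tokenized text."""
    toks = text.split()
    return [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

def char_qgrams(text, q):
    """Contiguous character q-grams (spaces included)."""
    return [text[i:i + q] for i in range(len(text) - q + 1)]

def combined_features(text):
    """One sparse bag mixing word 1-/2-grams with character 3-grams."""
    feats = word_ngrams(text, 1) + word_ngrams(text, 2)
    feats += ["c:" + g for g in char_qgrams(text, 3)]   # prefix avoids collisions
    return Counter(feats)

f = combined_features("muy bueno")
```

Character q-grams are what make the representation robust to the slang and orthographic errors the abstract mentions: a misspelled word still shares most of its q-grams with the correct form.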

11.
12.
Multi-label text classification is an increasingly important field as large amounts of text data are available and extracting relevant information is important in many application contexts. Probabilistic generative models are the basis of a number of popular text mining methods such as Naive Bayes or Latent Dirichlet Allocation. However, Bayesian models for multi-label text classification often are overly complicated to account for label dependencies and skewed label frequencies while at the same time preventing overfitting. To solve this problem we employ the same technique that contributed to the success of deep learning in recent years: greedy layer-wise training. Applying this technique in the supervised setting prevents overfitting and leads to better classification accuracy. The intuition behind this approach is to learn the labels first and subsequently add a more abstract layer to represent dependencies among the labels. This allows using a relatively simple hierarchical topic model which can easily be adapted to the online setting. We show that our method successfully models dependencies online for large-scale multi-label datasets with many labels and improves over the baseline method not modeling dependencies. The same strategy, layer-wise greedy training, also makes the batch variant competitive with existing more complex multi-label topic models.

13.
Clustering Text Data Streams   (total citations: 2; self-citations: 0; citations by others: 2)
Clustering text data streams is an important issue in the data mining community and has a number of applications such as news group filtering, text crawling, document organization and topic detection and tracing, etc. However, most methods are similarity-based approaches and only use the TF*IDF scheme to represent the semantics of text data, which often leads to poor clustering quality. Recently, researchers have argued that the semantic smoothing model is more efficient than the existing TF*IDF scheme for improving text clus...

14.
This paper designs and implements an efficient, high-performance Web-page text filtering system that adopts a layered filtering strategy comprising real-time filtering and post-hoc analysis. The real-time filtering module is built on the IP Queue mechanism under Linux and uses an efficient filtering strategy that guarantees both the timeliness and the accuracy of filtering. The post-hoc analysis module examines the Web-page texts backed up by the system after protocol reconstruction; through page preprocessing, illegal-keyword extraction, and feature selection, it implements a text filtering method based on a bigram model. Within a word-distance window of a given size, the method takes bigrams containing illegal keywords as features, which alleviates the data sparsity caused by using bigrams while retaining their strong class-discriminating power. Experiments show that the implemented filtering system is efficient and accurate, and that the bigram-based filtering method used for post-hoc analysis achieves high performance, with precision, recall, and F1 of 96.98%, 85.75%, and 91.02%, respectively.
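The keyword-anchored bigram idea can be sketched directly: instead of all bigrams, keep only pairs that join a flagged keyword with a word inside a small distance window. The token list, keyword set, and window size below are invented for the illustration.

```python
def keyword_bigrams(tokens, flagged, window=3):
    """Pair each flagged keyword with every word at most `window` positions
    away; restricting to keyword-anchored pairs eases bigram sparsity while
    keeping their class-discriminating power."""
    feats = set()
    for i, t in enumerate(tokens):
        if t in flagged:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            feats.update((t, tokens[j]) for j in range(lo, hi) if j != i)
    return feats

feats = keyword_bigrams(["a", "bad", "word", "x", "y"], {"bad"}, window=2)
```

Words outside the window (here `"y"`) never form a feature, which is how the window bounds the feature space.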

15.
A graph-based approach to document classification is described in this paper. The graph representation offers the advantage that it allows for a much more expressive document encoding than the more standard bag of words/phrases approach, and consequently gives an improved classification accuracy. Document sets are represented as graph sets to which a weighted graph mining algorithm is applied to extract frequent subgraphs, which are then further processed to produce feature vectors (one per document) for classification. Weighted subgraph mining is used to ensure classification effectiveness and computational efficiency; only the most significant subgraphs are extracted. The approach is validated and evaluated using several popular classification algorithms together with a real world textual data set. The results demonstrate that the approach can outperform existing text classification algorithms on some datasets. As the size of the dataset increases, further processing of the extracted frequent features becomes essential.

16.
Sentiment analysis of text has long been a hot research topic in natural language processing, especially now that the Internet has entered the Web 2.0 era and diverse social networking platforms present vast and rich textual sentiment information. Mining Web text data and judging its sentiment orientation therefore has great practical significance for human-computer interaction and artificial intelligence. Traditional approaches to text sentiment analysis are mainly shallow learning algorithms that use regression, classification, and similar schemes for feature extraction and classification. Taking such methods as a starting point, this paper explores deep learning for fine-grained sentiment analysis of Web text, aiming at the timely capture of the emotions of people on the Web and, ultimately, at deep machine understanding of human emotional expression. As the concrete deep learning method, a denoising autoencoder is used for unsupervised feature learning on unlabeled text, followed by sentiment classification; experiments are used to obtain the best parameter settings, and analysis and evaluation of the experimental results demonstrate the strong capability of deep learning for parsing sentiment information.
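The denoising-autoencoder step can be illustrated in miniature: corrupt the input with masking noise and train a tied-weight autoencoder to reconstruct the clean input. Everything below (the two-prototype toy data, layer sizes, learning rate, noise level) is invented for the sketch; the paper's actual architecture and settings are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: rows drawn from two binary prototype "documents".
protos = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                   [0, 0, 0, 0, 1, 1, 1, 1]], dtype=float)
X = protos[rng.integers(0, 2, 64)] * 0.9 + 0.05   # keep targets inside (0, 1)

n_in, n_hid, lr = 8, 4, 1.0
W = rng.normal(0, 0.1, (n_in, n_hid))
b = np.zeros(n_hid)
c = np.zeros(n_in)

def reconstruct(Xc):
    H = sigmoid(Xc @ W + b)          # encoder
    return H, sigmoid(H @ W.T + c)   # tied-weight decoder

_, R0 = reconstruct(X)
err_before = np.mean((R0 - X) ** 2)

for _ in range(1000):
    Xc = X * (rng.random(X.shape) > 0.3)      # masking (denoising) corruption
    H, R = reconstruct(Xc)
    G = (R - X) * R * (1 - R)                 # output-layer delta (MSE loss)
    D = (G @ W) * H * (1 - H)                 # hidden-layer delta
    W -= lr * (Xc.T @ D + G.T @ H) / len(X)   # tied weights: two gradient terms
    b -= lr * D.mean(axis=0)
    c -= lr * G.mean(axis=0)

_, R1 = reconstruct(X)
err_after = np.mean((R1 - X) ** 2)
```

Because the target is the clean input while the encoder only sees the corrupted one, the hidden layer is forced to learn features that survive the noise, which is what makes the learned codes useful for the subsequent sentiment classifier.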

17.
Sentiment classification predicts the sentiment orientation conveyed by data by analyzing the sentiment information it contains. Within this area, combining linguistic lexicons with generative classifiers to build classification models carrying prior knowledge is an important line of research. By studying the domain dependence of sentiment words and their differing weights, this paper proposes a new sentiment classification method that incorporates sentiment prior knowledge. Domain-specific sentiment words and their weight information are constructed by automatic analysis and, as sentiment prior knowledge, incorporated into a generative classification model...

18.
In real-life data, information is frequently lost in data mining, caused by the presence of missing values in attributes. Several schemes have been studied to overcome the drawbacks produced by missing values in data mining tasks; one of the best known is based on preprocessing, commonly known as imputation. In this work, we focus on a classification task in which twenty-three classification methods and fourteen different imputation approaches to missing-values treatment are presented and analyzed. The analysis involves a group-based approach, in which we distinguish between three different categories of classification methods. Each category behaves differently, and the evidence obtained shows that the use of certain missing-values imputation methods can improve the accuracy obtained for these methods. In this study, the suitability of using imputation methods for preprocessing data sets with missing values is established. The analysis suggests that the use of particular imputation methods conditioned on the groups is required.
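As a concrete instance of imputation as a preprocessing step, the simplest scheme replaces each missing entry with its column mean before the data reach a classifier. This is a minimal stand-in for the fourteen approaches the study analyzes; the toy rows are invented.

```python
def mean_impute(rows):
    """Replace missing entries (None) with their column mean."""
    cols = list(zip(*rows))
    means = [sum(v for v in col if v is not None) /
             max(1, sum(v is not None for v in col))
             for col in cols]
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in rows]

filled = mean_impute([[1.0, None], [3.0, 4.0], [None, 2.0]])
```

More sophisticated imputation (k-NN, regression-based, etc.) follows the same contract: a complete data matrix out, so any downstream classifier can be applied unchanged.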

19.
As a new type of social medium, microblogs are increasingly accepted and used for the high timeliness of their information, dynamic topic following, and rapid spread. Filtering out microblog posts relevant to a given topic, so that users can follow the topic as it develops, has become a pressing problem. The extreme brevity of microblog posts and the little information and few features they contain pose new challenges for filtering topic-relevant posts, and traditional text classification techniques no longer apply. This paper proposes an entropy-based algorithm for learning filtering rules and uses the learned rules to filter microblog posts effectively. The algorithm evaluates the quality of a rule with information entropy, while a randomized strategy based on simulated annealing keeps rule selection from being overly greedy. Experiments on about ninety thousand labeled posts from Sina Weibo and about three thousand topic-specific labeled posts from TREC 2011 show that, compared with the CPAR and SVM algorithms, the rules learned by our algorithm achieve a higher F-score in filtering.
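The entropy criterion can be illustrated directly: a rule that matches a class-pure set of posts scores 0 (best), while a rule that matches an evenly mixed set scores 1 (worst). The rules and toy posts below are invented, and the simulated-annealing search over rules is omitted.

```python
import math

def entropy(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def rule_score(rule, docs):
    """Lower class entropy over the matched posts = better filtering rule."""
    matched = [label for text, label in docs if rule(text)]
    if not matched:
        return 1.0                      # a rule matching nothing is useless
    return entropy(sum(matched) / len(matched))

docs = [("earthquake news update", 1),
        ("earthquake relief effort", 1),
        ("my lunch today", 0),
        ("great movie tonight", 0)]
pure_rule  = lambda t: "earthquake" in t   # matches only on-topic posts
broad_rule = lambda t: True                # matches everything
```

A simulated-annealing step would then accept a worse-scoring candidate rule with probability `exp(-delta / T)`, preventing the greedy trap the abstract mentions.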

20.
Most of the research on text categorization has focused on classifying text documents into a set of categories with no structural relationships among them (flat classification). However, in many information repositories documents are organized in a hierarchy of categories to support a thematic search by browsing topics of interests. The consideration of the hierarchical relationship among categories opens several additional issues in the development of methods for automated document classification. Questions concern the representation of documents, the learning process, the classification process and the evaluation criteria of experimental results. They are systematically investigated in this paper, whose main contribution is a general hierarchical text categorization framework where the hierarchy of categories is involved in all phases of automated document classification, namely feature selection, learning and classification of a new document. An automated threshold determination method for classification scores is embedded in the proposed framework. It can be applied to any classifier that returns a degree of membership of a document to a category. In this work three learning methods are considered for the construction of document classifiers, namely centroid-based, naïve Bayes and SVM. The proposed framework has been implemented in the system WebClassIII and has been tested on three datasets (Yahoo, DMOZ, RCV1) which present a variety of situations in terms of hierarchical structure. Experimental results are reported and several conclusions are drawn on the comparison of the flat vs. the hierarchical approach as well as on the comparison of different hierarchical classifiers. The paper concludes with a review of related work and a discussion of previous findings vs. our findings.
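Automated threshold determination of the kind the framework embeds can be sketched as a search over candidate cuts on validation membership scores, picking the cut that maximizes F1. The scores and labels below are toy values, not output of WebClassIII, and real implementations may optimize a different criterion.

```python
def best_threshold(scores, labels):
    """Pick the membership-score cut that maximizes F1 on validation data."""
    best_t, best_f1 = 0.0, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        f1 = 2 * tp / max(1, 2 * tp + fp + fn)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

t = best_threshold([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])
```

Because it only needs a per-category membership score, the same procedure applies to any of the three classifiers the paper considers (centroid-based, naïve Bayes, SVM).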


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23
