Similar Documents
20 similar documents found (search time: 31 ms)
1.
First, a suitable text collection is selected and the texts are segmented into words. Feature terms are then extracted from each document, and word-frequency statistics are used to reduce the dimensionality of the text vectors so that the best feature vectors are selected. Finally, after the non-numeric text data have been quantified, the text collection is clustered with a fuzzy C-means algorithm optimized by subtractive clustering, which improves the quality of the text clustering.
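A minimal sketch of the pipeline this abstract outlines, assuming scikit-learn TF-IDF features and a hand-rolled fuzzy C-means loop; the subtractive-clustering initialization described above is replaced by random initialization, and the documents are toy placeholders.

```python
# Minimal sketch: TF-IDF features + fuzzy C-means text clustering.
# The subtractive-clustering initialization from the abstract is omitted;
# random initialization of the membership matrix is used instead.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # each row of memberships sums to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))      # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

docs = ["text clustering with word frequency features",
        "fuzzy c means groups similar documents",
        "word frequency statistics reduce dimensionality"]
X = TfidfVectorizer().fit_transform(docs).toarray()
memberships, _ = fuzzy_c_means(X, n_clusters=2)
print(memberships.argmax(axis=1))              # hard cluster labels
```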

2.
There is a growing interest in efficient models of text mining and an emergent need for new data structures that address word relationships. Detailed knowledge about the taxonomic environment of keywords that are used in text documents can provide valuable insight into the nature of the subject matter contained therein. Such insight may be used to enhance the data structures used in the text data mining task as relationships become usefully apparent. A popular scalable technique used to infer these relationships, while reducing dimensionality, has been Latent Semantic Analysis. We present a new approach, which uses an ontology of lexical abstractions to create abstraction profiles of documents and uses these profiles to perform text organization based on a process that we call frequent abstraction analysis. We introduce TATOO, the Text Abstraction TOOlkit, which is a full implementation of this new approach. We present our data model via an example of how taxonomically derived abstractions can be used to supplement semantic data structures for the text classification task.

3.
In this article, we propose a novel approach for measuring word association based on the joint occurrence distribution in a text. Our approach relies on computing a sum of distances between neighboring occurrences of a given word pair and comparing it with a vector of randomly generated occurrences. The idea is that if the distribution of co-occurrences is close to random, or if the words tend to appear together less frequently than by chance, such words are not semantically related. We devise a distance function S that evaluates the word association rate. Using S, we build a concept tree, which provides a visual and comprehensive representation of keyword associations in a text. To illustrate the effectiveness of our algorithm, we apply it to three different texts, showing the consistency and significance of the obtained results with respect to the semantics of the documents. Finally, we compare the results obtained by our proposed algorithm with those achieved by both human experts and the co-occurrence correlation method. We show that our method is consistent with the experts' evaluation and outperforms the co-occurrence correlation method.
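A small sketch of the core idea, under assumptions: the observed sum of gaps between neighboring occurrences of the pair is compared with sums from randomly placed occurrences; the paper's actual distance function S and the concept-tree construction are not reproduced.

```python
# Sketch of the occurrence-distance idea: sum the gaps between neighboring
# occurrences of the two words, then compare against random placements.
import random

def pair_distance(tokens, w1, w2):
    positions = sorted(i for i, t in enumerate(tokens) if t in (w1, w2))
    return sum(b - a for a, b in zip(positions, positions[1:]))

def association_score(tokens, w1, w2, trials=200, seed=0):
    observed = pair_distance(tokens, w1, w2)
    n_occ = sum(t in (w1, w2) for t in tokens)
    rng = random.Random(seed)
    random_sums = []
    for _ in range(trials):
        # place the same number of occurrences at random positions
        pos = sorted(rng.sample(range(len(tokens)), n_occ))
        random_sums.append(sum(b - a for a, b in zip(pos, pos[1:])))
    expected = sum(random_sums) / trials
    return expected / (observed + 1e-9)   # > 1 means closer together than chance

tokens = "the cat sat on the mat while the cat watched the dog".split()
print(association_score(tokens, "cat", "mat"))
```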

4.
Existing word embedding learning algorithms use only the contexts of words, yet different text documents use words and their associated parts of speech very differently. Based on this observation, and in order to obtain better word embeddings and further improve text classification, this paper studies a representation that combines words with their parts of speech. First, using the parts of speech together with the contexts of words, more expressive word embeddings can be obtained. Further, to improve the efficiency of look-up tables, we construct a two-dimensional table in the <word, part of speech> format to represent the words in text documents. Finally, the two-dimensional table and Bayes' theorem are used for text classification. Experimental results show that our model achieves better results on standard data sets and offers greater versatility and portability than alternative models.
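A rough illustration of the <word, part of speech> representation, assuming pre-tagged text and a standard multinomial naive Bayes classifier in place of the paper's model; the word|POS tokens and toy documents are invented for the example.

```python
# Sketch: represent each token as a <word, part-of-speech> pair and train a
# naive Bayes text classifier over those pairs (not the paper's exact model;
# the POS tags below are supplied by hand instead of by a tagger).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Pre-tagged toy documents: each token is written as word|POS.
train_docs = ["stock|NN prices|NNS rise|VBP sharply|RB",
              "team|NN wins|VBZ the|DT final|JJ match|NN",
              "market|NN falls|VBZ on|IN weak|JJ earnings|NNS"]
labels = ["finance", "sports", "finance"]

# The (word, POS) pair itself is the feature, i.e. a flattened 2-D lookup table.
vec = CountVectorizer(token_pattern=r"\S+", lowercase=False)
X = vec.fit_transform(train_docs)
clf = MultinomialNB().fit(X, labels)

test = ["prices|NNS fall|VBP after|IN weak|JJ market|NN"]
print(clf.predict(vec.transform(test)))   # -> ['finance']
```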

5.
We describe a process of word recognition that has high tolerance for poor image quality, tunability to the lexical content of the documents to which it is applied, and high speed of operation. This process relies on the transformation of text images into character shape codes, and on special lexica that contain information on the shape of words. We rely on the structure of English and the high efficiency of mapping between shape codes and the characters in the words. Remaining ambiguity is reduced by template matching using exemplars derived from surrounding text, taking advantage of the local consistency of font, face and size as well as image quality. This paper describes the effects of lexical content, structure and processing on the performance of a word recognition engine. Word recognition performance is shown to be enhanced by the application of an appropriate lexicon. Recognition speed is shown to be essentially independent of the details of lexical content provided the intersection of the occurrences of words in the document and the lexicon is high. Word recognition accuracy is dependent on both intersection and specificity of the lexicon.

6.
Identifying and interpreting user intent are fundamental to semantic search. In this paper, we investigate the association of intent with individual words of a search query. We propose that words in queries can be classified as either content or intent, where content words represent the central topic of the query, while users add intent words to make their requirements more explicit. We argue that intelligent processing of intent words can be vital to improving the result quality, and in this work we focus on intent word discovery and understanding. Our approach towards intent word detection is motivated by the hypothesis that query intent words satisfy certain distributional properties in large query logs similar to function words in natural language corpora. Following this idea, we first prove the effectiveness of our corpus distributional features, namely, word co-occurrence counts and entropies, towards function word detection for five natural languages. Next, we show that reliable detection of intent words in queries is possible using these same features computed from query logs. To make the distinction between content and intent words more tangible, we additionally provide operational definitions of content and intent words as those words that should match, and those that need not match, respectively, in the text of relevant documents. In addition to a standard evaluation against human annotations, we also provide an alternative validation of our ideas using clickthrough data. Concordance of the two orthogonal evaluation approaches provides further support to our original hypothesis of the existence of two distinct word classes in search queries. Finally, we provide a taxonomy of intent words derived through rigorous manual analysis of large query logs.
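A small sketch of the two distributional features named above, co-occurrence counts and context entropies, computed over an invented toy query log; the thresholds and ranking scheme used in the paper are not reproduced.

```python
# Sketch: for each word, count how many distinct words it co-occurs with in a
# query log and compute the entropy of its neighboring words. High values
# suggest intent-like, function-word behavior.
import math
from collections import defaultdict

queries = ["how to install python", "install python on windows",
           "how to bake bread", "cheap flights to paris",
           "how to learn guitar", "bake bread at home"]

cooc = defaultdict(set)        # word -> set of words it appears with
neighbors = defaultdict(list)  # word -> words appearing next to it

for q in queries:
    words = q.split()
    for i, w in enumerate(words):
        cooc[w].update(x for x in words if x != w)
        if i > 0:
            neighbors[w].append(words[i - 1])
        if i + 1 < len(words):
            neighbors[w].append(words[i + 1])

def entropy(items):
    counts = defaultdict(int)
    for x in items:
        counts[x] += 1
    total = len(items)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

for w in sorted(cooc, key=lambda t: -len(cooc[t]))[:5]:
    print(w, len(cooc[w]), round(entropy(neighbors[w]), 2))
```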

7.
In this paper we show how a vector-based word representation obtained via word2vec can help to improve the results of a document classifier based on bags of words. Both models allow obtaining numeric representations from texts, but they do it very differently. The bag of words model can represent documents by means of widely dispersed vectors in which the indices are words or groups of words. word2vec generates word-level representations building vectors that are much more compact, where indices implicitly contain information about the context of word occurrences. Bags of words are very effective for document classification and in our experiments no representation using only word2vec vectors is able to improve their results. However, this does not mean that the information provided by word2vec is not useful for the classification task. When this information is used in combination with the bags of words, the results are improved, showing its complementarity and its contribution to the task. We have also performed cross-domain experiments in which word2vec has shown much more stable behavior than bag of words models.
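One plausible way to combine the two representations, sketched with gensim 4.x and scikit-learn on toy data; plain concatenation of TF-IDF vectors with averaged word2vec vectors stands in for whatever combination scheme the paper evaluates.

```python
# Sketch: combine bag-of-words features with averaged word2vec vectors for
# document classification (gensim 4.x API; simple concatenation is used here).
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["stock market prices rise", "team wins the championship game",
        "market falls on weak earnings", "player scores in final game"]
labels = [0, 1, 0, 1]
tokenized = [d.split() for d in docs]

w2v = Word2Vec(sentences=tokenized, vector_size=50, min_count=1, epochs=50, seed=1)

def avg_vector(tokens):
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

tfidf = TfidfVectorizer()
X_bow = tfidf.fit_transform(docs).toarray()
X_w2v = np.vstack([avg_vector(t) for t in tokenized])
X = np.hstack([X_bow, X_w2v])          # combined representation

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```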

8.
Similarity-based text classification is a popular approach to text processing. The feature-membership similarity measure aims to gauge document similarity through the membership relations between features and documents and thereby perform text classification. Based on the membership relation between a feature and a pair of documents, features are partitioned into fully-member, partially-member, and non-member term sets, and a membership function is defined for each of the three sets. Fully-member terms belong to both documents, and their membership degree decreases as the difference between their weights grows; partially-member terms belong to only one of the two documents and receive a fixed membership degree; non-member terms belong to neither document and have a membership degree of zero. When measuring similarity, the partial-membership relation is weighted more heavily than the full-membership relation. Because documents of the same class share similar term sets while documents of different classes differ markedly, measuring similarity through feature-document membership clearly delimits the membership of term sets to classes and improves classification accuracy. Finally, classification effectiveness is verified on the 20-Newsgroups and Reuters-21578 data sets, and the results show that the feature-membership similarity measure outperforms currently popular similarity measures.
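A loose sketch of the three membership classes described above; the concrete membership functions, the fixed partial-membership degree, and the relative weighting are illustrative placeholders rather than the paper's definitions.

```python
# Sketch of the feature-membership idea: terms shared by both documents
# (full membership) contribute less as their weight difference grows; terms
# in only one document (partial membership) receive a fixed degree and pull
# the score down. The functions and constants here are placeholders.
def membership_similarity(doc_a, doc_b, partial_degree=0.5):
    """doc_a, doc_b: dicts mapping term -> weight (e.g. normalized tf-idf)."""
    terms = set(doc_a) | set(doc_b)
    full, partial = 0.0, 0.0
    for t in terms:
        if t in doc_a and t in doc_b:
            # full membership: degree decreases with the weight difference
            full += 1.0 - abs(doc_a[t] - doc_b[t])
        else:
            # partial membership: fixed degree for terms in only one document
            partial += partial_degree
    # shared terms raise the score, non-shared terms reduce it
    return (full - partial) / len(terms) if terms else 0.0

a = {"text": 0.6, "cluster": 0.4, "fuzzy": 0.2}
b = {"text": 0.5, "cluster": 0.3, "topic": 0.4}
print(round(membership_similarity(a, b), 3))
```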

9.
Text topic partitioning based on topic-word frequency features
康恺  林坤辉  周昌乐 《计算机应用》2006,26(8):1993-1995
The document-term frequency matrix used in current text classification suffers from two problems: its dimensionality is too high and it is extremely sparse, which makes computation difficult. To address this, starting from the way users pick the texts they need when using a search engine, a text topic partitioning method based on topic-word frequency features is proposed. The method first selects the topic words of each text class by statistical means, then uses classes of topic words instead of individual words as features and clusters the texts with the fuzzy C-means (FCM) algorithm. Experiments achieved good topic partitioning results, and the method was compared with a word-clustering-based text clustering method on several aspects of both process and results, yielding some meaningful conclusions about implementation details and application settings.

10.
This paper presents a new method for new-word recognition. The method directly extracts the manually assigned keywords on categorized web pages and stores them in category word lists according to the category of the page's section, thereby completing new-word recognition and clustering quickly. The method is simple and fast. Using it, we extracted 229,237 entries from 600 million characters of web pages in 15 categories, of which 175,187 were new words, a new-word rate of 76.42%; the game category had the highest new-word rate and the politics/society category the lowest. The new words are mainly named entities with fixed structure, complete meaning, and strong specificity; they help resolve ambiguous segmentation and out-of-vocabulary problems and can improve text representation tasks such as classification and keyword indexing.

11.
Electronic documents allow readers to access definitions of unfamiliar words by clicking on the screen display. Five studies report eight comparisons which explore how changing the modality (visual/auditory) and form (verbal/graphic) of the defining information influences people's willingness to consult definitions while reading short stories. Study 1 gave verbal definitions of words that were completely novel and explored the effects of definition length and reading task. It was found that readers checked the meanings of nearly all the unknown words. This remained the case in Study 2 where auditory rather than visual definitions were given; but people re-read more stories than in Study 1, suggesting gist comprehension may have been impaired by mixing modalities. In Study 3 the defined words were familiar but not always precisely understood (e.g. architrave). Here people consulted significantly fewer definitions regardless of whether these included pictures. Study 4 visually cued the defined architectural terms within the text and found significantly more definitions were read. In Study 5 these defined terms were listed in a separate glossary alongside the text. This affected both the frequency and pattern with which readers consulted definitions. Overall, this series of studies shows that, unless readers recognise words as novel, their willingness to consult definitions depends on how the definitions are made available.

12.
In traditional text classification, the text vector space matrix suffers from the "curse of dimensionality" and extreme sparsity; extracting the keywords most relevant to each category as classification features helps solve both problems. Building on this observation, a short-text classification framework based on keyword similarity is proposed. The framework first trains a word2vec word vector model on a large corpus; it then uses TextRank to obtain the keywords of each text category and deduplicates the keyword sets to form the feature set. For each feature, the word vector model is used to compute the similarity between every word of a short text and that feature, and the maximum similarity is taken as the feature's weight. Finally, K-nearest neighbors (KNN) and support vector machines (SVM) are used as the classifier training algorithms. In experiments on a Chinese news headline data set, classification performance improved by about 6% on average over traditional short-text classification methods, validating the effectiveness of the framework.
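A sketch of the framework under assumptions: the TextRank step is skipped and the class keywords are given by hand, word vectors are trained on a toy corpus with gensim, and the KNN/SVM classifiers come from scikit-learn.

```python
# Sketch of keyword-similarity features: each feature is a class keyword, and
# a short text's weight for that feature is the maximum word2vec similarity
# between the keyword and any word of the text. The keyword list is assumed
# rather than produced by TextRank.
import numpy as np
from gensim.models import Word2Vec
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

corpus = ["stock market prices rise today", "team wins the championship game",
          "market falls on weak bank earnings", "player scores goal in final game"]
labels = [0, 1, 0, 1]
keywords = ["market", "earnings", "game", "player"]   # assumed keyword output

tokenized = [d.split() for d in corpus]
w2v = Word2Vec(sentences=tokenized, vector_size=50, min_count=1, epochs=100, seed=1)

def features(text):
    words = [w for w in text.split() if w in w2v.wv]
    return [max((w2v.wv.similarity(w, k) for w in words), default=0.0)
            for k in keywords]

X = np.array([features(d) for d in corpus])
knn = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
svm = SVC(kernel="linear").fit(X, labels)
print(knn.predict([features("bank earnings fall")]),
      svm.predict([features("goal in the game")]))
```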

13.
A new method is described and tested for using an unreliable character recognition device to produce a reliable index for a collection of documents. All highly likely substitution errors of the recognition device are handled by transforming characters which confuse readily into the same pseudocharacter. An analysis of the method shows the expected precision (fraction of words correctly found to words present) and recall (fraction of words retrieved properly to those which were retrieved). Published substitution error matrices were employed, along with a large file of words and word frequencies, to evaluate the method. Performance was surprisingly good. Suggestions for further enhancements are given.
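A toy sketch of the pseudocharacter idea; the confusion classes below are invented for illustration rather than taken from published substitution error matrices.

```python
# Sketch: collapse characters that an OCR device confuses readily into one
# pseudocharacter before indexing, so noisy recognitions still hit the index.
CONFUSION_CLASSES = [set("Il1|"), set("O0"), set("rn"), set("cC")]

def to_pseudo(word):
    out = []
    for ch in word:
        for cls in CONFUSION_CLASSES:
            if ch in cls:
                ch = min(cls)          # canonical pseudocharacter for the class
                break
        out.append(ch)
    return "".join(out)

def build_index(vocabulary):
    index = {}
    for w in vocabulary:
        index.setdefault(to_pseudo(w), set()).add(w)
    return index

index = build_index(["Illinois", "Oil", "0il", "corn"])
# A noisy recognition such as "I11inois" maps to the same key as "Illinois".
print(index.get(to_pseudo("I11inois")))
```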

14.
In this paper we describe a database that consists of handwritten English sentences. It is based on the Lancaster-Oslo/Bergen (LOB) corpus. This corpus is a collection of texts that comprise about one million word instances. The database includes 1,066 forms produced by approximately 400 different writers. A total of 82,227 word instances out of a vocabulary of 10,841 words occur in the collection. The database consists of full English sentences. It can serve as a basis for a variety of handwriting recognition tasks. However, it is expected that the database would be particularly useful for recognition tasks where linguistic knowledge beyond the lexicon level is used, because this knowledge can be automatically derived from the underlying corpus. The database also includes a few image-processing procedures for extracting the handwritten text from the forms and the segmentation of the text into lines and words.

15.
In this paper, we propose a novel technique for word spotting in historical printed documents combining synthetic data and user feedback. Our aim is to search for keywords typed by the user in a large collection of digitized printed historical documents. The proposed method consists of the following stages: (1) creation of synthetic image words; (2) word segmentation using dynamic parameters; (3) efficient feature extraction for each word image; and (4) a retrieval procedure that is optimized by user feedback. Experimental results prove the efficiency of the proposed approach.

16.
严宇宇  陶煜波  林海 《软件学报》2016,27(5):1114-1126
With the rapid development of information technology, large amounts of text data are produced, collected, and stored. Topic models are one of the most important tools for text analysis and are widely used on large-scale text collections. However, topic models usually cannot incorporate a user's domain expertise to revise the model results in an intuitive and effective way. To address this problem, an interactive visual analytics system is proposed that helps users revise topic models interactively. First, the hierarchical Dirichlet process is extended to support word constraints; then a matrix view is used to present the topic model, and a semantically organized word cloud layout helps users find word constraints, so that users iteratively refine the topic model by adding constraints; finally, the usability of the system is evaluated through case studies and a user study.

17.
赵彦斌  李庆华 《计算机应用》2006,26(6):1396-1397
Text similarity analysis, clustering, and classification are mostly based on feature words. Because Chinese text has no delimiters between words, the groundwork of Chinese word segmentation and the handling of high-dimensional feature spaces inevitably incur high computational cost. This paper explores an approach to text similarity analysis that uses relationships between Chinese characters instead of feature words. It first defines a way to quantify the relationship between characters in a text, introducing the concept of character association degree; it then builds a character association degree matrix to represent a Chinese text and designs a similarity measure for Chinese texts based on this matrix. Experimental results show that character association degree outperforms statistics such as bigram word frequency, mutual information, and the t-test. Since no Chinese word segmentation is required, the algorithm is suitable for processing massive amounts of Chinese text.
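A rough sketch of a character-level representation in this spirit: a character co-occurrence matrix built within a small window, compared by cosine similarity; the paper's character association degree formula is not reproduced.

```python
# Sketch: represent a Chinese text by a character-pair co-occurrence matrix
# (no word segmentation needed) and compare two texts by cosine similarity
# of the flattened matrices.
import numpy as np

def char_matrix(text, alphabet, window=2):
    idx = {c: i for i, c in enumerate(alphabet)}
    M = np.zeros((len(alphabet), len(alphabet)))
    for i, c in enumerate(text):
        for j in range(i + 1, min(i + 1 + window, len(text))):
            d = text[j]
            if c in idx and d in idx:
                M[idx[c], idx[d]] += 1
                M[idx[d], idx[c]] += 1
    return M.ravel()

def similarity(t1, t2, window=2):
    alphabet = sorted(set(t1) | set(t2))
    v1 = char_matrix(t1, alphabet, window)
    v2 = char_matrix(t2, alphabet, window)
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return float(v1 @ v2 / denom) if denom else 0.0

print(similarity("文本相似性分析方法", "中文文本相似度计算"))
```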

18.
A new-word recognition method based on large-scale corpora
A new-word recognition method based on large-scale corpora is proposed. On the basis of repeated-string statistics, it analyzes both the external context and the internal composition of candidate strings, successively checking linguistic features such as the variety of adjacent context words, the probability that the first and last characters form words in those positions, and the coupling degree of character pairs, filtering out new words at each step. Experiments on corpora of different sizes show that the method is feasible and effective and can be applied to fields such as dictionary compilation and terminology extraction.
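A small sketch of some of the candidate features mentioned above for two-character strings: repeated-string frequency, variety of adjacent context characters, and a PMI-style coupling degree; the actual thresholds and the single-character position-forming probabilities are not reproduced.

```python
# Sketch of filtering features for two-character candidates: frequency of the
# repeated string, variety of adjacent context characters, and a mutual-
# information-style coupling degree. Thresholds are illustrative.
import math
from collections import Counter

def candidate_features(corpus, min_freq=2):
    chars = Counter(corpus)
    bigrams = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
    total = max(len(corpus) - 1, 1)
    feats = {}
    for bg, freq in bigrams.items():
        if freq < min_freq:
            continue
        left = {corpus[i - 1] for i in range(1, len(corpus) - 1) if corpus[i:i + 2] == bg}
        right = {corpus[i + 2] for i in range(len(corpus) - 2) if corpus[i:i + 2] == bg}
        # coupling degree: pointwise mutual information of the two characters
        pmi = math.log((freq / total) /
                       ((chars[bg[0]] / len(corpus)) * (chars[bg[1]] / len(corpus))))
        feats[bg] = {"freq": freq, "left_variety": len(left),
                     "right_variety": len(right), "coupling": round(pmi, 2)}
    return feats

corpus = "网络游戏玩家喜欢网络游戏因为网络游戏有趣"
for bg, f in candidate_features(corpus).items():
    print(bg, f)
```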

19.
The core problem of off-topic essay detection is text similarity computation. Traditional text similarity methods are generally based on the vector space model: texts are represented as high-dimensional vectors and the similarity between the vectors is computed. Such methods consider only the terms that appear in the text (the bag-of-words model) and do not exploit the semantic information of terms. This paper proposes a new text similarity method based on word expansion, which combines the bag-of-words model with distributed word representations: words that are semantically similar to the terms appearing in a text are found in the distributed representation space and added to the text representation, expanding the words of the text, and similarity is then computed on the expanded texts. The method is applied to off-topic detection for English essays; an off-topic detection system is built and tested on a real data set. Experimental results show that the system can effectively identify off-topic essays, clearly outperforming the baseline system.
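A sketch of the word-expansion step, assuming gensim word vectors trained on a toy corpus; each text's bag of words is expanded with nearest neighbors before cosine similarity is computed.

```python
# Sketch of word expansion: add the nearest neighbors of each word (from a
# distributed representation) to the bag of words, then compute cosine
# similarity between the expanded bags. A real system would use vectors
# trained on a large corpus.
import math
from collections import Counter
from gensim.models import Word2Vec

corpus = [["essay", "topic", "writing", "prompt"],
          ["student", "essay", "writing", "grade"],
          ["topic", "prompt", "question", "subject"]]
w2v = Word2Vec(sentences=corpus, vector_size=20, min_count=1, epochs=200, seed=1)

def expand(tokens, topn=2):
    bag = Counter(tokens)
    for t in tokens:
        if t in w2v.wv:
            for neighbor, _ in w2v.wv.most_similar(t, topn=topn):
                bag[neighbor] += 1          # expanded evidence from similar words
    return bag

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

essay = ["student", "writing", "subject"]
prompt = ["essay", "topic", "prompt"]
print(round(cosine(expand(essay), expand(prompt)), 3))
```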

20.
In Deep Web querying, the choice of keywords is a key problem. This paper studies query word selection for querying text databases in the Deep Web. The Zipf Estimator is applied to the method of selecting terms by query word frequency, and a method is proposed that derives the ranking of query words over the whole document collection from their ranking in a sample of documents. Applying the Zipf Estimator to query word selection reduces the amount of computation and obtains more results with fewer queries. Test results show that using the Zipf Estimator for query word selection can effectively improve the efficiency of querying text databases in the Deep Web.
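A sketch of the Zipf idea: fit the log(frequency) versus log(rank) line on a document sample and extrapolate term frequencies to the whole collection; the paper's actual Zipf Estimator is not reproduced, and the 1000-document collection size is assumed.

```python
# Sketch: rank terms by frequency in a small document sample, fit the Zipf
# log-log line, and extrapolate frequencies to the (assumed) full collection.
import numpy as np
from collections import Counter

sample_docs = ["deep web databases store text records",
               "query keywords select records from text databases",
               "text records returned for each query"]
counts = Counter(w for d in sample_docs for w in d.split())
ranked = counts.most_common()                      # [(word, freq), ...] by rank

ranks = np.arange(1, len(ranked) + 1)
freqs = np.array([f for _, f in ranked], dtype=float)
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)

scale = 1000 / len(sample_docs)                    # assumed full collection size
for rank, (word, _) in enumerate(ranked[:5], start=1):
    est = scale * np.exp(intercept + slope * np.log(rank))
    print(word, round(float(est), 1))              # estimated collection-wide frequency
```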
