Related Literature
18 related documents found.
1.
Directly porting traditional text-similarity measures to short texts yields biased results, because the brevity of short texts causes data sparsity. This paper represents short texts as complex networks and proposes a new short-text similarity measure. The method first preprocesses the short texts, then builds a complex-network model for each text and computes complex-network feature values for its words; an external tool is used to compute the semantic similarity between words, and these word-level semantic similarities are combined into the definition of similarity between short texts. Finally, clustering experiments on benchmark datasets verify that, under the F-measure criterion, the proposed short-text similarity measure outperforms the traditional TF-IDF method and another method based on term-level semantic similarity.
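A minimal sketch of the network-representation step, assuming a sliding-window co-occurrence graph; degree centrality and the clustering coefficient stand in for the unspecified "complex network feature values", and the paper's unnamed external semantic-similarity tool is omitted:

```python
# Sketch: represent a short text as a word co-occurrence network (networkx)
# and read off per-word network features. Window size is an assumption.
import networkx as nx

def build_word_network(tokens, window=2):
    """Link words that co-occur within a sliding window."""
    g = nx.Graph()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1 : i + window + 1]:
            if w != v:
                g.add_edge(w, v)
    return g

def network_features(g):
    """Per-word feature values: degree centrality and clustering coefficient."""
    deg = nx.degree_centrality(g)
    clu = nx.clustering(g)
    return {w: (deg[w], clu[w]) for w in g.nodes}

tokens = "短文本 相似度 计算 方法 基于 复杂 网络 表征 短文本".split()
print(network_features(build_word_network(tokens)))
```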

2.
Text representation is a fundamental task in natural language processing. To address the high dimensionality and sparsity of traditional short-text representations, this paper proposes a representation-learning method for short texts based on the context of a semantic feature space. Since the initial feature space has very high dimension, the method computes mutual information and co-occurrence relations between terms to obtain initial similarities, clusters the terms, and uses the cluster centers to represent the reduced semantic feature space. Then, combining the contextual information of terms within each cluster, three similarity measures are designed to compute the similarity between the terms of the text to be represented and the feature words of the feature space, yielding a mapping matrix for representation learning of short texts. Experimental results show that the proposed method reflects the semantic information of short texts well and learns reasonable, effective representations.
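A minimal sketch of the "cluster terms, then project texts onto the cluster centers" idea; the paper's mutual-information/co-occurrence similarity is approximated here by term-document count vectors, and only one of the three similarity measures (cosine) is shown:

```python
# Sketch: build a reduced semantic feature space from term clusters
# (sklearn), then represent a new text against the k cluster centers.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the cat sat on the mat", "a dog chased the cat",
        "stock prices fell sharply", "markets and stock traders panicked"]
vec = CountVectorizer()
X = vec.fit_transform(docs)                # documents x terms
term_vectors = X.T.toarray()               # one row per term

k = 2                                      # size of the reduced feature space
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(term_vectors)

def represent(text):
    """Map a text onto the k cluster centers (the semantic feature space)."""
    bow = vec.transform([text]).toarray().ravel()
    present = bow > 0
    if not present.any():
        return np.zeros(k)
    # average the similarity of the text's terms to each cluster center
    sims = cosine_similarity(term_vectors[present], km.cluster_centers_)
    return sims.mean(axis=0)

print(represent("the cat and the dog"))
```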

3.
After analyzing the shortcomings of existing short-text clustering algorithms, this paper proposes a semi-supervised short-text clustering algorithm based on an improved similarity measure and class-center vectors. First, strongly class-discriminative words are defined; the category information of the labeled data is used to extract and construct the set of such words, and the cosine similarity over the initial features is fused with the similarity over strongly class-discriminative terms, giving a more reasonable, improved formula for short-text similarity. Then, unlabeled samples are assigned by computing their similarity to the class-center vectors; at the same time, the labeled set and the class-center vectors are updated and the strongly class-discriminative words are re-extracted. This process repeats until all data have been assigned to classes. Experiments show that, compared with similar algorithms, the proposed algorithm substantially improves clustering accuracy and time efficiency.
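A minimal sketch of the incremental assignment loop, assuming every class already has at least one labeled sample; the paper's fused similarity and the re-extraction of discriminative words are reduced here to plain cosine similarity for brevity:

```python
# Sketch: semi-supervised assignment against class-center vectors.
# X_lab / X_unlab are dense numpy feature matrices.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def semi_supervised_assign(X_lab, y_lab, X_unlab, n_classes):
    """Return a predicted class for each row of X_unlab."""
    X_pool, y_pool = list(X_lab), list(y_lab)
    pred = np.full(len(X_unlab), -1)
    remaining = list(range(len(X_unlab)))
    while remaining:
        # recompute class-center vectors from all data assigned so far
        centers = np.vstack([
            np.mean([x for x, y in zip(X_pool, y_pool) if y == c], axis=0)
            for c in range(n_classes)])
        sims = cosine_similarity(X_unlab[remaining], centers)
        # commit the single most confident sample, then iterate
        r, c = np.unravel_index(np.argmax(sims), sims.shape)
        idx = remaining.pop(r)
        pred[idx] = c
        X_pool.append(X_unlab[idx]); y_pool.append(c)
    return pred
```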

4.
Because Chinese short texts have low feature-word frequencies and contain many variant words and neologisms, the similarity between Chinese short texts tends to drift, making it hard to apply existing clustering algorithms designed for long texts directly. To address this similarity drift, this paper proposes a method that expands related word sets with HowNet (《知网》) to build dynamic text vectors; the dynamic vectors are used to compute the content similarity of Chinese short texts and thereby uncover their latent associations, which mitigates the impact of low feature-word frequencies, variant words, and neologisms on clustering and yields better clustering results. Experimental results show that the algorithm achieves higher clustering quality than traditional algorithms.
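A minimal sketch of the dynamic-vector idea: expand each short text with related words before computing similarity. The HowNet lookup is stubbed with a hypothetical `RELATED` table; real code would query HowNet itself (e.g. through the OpenHowNet package) instead:

```python
# Sketch: HowNet-expanded "dynamic vectors" for short-text similarity.
from collections import Counter

RELATED = {            # hypothetical stand-in for HowNet related-word sets
    "计算机": ["电脑", "主机"],
    "手机":   ["移动电话", "智能机"],
}

def dynamic_vector(tokens):
    """Bag of words over the text plus its expanded related words."""
    expanded = list(tokens)
    for w in tokens:
        expanded.extend(RELATED.get(w, []))
    return Counter(expanded)

def content_similarity(a, b):
    """Cosine similarity of two dynamic vectors."""
    va, vb = dynamic_vector(a), dynamic_vector(b)
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (sum(v * v for v in va.values()) ** 0.5) \
         * (sum(v * v for v in vb.values()) ** 0.5)
    return dot / norm if norm else 0.0

# "计算机" expands to "电脑", so these two texts now overlap:
print(content_similarity(["我", "的", "计算机"], ["我", "的", "电脑"]))
```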

5.
To cope with the brevity and sparse features of short texts, a short-text similarity method fusing co-occurrence distance and distinctiveness is proposed. On one hand, the method uses the distance between two co-occurring words over the whole short-text corpus to compute their co-occurrence-distance correlation; on the other hand, a co-occurrence distinctiveness score is computed to improve the accuracy of the distance correlation. Terms in each text are then weighted by relevance, and the similarity of two texts is finally computed from the term weights and the co-occurrence-distance correlations between terms. Experimental results show that the proposed method improves the accuracy of short-text similarity computation.
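A minimal sketch of a co-occurrence-distance correlation, under our assumption (the abstract does not give the formula) that two words are more correlated the closer together they occur across the corpus; the distinctiveness weighting is omitted:

```python
# Sketch: correlation decays with the average minimal token distance
# between two words over all texts where they co-occur.
from collections import defaultdict

def cooccurrence_distance(corpus):
    """corpus: list of token lists. Returns {(w1, w2): correlation}."""
    dists = defaultdict(list)
    for tokens in corpus:
        pos = defaultdict(list)
        for i, w in enumerate(tokens):
            pos[w].append(i)
        words = sorted(pos)
        for i, w1 in enumerate(words):
            for w2 in words[i + 1:]:
                d = min(abs(p - q) for p in pos[w1] for q in pos[w2])
                dists[(w1, w2)].append(d)
    return {pair: 1.0 / (1.0 + sum(ds) / len(ds))
            for pair, ds in dists.items()}

corpus = [["短文本", "相似度", "计算"], ["短文本", "聚类", "相似度"]]
print(cooccurrence_distance(corpus))
```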

6.
To address three main challenges of short-text clustering, namely the sparsity of feature keywords, the complexity of operating in high-dimensional spaces, and the interpretability of clusters, a semantically enhanced K-means algorithm for short-text clustering is proposed. The algorithm represents each short text as a set of words, easing the sparsity of feature keywords; it obtains initial cluster centers by mining the maximal frequent word sets of the collection, which overcomes K-means' sensitivity to initial cluster centers and makes the clusters interpretable; and it computes document similarity with a semantic measure combined with TF-IDF values, avoiding computation in high-dimensional spaces. Experiments show that this semantics-based short-text clustering algorithm outperforms traditional short-text clustering algorithms.
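A minimal sketch of seeding K-means with frequent word sets: frequent word pairs stand in here for the paper's maximal frequent word sets, and each seed center is the mean TF-IDF vector of the documents containing the pair:

```python
# Sketch: frequent-wordset-seeded K-means (sklearn).
from collections import Counter
from itertools import combinations
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = ["cheap flight ticket deal", "flight ticket booking",
        "python code tutorial", "learn python code fast"]
X = TfidfVectorizer().fit_transform(docs).toarray()

pair_counts = Counter(p for d in docs
                      for p in combinations(sorted(set(d.split())), 2))
k = 2
seeds = [p for p, _ in pair_counts.most_common(k)]
centers = np.vstack([X[[i for i, d in enumerate(docs)
                        if set(p) <= set(d.split())]].mean(axis=0)
                     for p in seeds])

km = KMeans(n_clusters=k, init=centers, n_init=1).fit(X)
print(km.labels_)
```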

7.
To address the poor clustering performance caused by the sparse features and rapid turnover of short texts on the Internet, this paper proposes a short-text clustering algorithm based on feature-word vectors. First, a feature-word extraction formula weighted by part of speech and word length is defined, and the extracted feature words are used to represent each short text. Then, the Skip-gram model (continuous skip-gram model) is trained on a large corpus to obtain word vectors capturing the semantics of the feature words. Finally, Word Mover's Distance (WMD) is introduced to compute the similarity between short texts and is applied in a hierarchical clustering algorithm. Evaluation on four test datasets shows that the method clearly outperforms traditional clustering algorithms, improving the average F-measure by 56.41% over the second-best result.
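A minimal sketch of the pipeline on a toy corpus: skip-gram vectors from gensim, Word Mover's Distance between token lists, then average-link hierarchical clustering (gensim's `wmdistance` needs its optional POT dependency installed); the POS/length feature-word extraction step is omitted:

```python
# Sketch: Skip-gram + WMD + hierarchical clustering.
import numpy as np
from gensim.models import Word2Vec
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

texts = [["cheap", "flight", "ticket"], ["flight", "ticket", "deal"],
         ["python", "code", "tutorial"], ["learn", "python", "code"]]

# sg=1 selects the skip-gram architecture
model = Word2Vec(texts, vector_size=50, sg=1, min_count=1, epochs=50, seed=1)

n = len(texts)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = model.wv.wmdistance(texts[i], texts[j])

Z = linkage(squareform(D), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))
```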

8.
Lao is a low-resource language, and extracting more semantic information from limited corpora can effectively address the inaccuracy of Chinese-Lao short-text similarity computation. Multi-task learning is one effective way to obtain such semantic information. After studying the characteristics of Chinese and Lao short texts, this paper proposes a multi-task Chinese-Lao bilingual short-text similarity method that fuses part-of-speech position features. First, the bilingual short texts are represented with part-of-speech position-feature weights combined with TF-IDF weights, while an improved TextRank algorithm extracts the core sentence of each text. Then, a bidirectional long short-term memory network with self-attention computes the similarity of the bilingual short texts and of their corresponding core sentences. Finally, multi-task learning treats core-sentence similarity computation as an auxiliary task, sharing the extra semantic information to improve the bilingual short-text similarity model. Experimental results show that the proposed method performs better under limited training data, reaching an F1 of 76.16%.
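A compact PyTorch sketch of the multi-task shape only: one shared BiLSTM encoder with a simplified self-attention pooling, a main head for full texts and an auxiliary head for core sentences. Dimensions, the 0.5 auxiliary loss weight, and the random inputs are all our assumptions; the POS/position features and TextRank step are omitted:

```python
# Sketch: shared encoder, two similarity tasks, joint loss.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, vocab, dim=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # simplified self-attention
    def forward(self, ids):
        h, _ = self.lstm(self.emb(ids))        # (B, T, 2H)
        w = torch.softmax(self.attn(h), dim=1)
        return (w * h).sum(dim=1)              # attention-pooled (B, 2H)

enc = SharedEncoder(vocab=1000)
cos = nn.CosineSimilarity(dim=-1)
zh, lo = torch.randint(0, 1000, (4, 12)), torch.randint(0, 1000, (4, 12))
zh_core, lo_core = torch.randint(0, 1000, (4, 6)), torch.randint(0, 1000, (4, 6))

main_sim = cos(enc(zh), enc(lo))               # main task: full short texts
aux_sim = cos(enc(zh_core), enc(lo_core))      # auxiliary task: core sentences
target = torch.ones(4)                         # toy similarity labels
loss = nn.MSELoss()(main_sim, target) + 0.5 * nn.MSELoss()(aux_sim, target)
loss.backward()
```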

9.
Short texts have long been a hot topic in natural language processing. Because their features are sparse and their language highly colloquial, short-text clustering models suffer from high dimensionality, poor topic focus, and weak semantic signal. To address these problems, this paper proposes a short-text clustering algorithm with improved feature weighting. First, multi-factor weighting rules are defined: a composite evaluation function is built from part-of-speech and symbol-level sentiment analysis, and feature words are selected by combining term relevance and content relevance. Next, the Skip-gram model (Continuous Skip-gram Model) is trained on a large corpus to obtain word vectors capturing the semantics of the feature words. Finally, the RWMD algorithm computes the similarity between short texts, which is then used in K-means clustering. Clustering results on three test sets show that the algorithm effectively improves the accuracy of short-text clustering.
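A minimal sketch of RWMD (the relaxed word mover's distance): each word moves all of its weight to its nearest counterpart, and the larger of the two directed costs is the standard lower bound on full WMD. Random vectors stand in for the Skip-gram vectors the paper trains:

```python
# Sketch: RWMD with uniform word weights.
import numpy as np

rng = np.random.default_rng(0)
VEC = {w: rng.normal(size=50) for w in
       ["短文本", "聚类", "相似度", "计算", "特征", "权重"]}

def rwmd(doc_a, doc_b):
    def directed(src, dst):
        cost = 0.0
        for w in src:
            # each word's weight flows entirely to its nearest neighbor
            d = min(np.linalg.norm(VEC[w] - VEC[v]) for v in dst)
            cost += d / len(src)
        return cost
    return max(directed(doc_a, doc_b), directed(doc_b, doc_a))

print(rwmd(["短文本", "聚类"], ["短文本", "相似度", "计算"]))
```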

10.
To improve the accuracy of Chinese short-text similarity computation, a new hybrid-strategy method is proposed. First, based on the semantic distance between words, hierarchical clustering builds a binary clustering tree over the short texts, improving the traditional vector space model (VSM) and yielding a keyword-weighted text similarity. Then, the traditional grammar-and-semantics model is improved by extracting the main constituents of each sentence, giving the semantic similarity of the text backbones. Finally, the two similarities are weighted and combined into the overall text similarity. Experimental results show that the proposed method computes short-text similarity more accurately and agrees better with human judgment.
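The final fusion step is a simple weighted combination; a minimal sketch, with both component similarities stubbed and the weight `alpha` an assumed tuning parameter, not a value from the paper:

```python
# Sketch: weighted fusion of the two component similarities.
def fused_similarity(sim_vsm, sim_backbone, alpha=0.6):
    """Combine keyword-VSM similarity with sentence-backbone similarity."""
    return alpha * sim_vsm + (1 - alpha) * sim_backbone

print(fused_similarity(0.72, 0.55))
```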

11.
Short text similarity based on probabilistic topics
In this paper, we propose a new method for measuring the similarity between two short text snippets by comparing each of them with probabilistic topics. Specifically, our method starts by finding the distinguishing terms between the two short text snippets and comparing them with a series of probabilistic topics, extracted by a Gibbs sampling algorithm. The relationship between the distinguishing terms of the short text snippets can be discovered by examining their probabilities under each topic. The similarity between two short text snippets is calculated based on their common terms and the relationship of their distinguishing terms. Extensive experiments on paraphrasing and question categorization show that the proposed method can calculate the similarity of short text snippets more accurately than other methods, including the pure TF-IDF measure.
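A minimal sketch in the spirit of this paper: gensim's LDA stands in for the Gibbs-sampled topics, distinguishing terms are related through their per-topic probabilities, and the final combination of common and distinguishing terms is one simple averaging choice of ours, not the paper's exact formula:

```python
# Sketch: relate distinguishing terms of two snippets via topic columns.
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

texts = [["cat", "dog", "pet"], ["dog", "leash", "walk"],
         ["stock", "market", "trade"], ["market", "price", "trade"]]
d = Dictionary(texts)
lda = LdaModel([d.doc2bow(t) for t in texts], num_topics=2, id2word=d,
               random_state=0, passes=20)
topics = lda.get_topics()                 # shape: (num_topics, vocab_size)

def term_relation(w1, w2):
    """Cosine of the two terms' per-topic probability columns."""
    a, b = topics[:, d.token2id[w1]], topics[:, d.token2id[w2]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

s1, s2 = ["cat", "pet"], ["dog", "walk"]
common = set(s1) & set(s2)
distinct = [(x, y) for x in set(s1) - common for y in set(s2) - common]
score = (len(common) + sum(term_relation(x, y) for x, y in distinct)) \
        / (len(common) + len(distinct))
print(score)
```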

12.
Short text message streams are produced by Instant Messaging and the Short Message Service, both widely used today. Each stream usually contains more than one conversation thread. Detecting threads in such streams helps various applications, such as business intelligence, crime investigation, and public opinion analysis. Existing work, mainly based on text similarity, faces many challenges, including the sparse feature vectors and irregularity of short text messages. This paper introduces the novel concept of contextual correlation, in place of traditional text similarity, into a single-pass clustering algorithm to meet the challenges of thread detection. We first analyze the contextually correlative nature of conversations in short text message streams, and then propose an unsupervised method to compute the degree of correlation. A single-pass algorithm employing contextual correlation is developed to detect threads in massive short text streams. Experiments on large real-life online chat logs show that our approach improves performance by 11% in F1 over the best similarity-based algorithm.
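A minimal sketch of single-pass thread detection: each incoming message joins the best-matching thread when the correlation exceeds a threshold, otherwise it starts a new thread. The paper's unsupervised contextual-correlation measure is stubbed here with token overlap (Jaccard), and the threshold is an assumed value:

```python
# Sketch: single-pass threading over a message stream.
def correlation(msg, thread):
    """Stand-in for the paper's contextual correlation measure."""
    a, b = set(msg), set(thread[-1])       # compare with latest message
    return len(a & b) / len(a | b)

def single_pass(stream, threshold=0.2):
    threads = []
    for msg in stream:
        scores = [(correlation(msg, t), i) for i, t in enumerate(threads)]
        best_score, best_i = max(scores, default=(0.0, -1))
        if best_score >= threshold:
            threads[best_i].append(msg)
        else:
            threads.append([msg])          # open a new thread
    return threads

stream = [["lunch", "today"], ["lunch", "where"], ["server", "down"],
          ["server", "restarted"]]
print(single_pass(stream))
```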

13.
Knowledge-based vector space model for text clustering
This paper presents a new knowledge-based vector space model (VSM) for text clustering. In the new model, semantic relationships between terms (e.g., words or concepts) are included in representing text documents as a set of vectors. The idea is to calculate the dissimilarity between two documents more effectively so that text clustering results can be enhanced. In this paper, the semantic relationship between two terms is defined by the similarity of the two terms. Such similarity is used to re-weight term frequency in the VSM. We consider and study two different similarity measures for computing the semantic relationship between two terms based on two different approaches. The first approach is based on existing ontologies like WordNet and MeSH. We define a new similarity measure that combines the edge-counting technique, the average distance and the position weighting method to compute the similarity of two terms from an ontology hierarchy. The second approach is to make use of text corpora to construct the relationships between terms and then calculate their semantic similarities. Three clustering algorithms, bisecting k-means, feature weighting k-means and a hierarchical clustering algorithm, have been used to cluster real-world text data represented in the new knowledge-based VSM. The experimental results show that the clustering performance based on the new model was much better than that based on the traditional term-based VSM.
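A minimal sketch of the re-weighting step: WordNet path similarity (the edge-counting part only; the paper's full measure also uses average distance and position weighting) boosts the frequency of terms related to those actually present, so documents using synonyms move closer together. Requires nltk, plus a one-time `nltk.download("wordnet")`:

```python
# Sketch: semantic re-weighting of term frequencies in a VSM.
from collections import Counter
from nltk.corpus import wordnet as wn

def term_sim(w1, w2):
    """Best WordNet path similarity over the two terms' synsets."""
    scores = [s1.path_similarity(s2)
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    scores = [s for s in scores if s is not None]
    return max(scores, default=0.0)

def reweight(doc_tokens, vocab):
    """Re-weighted term frequencies over a fixed vocabulary."""
    tf = Counter(doc_tokens)
    return [tf[t] + sum(term_sim(t, s) * c for s, c in tf.items() if s != t)
            for t in vocab]

vocab = ["car", "automobile", "dog"]
# "automobile" gets weight from "car" even though it never appears:
print(reweight(["car", "car", "dog"], vocab))
```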

14.
Effective features of microblog short texts are sparse and hard to extract, which hurts the accuracy of microblog text representation, classification, and clustering. To address this, a feature-word selection algorithm for microblog short texts combining statistical and semantic information is proposed. Based on part-of-speech matching rules, the algorithm builds a composite evaluation function from each term's TF-IDF, part of speech, and word-length factor, and combines it with the semantic relatedness between the term and the text content to select feature words, so that the selected feature words accurately represent the topic of the microblog text. Combining the new feature-word selection algorithm with a naive Bayes classifier, experiments on a microblog classification corpus show that, compared with other traditional algorithms, the new algorithm achieves higher classification accuracy, indicating that its selected feature words represent microblog topics more accurately.
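A minimal sketch of the composite evaluation function: TF-IDF scaled by a part-of-speech weight and a word-length factor, keeping the top-scoring terms. The POS weights and length factor below are illustrative guesses, not the paper's values, and the semantic-relatedness term and downstream naive Bayes classifier (e.g. sklearn's `MultinomialNB`) are omitted:

```python
# Sketch: composite feature-word scoring for microblog short texts.
import math
from collections import Counter

POS_WEIGHT = {"n": 1.0, "v": 0.8, "a": 0.6}       # noun / verb / adjective

def score_terms(docs_tagged, top_k=5):
    """docs_tagged: list of [(word, pos), ...] per microblog."""
    n = len(docs_tagged)
    df = Counter(w for doc in docs_tagged for w in {w for w, _ in doc})
    scores = {}
    for doc in docs_tagged:
        tf = Counter(w for w, _ in doc)
        for w, pos in doc:
            tfidf = tf[w] / len(doc) * math.log(n / df[w] + 1)
            length_factor = min(len(w) / 4, 1.0)  # favor longer words
            s = tfidf * POS_WEIGHT.get(pos, 0.3) * length_factor
            scores[w] = max(scores.get(w, 0.0), s)
    return [w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:top_k]]

docs = [[("微博", "n"), ("转发", "v"), ("的", "u")],
        [("股市", "n"), ("大涨", "v"), ("了", "u")]]
print(score_terms(docs))
```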

15.
To optimize text clustering, a text clustering method based on word hyperclique theory is proposed. The association patterns of words within documents are used to assess inter-document similarity; word hypercliques serve as auxiliary information for the document vectors, and cluster analysis is performed by graph partitioning. Comparison with other clustering methods shows that hyperclique-based text clustering improves clustering accuracy.

16.
Short text clustering by finding core terms
A new clustering strategy, TermCut, is presented to cluster short text snippets by finding core terms in the corpus. We model the collection of short text snippets as a graph in which each vertex represents a piece of short text snippet and each weighted edge between two vertices measures the relationship between the two vertices. TermCut is then applied to recursively select a core term and bisect the graph such that the short text snippets in one part of the graph contain the term, whereas those snippets in the other part do not. We apply the proposed method on different types of short text snippets, including questions and search results. Experimental results show that the proposed method outperforms state-of-the-art clustering algorithms for clustering short text snippets.
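A minimal sketch of the TermCut idea: pick the term that best splits the current set of snippets, bisect on containment of that term, and recurse. The paper scores candidate core terms on a snippet graph; a crude stand-in is used here, choosing the term whose document frequency is closest to half the set:

```python
# Sketch: recursive bisection on a chosen core term.
def termcut(snippets, min_size=2):
    if len(snippets) <= min_size:
        return [snippets]
    vocab = {w for s in snippets for w in s}
    def balance(term):                     # how evenly the term splits the set
        df = sum(term in s for s in snippets)
        return abs(df - len(snippets) / 2)
    core = min(vocab, key=balance)
    with_t = [s for s in snippets if core in s]
    without = [s for s in snippets if core not in s]
    if not with_t or not without:          # no useful cut exists
        return [snippets]
    return termcut(with_t) + termcut(without)

snippets = [{"python", "code"}, {"python", "tutorial"},
            {"flight", "ticket"}, {"cheap", "flight"}]
print(termcut(snippets))
```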

17.
王刚, 钟国祥. 《计算机科学》, 2010, 37(9): 222-224
To improve the quality of text clustering and obtain satisfactory results, and noting that existing text clustering rarely involves the intension of concepts or the relations between concepts, a text clustering algorithm based on ontology similarity, TCBO (Text Clustering Based on Ontology), is proposed. The algorithm describes each document with an ontology so as to capture the intension of concepts and the relations between them. A text similarity algorithm is designed and improved, using ontology-based semantic similarity to measure how close documents are, and a concrete algorithm for clustering texts by this similarity is designed. Experiments show that the method improves clustering quality in terms of both clustering accuracy and cluster relatedness.

18.
The probabilistic linguistic term set is a powerful tool to express and characterize people's cognitively complex information and has thus seen great development over the last several years. To better use probabilistic linguistic term sets in decision making, information measures such as the distance measure, similarity measure, entropy measure, and correlation measure should be defined. However, the inclusion measure, an important kind of information measure, has not yet been defined by scholars. This study proposes the inclusion measure for probabilistic linguistic term sets, and formulas to calculate inclusion degrees are put forward. Then, we introduce the normalized axiomatic definitions of the distance, similarity, and entropy measures of probabilistic linguistic term sets to construct a unified framework of information measures for them. Based on these definitions, we present the relationships and transformation functions among the distance, similarity, entropy, and inclusion measures; we believe that more formulas to calculate the distance, similarity, inclusion degree, and entropy can be induced from these transformation functions. Finally, we put forward an orthogonal clustering algorithm based on the inclusion measure and use it to classify cities in the Economic Zone of Chengdu Plain, China.
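For intuition only, a minimal sketch of an inclusion degree between two probabilistic linguistic term sets, written in the generic fuzzy-set style sum(min)/sum(first); this is an illustration under our own assumptions, not one of the formulas the paper itself proposes:

```python
# Sketch: a generic fuzzy-style inclusion degree over term probabilities.
def inclusion_degree(plts_a, plts_b):
    """plts_*: dict mapping linguistic term -> probability."""
    num = sum(min(p, plts_b.get(term, 0.0)) for term, p in plts_a.items())
    den = sum(plts_a.values())
    return num / den if den else 1.0

a = {"low": 0.2, "medium": 0.5, "high": 0.3}
b = {"medium": 0.6, "high": 0.4}
print(inclusion_degree(a, b))   # degree to which a is contained in b
```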
