Similar Documents
20 similar documents found (search time: 31 ms)
1.
We describe an implementation of a parallel document clustering scheme based on latent semantic indexing, which uses singular value decomposition. Given a set of documents, the clustering algorithm is dynamic in the sense that it automatically infers the number of clusters to be output. The parallel version has been implemented on a LAN and on a dual‐core system. Experimental evaluation of the algorithm shows an average speed‐up of 6.22 for the LAN implementation and an average speed‐up of 3.71 for the dual‐core implementation, while still maintaining a precision and recall in the range [0.85, 1]. To put these implementations in the context of information retrieval, we use the parallel clustering algorithm and develop a document similarity search system. The similarity search system shows good performance in terms of precision and recall. Copyright © 2010 John Wiley & Sons, Ltd.
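The LSI core of the scheme above can be sketched as follows; the toy term-document matrix, the rank `k`, and the cosine comparison are illustrative assumptions, not the authors' parallel implementation.

```python
import numpy as np

def lsi_embed(term_doc, k=2):
    """Project documents into a k-dimensional latent semantic space via SVD."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    # Document vectors are the columns of diag(s_k) @ Vt_k.
    return (np.diag(s[:k]) @ Vt[:k]).T  # shape: (n_docs, k)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy term-document matrix: rows = terms, columns = documents.
X = np.array([[2, 1, 0, 0],
              [1, 2, 0, 0],
              [0, 0, 1, 2],
              [0, 0, 2, 1]], dtype=float)
docs = lsi_embed(X, k=2)
# Documents 0 and 1 share vocabulary, so their LSI vectors are closer
# to each other than to document 2.
```

Clustering in the reduced space then proceeds on these low-dimensional vectors rather than on the raw term counts.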

2.
Practical, efficient Web search engines depend on effective retrieval algorithms, and improving retrieval precision is the key issue for such algorithms. This paper describes a retrieval algorithm that improves on the shortest-fragment algorithm and is built on the extended Boolean retrieval framework. Experimental results on the Web corpus of the 2001 Text REtrieval Conference (TREC-10) show that the new algorithm achieves higher retrieval precision than the original.

3.
In this paper, we propose to use the Harman, Croft, and Okapi measures together with the Lesk algorithm to develop a system for Arabic word sense disambiguation that combines unsupervised and knowledge-based methods. The system resolves lexical semantic ambiguity in Arabic. The information retrieval measures estimate the most relevant sense of the ambiguous word by returning a semantic coherence score for the context that is semantically closest to the original sentence containing the ambiguous word. The Lesk algorithm then selects the adequate sense from those proposed by the information retrieval measures, based on a comparison between the glosses of the word to be disambiguated and its different contexts of use extracted from a corpus. Our experimental study shows that using the Lesk algorithm with the Harman, Croft, and Okapi measures achieves an accuracy of 73%.
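The gloss-comparison step can be illustrated with a simplified Lesk sketch; the function name and the English glosses for "bank" are hypothetical, and the real system operates on Arabic glosses after scoring by the IR measures described above.

```python
def simplified_lesk(context, sense_glosses):
    """Pick the sense whose gloss shares the most words with the context.
    `sense_glosses` maps a sense label to its dictionary gloss."""
    context_words = set(context.lower().split())
    def overlap(gloss):
        return len(context_words & set(gloss.lower().split()))
    return max(sense_glosses, key=lambda s: overlap(sense_glosses[s]))

# Hypothetical glosses for the ambiguous word "bank".
glosses = {
    "finance": "an institution that accepts deposits and lends money",
    "river":   "the sloping land beside a body of water",
}
sense = simplified_lesk("he deposits money at the bank", glosses)
```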

4.
Word sense induction is an important research topic in acquiring word sense knowledge, and clustering is currently the most widely used approach to it. By comparing the respective strengths of the K-means and EM algorithms in word sense induction models, this paper proposes a new clustering algorithm that fuses distance metrics with a Gaussian mixture model, aiming to exploit the advantages of the two algorithms in distance measurement and in modelling the data distribution, respectively, so that both the geometric properties of the data and its normal-distribution information contribute to word sense clustering and improve the induction model. Experimental results show that the proposed hybrid clustering algorithm is highly effective at improving the performance of the word sense induction model.
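A minimal sketch of the fused idea, under the assumption that K-means supplies geometric centers which are then refined into spherical Gaussians, with points reassigned by likelihood; the data, the deterministic initialization, and the spherical-variance simplification are illustrative, not the paper's model.

```python
import math

def mean(pts):
    return tuple(sum(x) / len(pts) for x in zip(*pts))

def kmeans_then_gaussian(points, k=2, iters=10):
    """K-means gives distance-based centers; each cluster is then modelled
    as a spherical Gaussian and points are reassigned by log-likelihood."""
    centers = [points[0], points[-1]]   # deterministic toy initialization (k=2)
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: math.dist(p, centers[i]))].append(p)
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    # Gaussian step: estimate a per-cluster variance, reassign by likelihood.
    stats = [(m, sum(math.dist(p, m) ** 2 for p in c) / len(c) + 1e-9)
             for c, m in zip(clusters, centers)]
    def loglik(p, m, var):
        return -math.dist(p, m) ** 2 / (2 * var) - math.log(var)
    return [max(range(k), key=lambda i: loglik(p, *stats[i])) for p in points]

# Toy 2-D "context vectors" forming two well-separated sense groups.
pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
labels = kmeans_then_gaussian(pts)
```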

5.
Entity disambiguation, a key supporting technology for applications such as knowledge base construction and information retrieval, plays an important role in natural language processing. In short texts, however, traditional disambiguation methods that model an entity's contextual features struggle to extract enough features for disambiguation. Targeting the characteristics of short texts, this paper proposes a graph-based disambiguation method for Chinese short texts built on entity-topic relations. First, topics are inferred with the TextRank algorithm over a corpus constructed from knowledge base information, and the inference results serve as the representation of relations between entities. Then, a disambiguation graph is built for the text to be disambiguated, combined with disambiguation scores produced by a BERT-based semantic matching model. Finally, the disambiguation result is obtained by search and ranking. Evaluated on the dataset of the CCKS2020 short-text entity linking task, the method outperforms other approaches on short-text entity disambiguation and effectively handles Chinese short-text entity disambiguation when entity relations are missing from the knowledge base.

6.
Chinese cross-document coreference resolution based on SVM classification and semantic information
The task of cross-document coreference resolution (CDCR) is to group all phrases that refer to the same entity but are distributed across different documents into a single coreference chain. Traditional CDCR mainly uses clustering to address the name disambiguation problem encountered in information retrieval. This paper recasts the clustering problem as a classification problem and uses a support vector machine (SVM) classifier to resolve both name disambiguation and alias grouping in information extraction. The method effectively fuses word-formation and pronunciation features of entity names with a variety of semantic features from inside and outside the documents. Experiments on a Chinese cross-document coreference corpus show that, compared with clustering methods, this approach improves both precision and recall.

7.
Acquiring part-whole relations is an important part of knowledge acquisition, and the Web has gradually become one of its key resources. Search engines are an effective means of obtaining part-whole knowledge from the Web; we call the set of retrieved results containing part-whole relations a part-whole corpus. Since mainstream search engines do not yet support semantic search, constructing effective queries that yield a corpus rich in part-whole relations, from which the relations can then be extracted, becomes an important problem. This paper proposes a new query construction method for obtaining part-whole corpora from the Web. The method constructs queries based on context words and then uses existing search engines to retrieve part-whole corpora from the Web. It is compared against manually constructed queries and corpus-based query construction in two respects: the number of sentences in the corpus that contain part-whole relations, and how easily part-whole relations can be further extracted from the corpus. Experimental results show that the method substantially outperforms both alternatives.

8.
This paper presents a new method of retrieving cases from a case base based on the K-tree search algorithm. Building an automated CBR system relies on representing knowledge in an appropriate form and having efficient case retrieval methods. Using the Intelligent Business Process Reengineering System (IBPRS) architecture as a base, we discuss a model-based case representation approach to solve the knowledge elicitation bottleneck. In addition to presenting the model-based case representation method, we introduce a K-tree search method to transform the case base into a tree structure and discuss how it can be applied to the case retrieval process in IBPRS. The basic idea of the algorithm is to use the attribute values defined in the case label as general information for case matching and retrieval.

9.
Traditional search engine evaluation requires manually annotated gold-standard answer sets, which costs substantial human effort; the results depend on annotation accuracy, and the process is inefficient. Based on cluster analysis, this paper proposes a performance metric and a method for automatic search engine evaluation. The method automatically computes the coverage of informational queries, clusters the retrieval results according to that coverage, and evaluates retrieval performance automatically using metrics such as inter-cluster and intra-cluster distance. Experimental results show that the clustering-based evaluation agrees with evaluation based on manual annotation.
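The inter-/intra-cluster distance indicators can be sketched as below; the toy result vectors are hypothetical, whereas the actual method clusters real retrieval results by query coverage.

```python
import math

def intra_inter(clusters):
    """Mean intra-cluster distance (compactness) and mean inter-cluster
    centroid distance (separation) for clustered retrieval results."""
    def centroid(pts):
        return tuple(sum(x) / len(pts) for x in zip(*pts))
    cents = [centroid(c) for c in clusters]
    intra = sum(math.dist(p, m) for c, m in zip(clusters, cents) for p in c) \
            / sum(len(c) for c in clusters)
    pairs = [(i, j) for i in range(len(cents)) for j in range(i + 1, len(cents))]
    inter = sum(math.dist(cents[i], cents[j]) for i, j in pairs) / len(pairs)
    return intra, inter

# Tight, well-separated result clusters -> small intra, large inter,
# which the evaluation reads as good retrieval performance.
clusters = [[(0, 0), (0, 1)], [(10, 10), (10, 11)]]
intra, inter = intra_inter(clusters)
```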

10.
To address the performance and efficiency problems of traditional centralized indexes on large-scale data, this paper proposes a retrieval algorithm based on text clustering. A text clustering algorithm is used to improve the existing index partitioning scheme, and query intent is judged by computing the distance between the query and the clustering results, narrowing the search scope. Experimental results show that the proposed scheme effectively relieves the pressure of indexing and retrieving large-scale data and substantially improves distributed retrieval performance while maintaining high precision and recall.
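The query-routing idea can be sketched under the assumption that each index partition is summarized by its cluster centroid; `route_query` and the centroids are illustrative, not the paper's implementation.

```python
import math

def route_query(query_vec, centroids, top=1):
    """Send the query only to the partitions whose cluster centroids are
    closest to it, instead of broadcasting it to every index shard."""
    ranked = sorted(range(len(centroids)),
                    key=lambda i: math.dist(query_vec, centroids[i]))
    return ranked[:top]

# Hypothetical centroids of three topic-based index partitions.
centroids = [(0.0, 0.0), (5.0, 5.0), (0.0, 9.0)]
shards = route_query((4.5, 5.2), centroids)
```

Restricting the search to the selected shards is what shrinks the query range relative to a single centralized index.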

11.
Text clustering is an important research topic in natural language processing, applied mainly in information retrieval and Web mining; its key components are text representation and the clustering algorithm. Building on hierarchical clustering, this paper proposes a new boundary-distance-based hierarchical clustering algorithm that takes the distance between boundary samples on the edges of two clusters as the inter-cluster distance, making effective use of cluster boundary information and improving the accuracy of inter-cluster distance computation. Taking into account the contributions of features of different parts of speech, texts are represented with a multi-vector model. Experiments on several text collections show that the boundary-distance-based multi-vector text clustering algorithm performs well.
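A sketch of agglomerative clustering with a boundary-sample inter-cluster distance, read here as the distance between the closest edge samples of two clusters (an assumption about the exact criterion), and without the multi-vector text representation.

```python
import math

def boundary_distance(a, b):
    """Inter-cluster distance as the distance between the closest boundary
    samples of the two clusters."""
    return min(math.dist(p, q) for p in a for q in b)

def agglomerate(points, k):
    """Merge the two closest clusters (by boundary distance) until k remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: boundary_distance(clusters[ij[0]],
                                                    clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
out = agglomerate(pts, k=2)
```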

12.
Sentence similarity based on semantic nets and corpus statistics
Sentence similarity measures play an increasingly important role in text-related research and applications in areas such as text mining, Web page retrieval, and dialogue systems. Existing methods for computing sentence similarity have been adopted from approaches used for long text documents. These methods process sentences in a very high-dimensional space and are consequently inefficient, require human input, and are not adaptable to some application domains. This paper focuses directly on computing the similarity between very short texts of sentence length. It presents an algorithm that takes account of semantic information and word order information implied in the sentences. The semantic similarity of two sentences is calculated using information from a structured lexical database and from corpus statistics. The use of a lexical database enables our method to model human common sense knowledge and the incorporation of corpus statistics allows our method to be adaptable to different domains. The proposed method can be used in a variety of applications that involve text knowledge representation and discovery. Experiments on two sets of selected sentence pairs demonstrate that the proposed method provides a similarity measure that shows a significant correlation to human intuition.
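The semantic-plus-word-order combination can be sketched as follows; crude exact-match word overlap stands in for the lexical database and corpus statistics, and `delta` plays the role of the weight between the two component scores. This is an illustrative assumption, not the paper's measure.

```python
def sentence_similarity(s1, s2, delta=0.85):
    """Combine a semantic overlap score with a word-order score, in the
    spirit of sim = delta * S_semantic + (1 - delta) * S_order."""
    w1, w2 = s1.lower().split(), s2.lower().split()
    joint = list(dict.fromkeys(w1 + w2))        # joint word set, in order
    semantic = len(set(w1) & set(w2)) / len(joint)
    # Word-order vectors: position of each joint word in a sentence (0 if absent).
    def order_vec(words):
        return [words.index(w) + 1 if w in words else 0 for w in joint]
    r1, r2 = order_vec(w1), order_vec(w2)
    num = sum((a - b) ** 2 for a, b in zip(r1, r2)) ** 0.5
    den = sum((a + b) ** 2 for a, b in zip(r1, r2)) ** 0.5
    order = 1 - num / den if den else 1.0
    return delta * semantic + (1 - delta) * order

hi = sentence_similarity("the dog chased the cat", "the cat chased the dog")
lo = sentence_similarity("the dog chased the cat", "stocks fell sharply today")
```

Note that the word-order term is what separates "the dog chased the cat" from "the cat chased the dog" even though their word sets are identical.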

13.
14.
Web information retrieval is widely used in today's Internet society, but its accuracy leaves much to be desired, largely because the conceptual links between query keywords are severed. Starting from user needs in a restricted domain, and using a search engine as the access interface to Web corpus resources, this paper combines rule-based and statistical methods to generate a semantic concept graph for the query. The graph can serve as the result of requirements analysis, guiding the subsequent semantic retrieval process and improving the relevance of returned results to the user's query. Experimental results show that the generation method is effective and feasible, and offers a useful exploration of concept-graph-based semantic retrieval.

15.
Word sense disambiguation assumes word senses. Within the lexicography and linguistics literature, they are known to be very slippery entities. The first part of the paper looks at problems with existing accounts of 'word sense' and describes the various kinds of ways in which a word's meaning can deviate from its core meaning. An analysis is presented in which word senses are abstractions from clusters of corpus citations, in accordance with current lexicographic practice. The corpus citations, not the word senses, are the basic objects in the ontology. The corpus citations will be clustered into senses according to the purposes of whoever or whatever does the clustering. In the absence of such purposes, word senses do not exist. Word sense disambiguation also needs a set of word senses to disambiguate between. In most recent work, the set has been taken from a general-purpose lexical resource, with the assumption that the lexical resource describes the word senses of English/French/..., between which NLP applications will need to disambiguate. The implication of the first part of the paper is, by contrast, that word senses exist only relative to a task. The final part of the paper pursues this, exploring, by means of a survey, whether and how word sense ambiguity is in fact a problem for current NLP applications.

16.
In recent years, building information modeling (BIM) component library resources have grown rapidly on the Internet, making clustering and retrieval of large numbers of BIM components increasingly urgent. Existing methods do not extract the domain information carried by BIM components; based on that domain information, this paper studies the clustering of BIM component libraries. (1) It proposes a similarity measure for BIM components based on the information content of their attributes, making full use of attribute information; compared with the traditional Tversky similarity measure and geometric shape matching, it measures similarity better. (2) Building on this inter-component similarity measure, it proposes a clustering method for BIM component libraries, and integrates keyword retrieval and classifier browsing of BIM components into the BIMSeek retrieval engine, giving users richer ways to search and browse. Compared with the traditional K-medoids and AP clustering algorithms, the proposed clustering method performs better.
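For contrast with the proposed attribute-information measure, the baseline Tversky similarity over component attribute sets can be sketched; the door attributes below are hypothetical.

```python
def tversky(a, b, alpha=0.5, beta=0.5):
    """Tversky set similarity over attribute sets; with alpha = beta = 0.5
    it reduces to the Dice coefficient."""
    a, b = set(a), set(b)
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

# Hypothetical attribute sets for two BIM door components.
door_a = {"material:wood", "width:900mm", "fire_rated:no"}
door_b = {"material:wood", "width:900mm", "fire_rated:yes"}
sim = tversky(door_a, door_b)
```

The paper's measure differs in weighting attributes by their information content rather than counting them uniformly.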

17.
Human efforts in search of information are increasing steeply. Pattern recognition extracts information from the real world; it is a tool for modelling and real-world learning, and thus requires precise knowledge of the objectives to be achieved with the information it produces. Human senses are part of our interface with the environment: they encode signals into representations understood by the central nervous system. Multimodality in signal capture widens the spectrum of activities of intelligent agents. This paper aims to stimulate research into providing multimodal information, that is, information that can be captured by different senses. This technique can improve decision making and promote the inclusion of sense-disabled individuals, broadening the usefulness of pattern recognition. It considers multimodal display of knowledge for the practical feasibility of knowledge presentation adapted to human sensing and perception, toward improved decisions and the inclusion of perception-impaired people.

18.
Based on the structural characteristics of scientific literature, this paper builds a four-layer mining model and proposes a text feature selection method for classifying scientific documents. The method first divides a document into four layers according to its structure, then applies K-means clustering to extract feature words from the first three layers, layer by layer, and finally uses the Apriori algorithm to find the maximal frequent itemsets of the fourth layer, which serve as that layer's feature word set. To address K-means's sensitivity to the initial cluster centers, the method first corrects the inter-object distance function by weighting clustering objects with information entropy, and then selects suitable initial centers from the weighting function values of an initial clustering. It also sets a threshold for the termination condition of K-means to reduce the number of iterations and thus the learning time, and deletes redundant information produced by dynamically changing data to reduce interference during dynamic clustering, making the algorithm more accurate and efficient. These measures allow the feature selection method to locate feature words in a document corpus more accurately than previous methods, and it is especially suitable for scientific literature. Experimental results show that, on large data sets, the method combined with the improved K-means algorithm performs well on scientific literature classification.
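The entropy-weighting step can be sketched with the standard entropy weight method (an assumption about the exact scheme used): features whose normalized values are more uneven across objects (lower entropy) receive higher weight in the modified distance function.

```python
import math

def entropy_weights(data):
    """Per-feature weights from information entropy over positive data:
    near-uniform columns (high entropy) get low weight."""
    n, m = len(data), len(data[0])
    weights = []
    for j in range(m):
        col = [row[j] for row in data]
        total = sum(col)
        probs = [v / total for v in col if v > 0]
        e = -sum(p * math.log(p) for p in probs) / math.log(n)
        weights.append(1 - e)
    s = sum(weights)
    return [w / s for w in weights]

def weighted_dist(p, q, w):
    """Entropy-weighted Euclidean distance between two objects."""
    return sum(wi * (a - b) ** 2 for wi, a, b in zip(w, p, q)) ** 0.5

# Feature 0 is nearly constant (uninformative); feature 1 varies widely.
data = [[1.0, 10.0], [1.1, 1.0], [0.9, 5.0], [1.0, 0.2]]
w = entropy_weights(data)
```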

19.
In recent years we have witnessed several applications of frequent sequence mining, such as feature selection for protein sequence classification and mining block correlations in storage systems. In typical applications such as clustering, it is not the complete set but only a subset of discriminating frequent subsequences which is of interest. One approach to discovering the subset of useful frequent subsequences is to apply any existing frequent sequence mining algorithm to find the complete set of frequent subsequences. Then, a subset of interesting subsequences can be further identified. Unfortunately, it is very time consuming to mine the complete set of frequent subsequences for large sequence databases. In this paper, we propose a new algorithm, CONTOUR, which efficiently mines a subset of high-quality subsequences directly in order to cluster the input sequences. We mainly focus on how to design some effective search space pruning methods to accelerate the mining process and discuss how to construct an accurate clustering algorithm based on the result of CONTOUR. We conducted an extensive performance study to evaluate the efficiency and scalability of CONTOUR, and the accuracy of the frequent subsequence-based clustering algorithm.

20.
As language evolves over time, documents stored in long-term archives become inaccessible to users. Automatically detecting and handling language evolution will become a necessity to meet users' information needs. In this paper, we investigate the performance of modern tools and algorithms applied to modern English to find word senses that will later serve as a basis for finding evolution. We apply the curvature clustering algorithm to all nouns and noun phrases extracted from The Times Archive (1785–1985). We use natural language processors for part-of-speech tagging and lemmatization and report on the performance of these processors over the entire period. We evaluate our clusters against WordNet to verify whether they correspond to valid word senses. Because The Times Archive contains OCR errors, we investigate the effects of such errors on word sense discrimination results. Finally, we present a novel approach to correcting OCR errors in the archive and show that the coverage of the curvature clustering algorithm improves, increasing the number of clusters by 24%. To verify our results, we use the New York Times corpus (1987–2007), a recent collection that is considered error-free, as ground truth for our experiments. We find that after correcting OCR errors in The Times Archive, the performance of word sense discrimination applied to it is comparable to the ground truth.
