Similar Documents
18 similar documents found (search time: 203 ms)
1.
[Purpose/Significance] Data source description (also called data source summarization) is one of the key problems in federated retrieval over the Deep Web; the quality of data source descriptions directly affects the efficiency and effectiveness of a federated retrieval system. This paper proposes a data source description method based on domain features and query-based sampling, intended as a reference for resource integration research and applications in uncooperative environments. [Method/Process] The method is an offline sampling approach for heterogeneous, uncooperative data sources. By analyzing the domain topic attributes of the data sources and of user queries, it builds in turn a domain feature term set, an initial feature term set, and a high-frequency feature term set, and finally obtains data source descriptions by sampling with queries built from the high-frequency feature terms. Combined with the popular CORI algorithm, we analyze an inference-network-based method for computing the relevance between user queries and data source descriptions, and on this basis design a federated retrieval system built on the Lemur toolkit to validate the method. [Results/Conclusions] The proposed method performs well in both recall and precision. Compared with other methods, it has clear cost advantages and practical value in automatic sample updating and operational maintenance.
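The CORI relevance computation this abstract builds on can be sketched in a few lines. This is a minimal Python reading of the standard CORI collection-selection formula (with its usual constants 50 and 150 and default belief b = 0.4); the dictionary-based data structures are illustrative, not Lemur's API:

```python
import math

def cori_score(query_terms, collection, all_collections, b=0.4):
    """Score one collection for a query with the standard CORI formula.

    collection: dict with 'df' (term -> document frequency in this
    collection) and 'cw' (total term count in the collection).
    all_collections: list of every collection dict (used for cf and
    the average collection size).
    """
    num = len(all_collections)
    avg_cw = sum(c["cw"] for c in all_collections) / num
    score = 0.0
    for term in query_terms:
        df = collection["df"].get(term, 0)
        # cf: number of collections containing the term at all
        cf = sum(1 for c in all_collections if c["df"].get(term, 0) > 0)
        T = df / (df + 50 + 150 * collection["cw"] / avg_cw)
        I = math.log((num + 0.5) / (cf + 1e-9)) / math.log(num + 1.0)
        score += b + (1 - b) * T * I
    return score / len(query_terms)
```

A collection whose sampled description contains the query term scores above the default belief b; one that lacks it falls back to b.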

2.
Semi-automatic domain ontology construction based on statistical natural language processing   Cited: 1 (self-citations: 0, by others: 1)
Ontology construction is one of the key factors determining the success of the Semantic Web. Drawing on machine learning and natural language processing techniques, this paper attempts semi-automatic ontology construction, using research papers as the corpus. Key concepts are extracted from the corpus with an N-Gram text representation, and domain concepts are selected by computing a topic-relevance score. An improved hierarchical clustering algorithm then clusters the domain concepts to derive their taxonomy, and a combination of syntactic parsing and statistics extracts candidate subject-predicate-object patterns from the corpus as references for domain relations. Taking agricultural history as an example, we design and implement an experimental system for semi-automatic domain ontology construction. The paper focuses on the implementation of the key techniques: concept acquisition, construction of taxonomic and domain relations, and formalization.
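The concept-acquisition step described above can be sketched as follows: collect word n-grams as key-concept candidates, then keep those whose relative frequency in the domain corpus dominates a general reference corpus. The `domainhood` score is a simple stand-in for the paper's topic-relevance measure; thresholds and the score itself are illustrative:

```python
from collections import Counter

def extract_ngrams(tokens, n_max=3, min_freq=2):
    """Collect word n-grams (n = 1..n_max) above a frequency threshold
    as key-concept candidates."""
    counts = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return {g: f for g, f in counts.items() if f >= min_freq}

def domainhood(term, domain_counts, general_counts):
    """Topic relevance: relative frequency in the domain corpus vs. a
    general reference corpus (higher means more domain-specific)."""
    d = domain_counts.get(term, 0) / max(sum(domain_counts.values()), 1)
    g = general_counts.get(term, 0) / max(sum(general_counts.values()), 1)
    return d / (d + g) if d + g > 0 else 0.0
```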

3.
Automatic polarity detection of feature-level sentiment words based on modified pointwise mutual information   Cited: 1 (self-citations: 0, by others: 1)
[Purpose/Significance] Corpus-based sentiment word discovery infers the polarity of sentiment words from sentence context, which can markedly improve the accuracy of sentiment analysis and is of practical value in domain-oriented, feature-level sentiment analysis tasks. [Method/Process] We study feature-level sentiment polarity detection and propose an automatic polarity classification algorithm for "feature-sentiment" pairs based on pointwise mutual information (PMI). Drawing on a large domain corpus, the algorithm exploits the co-occurrence of opinion-bearing "feature-sentiment" pairs with seed words of unambiguous polarity, incorporates dependency parsing to detect sentiment-reversing contrast between clauses, and modifies the classic PMI formula to predict the polarity of opinion expressions under contextual constraints. [Results/Conclusions] Experiments show the modified algorithm significantly outperforms both lexicon matching and the classic PMI approach: it can infer the orientation of opinion expressions absent from sentiment lexicons and can fairly accurately determine the polarity of sentiment words in context. On restaurant-review and digital-product-review benchmark corpora, its macro-averaged F1 reaches 0.827 and 0.878, respectively. Supported by large domain-specific corpora and based on probability statistics and syntactic parsing, the algorithm benefits from easy data acquisition, high efficiency, and good portability, and is especially suited to domain-oriented sentiment analysis.
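The unmodified PMI baseline that the paper starts from can be sketched as SO-PMI: co-occurrence with positive seeds minus co-occurrence with negative seeds, estimated over a sentence-segmented corpus. The dependency-based contrast correction that the paper adds is omitted here, so this is only the classic starting point:

```python
import math
from collections import Counter

def pmi_polarity(target, pos_seeds, neg_seeds, sentences):
    """SO-PMI polarity of `target`: sum of PMI with positive seed words
    minus sum of PMI with negative seed words. `sentences` is a list of
    token lists; a co-occurrence is a shared sentence."""
    n = len(sentences)
    occ = Counter()            # sentence frequency of each word
    co = Counter()             # co-occurrence of target with each seed
    for sent in sentences:
        words = set(sent)
        for w in words:
            occ[w] += 1
        for seed in set(pos_seeds) | set(neg_seeds):
            if target in words and seed in words:
                co[seed] += 1

    def pmi(seed):
        if co[seed] == 0 or occ[target] == 0 or occ[seed] == 0:
            return 0.0
        return math.log2(co[seed] * n / (occ[target] * occ[seed]))

    return sum(pmi(s) for s in pos_seeds) - sum(pmi(s) for s in neg_seeds)
```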

4.
An improved N-Gram-based text feature extraction algorithm   Cited: 3 (self-citations: 0, by others: 3)
This paper presents an improved text feature extraction and matching algorithm. Building on the N-Gram approach to text processing and feature extraction, it designs a gram association matrix for counting and merging feature terms, so that feature terms of varying lengths can be extracted on top of the fixed-length N-Gram algorithm. Experiments show that the algorithm describes text features more accurately and can be applied in text retrieval, Web mining, and other information processing fields.
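One concrete reading of the merging idea: extract fixed-length grams, tabulate how often adjacent grams follow one another (a simplified gram association matrix), and fuse overlapping neighbors into a longer term when they nearly always co-occur. The thresholds and the merge rule here are illustrative, not the paper's exact algorithm:

```python
from collections import Counter

def merge_grams(tokens, n=2, min_freq=2, merge_ratio=0.8):
    """Fixed-length n-grams plus an association table over adjacent
    grams; two neighbors are merged into an (n+1)-length term when they
    co-occur in at least merge_ratio of the rarer gram's occurrences."""
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    assoc = Counter()
    for i in range(len(tokens) - n):
        a, b = tuple(tokens[i:i + n]), tuple(tokens[i + 1:i + 1 + n])
        assoc[(a, b)] += 1
    terms = {g for g, f in grams.items() if f >= min_freq}
    for (a, b), f in assoc.items():
        if f >= min_freq and f / min(grams[a], grams[b]) >= merge_ratio:
            terms.add(a + b[n - 1:])   # overlap-aware concatenation
    return terms
```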

5.
[Purpose/Significance] By learning in a source domain rich in labeled data and projecting documents from the target domain into the same feature space, we address the difficulty of training a good classification model in a target domain with little labeled data. [Method/Process] Using Chinese Amazon reviews in the book, DVD, and music categories as experimental data, and cross-domain sentiment analysis as the research task, we propose a Cross-Domain Deep Recurrent Neural Network (CD-DRNN) model for knowledge transfer across domains. CD-DRNN achieves an average cross-domain classification accuracy of 81.70%, outperforming the traditional stacked long short-term memory (Stacked-LSTM) model (79.90%), the bidirectional LSTM (Bi-LSTM) model (80.50%), the serial CNN-LSTM model (74.70%), and the parallel Merged-CNN-LSTM model (80.90%). [Results/Conclusions] Knowledge transfer between the source and target domains effectively addresses the difficulty of supervised learning on small datasets; the CD-DRNN model can select useful features from unlabeled data and thereby greatly reduce the annotation workload in the target domain.

6.
Keyword-based clustering of scientific literature   Cited: 1 (self-citations: 0, by others: 1)
This paper describes a method for clustering scientific literature based on an improved TF-IDF term weighting algorithm. Feature terms are first extracted from the documents, then weighted by frequency, position, and part of speech to build a vector space model of each document. A density-based clustering algorithm groups the document vectors; the resulting clusters are labeled using principal component analysis, and the clustering quality is evaluated with the F-measure. Experiments show that the method can discover hot research areas in a set of retrieved documents and can identify interdisciplinary research directions.
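The improved weighting step can be sketched by letting each term occurrence carry a weight for where it appears (title vs. body) before applying idf. The paper also weights by part of speech; the location weights below are illustrative placeholders:

```python
import math

def weighted_tfidf(docs, loc_weight=None):
    """TF-IDF document vectors where each occurrence is weighted by its
    location. `docs` is a list of documents, each a list of
    (term, location) pairs with location in {'title', 'body'}."""
    if loc_weight is None:
        loc_weight = {"title": 2.0, "body": 1.0}
    n = len(docs)
    df = {}
    for doc in docs:
        for term in {t for t, _ in doc}:
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        vec = {}
        for term, loc in doc:
            vec[term] = vec.get(term, 0.0) + loc_weight.get(loc, 1.0)
        for term in vec:
            vec[term] *= math.log(n / df[term]) + 1.0   # smoothed idf
        vectors.append(vec)
    return vectors
```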

7.
Pseudo-relevance feedback query expansion via feature term extraction and relevance fusion   Cited: 2 (self-citations: 0, by others: 2)
To address the term-mismatch problem in existing information retrieval systems, this paper proposes a pseudo-relevance feedback query expansion algorithm based on feature term extraction and relevance fusion, together with a new weighting scheme for expansion terms. The algorithm extracts feature terms related to the original query from the top n initially retrieved documents and selects the final expansion terms according to each term's frequency in the feedback document set and its relevance to the original query. Experimental results show that the method is effective and improves information retrieval performance.

8.
Relation extraction for domain ontologies   Cited: 3 (self-citations: 0, by others: 3)
Using methods from machine learning and natural language processing, this paper studies the extraction of concept relations from corpora to support domain ontology construction. It elaborates on methods for extracting taxonomic (hierarchical) relations and domain relations, and experiments show that the methods are effective.

9.
Knowledge Domain Visualization has attracted much attention from scholars for its objectivity and efficiency in analyzing disciplinary structure, revealing knowledge domains, and identifying research fronts. Author co-citation analysis is the most widely used method for visualizing a knowledge domain. Taking Chinese hybrid rice research as an example, this paper applies author co-citation analysis together with the internationally popular pathfinder network technique to visualize the field, confirming the effectiveness of author co-citation-based knowledge domain visualization in revealing the development patterns of a discipline.
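The data-preparation step of author co-citation analysis is a co-citation count matrix: two authors are co-cited whenever a single citing paper references works by both. A minimal sketch (input format is illustrative; the matrix would then typically be converted to correlations and fed to a pathfinder network or MDS layout, which this omits):

```python
from itertools import combinations
from collections import Counter

def cocitation_matrix(citing_papers):
    """Author co-citation counts. `citing_papers` is an iterable of
    per-paper sets of cited (first) authors; returns a Counter keyed by
    alphabetically ordered author pairs."""
    counts = Counter()
    for cited_authors in citing_papers:
        for a, b in combinations(sorted(set(cited_authors)), 2):
            counts[(a, b)] += 1
    return counts
```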

10.
[Purpose/Significance] Based on the idea of sparse dimensionality reduction for high-dimensional matrices, this paper proposes a new approach to co-word analysis using Penalized Matrix Decomposition (PMD). [Method/Process] Taking "subject services" as the research topic, we implement feature word extraction, soft clustering of feature words, and visualization of the clustering results in Matlab, following the PMD algorithm. [Results/Conclusions] Compared with traditional co-word analysis methods, the PMD algorithm has distinct advantages: the extracted feature words are relatively comprehensive, the number of clusters is easy to determine, and the clustering results are easy to interpret.
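The core of PMD is a power iteration with an L1 soft-threshold on one factor, so that each component's loading vector is sparse and its nonzero entries pick out the feature words of one "topic". A rank-1 pure-Python sketch in the style of Witten et al.'s PMD (the penalty value and stopping rule are illustrative; the paper works in Matlab):

```python
def soft(x, lam):
    """L1 soft-thresholding operator."""
    return max(abs(x) - lam, 0.0) * (1 if x > 0 else -1)

def pmd_rank1(X, lam=0.1, iters=50):
    """Rank-1 penalized matrix decomposition of a list-of-lists matrix:
    alternate normalized updates of u and a soft-thresholded v, so v
    comes out sparse."""
    m, n = len(X), len(X[0])
    v = [1.0 / n ** 0.5] * n
    u = [0.0] * m
    for _ in range(iters):
        u = [sum(X[i][j] * v[j] for j in range(n)) for i in range(m)]
        norm = sum(x * x for x in u) ** 0.5 or 1.0
        u = [x / norm for x in u]
        v = [soft(sum(X[i][j] * u[i] for i in range(m)), lam)
             for j in range(n)]
        norm = sum(x * x for x in v) ** 0.5 or 1.0
        v = [x / norm for x in v]
    return u, v
```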

11.
Patent prior art search is a type of search in the patent domain where documents are searched for that describe the work previously carried out related to a patent application. The goal of this search is to check whether the idea in the patent application is novel. Vocabulary mismatch is one of the main problems of patent retrieval which results in low retrievability of similar documents for a given patent application. In this paper we show how the term distribution of the cited documents in an initially retrieved ranked list can be used to address the vocabulary mismatch. We propose a method for query modeling estimation which utilizes the citation links in a pseudo relevance feedback set. We first build a topic dependent citation graph, starting from the initially retrieved set of feedback documents and utilizing citation links of feedback documents to expand the set. We identify the important documents in the topic dependent citation graph using a citation analysis measure. We then use the term distribution of the documents in the citation graph to estimate a query model by identifying the distinguishing terms and their respective weights. We then use these terms to expand our original query. We use CLEF-IP 2011 collection to evaluate the effectiveness of our query modeling approach for prior art search. We also study the influence of different parameters on the performance of the proposed method. The experimental results demonstrate that the proposed approach significantly improves the recall over a state-of-the-art baseline which uses the link-based structure of the citation graph but not the term distribution of the cited documents.
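The pipeline in this abstract can be sketched end to end: expand the pseudo-relevance feedback set along citation links, rank documents by a simple citation-analysis measure (in-degree within the topic graph stands in for whatever measure the paper uses), and take the term distribution of the top documents as expansion terms. Data structures are illustrative:

```python
from collections import Counter

def citation_query_model(feedback_ids, citations, doc_terms,
                         top_docs=3, top_terms=3):
    """Citation-aware expansion-term selection.

    feedback_ids: ids of the initially retrieved feedback documents.
    citations: id -> list of cited ids.  doc_terms: id -> token list.
    """
    # 1. topic-dependent citation graph: feedback docs plus what they cite
    nodes = set(feedback_ids)
    for d in feedback_ids:
        nodes |= set(citations.get(d, []))
    # 2. importance = in-degree inside the topic graph
    indeg = Counter()
    for d in nodes:
        for c in citations.get(d, []):
            if c in nodes:
                indeg[c] += 1
    important = [d for d, _ in indeg.most_common(top_docs)]
    # 3. term distribution over the important documents
    dist = Counter()
    for d in important:
        dist.update(doc_terms.get(d, []))
    return [t for t, _ in dist.most_common(top_terms)]
```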

12.
Terminology extraction is an essential task in domain knowledge acquisition, as well as for information retrieval. It is also a mandatory first step aimed at building/enriching terminologies and ontologies. As often proposed in the literature, existing terminology extraction methods feature linguistic and statistical aspects and solve some problems related (but not completely) to term extraction, e.g. noise, silence, low frequency, large-corpora, complexity of the multi-word term extraction process. In contrast, we propose a cutting edge methodology to extract and to rank biomedical terms, covering all the mentioned problems. This methodology offers several measures based on linguistic, statistical, graphic and web aspects. These measures extract and rank candidate terms with excellent precision: we demonstrate that they outperform previously reported precision results for automatic term extraction, and work with different languages (English, French, and Spanish). We also demonstrate how the use of graphs and the web to assess the significance of a term candidate, enables us to outperform precision results. We evaluated our methodology on the biomedical GENIA and LabTestsOnline corpora and compared it with previously reported measures.
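As a concrete example of the linguistic/statistical measures this family of work builds on, here is the classic C-value termhood score (Frantzi & Ananiadou): it favors frequent multi-word candidates but discounts those that mostly appear nested inside longer candidates. This is a standard baseline of the kind combined here, not the paper's own measure; the `log2(len + 1)` smoothing (so unigrams are not zeroed out) is a common variant:

```python
import math

def c_value(term_freqs):
    """C-value for each candidate term. `term_freqs` maps a tuple of
    words to its corpus frequency; returns term -> score."""
    scores = {}
    for term, freq in term_freqs.items():
        # candidates that properly contain this term as a subsequence
        longer = [t for t in term_freqs
                  if len(t) > len(term)
                  and any(t[i:i + len(term)] == term
                          for i in range(len(t) - len(term) + 1))]
        nested = sum(term_freqs[t] for t in longer)
        adj = freq - nested / len(longer) if longer else freq
        scores[term] = math.log2(len(term) + 1) * adj
    return scores
```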

13.
The need to cluster small text corpora composed of a few hundreds of short texts rises in various applications; e.g., clustering top-retrieved documents based on their snippets. This clustering task is challenging due to the vocabulary mismatch between short texts and the insufficient corpus-based statistics (e.g., term co-occurrence statistics) due to the corpus size. We address this clustering challenge using a framework that utilizes a set of external knowledge resources that provide information about term relations. Specifically, we use information induced from the resources to estimate similarity between terms and produce term clusters. We also utilize the resources to expand the vocabulary used in the given corpus and thus enhance term clustering. We then project the texts in the corpus onto the term clusters to cluster the texts. We evaluate various instantiations of the proposed framework by varying the term clustering method used, the approach of projecting the texts onto the term clusters, and the way of applying external knowledge resources. Extensive empirical evaluation demonstrates the merits of our approach with respect to applying clustering algorithms directly on the text corpus, and using state-of-the-art co-clustering and topic modeling methods.
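The projection step is the easy part to make concrete: represent each short text as a vector over term clusters rather than over raw terms, so two texts that share no words but hit the same cluster become similar. A minimal sketch (the term clusters themselves would come from the external knowledge resources, which this assumes as given):

```python
def cluster_projection(docs, term_clusters):
    """Project token-list documents onto term clusters: one count per
    cluster hit. Returns a dense vector per document."""
    vectors = []
    for doc in docs:
        vec = [sum(1 for t in doc if t in cluster)
               for cluster in term_clusters]
        vectors.append(vec)
    return vectors
```

Any standard clustering algorithm can then run on these low-dimensional vectors instead of the sparse term space.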

14.
In the information retrieval process, functions that rank documents according to their estimated relevance to a query typically regard query terms as being independent. However, it is often the joint presence of query terms that is of interest to the user, which is overlooked when matching independent terms. One feature that can be used to express the relatedness of co-occurring terms is their proximity in text. In past research, models that are trained on the proximity information in a collection have performed better than models that are not estimated on data. We analyzed how co-occurring query terms can be used to estimate the relevance of documents based on their distance in text, which is used to extend a unigram ranking function with a proximity model that accumulates the scores of all occurring term combinations. This proximity model is more practical than existing models, since it does not require any co-occurrence statistics, it obviates the need to tune additional parameters, and has a retrieval speed close to competing models. We show that this approach is more robust than existing models, on both Web and newswire corpora, and on average performs equal or better than existing proximity models across collections.
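The accumulation idea can be sketched directly: add a bonus for every co-occurring pair of query terms, with closer pairs contributing more. The `1/span²` kernel below is an illustrative choice, not the paper's estimated model:

```python
def proximity_score(query_terms, doc_tokens, unigram_score=0.0):
    """Extend a unigram score with a proximity bonus accumulated over
    all occurrence pairs of distinct query terms in the document."""
    positions = {q: [i for i, t in enumerate(doc_tokens) if t == q]
                 for q in query_terms}
    present = [q for q in query_terms if positions[q]]
    bonus = 0.0
    for i in range(len(present)):
        for j in range(i + 1, len(present)):
            for pi in positions[present[i]]:
                for pj in positions[present[j]]:
                    bonus += 1.0 / (pi - pj) ** 2
    return unigram_score + bonus
```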

15.
This paper presents a phrase-structure parser for scientific text designed and developed at Harbin Institute of Technology. Unlike traditional phrase-structure parsers, it requires no preprocessing of the input: given raw text, it jointly performs word segmentation, part-of-speech tagging, and phrase-structure parsing, which can be seen as an instance of multi-task learning. In addition, tailored to the characteristics of scientific text, the parser optimizes its feature templates, and a word-internal-structure treebank for scientific text was constructed. Experimental results show that the parser performs well on both general-domain and scientific-domain test sets.

16.
Focused web crawling in the acquisition of comparable corpora   Cited: 2 (self-citations: 0, by others: 2)
Cross-Language Information Retrieval (CLIR) resources, such as dictionaries and parallel corpora, are scarce for special domains. Obtaining comparable corpora automatically for such domains could be an answer to this problem. The Web, with its vast volumes of data, offers a natural source for this. We experimented with focused crawling as a means to acquire comparable corpora in the genomics domain. The acquired corpora were used to statistically translate domain-specific words. The same words were also translated using a high-quality, but non-genomics-related parallel corpus, which fared considerably worse. We also evaluated our system with standard information retrieval (IR) experiments, combining statistical translation using the Web corpora with dictionary-based translation. The results showed improvement over pure dictionary-based translation. Therefore, mining the Web for comparable corpora seems promising.
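A focused crawler of the kind used here is, at its core, a best-first frontier ordered by estimated topical relevance. The skeleton below takes `fetch` and `relevance` as caller-supplied functions (so it runs against an in-memory stub just as well as a real HTTP fetcher); the threshold and the choice to expand only on-topic pages are illustrative simplifications:

```python
import heapq

def focused_crawl(seed_urls, fetch, relevance, max_pages=10, threshold=0.5):
    """Best-first focused crawl. fetch(url) -> (text, outlinks);
    relevance(text) -> score in [0, 1]. Returns collected on-topic urls."""
    frontier = [(-1.0, url) for url in seed_urls]   # seeds get top priority
    heapq.heapify(frontier)
    seen = set(seed_urls)
    collected = []
    while frontier and len(collected) < max_pages:
        _, url = heapq.heappop(frontier)
        text, outlinks = fetch(url)
        score = relevance(text)
        if score >= threshold:
            collected.append(url)
            for link in outlinks:          # expand only on-topic pages
                if link not in seen:
                    seen.add(link)
                    heapq.heappush(frontier, (-score, link))
    return collected
```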

17.
Learning Algorithms for Keyphrase Extraction   Cited: 20 (self-citations: 0, by others: 20)
Many academic journals ask their authors to provide a list of about five to fifteen keywords, to appear on the first page of each article. Since these key words are often phrases of two or more words, we prefer to call them keyphrases. There is a wide variety of tasks for which keyphrases are useful, as we discuss in this paper. We approach the problem of automatically extracting keyphrases from text as a supervised learning task. We treat a document as a set of phrases, which the learning algorithm must learn to classify as positive or negative examples of keyphrases. Our first set of experiments applies the C4.5 decision tree induction algorithm to this learning task. We evaluate the performance of nine different configurations of C4.5. The second set of experiments applies the GenEx algorithm to the task. We developed the GenEx algorithm specifically for automatically extracting keyphrases from text. The experimental results support the claim that a custom-designed algorithm (GenEx), incorporating specialized procedural domain knowledge, can generate better keyphrases than a general-purpose algorithm (C4.5). Subjective human evaluation of the keyphrases generated by GenEx suggests that about 80% of the keyphrases are acceptable to human readers. This level of performance should be satisfactory for a wide variety of applications.
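Treating a document as a set of phrases to classify means turning each candidate phrase into a feature vector for the learner. The features below (within-document frequency, tf-idf, relative position of first occurrence) are of the kind such systems use, but the exact feature set is illustrative, not GenEx's:

```python
import math

def phrase_features(doc_tokens, phrase, corpus_df, n_docs):
    """Feature vector for one candidate phrase in one document.
    corpus_df maps phrase tuples to document frequency across n_docs."""
    n = len(phrase)
    hits = [i for i in range(len(doc_tokens) - n + 1)
            if tuple(doc_tokens[i:i + n]) == tuple(phrase)]
    tf = len(hits)
    idf = math.log((n_docs + 1) / (corpus_df.get(tuple(phrase), 0) + 1))
    first_pos = hits[0] / len(doc_tokens) if hits else 1.0
    return {"tf": tf, "tfidf": tf * idf, "first_pos": first_pos}
```

A decision-tree learner such as C4.5 would then be trained on these vectors with author-assigned keyphrases as positive labels.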

18.
We investigate the effect of feature weighting on document clustering, including a novel investigation of Okapi BM25 feature weighting. Using eight document datasets and 17 well-established clustering algorithms we show that the benefit of tf-idf weighting over tf weighting is heavily dependent on both the dataset being clustered and the algorithm used. In addition, binary weighting is shown to be consistently inferior to both tf-idf weighting and tf weighting. We investigate clustering using both BM25 term saturation in isolation and BM25 term saturation with idf, confirming that both are superior to their non-BM25 counterparts under several common clustering quality measures. Finally, we investigate estimation of the k1 BM25 parameter when clustering. Our results indicate that typical values of k1 from other IR tasks are not appropriate for clustering; k1 needs to be higher.
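The BM25 term-saturation weight under study is the standard one; with `use_idf=False` it gives the saturation-only variant. Note the abstract's finding that good k1 values for clustering are higher than typical IR defaults like the 1.2 used below:

```python
import math

def bm25_weight(tf, df, n_docs, doc_len, avg_len,
                k1=1.2, b=0.75, use_idf=True):
    """BM25-style feature weight: term-frequency saturation with length
    normalization, optionally multiplied by a standard idf component."""
    K = k1 * (1 - b + b * doc_len / avg_len)
    sat = tf * (k1 + 1) / (tf + K)
    idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1) if use_idf else 1.0
    return sat * idf
```

Saturation is the point: the weight grows with tf but flattens quickly, unlike raw tf.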


Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23
