Similar Documents
20 similar documents retrieved (search time: 390 ms)
1.
Research on ontology learning in a Chinese-language environment is still in its early stages. This paper briefly introduces ontology learning and HowNet and proposes a HowNet-based approach to Chinese ontology learning. The current focus of ontology-learning research is the extraction of concepts and of relations between concepts. Taking a text corpus as input, the approach first preprocesses the text, then builds a domain semantic dictionary on top of HowNet and introduces a core domain concept ontology into the learning process; in the relation-extraction stage it applies a HowNet-based semantic-similarity measure. Experiments show that the proposed ontology-learning method effectively improves the accuracy of concept and concept-relation extraction.
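To make the similarity step concrete, the following is a minimal sketch of a sememe-overlap similarity in the spirit of HowNet-based measures. It is not the paper's actual method: the real measure uses HowNet's sememe hierarchy, while the tiny `SEMEME_DICT` below is an invented stand-in.

```python
SEMEME_DICT = {            # hypothetical word -> sememe set mapping (not real HowNet data)
    "医生": {"human", "occupation", "medical"},
    "护士": {"human", "occupation", "medical", "nurse"},
    "汽车": {"vehicle", "land", "transport"},
}

def sememe_similarity(w1: str, w2: str) -> float:
    """Dice coefficient over the two words' sememe sets (0 if either is unknown)."""
    s1, s2 = SEMEME_DICT.get(w1, set()), SEMEME_DICT.get(w2, set())
    if not s1 or not s2:
        return 0.0
    return 2 * len(s1 & s2) / (len(s1) + len(s2))

if __name__ == "__main__":
    print(sememe_similarity("医生", "护士"))   # high overlap
    print(sememe_similarity("医生", "汽车"))   # no overlap
```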

2.
Ontology construction uses various data sources to semi-automatically build a new ontology or to extend and adapt an existing one. Most existing approaches extract a large number of candidate terms from domain texts and a background corpus, then select domain concepts from them to build a single ontology. The Cluster-Merge algorithm first clusters the domain documents with the k-means algorithm, then constructs an ontology from each document cluster, and finally merges the ontologies according to ontology similarity to obtain the output ontology. Experiments show that the ontologies produced by Cluster-Merge improve both recall and precision.
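A minimal sketch of the first stage that Cluster-Merge relies on: k-means over TF-IDF vectors of domain documents. The ontology-construction and merge stages are not shown, and the document strings and cluster count are illustrative choices, not data from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "ontology learning concept extraction",
    "concept relation extraction from text",
    "k-means clustering of documents",
    "document clustering with tf-idf vectors",
]

X = TfidfVectorizer().fit_transform(docs)          # documents -> TF-IDF matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for doc, label in zip(docs, labels):
    print(label, doc)   # each cluster would feed one intermediate ontology
```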

3.
Automatically extracting hypernym-hyponym (is-a) relations from massive text corpora is an important task in natural language processing. After candidate relations are obtained with simple pattern matching, verifying and filtering them remains difficult. This paper proposes a verification method that clusters the candidate hyponym set using two features: lexical context similarity and Brown-cluster similarity. By computing the two similarities on a small amount of labeled training data, the method learns a verification model and the weight coefficient that combines the two similarities. It requires no existing lexical-relation dictionary or knowledge base and can effectively filter the extracted relations. Experiments on the CCF NLP&2012 lexical semantic relation evaluation corpus show a clear improvement in F-measure over pattern-matching and context-comparison methods.
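A hedged sketch of the weighted combination of the two similarity features described above. `alpha` plays the role of the learned weight coefficient; the cosine over context vectors and the Brown-cluster prefix similarity are simplified illustrations, not the paper's exact formulas.

```python
import math

def cosine(v1: dict, v2: dict) -> float:
    """Cosine similarity of two sparse context vectors (term -> weight)."""
    dot = sum(v1[t] * v2.get(t, 0.0) for t in v1)
    n1 = math.sqrt(sum(x * x for x in v1.values()))
    n2 = math.sqrt(sum(x * x for x in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def brown_similarity(path1: str, path2: str) -> float:
    """Common-prefix length of two Brown-cluster bit strings, normalized."""
    common = 0
    for a, b in zip(path1, path2):
        if a != b:
            break
        common += 1
    return common / max(len(path1), len(path2))

def combined_similarity(ctx1, ctx2, brown1, brown2, alpha=0.6):
    return alpha * cosine(ctx1, ctx2) + (1 - alpha) * brown_similarity(brown1, brown2)

print(combined_similarity({"fruit": 1.0, "eat": 2.0}, {"fruit": 2.0, "juice": 1.0},
                          "0110", "0111"))
```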

4.
Hypernym-hyponym relations between entities are an essential semantic relation for building domain knowledge graphs, yet most traditional extraction methods ignore how the relations are organized. This paper proposes a method that combines word embeddings with Bootstrapping to acquire and organize hypernym-hyponym relations between domain entities. First, a seed corpus in the tourism domain is selected. The hypernym-hyponym patterns contained in the seed set are then clustered with an embedding-based similarity measure; high-confidence patterns are used to identify relations in the unlabeled corpus, yielding candidate relation instances, and high-confidence instances are added back to the seed set for the next iteration until all relation instances are obtained. Finally, using the vector offsets of hypernym-hyponym entity pairs and the characteristics of domain entity hierarchies, a mapping-based learning method organizes the entities into a hierarchy. Experimental results show that the proposed method improves F-measure by nearly 10% over traditional methods.
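A skeleton of the Bootstrapping loop sketched in the abstract: patterns extract candidate (hyponym, hypernym) pairs, high-confidence pairs are added to the seed set, and the loop repeats. The regex pattern, sentences and seed pair are toy stand-ins; the embedding-based pattern clustering and confidence scoring are omitted.

```python
import re

corpus = [
    "景点 如 故宫 很受欢迎",      # "attractions such as the Forbidden City are popular"
    "景点 如 长城 很受欢迎",
]
patterns = [re.compile(r"(\S+) 如 (\S+)")]          # "<hypernym> 如 <hyponym>"
seeds = {("故宫", "景点")}                            # (hyponym, hypernym) seed pair

for _ in range(3):                                   # a few bootstrap iterations
    candidates = set()
    for sent in corpus:
        for pat in patterns:
            m = pat.search(sent)
            if m:
                candidates.add((m.group(2), m.group(1)))
    new_pairs = candidates - seeds                   # keep only unseen instances
    if not new_pairs:
        break
    seeds |= new_pairs                               # expand the seed set

print(seeds)
```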

5.
This paper investigates techniques for automatically constructing a domain ontology from a collection of domain texts, to help knowledge engineers build domain ontologies quickly and conveniently. It proposes an algorithm that clusters the concept set with an ant colony clustering algorithm driven by the semantic similarity between concepts, extracts taxonomic relations using the sememe hierarchy of HowNet, and evaluates the association strength between concepts with an asymmetric cluster-analysis function to extract non-taxonomic relations, finally producing the domain ontology. Experiments demonstrate the effectiveness of the ontology-learning system.

6.
Domain term extraction based on Web resources and user-behavior information (total citations: 1; self-citations: 0; by others: 1)
Domain terms are words that reflect the characteristics of a domain. Automatic domain-term extraction is an important task in natural language processing, with applications in domain ontology extraction, vertical search, text classification, class-based language modeling, and many other areas; building domain dictionaries from large-scale domain-specific corpora on the Web is therefore both challenging and practically valuable. Current term-extraction work mainly uses the body text of Web pages, but the difficulty of extracting body text from pages limits the quality of the extracted terms; using anchor text and query text instead of page body text avoids this difficulty. To address the shortcomings of anchor and query text, namely that they are very short and carry limited semantic information, this paper proposes a domain-data extraction method applicable to various kinds of Web data and user-behavior data, and uses it to extract domain terms from page body text, anchor text, user query logs, and user browsing data, focusing on how different types of Web resources and user-behavior information affect term-extraction quality. Experiments on massive real Web data show that user queries and the anchor text of pages browsed by users yield better domain-term extraction results than body text obtained with body-text extraction techniques.

7.
A method for computing semantic similarity between Gene Ontology terms (total citations: 2; self-citations: 0; by others: 2)
Computing the semantic similarity between terms in the Gene Ontology is one of its important applications. Information-content-based and distance-based similarity measures each consider only one aspect of similarity. This paper proposes a measure based on the directed acyclic graph in which a GO term resides; it takes into account the contribution of a term's ancestors to its information content, as well as the term's position in the graph and the types of semantic links between terms. Experimental results show that the method achieves high accuracy.
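A toy sketch of an information-content-flavoured similarity over a small DAG, in the spirit of the ancestor-aware measure described above. The real method also weights the types of semantic links; the DAG and annotation counts here are invented.

```python
import math

PARENTS = {"t4": {"t2", "t3"}, "t2": {"t1"}, "t3": {"t1"}, "t5": {"t2"}, "t1": set()}
COUNTS  = {"t1": 100, "t2": 60, "t3": 40, "t4": 10, "t5": 20}   # annotation counts
TOTAL   = COUNTS["t1"]                                          # root covers everything

def ancestors(term):
    """All ancestors of a term in the DAG, the term itself included."""
    result, stack = {term}, [term]
    while stack:
        for p in PARENTS[stack.pop()]:
            if p not in result:
                result.add(p)
                stack.append(p)
    return result

def ic(term):
    """Information content: rarer terms are more informative."""
    return -math.log(COUNTS[term] / TOTAL)

def resnik_like_similarity(a, b):
    """IC of the most informative common ancestor of a and b."""
    common = ancestors(a) & ancestors(b)
    return max(ic(t) for t in common)

print(resnik_like_similarity("t4", "t5"))   # share ancestor t2 (and the root t1)
```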

8.
In the patent domain, the quality of term extraction determines the quality of the constructed ontology. This paper proposes a term-extraction method that automatically generates a filtering dictionary and combines it with factors such as lexical density. After word segmentation and part-of-speech tagging, candidate long terms and single-word short terms are obtained by matching documents against part-of-speech rule templates; part of the candidate long-term set is then filtered with a dictionary generated from document consistency. Finally, exploiting the structural characteristics of long terms, the weighted average of three factors, namely lexical density, document difference ratio and document consistency, is used as the term weight of each long term, and the candidates are ranked by this weight. Experiments on a benchmark corpus of 8,000 patent abstracts, with five randomly selected test sets, achieve an average precision of 86%, showing that the method is effective for domain term extraction.
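A sketch of the final scoring step described above: the weighted average of lexical density, document difference ratio and document consistency for each candidate long term. The factor values and weights below are illustrative, not the paper's.

```python
def term_weight(density, diff_ratio, consistency, weights=(0.4, 0.3, 0.3)):
    """Weighted average of the three term factors (weights sum to 1)."""
    w1, w2, w3 = weights
    return w1 * density + w2 * diff_ratio + w3 * consistency

candidates = {                       # term -> (density, diff_ratio, consistency)
    "太阳能电池板": (0.8, 0.7, 0.9),
    "的方法包括":   (0.2, 0.1, 0.3),
}

ranked = sorted(candidates, key=lambda t: term_weight(*candidates[t]), reverse=True)
print(ranked)                        # real terms should rank above noise strings
```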

9.
Automatically acquiring domain ontologies from Web resources can shorten the ontology-construction cycle, but automatic ontology enrichment remains a challenge in ontology engineering; the difficulty lies mainly in extracting terms and mapping the new terms onto the existing ontology. This paper proposes a heuristic-rule-based method for automatic ontology enrichment. The method extracts natural-language text from Web resources, preprocesses it with natural-language-processing techniques, mines domain terms by preferentially matching object properties, enriches the ontology by matching terms with heuristic rules, and finally performs consistency checking. A Web-based ontology-enrichment tool was implemented with this method. Experiments on a core ontology of urban landscape information show high precision and recall when adding instances, demonstrating that the method is effective and feasible.
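A toy sketch of heuristic-rule-driven enrichment in the spirit of the abstract: a new term is attached to an existing class when a simple string rule fires. The classes, rules and terms are invented for illustration; a real system would also run the consistency check mentioned above.

```python
ONTOLOGY = {"景观": [], "桥梁": [], "公园": []}      # class -> instances

RULES = [                                           # (suffix, target class)
    ("大桥", "桥梁"),
    ("公园", "公园"),
]

def enrich(term: str) -> bool:
    """Attach `term` as an instance of the first class whose rule matches."""
    for suffix, cls in RULES:
        if term.endswith(suffix):
            ONTOLOGY[cls].append(term)
            return True
    return False

for t in ["杨浦大桥", "世纪公园", "某条新路"]:
    print(t, enrich(t))
print(ONTOLOGY)
```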

10.
Applying ontology technology on the Semantic Web (total citations: 1; self-citations: 0; by others: 1)
To study methods for building knowledge ontologies for Semantic Web systems, an ontology-construction framework is proposed and designed. During ontology construction, the large number of existing conceptual models is exploited, which reduces the effort of collecting terms to some extent. To speed up ontology construction, a series of auxiliary methods is proposed: transformation rules convert elements of existing conceptual models directly into ontology elements; and, using concept quantification, a classification algorithm is designed that automatically classifies terms during ontology construction by computing the semantic affinity and structural correlation between terms and the connection tightness between term classes, thus providing an effective and feasible scheme for constructing domain ontologies.

11.
After separately studying term-extraction methods based on information entropy and on changes in word-frequency distribution, this paper proposes a method that combines the two. Information entropy reflects the completeness of a term, while the change in word-frequency distribution reflects its domain relevance. The method incorporates information entropy into the word-frequency-distribution formula for term extraction and filters out ordinary strings with simple linguistic rules. In experiments on a corpus in the automobile domain, the method extracted 1,300 terms with a precision of 73.7%. The results show that the method performs better on low-frequency terms and extracts terms with more complete structure.
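A sketch of the information-entropy side of the method: the left/right boundary entropy of a candidate string, a common proxy for term completeness. The exact way the paper folds entropy into its frequency-distribution formula is not reproduced; the corpus string is a toy example.

```python
import math
from collections import Counter

def boundary_entropy(candidate: str, corpus: str) -> float:
    """Min of left- and right-neighbour entropy of `candidate` in `corpus`."""
    left, right = Counter(), Counter()
    start = corpus.find(candidate)
    while start != -1:
        if start > 0:
            left[corpus[start - 1]] += 1
        end = start + len(candidate)
        if end < len(corpus):
            right[corpus[end]] += 1
        start = corpus.find(candidate, start + 1)

    def entropy(counter):
        total = sum(counter.values())
        return -sum(c / total * math.log(c / total) for c in counter.values()) if total else 0.0

    return min(entropy(left), entropy(right))

corpus = "发动机转速高发动机温度低发动机油耗发动机异响"
print(boundary_entropy("发动机", corpus))   # varied neighbours -> higher entropy
```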

12.
The task of building an ontology from a textual corpus starts with the conceptualization phase, which extracts ontology concepts. These concepts are linked by semantic relationships. In this paper, we describe an approach to the construction of an ontology from an Arabic textual corpus, starting with the collection and preparation of the corpus through normalization, stop-word removal and stemming; then, to extract the terms of our ontology, a statistical method for extracting simple and complex terms, called the "repeated segments method", is applied. To select segments with sufficient weight we apply the term frequency–inverse document frequency (TF–IDF) weighting method, and to link these terms by semantic relationships we apply an automatic method that learns linguistic markers from text. This method requires a dataset of relationship pairs, which are extracted from two external resources: an Arabic dictionary of synonyms and antonyms and the lexical database Arabic WordNet. Finally, we present the results of our experimentation on our textual corpus. The evaluation of our approach shows encouraging results in terms of recall and precision.
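A minimal sketch of the "repeated segments" idea: collect word n-grams that occur more than once in the corpus as candidate simple and complex terms. The TF-IDF weighting and marker-based relation learning steps are not shown, and the tokenized sentences are illustrative (no real Arabic preprocessing is performed).

```python
from collections import Counter

sentences = [
    ["information", "extraction", "from", "text"],
    ["automatic", "information", "extraction", "system"],
]

def repeated_segments(sents, max_n=3, min_count=2):
    """Return word n-grams (2..max_n) that occur at least `min_count` times."""
    counts = Counter()
    for tokens in sents:
        for n in range(2, max_n + 1):
            for i in range(len(tokens) - n + 1):
                counts[tuple(tokens[i:i + n])] += 1
    return [seg for seg, c in counts.items() if c >= min_count]

print(repeated_segments(sentences))   # [('information', 'extraction')]
```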

13.
Nowadays, users need access to information in a concise form without losing any critical information. Document summarization is the automatic generation of a short form of a document. In itemset-based document summarization, the weights of all terms are considered equal. In this paper, a new approach is proposed for multi-document summarization based on weighted patterns and term association measures. In the present study, the weights of the terms are not equal across the context; they are computed with weighted frequent itemset mining, so the proposed method enriches frequent itemset mining by weighting the terms in the corpus. In addition, the relationships among the terms in the corpus are captured with term association measures. Statistical features such as sentence length and sentence position are also adapted and combined to generate a summary with a greedy method. ROUGE scores on the DUC 2002 and DUC 2004 datasets show that the proposed approach significantly outperforms state-of-the-art approaches.
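A hedged sketch of the greedy selection step: sentences are scored by the weights of the terms they cover plus simple length and position features, and picked greedily under a budget. The term weights are hand-set here; the paper derives them from weighted frequent itemset mining.

```python
term_weight = {"summarization": 3.0, "itemset": 2.5, "weighted": 2.0, "the": 0.1}

sentences = [
    (0, "weighted itemset mining drives the summarization process"),
    (1, "the weather was pleasant that day"),
    (2, "sentence position is a useful feature for summarization"),
]

def score(pos, sent, n_sents):
    """Term coverage plus a position bonus and a mild length penalty."""
    words = sent.split()
    coverage = sum(term_weight.get(w, 0.0) for w in set(words))
    position_bonus = 1.0 - pos / n_sents          # earlier sentences score higher
    length_penalty = 1.0 / (1 + abs(len(words) - 8))
    return coverage + position_bonus + length_penalty

budget, summary = 2, []
for pos, sent in sorted(sentences, key=lambda s: score(s[0], s[1], len(sentences)),
                        reverse=True):
    if len(summary) < budget:
        summary.append(sent)

print(summary)
```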

14.
Definition extraction using improved resampling and the BRF method (total citations: 1; self-citations: 0; by others: 1)
To discover and acquire all definitions of technical terms from domain corpora, this paper formulates definition extraction as a classification task. An oversampling method based on the distribution of distances between instances is combined with random undersampling to build a balanced training corpus, and BRF (Balanced Random Forest) is used to aggregate the classification results of C4.5 decision trees. The method achieves a best F1-measure of 65% and an F2-measure of 78%, exceeding the results obtained with BRF alone.
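A rough sketch of the balanced-random-forest idea: each tree is trained on a class-balanced bootstrap sample and the forest votes by majority. The paper uses C4.5 trees plus a distance-based oversampler; this sketch substitutes plain CART trees from scikit-learn and simple balanced resampling, and the toy data is invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def balanced_random_forest(X, y, n_trees=25, seed=0):
    """Train `n_trees` trees, each on a balanced bootstrap of the two classes."""
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    n = min(len(pos), len(neg))
    trees = []
    for _ in range(n_trees):
        idx = np.concatenate([rng.choice(pos, n, replace=True),
                              rng.choice(neg, n, replace=True)])
        trees.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))
    return trees

def predict(trees, X):
    votes = np.mean([t.predict(X) for t in trees], axis=0)
    return (votes >= 0.5).astype(int)

# Tiny imbalanced toy dataset: 2 positives (definitions) vs. 8 negatives.
X = np.array([[3, 1], [4, 1], [0, 0], [1, 0], [0, 1],
              [1, 1], [0, 2], [2, 0], [1, 2], [0, 3]])
y = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
forest = balanced_random_forest(X, y)
print(predict(forest, X))
```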

15.
A fuzzy ontology and its application to news summarization (total citations: 7; self-citations: 0; by others: 7)
In this paper, a fuzzy ontology and its application to news summarization are presented. The fuzzy ontology with fuzzy concepts is an extension of the domain ontology with crisp concepts; it is better suited than a crisp domain ontology to describing domain knowledge when solving uncertainty reasoning problems. First, the domain ontology, covering the various news events, is predefined by domain experts. The document-preprocessing mechanism generates meaningful terms from the news corpus and a Chinese news dictionary defined by the domain expert. The term classifier then classifies the meaningful terms according to the news events, and the fuzzy inference mechanism generates the membership degrees for each fuzzy concept of the fuzzy ontology; every fuzzy concept has a set of membership degrees associated with the events of the domain ontology. In addition, a news agent based on the fuzzy ontology is developed for news summarization. The news agent contains five modules: a retrieval agent, a document-preprocessing mechanism, a sentence path extractor, a sentence generator, and a sentence filter. An experimental website was constructed to test the proposed approach, and the results show that the news agent based on the fuzzy ontology operates effectively for news summarization.

16.
This paper proposes an algorithm for automatically generating a domain dictionary based on a pseudo-relevance-feedback model. Dictionary generation is treated as retrieval of domain terms: assuming that the top-ranked strings of the initial retrieval are domain-relevant, they are added to the domain dictionary, retrieval is repeated, and the process iterates until the dictionary reaches a preset size. Experiments show that after several iterations the resulting domain dictionary has higher precision than dictionaries produced by existing generation algorithms.
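A skeleton of the iterative pseudo-relevance-feedback loop described above: retrieve, treat the top-ranked strings as domain-relevant, add them to the dictionary, and retrieve again until the target size is reached. The `retrieve` function is a toy static scorer standing in for a real retrieval engine, and the candidate strings are invented.

```python
candidate_strings = ["油门踏板", "变速箱", "天气不错", "发动机舱", "午饭时间"]
scores = {"油门踏板": 0.9, "变速箱": 0.8, "发动机舱": 0.7, "天气不错": 0.2, "午饭时间": 0.1}

def retrieve(dictionary):
    """Rank candidates not yet in the dictionary (toy static scores)."""
    remaining = [s for s in candidate_strings if s not in dictionary]
    return sorted(remaining, key=lambda s: scores[s], reverse=True)

def build_dictionary(target_size=3, top_k=1):
    dictionary = set()
    while len(dictionary) < target_size:
        ranked = retrieve(dictionary)
        if not ranked:
            break
        dictionary.update(ranked[:top_k])     # assume the top-k hits are in-domain
    return dictionary

print(build_dictionary())
```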

17.
Statistical Chinese word segmentation methods adapt poorly to new domains because of the limited domain coverage of their training corpora. Compared with segmentation training corpora, domain dictionaries are much easier to obtain and provide rich domain information for segmentation. This paper achieves domain adaptation by incorporating dictionary information as features into a statistical segmentation model (a CRF model). Experiments show that this markedly improves the domain adaptability of statistical Chinese word segmentation: when the test domain matches the training domain, the F-measure of segmentation rises by 2%; when the two domains differ, it rises by 6%.
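A sketch of the kind of dictionary feature the abstract describes: for every character, mark whether it can begin, continue or end a dictionary word at that position. These per-character features would be fed to the CRF alongside the usual character n-gram features; the CRF itself is not shown, and the dictionary and sentence are toy examples.

```python
DOMAIN_DICT = {"心肌梗死", "冠状动脉"}           # toy domain dictionary
MAX_WORD_LEN = max(len(w) for w in DOMAIN_DICT)

def dict_features(sentence: str):
    """Per-character begin/middle/end flags for dictionary word matches."""
    feats = [{"dict_B": 0, "dict_M": 0, "dict_E": 0} for _ in sentence]
    for i in range(len(sentence)):
        for j in range(i + 1, min(i + MAX_WORD_LEN, len(sentence)) + 1):
            if sentence[i:j] in DOMAIN_DICT:
                feats[i]["dict_B"] = 1                    # word begins here
                feats[j - 1]["dict_E"] = 1                # word ends here
                for k in range(i + 1, j - 1):
                    feats[k]["dict_M"] = 1                # word continues here
    return feats

for ch, f in zip("患者心肌梗死发作", dict_features("患者心肌梗死发作")):
    print(ch, f)
```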

18.
Nowadays a vast amount of textual information is collected and stored in databases around the world, with the Internet as the largest database of all. This rapidly increasing growth of published text means that even the most avid reader cannot hope to keep up with all the reading in a field, and consequently nuggets of insight or new knowledge are at risk of languishing undiscovered in the literature. Text mining offers a solution to this problem by replacing or supplementing the human reader with automatic systems undeterred by the text explosion. It involves analyzing a large collection of documents to discover previously unknown information. Text clustering is one of the most important areas in text mining; it includes text preprocessing, dimension reduction by selecting some terms (features), and finally clustering using the selected terms. Feature selection appears to be the most important step in the process. Conventional unsupervised feature selection methods define a measure of the discriminating power of terms to select proper terms from the corpus. However, the evaluation of terms in groups has not been investigated in previously reported work. In this paper a new and robust unsupervised feature selection approach is proposed that evaluates terms in groups. In addition a new Modified Term Variance measure is proposed for evaluating groups of terms. Furthermore, a genetic-based algorithm is designed and implemented for finding the most valuable groups of terms based on the new measure; these terms are then used to generate the final feature vector for the clustering process. To evaluate and justify the approach, the proposed method and a conventional term variance method were implemented and tested on the Reuters-21578 corpus collection. For a more accurate comparison, the methods were tested on three corpora, and for each corpus the clustering task was run ten times and the results averaged. The results of comparing the two methods are very promising and show that the proposed method produces better average accuracy and F1-measure than the conventional term variance method.
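A toy sketch of evaluating a group of terms by term variance, the flavour of measure the abstract builds on. The paper's Modified Term Variance and the genetic search over groups are not reproduced; the term-frequency matrix below is invented.

```python
import numpy as np

# rows = documents, columns = terms t0..t3 (term frequencies)
tf = np.array([[5, 0, 1, 2],
               [0, 4, 1, 2],
               [6, 0, 1, 2],
               [0, 5, 1, 2]], dtype=float)

def group_term_variance(term_indices):
    """Sum of per-term frequency variances across documents for a term group."""
    sub = tf[:, term_indices]
    return float(np.sum(np.var(sub, axis=0)))

print(group_term_variance([0, 1]))   # discriminative terms -> high variance
print(group_term_variance([2, 3]))   # evenly spread terms -> zero variance
```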

19.
OntoAGS: a platform for automatic ontology construction (total citations: 3; self-citations: 0; by others: 3)
李林, 刘贺欢, 刘椿年. 《计算机工程》, 2006, 32(13): 212-214
This paper designs and implements OntoAGS, a platform for automatic ontology construction. OntoAGS extracts terms from a given domain corpus and filters them with a domain-independent corpus; it supports concept learning, determining the meaning of a term within its domain; instance learning, discovering instances of concepts; and relation learning, finding relations between concepts. Experiments show that, compared with TextToOnto, users can build domain ontologies more accurately and effectively with OntoAGS.

20.
A symbolic approach to automatic multiword term structuring (total citations: 1; self-citations: 0; by others: 1)
This paper presents a three-level structuring of multiword terms based on lexical inclusion, WordNet similarity and a clustering approach. Term clustering by automatic data-analysis methods offers an interesting way of organizing a domain's knowledge structure, useful for several information-oriented tasks such as science and technology watch, text mining, computer-assisted ontology population, and Question Answering (Q–A). The paper explores how this three-level term structuring brings to light the knowledge structures of a genomics corpus and compares the resulting map of domain topics against a hand-built ontology (the GENIA ontology). Ways of integrating the results into a Q–A system are discussed.
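A sketch of the first two structuring levels mentioned above: lexical inclusion between multiword terms, and a WordNet similarity check on their head words. The head-word heuristic and the example terms are illustrative choices, and the snippet assumes the NLTK WordNet data has been downloaded (nltk.download("wordnet")).

```python
from nltk.corpus import wordnet as wn

def lexically_included(shorter: str, longer: str) -> bool:
    """True if every word of the shorter term appears in the longer one."""
    return set(shorter.lower().split()) <= set(longer.lower().split())

def head_similarity(term1: str, term2: str) -> float:
    """Max WordNet path similarity between senses of the terms' head (last) words."""
    s1, s2 = wn.synsets(term1.split()[-1]), wn.synsets(term2.split()[-1])
    sims = [a.path_similarity(b) for a in s1 for b in s2]
    sims = [s for s in sims if s is not None]
    return max(sims, default=0.0)

print(lexically_included("cell cycle", "cell cycle regulation"))   # True
print(head_similarity("gene expression", "protein translation"))
```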
