Similar Documents
1.
Understanding a document collection through its semantic structure can improve the quality of multi-document summaries. This paper builds a document semantic graph by extracting subject-predicate-object triples from Chinese multi-document summarization document sets, applies semantic clustering to the graph's nodes using edit distance, computes node weights with the PageRank ranking algorithm, and then selects the triples containing the highest-weighted nodes and their links to generate the multi-document summary. In the evaluation stage, summaries produced by sentence extraction and summaries generated from the document semantic graph were each scored against human-written reference summaries with ROUGE, and results before and after edit-distance clustering of the semantic graph were compared. The experiments show that summaries generated from the document semantic graph overlap more with the human-written references, and that edit-distance clustering of the semantic graph further improves summary quality.
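
A minimal sketch of the core pipeline, assuming triples have already been extracted (the toy English triples below stand in for the paper's Chinese SVO extraction), using networkx for the semantic graph and PageRank:

```python
# Sketch: rank SVO triples by building a semantic graph and running PageRank.
import networkx as nx

triples = [                      # illustrative stand-ins for extracted triples
    ("earthquake", "struck", "city"),
    ("rescuers", "searched", "city"),
    ("earthquake", "damaged", "buildings"),
    ("government", "sent", "rescuers"),
]

g = nx.DiGraph()
for subj, pred, obj in triples:
    # Subjects and objects are nodes; the predicate labels the edge.
    g.add_edge(subj, obj, predicate=pred)

scores = nx.pagerank(g)          # node weights over the semantic graph

# Score each triple by the weights of the nodes it connects,
# then keep the top triples as raw material for the summary.
ranked = sorted(triples, key=lambda t: scores[t[0]] + scores[t[2]], reverse=True)
for subj, pred, obj in ranked[:2]:
    print(subj, pred, obj)
```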

2.
Multi-document automatic summarization with the LDA topic model   (total citations: 3, self-citations: 0, citations by others: 0)
In recent years, representing the multi-document summarization problem with probabilistic topic models has attracted researchers' attention. LDA (latent Dirichlet allocation) is one of the representative probabilistic generative topic models. This paper proposes an LDA-based summarization method that determines the number of topics via perplexity, obtains the model's sentence-topic and topic-word probability distributions via Gibbs sampling, measures each topic's importance as the sum of its weights over sentences, and proposes two different sentence-weighting models based on the LDA topic and sentence distributions. Using the ROUGE evaluation standard, the method is compared on the generic multi-document summarization test set DUC2002 against the state-of-the-art SumBasic method and two other LDA-based multi-document summarization methods. The results show that the proposed LDA-based method outperforms SumBasic on all ROUGE metrics and also compares favorably with the other LDA-based summarizers.
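
As a rough illustration of the sentence-weighting idea (not necessarily either of the paper's two exact models), a numpy-only sketch with made-up distributions standing in for the Gibbs-sampling estimates:

```python
import numpy as np

# Made-up stand-ins for the Gibbs-sampled distributions:
# theta[s, k] = P(topic k | sentence s); topic importance = summed weight.
theta = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
importance = theta.sum(axis=0)      # topic importance: sum over sentences

weights = theta @ importance        # one possible sentence-weight model
order = np.argsort(weights)[::-1]   # highest-weighted sentences first
print(order, weights[order])
```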

3.
For the news domain, this paper proposes a query-focused automatic text summarization technique that targets user information needs more directly. Sentence weights are computed from features such as TF-IDF and similarity to the query sentence, and a temporal weighting coefficient is assigned according to the time each sentence refers to, so that more recent news content receives higher weight; summary sentences are then selected with maximal marginal relevance. Compared against six methods including TF-IDF, TextRank, and LDA, the proposed method scores higher on the ROUGE metrics. The evaluation results and summary examples show that the method effectively extracts the core information from a news document set and satisfies the information needs expressed in the user's query.
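
A minimal sketch of the selection step, assuming the similarity and time-decay values are already computed; the time-weight scheme and the lambda value below are illustrative choices, not the paper's:

```python
import numpy as np

def mmr_select(sim_to_query, sim_matrix, time_weight, k=2, lam=0.7):
    """Greedy maximal-marginal-relevance selection.

    sim_to_query[i]: similarity of sentence i to the query (e.g. TF-IDF cosine),
    sim_matrix[i, j]: similarity between sentences i and j,
    time_weight[i]:  higher for more recent news (hypothetical decay scheme).
    """
    relevance = sim_to_query * time_weight
    selected, candidates = [], list(range(len(relevance)))
    while candidates and len(selected) < k:
        def mmr(i):
            redundancy = max((sim_matrix[i, j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected

sim_to_query = np.array([0.9, 0.8, 0.3])
sim_matrix = np.array([[1.0, 0.9, 0.1],
                       [0.9, 1.0, 0.2],
                       [0.1, 0.2, 1.0]])
time_weight = np.array([1.0, 0.6, 0.9])   # sentence 1 reports older news
print(mmr_select(sim_to_query, sim_matrix, time_weight))
```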

4.
This paper proposes a multi-document summarization method based on text segmentation. The method uses HowNet as the concept acquisition tool, builds a sentence-level concept vector space model, and performs text segmentation with an improved DotPlotting model. Sentence importance is computed from the concept vector space model, and the summary is generated from sentence importance, the text segmentation results, the similarity between candidate summary sentences, and other factors. The summaries produced by the system are evaluated with ROUGE-N and F-score; the results show that text segmentation is effective for multi-document summarization.
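
The paper's HowNet concepts and improved DotPlotting model are not reproduced here; the following sketch substitutes a simple dissimilarity threshold between adjacent sentence concept vectors to show the flavor of concept-vector-based segmentation:

```python
import numpy as np

def segment_boundaries(concept_vectors, threshold=0.3):
    """Place a segment boundary wherever adjacent sentence concept vectors
    are dissimilar. A simplified stand-in for the paper's DotPlotting model."""
    boundaries = []
    for i in range(len(concept_vectors) - 1):
        a, b = concept_vectors[i], concept_vectors[i + 1]
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        if cos < threshold:
            boundaries.append(i + 1)   # a new segment starts at sentence i+1
    return boundaries

# Toy concept vectors: two sentences on one concept, then a topic shift.
vecs = [np.array([1.0, 0.1, 0.0]),
        np.array([0.9, 0.2, 0.0]),
        np.array([0.0, 0.1, 1.0])]
print(segment_boundaries(vecs))   # -> [2]
```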

5.
Automatic summarization is one of the key technologies for coping with information overload on the Web. Based on an analysis of sentence features and the semantic distance between sentences, this paper proposes an automatic summarization algorithm using sentence features and semantic distance. First, the weights of each sentence's individual features are computed and combined into a sentence weight; then the weights are revised according to the semantic distances between sentences and the sentences are ranked, with the highest-weighted ones taken as the text's topic sentences; finally, the selected sentences are smoothed to produce a fluent summary. Experiments show that the summaries the algorithm generates at different compression rates are close to human-written summaries, demonstrating good performance.
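
A toy sketch of the two-step weighting, with illustrative feature values, coefficients, and distances rather than the paper's formulas:

```python
# Sketch: combine per-sentence feature weights, then revise each weight by
# the sentence's average semantic closeness to the rest of the text.
features = [            # (position, title-overlap, keyword-density) per sentence
    (1.0, 0.8, 0.6),
    (0.5, 0.2, 0.9),
    (0.2, 0.1, 0.3),
]
feature_coef = (0.4, 0.3, 0.3)   # illustrative coefficients

sem_dist = [             # semantic distance between sentence pairs
    [0.0, 0.2, 0.8],
    [0.2, 0.0, 0.7],
    [0.8, 0.7, 0.0],
]

weights = []
for i, f in enumerate(features):
    base = sum(c * v for c, v in zip(feature_coef, f))
    # Sentences semantically close to the others gain weight.
    closeness = 1.0 - sum(sem_dist[i]) / (len(features) - 1)
    weights.append(base * closeness)

topic_sentences = sorted(range(len(weights)), key=weights.__getitem__, reverse=True)
print(topic_sentences, weights)
```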

6.
Automatic summarization aims to compress a long document into a few short paragraphs that present its information to the user comprehensively and concisely, improving the efficiency and accuracy of information access. The proposed method builds on LDA (Latent Dirichlet Allocation), using Gibbs sampling to estimate the topic-word and sentence-topic probability distributions, and combines the LDA parameters with a spectral clustering algorithm to extract a multi-document summary. Sentence weights are integrated with a linear formula, and a 400-character multi-document summary is extracted. Summary quality is evaluated on the DUC2002 dataset with the ROUGE automatic evaluation toolkit; the results show that the method effectively improves summary quality.
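
A minimal sketch of the clustering step, assuming the sentence-topic distributions have already been estimated (the rows below are made up); scikit-learn's SpectralClustering is used for the spectral clustering:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Stand-in sentence-topic distributions (rows: sentences, cols: LDA topics).
theta = np.array([[0.8, 0.1, 0.1],
                  [0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]])

clustering = SpectralClustering(n_clusters=3, random_state=0)
labels = clustering.fit_predict(theta)
print(labels)   # e.g. pick the highest-weight sentence from each cluster
```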

7.
For graph-based multi-document summarization, this paper proposes a method that incorporates Wikipedia entity information into graph ranking to improve summary quality. It first extracts the Wikipedia entries of high-frequency entities in the document set as the collection's background knowledge, then ranks the sentences of the document set with PageRank, ranks the sentences of the document set together with the background knowledge using an improved DivRank algorithm, and finally determines the final sentence ranking for summary selection as a linear combination of the two rankings. Evaluation on the DUC2005 dataset shows that the method can effectively exploit Wikipedia knowledge to improve summary quality.
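
networkx has no DivRank implementation, so the sketch below substitutes a second PageRank run over the graph augmented with background-knowledge sentences; only the final linear combination follows the abstract, and the mixing coefficient is a guess:

```python
import networkx as nx

# Sentence-similarity graphs: g over the document set alone, g_bg over the
# document set plus Wikipedia background sentences (nodes "b0", "b1", ...).
g = nx.Graph([("s0", "s1"), ("s1", "s2"), ("s0", "s2")])
g_bg = g.copy()
g_bg.add_edges_from([("s0", "b0"), ("s2", "b0"), ("s1", "b1")])

r1 = nx.pagerank(g)
r2 = nx.pagerank(g_bg)        # stand-in for the paper's improved DivRank

alpha = 0.5                   # mixing coefficient (a guess, not the paper's)
final = {s: alpha * r1[s] + (1 - alpha) * r2[s] for s in g.nodes}
print(sorted(final, key=final.get, reverse=True))
```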

8.
In recent years most research on automatic summarization has addressed multi-document and Web-page summarization, while website summarization has received little attention. This paper therefore proposes an algorithm that generates website summaries automatically based on the latent Dirichlet allocation (LDA) topic model and the website's hierarchical structure. The algorithm collects and integrates the pages of an entire website, computes sentence weights with the proposed sentence-weight formula, and selects the highest-weighted sentences as the website summary. In experiments on 20 commercial and academic websites evaluated with ROUGE, compared with website summaries obtained using the LDA topic model alone, ROUGE-1 and ROUGE-L without stopwords improve by 0.32, while with stopwords ROUGE-1 improves by 0.39 and ROUGE-L by 0.38. Compared with summaries of the websites' home pages, ROUGE-1 without stopwords improves by 0.03 and ROUGE-L by 0.06, while with stopwords ROUGE-1 improves by 0.08 and ROUGE-L by 0.07.

9.
To enable computers to summarize Chinese articles, this paper proposes an automatic Chinese summarization algorithm. The algorithm is implemented on the Gensim natural language processing framework, with improvements over the original, and works in two stages. In the key-sentence generation stage, the Chinese corpus is preprocessed and fed to Gensim's Word2vec model for training, and the TextRank algorithm is modified to accept word-vector input when building its undirected graph, from which the key sentences are found. In the summary-generation stage, sentences are assigned different weights according to the article structure and the keywords extracted by Gensim's LDA topic model, and the highest-scoring sentences are combined into the article summary. ROUGE evaluation shows that the generated summaries cover the articles' key information and that, compared with other automatic summarization algorithms, fluency is improved.
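
A minimal sketch of the key-sentence stage under stated assumptions: toy English tokens stand in for the preprocessed Chinese corpus, and PageRank over word-vector sentence similarities stands in for the paper's modified TextRank:

```python
import networkx as nx
import numpy as np
from gensim.models import Word2Vec

# Toy tokenized sentences standing in for a preprocessed Chinese corpus.
sentences = [["cat", "sits", "mat"],
             ["dog", "sits", "rug"],
             ["stocks", "fell", "sharply"]]

model = Word2Vec(sentences, vector_size=50, min_count=1, seed=1)

def sent_vec(tokens):
    # Average of the word vectors as a simple sentence representation.
    return np.mean([model.wv[t] for t in tokens], axis=0)

vecs = [sent_vec(s) for s in sentences]

# TextRank-style graph: sentences are nodes, similarity between sentence
# vectors is the edge weight (cosine shifted to be non-negative).
g = nx.Graph()
for i in range(len(vecs)):
    for j in range(i + 1, len(vecs)):
        cos = float(vecs[i] @ vecs[j] /
                    (np.linalg.norm(vecs[i]) * np.linalg.norm(vecs[j])))
        g.add_edge(i, j, weight=(cos + 1) / 2)

scores = nx.pagerank(g, weight="weight")
print(sorted(scores, key=scores.get, reverse=True))   # key sentences first
```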

10.
An automatic summarization method based on topic word sets*   (total citations: 1, self-citations: 1, citations by others: 0)
This paper proposes an automatic summarization method based on topic word sets for extracting document summaries. Given the extracted topic word set, each sentence containing topic words is scored by the weighted sum of its topic words' weights, yielding a total weight for every sentence covered by the topic word set; the highest-weighted sentences are then selected according to the summarization compression ratio, and the summary is output in the original document order. Experiments were carried out on the single-document summarization corpus of the Harbin Institute of Technology Information Retrieval Research Laboratory, using internal automatic eval…
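
A small sketch of the scoring-and-ordering idea, with an illustrative topic word set and weights:

```python
# Sketch: score sentences by the summed weights of the topic words they
# contain, keep the top ones by compression ratio, output in original order.
topic_weights = {"summarization": 0.9, "sentence": 0.6, "weight": 0.5}

sentences = [
    "automatic summarization selects key content",
    "each sentence receives a weight",
    "the weather was pleasant that day",
]

scores = [sum(topic_weights.get(w, 0.0) for w in s.split()) for s in sentences]
ratio = 2 / 3                                   # keep two of three sentences
k = max(1, int(len(sentences) * ratio))
top = sorted(sorted(range(len(sentences)), key=scores.__getitem__, reverse=True)[:k])
print([sentences[i] for i in top])              # original document order
```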

11.
Research on methods for analyzing the semantic orientation of topics in Chinese sentences   (total citations: 7, self-citations: 0, citations by others: 0)
This paper describes how to identify the topics of a Chinese sentence and the relations between topics and sentiment-describing items, and how to compute each topic's semantic orientation (polarity). A domain ontology is used to extract sentence topics and their attributes; then, based on syntactic parsing, the relations between topics and sentiment-describing items are identified, which ultimately determines the polarity of each topic in the sentence. Experimental results show that, against a manually annotated corpus used as the gold standard, the improved SBV polarity propagation algorithm for identifying topics and their polarity achieves an F-measure of 72.41%, which is 7.6% and 2.09% higher than the original SBV polarity propagation algorithm and the VOB polarity propagation algorithm, respectively. The proposed improvement to the SBV polarity propagation algorithm is therefore reasonable and effective.
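
A toy sketch conveying only the flavor of SBV polarity propagation; the lexicon and the (topic, sentiment word, negation) links below are hypothetical stand-ins for the paper's ontology and parser output:

```python
# A topic linked to a sentiment word through a subject-verb (SBV) dependency
# inherits that word's polarity, flipped under negation.
polarity_lexicon = {"excellent": +1, "poor": -1}   # hypothetical lexicon

# (topic, sentiment word, negated?) triples from a hypothetical SBV parse.
sbv_links = [("battery", "excellent", False),
             ("screen", "poor", False),
             ("camera", "excellent", True)]

for topic, word, negated in sbv_links:
    pol = polarity_lexicon.get(word, 0)
    if negated:
        pol = -pol
    print(topic, "+" if pol > 0 else "-" if pol < 0 else "0")
```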

12.
Automatic keyword extraction from documents has long been used and has proven useful in various areas. Crowdsourced tagging for multimedia resources has emerged and looks promising to a certain extent. Automatic approaches for unstructured data, automatic keyword extraction, and crowdsourced tagging are efficient, but they all suffer from a lack of contextual understanding. In this paper, we propose a new model for extracting key contextual terms from unstructured data, especially documents, with crowdsourcing. The model consists of four sequential processes: (1) term selection by frequency, (2) sentence building, (3) revised term selection reflecting the newly built sentences, and (4) sentence voting. Online workers read only a fraction of a document and participated in the sentence building and sentence voting processes, and key sentences were generated as a result. We compared the generated sentences to the keywords entered by the author and to the sentences generated by offline workers who read the whole document. The results support the idea that the sentence-building process can help select terms with more contextual meaning, closing the gap between keywords from automated approaches and the contextual understanding required by humans.
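
A minimal sketch of step (1), term selection by frequency; the stopword list and cutoff are illustrative choices, not the paper's:

```python
# Workers later build and vote on sentences from these selected terms.
from collections import Counter

document = ("crowdsourced tagging looks promising but tags often lack "
            "context so workers build sentences from frequent terms and "
            "vote on the sentences that best capture the context")
stopwords = {"but", "so", "and", "on", "the", "that", "from", "often"}

counts = Counter(w for w in document.split() if w not in stopwords)
selected = [term for term, _ in counts.most_common(5)]
print(selected)   # candidate terms handed to the sentence-building step
```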

13.
A C program takes a Chinese and an English subtitle file downloaded from the Web and, using the subtitle display-time information in each file, sorts and merges the two files; the resulting file shows Chinese and English in sentence-by-sentence parallel, which is very helpful for learning English.
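
The program described is in C; the same merge idea is sketched below in Python, assuming plain SRT files with blocks of the form index / start --> end / text:

```python
# Merge a Chinese and an English SRT subtitle file by display start time
# to get line-by-line bilingual subtitles.
def parse_srt(path):
    entries = []
    with open(path, encoding="utf-8") as f:
        blocks = f.read().strip().split("\n\n")
    for block in blocks:
        lines = block.splitlines()
        start = lines[1].split(" --> ")[0]          # e.g. "00:01:02,500"
        entries.append((start, " ".join(lines[2:])))
    return entries

def merge(zh_path, en_path, out_path):
    # "HH:MM:SS,mmm" strings sort correctly in lexicographic order.
    merged = sorted(parse_srt(zh_path) + parse_srt(en_path))
    with open(out_path, "w", encoding="utf-8") as out:
        for i, (start, text) in enumerate(merged, 1):
            out.write(f"{i}\n{start}\n{text}\n\n")

# merge("movie.zh.srt", "movie.en.srt", "movie.bilingual.srt")
```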

14.
Most text categorization techniques are based on word and/or phrase analysis of the text. Statistical analysis of term frequency captures the importance of a term within a document only; however, two terms can have the same frequency in their documents while one contributes more to the meaning of its sentences than the other. The underlying model should therefore identify terms that capture the semantics of text, so that it can find terms that present the concepts of a sentence, which leads to discovering the topic of the document. A new concept-based model is introduced that analyzes terms on the sentence, document, and corpus levels rather than the traditional document level only. The concept-based model can effectively discriminate between terms that are unimportant to sentence semantics and terms that hold the concepts representing the sentence meaning. A set of experiments using the proposed concept-based model on different text categorization datasets is conducted in comparison with traditional models. The results demonstrate a substantial enhancement of categorization quality using sentence-based, document-based, and corpus-based concept analysis.
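
A small sketch of why sentence-level analysis separates terms that document-level frequency alone cannot; the weighting formula here is illustrative, not the paper's concept-based measure:

```python
# Two terms with the same document frequency separate once sentence-level
# frequency is taken into account.
doc = ["the engine drives the pump",
       "the engine overheated",
       "maintenance was scheduled for the pump"]

def ctf(term):
    # Illustrative weight: document tf times mean sentence-level tf.
    sent_tfs = [s.split().count(term) / len(s.split()) for s in doc
                if term in s.split()]
    doc_tf = sum(s.split().count(term) for s in doc)
    return doc_tf * (sum(sent_tfs) / len(sent_tfs)) if sent_tfs else 0.0

print(ctf("engine"), ctf("pump"))   # same doc frequency, different weights
```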

15.
Most common techniques in text retrieval are based on the statistical analysis of terms (words or phrases). Statistical analysis of term frequency captures the importance of a term within a document only. Thus, to achieve a more accurate analysis, the underlying model should indicate terms that capture the semantics of text; such a model can capture terms that represent the concepts of a sentence, which leads to discovering the topic of the document. In this paper, a new concept-based retrieval model is introduced. The proposed model consists of a conceptual ontological graph (COG) representation and a concept-based weighting scheme. The COG representation captures the semantic structure of each term within a sentence; all terms are placed in the COG representation according to their contribution to the meaning of the sentence. The concept-based weighting analyzes terms at the sentence and document levels, unlike the classical approach of analyzing terms at the document level only. The weighted terms are then ranked, and the top concepts are used to build a concept-based document index for text retrieval. The concept-based retrieval model can effectively discriminate between terms that are unimportant to sentence semantics and terms that represent the concepts capturing the sentence meaning. Experiments on different text retrieval datasets compare traditional approaches with the concept-based retrieval model obtained by combining the conceptual ontological graph and the concept-based weighting scheme. Results are evaluated with three quality measures: the binary preference measure (bpref), precision at 10 retrieved documents (P(10)), and mean uninterpolated average precision (MAP). All three measures improve when the newly developed concept-based retrieval model is used, confirming that the model enhances the quality of text retrieval.

16.
Graph models are a current focus of multi-document summarization research: sentences are vertices and inter-sentence similarities are edge weights in an undirected graph. Because this model does not fully exploit term-weight information within sentences or the information about which document each sentence belongs to, this paper proposes a three-layer term-sentence-document graph model that uses both kinds of information to compute sentence similarity. Experimental results on the DUC2003 and DUC2004 datasets show that the three-layer term-sentence-document graph model outperforms the LexRank model and the document-sensitive graph model.
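
A toy sketch of computing sentence similarity through the term layer while consulting the document layer; the weights and the cross-document boost are illustrative choices, not the paper's model:

```python
# Term weights (term layer) and per-sentence document membership (doc layer).
term_weight = {"flood": 0.9, "rescue": 0.7, "weather": 0.3}

sentences = [({"flood", "rescue"}, "doc1"),
             ({"flood", "weather"}, "doc2"),
             ({"rescue", "weather"}, "doc1")]

def sim(i, j, cross_doc_boost=1.2):
    terms_i, doc_i = sentences[i]
    terms_j, doc_j = sentences[j]
    s = sum(term_weight[t] for t in terms_i & terms_j)   # shared-term weight
    return s * cross_doc_boost if doc_i != doc_j else s

print(sim(0, 1), sim(0, 2))   # the cross-document pair gets the boost
```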

17.
Ontological reasoning for improving the treatment of emotions in text   (total citations: 2, self-citations: 2, citations by others: 0)
With the advent of affective computing, the task of adequately identifying, representing and processing the emotional connotations of text has acquired importance. Two problems facing this task are addressed in this paper: the composition of sentence emotion from word emotion, and a representation of emotion that allows easy conversion between existing computational representations. The emotion of a sentence should be derived by composing the emotions of its words, but no method has been proposed so far to model this compositionality. Of the various existing approaches for representing emotions, some are better suited to some problems and some to others, but there is no easy way of converting from one to another. This paper presents a system that addresses these two problems by reasoning with two ontologies implemented with Semantic Web technologies: one designed to represent word dependency relations within a sentence, and one designed to represent emotions. The word dependency ontology relies on roles to represent the way emotional contributions project over word dependencies. By applying automated classification of mark-up results in terms of the emotion ontology, the system can interpret unrestricted input in terms of a restricted set of concepts for which particular rules are provided. The rules applied at the end of the process provide configuration parameters for an emotional voice synthesis system.

18.
Due to the exponential growth of textual information available on the Web, end users need to be able to access information in summary form without losing the most important information in the document. Automatic generation of extractive summaries from a single document has traditionally been approached as the task of extracting the most relevant sentences from the original document. The methods employed generally allocate a score to each sentence in the document, taking into account certain features, and the most relevant sentences are then selected according to the score obtained for each sentence. These features include the position of the sentence in the document, its similarity to the title, the sentence length, and the frequency of the terms in the sentence. However, the quality of such summaries still does not match that of human-written ones, and methods therefore continue to be proposed that aim to improve the results. This paper addresses the generation of extractive summaries from a single document as a binary optimization problem in which the quality (fitness) of a solution is based on weighting individual statistical features of each sentence, such as position, sentence length, and the relationship of the summary to the title, combined with group features of similarity between the candidate summary sentences and the original document, and among the candidate summary sentences themselves. The paper proposes a method of extractive single-document summarization based on genetic operators and guided local search, called MA-SingleDocSum: a memetic algorithm is used to integrate the population-based search of evolutionary algorithms with a guided local search strategy. The proposed method was compared with the state-of-the-art methods UnifiedRank, DE, FEOM, NetSum, CRF, QCS, SVM, and Manifold Ranking, using ROUGE measures on the DUC2001 and DUC2002 datasets. The results showed that MA-SingleDocSum outperforms the state of the art.
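
A sketch of a fitness function in the spirit described, over a binary sentence-selection vector; the features and coefficients are illustrative guesses, not MA-SingleDocSum's actual formula:

```python
import numpy as np

# A summary is a binary vector over sentences, scored by individual
# features (position, title similarity) plus a group feature (redundancy).
rng = np.random.default_rng(0)
n = 6
position = 1.0 / np.arange(1, n + 1)       # earlier sentences score higher
title_sim = rng.random(n)                  # stand-in cosine similarities
sim = rng.random((n, n)); sim = (sim + sim.T) / 2

def fitness(x, a=0.4, b=0.3, c=0.3):
    chosen = np.flatnonzero(x)
    if chosen.size == 0:
        return 0.0
    individual = a * position[chosen].mean() + b * title_sim[chosen].mean()
    redundancy = sim[np.ix_(chosen, chosen)].mean()
    return individual + c * (1.0 - redundancy)   # reward low redundancy

x = np.array([1, 0, 1, 0, 0, 1])           # one candidate summary
print(fitness(x))    # the memetic algorithm would evolve such vectors
```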

19.
In microcontroller programs, when a switch/case statement has many branches, long handler code, or complex cases to handle, both modifying the logic and debugging the program become difficult. To address this problem, this paper presents an approach that replaces the switch/case statement with function pointers, together with the corresponding code model, as a reference for similar implementations.
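
The paper's code model is in C; the same dispatch-table idea is sketched below in Python, where a dict of handler functions replaces a long switch/case or if/elif chain:

```python
# Adding or changing a branch means editing one table entry rather than
# a long chain of cases.
def handle_start(arg):  return f"starting {arg}"
def handle_stop(arg):   return f"stopping {arg}"
def handle_reset(arg):  return f"resetting {arg}"

handlers = {
    0x01: handle_start,
    0x02: handle_stop,
    0x03: handle_reset,
}

def dispatch(code, arg):
    handler = handlers.get(code)
    return handler(arg) if handler else "unknown command"   # default branch

print(dispatch(0x02, "motor"))
```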

20.
Since its emergence, neural machine translation has brought a stream of encouraging results to the machine translation field. However, neural machine translation does not explicitly use linguistic knowledge to analyze sentence structure, so it translates long, structurally complex sentences poorly. Following a divide-and-conquer strategy, this paper identifies and extracts the longest noun phrases in a sentence, keeps a special placeholder or the head word in their place, and forms a sentence frame from the remainder. The longest noun phrases and the sentence frame are translated separately by the neural machine translation system, and the translations are then recombined, alleviating neural machine translation's sensitivity to sentence length. Experimental results show that the translations obtained with this method improve BLEU by 0.89 over the baseline system.
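
A minimal sketch of the divide step, using a hand-written bracketed parse (the paper would obtain parses from a syntactic parser) and nltk's Tree to find the longest noun phrase and form the sentence frame:

```python
from nltk import Tree

parse = Tree.fromstring(
    "(S (NP (DT the) (JJ newly) (VBN appointed) (NN committee) (NN chair)) "
    "(VP (VBD approved) (NP (DT the) (NN proposal))))")

nps = [t for t in parse.subtrees() if t.label() == "NP"]
longest = max(nps, key=lambda t: len(t.leaves()))

tokens, np_tokens = parse.leaves(), longest.leaves()
# Replace the NP's contiguous token span with a placeholder.
for i in range(len(tokens) - len(np_tokens) + 1):
    if tokens[i:i + len(np_tokens)] == np_tokens:
        frame = tokens[:i] + ["_NP_"] + tokens[i + len(np_tokens):]
        break

print("longest NP:", " ".join(np_tokens))   # translated on its own
print("frame:     ", " ".join(frame))       # translated with the placeholder
```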
