Retrieved 20 similar documents.
1.
To address the problem that existing multi-document extractive methods make poor use of sentence topic and semantic information, a multi-document summarization method based on a sentence graph model fusing multiple information sources is proposed. First, a sentence graph is built with sentences as nodes. Then, the sentence topic probability distributions obtained from a sentence-level Bayesian topic model and the sentence semantic similarities obtained from a word-embedding model are fused to yield the final sentence relatedness, so that topic and semantic information together serve as the edge weights of the sentence graph. Finally, the multi-document summary is extracted with a minimum-dominating-set method over the sentence graph. By fusing multiple information sources in the sentence graph model, the method combines topic, semantic, and relational information between sentences. Experimental results show that the method effectively improves the overall quality of extracted summaries.
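A minimal sketch of the pipeline this abstract describes (not the authors' code; the mixing parameter `alpha` and the coverage `threshold` are hypothetical): fuse topic and embedding similarity into edge weights, then extract a summary as a greedy approximation of a minimum dominating set.

```python
def fused_weight(topic_sim, semantic_sim, alpha=0.5):
    """Fuse topic-distribution similarity and word-vector similarity
    into a single edge weight (alpha is a hypothetical mixing parameter)."""
    return alpha * topic_sim + (1 - alpha) * semantic_sim

def greedy_dominating_set(n, weights, threshold=0.5):
    """Greedily select sentence nodes until every node is covered
    (selected, or adjacent to a selected node with weight >= threshold)."""
    adj = {i: set() for i in range(n)}
    for (i, j), w in weights.items():
        if w >= threshold:
            adj[i].add(j)
            adj[j].add(i)
    uncovered = set(range(n))
    summary = []
    while uncovered:
        # Pick the node that covers the most still-uncovered nodes.
        best = max(range(n), key=lambda v: len((adj[v] | {v}) & uncovered))
        summary.append(best)
        uncovered -= adj[best] | {best}
    return sorted(summary)
```

The greedy heuristic is the standard approximation for minimum dominating set; the abstract does not specify which approximation the authors use.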
6.
The goal of the reading comprehension (RC) task is to understand a document and return an answer sentence for a given question. This work proposes a method that makes full use of external resources to improve RC system performance, yielding gains on both the Remedia and ChungHwa corpora. In particular, an analysis of the Remedia-based RC system shows that 24.1% of the performance improvement is attributable to Web-based answer pattern matching and 11.1% to the linguistic-feature matching strategy. t-tests further show that the improvements from answer pattern matching, linguistic-feature matching, and lexical-semantic relatedness inference are statistically significant.
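The answer-pattern-matching step mentioned above can be sketched as a simple regex filter over candidate sentences (the patterns here are invented for illustration; the paper mines its patterns from the Web):

```python
import re

def pattern_match_answers(patterns, candidate_sentences):
    """Keep candidate sentences that match any answer pattern (regex)."""
    return [s for s in candidate_sentences
            if any(re.search(p, s) for p in patterns)]
```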
7.
To address two difficulties in multi-document summarization, topic interruption and semantic incoherence between summary sentences, this work applies a latent semantic analysis (LSA) clustering algorithm to sentence ordering, with the aim of improving summary quality. Summary sentences are first clustered with the LSA clustering algorithm to form topic sets, which resolves topic interruption. The document with the greatest summary representativeness is then selected as a template by computing each document's summary representativeness, and the summary sentences are ordered in two passes according to this template. Experimental results show that the proposed algorithm is effective and improves summary readability.
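The clustering stage can be sketched as follows. This toy uses plain term-frequency vectors and one-pass clustering in place of the paper's LSA representation, purely to illustrate grouping summary sentences into topic sets (the `threshold` is hypothetical):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between sparse term-frequency dicts."""
    dot = sum(v * b.get(k, 0.0) for k, v in a.items())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_sentences(vectors, threshold=0.5):
    """One-pass clustering: join the first cluster whose seed sentence
    is similar enough, otherwise start a new topic cluster."""
    clusters = []
    for i, v in enumerate(vectors):
        for c in clusters:
            if cosine(vectors[c[0]], v) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```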
8.
I. V. Mashechkin, M. I. Petrovskiy, D. S. Popov, D. V. Tsarev. Programming and Computer Software, 2011, 37(6): 299-305
This paper considers state-of-the-art methods of automatic text summarization that build summaries in the form of generic extracts. The original text is represented as a numerical matrix: columns correspond to sentences, and each sentence is represented as a vector in the term space. Latent semantic analysis is then applied to this matrix to construct a sentence representation in the topic space, whose dimensionality is much smaller than that of the initial term space. The most important sentences are chosen on the basis of their representation in the topic space, and the number of important sentences is determined by the required summary length. The paper also presents a new generic text summarization method that uses nonnegative matrix factorization to estimate sentence relevance. The proposed relevance estimate is based on normalizing the topic space and then weighting each topic using the sentence representations in that space. The proposed method shows better summarization quality and performance than state-of-the-art methods on the DUC 2001 and DUC 2002 standard data sets.
11.
Common question-answering systems rely on shallow semantic models based on lexical analysis, which struggle to mine the deep semantics of user questions. Focusing on the tourism domain, this work parses tourism questions with combinatory categorial grammar (CCG) and represents question semantics with lambda-calculus expressions, building a semantic model of tourism questions so that answers can be found quickly from precise question semantics. The work first carries out the preparatory steps of collecting tourism-domain data and annotating a corpus, and analyzes the syntactic patterns of tourism questions against that corpus; it then applies supervised learning with probabilistic CCG to train a reliable semantic lexicon for tourism questions; finally, using the semantic lexicon and related knowledge, it learns the semantics of user questions and builds a semantic analysis system for automatic tourism question answering, focusing on question parsing and construction of the corresponding semantic model. Validation on an evaluation set shows that this semantic parsing approach clearly improves parsing quality.
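The idea of lambda-calculus question semantics can be illustrated with a toy: a parsed question becomes a function evaluated against a small knowledge base. Everything here (the facts, the pattern, the relation name) is invented for illustration; the paper derives such terms from a trained probabilistic CCG parser, not string matching.

```python
# Hypothetical tourism facts keyed by (relation, entity).
KB = {("opening_hours", "West Lake"): "06:00-18:00"}

def parse_question(question):
    """Stand-in for CCG parsing: map one question pattern to a lambda
    term over the knowledge base."""
    prefix = "What are the opening hours of "
    if question.startswith(prefix) and question.endswith("?"):
        place = question[len(prefix):-1]
        return lambda kb: kb.get(("opening_hours", place))
    return lambda kb: None
```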
12.
Novelty detection retrieves new information and filters redundancy from sentences relevant to a specific topic. In TREC 2003, the authors tried an approach to novelty detection based on semantic distance computation; the motivation is to expand a sentence by introducing semantic information. The semantic distance between sentences is computed by incorporating WordNet with statistical information. Novelty detection is treated as a binary classification problem: new sentence or not. The feature vector used in the vector space model for classification consists of various factors, including the semantic distance from the sentence to the topic and the distance from the sentence to the relevant context occurring before it. New sentences are then detected with Winnow and support vector machine classifiers, respectively. Several experiments are conducted to survey the relationship between the different factors and performance, showing that semantic computation is promising for novelty detection. The ratio of new-sentence size to relevant-sentence size is further studied for different relevant document sizes, and it is found to decrease at a roughly constant rate (about 0.86). A further group of experiments supervised with this ratio demonstrates that the ratio helps improve novelty detection performance.
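The core redundancy-filtering loop can be sketched with a thresholded distance in place of the paper's Winnow/SVM classifiers, and a crude lexical Jaccard distance standing in for the WordNet-based semantic distance (both substitutions are simplifications, and the `threshold` is hypothetical):

```python
def filter_novel(sentences, distance, threshold=0.3):
    """Keep a sentence as 'new' only if its distance to every
    previously accepted sentence exceeds the threshold."""
    novel = []
    for s in sentences:
        if all(distance(s, p) > threshold for p in novel):
            novel.append(s)
    return novel

def jaccard_distance(a, b):
    """Lexical stand-in for the paper's semantic distance."""
    sa, sb = set(a.split()), set(b.split())
    return 1.0 - len(sa & sb) / len(sa | sb)
```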
13.
Sentence-level semantic analysis is an objective requirement for the deeper development of language research and a major factor currently limiting deep applications of language information processing. Building on an exploration of deep semantic analysis methods, and in line with the characteristics of Chinese, this paper proposes a complete method for constructing semantic dependency graphs and builds a semantic dependency graph bank of 30,000 sentences. Taking pivot (jianyu) constructions as the main object of study, the paper examines the sentence patterns corresponding to all pure pivot sentences in the corpus, attempts to build a sentence-pattern system based on semantic dependency graphs, and summarizes mapping rules between syntactic patterns and semantic patterns, thereby providing a knowledge base for building better automatic semantic analysis models.
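A toy example (the sentence and relation labels are invented) of why pivot constructions motivate a semantic dependency *graph* rather than a tree: in "I invited him to come", "him" is the patient of "invite" and the agent of "come", i.e. one word has two semantic heads, which a tree cannot represent.

```python
# Semantic dependency graph as (head, relation, dependent) triples.
graph = [
    ("invite", "agent", "I"),
    ("invite", "patient", "him"),
    ("come", "agent", "him"),
]

def heads_of(graph, word):
    """Return the (head, relation) pairs governing a given word; in a
    graph, unlike a tree, this list may have more than one entry."""
    return [(h, r) for h, r, d in graph if d == word]
```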
15.
To handle the flexible, complex, and varied ways Chinese sentences express meaning, a sentence similarity method based on semantics and sentiment is proposed, computing similarity at the level of expressed meaning. The method preprocesses sentences with the HIT LTP platform, extracting words, part-of-speech tags, syntactic dependency labels, and semantic role labels. Constituents identified by semantic role labeling are treated as semantically independent components and assigned similarity weight coefficients; partial similarities between same-label components of the two sentences are computed by combining syntactic dependency and lexical relations, and the weighted partial similarities yield the overall sentence similarity. In addition, sentiment and sentence-pattern factors are considered: for sentence pairs meeting the relevant conditions, sentiment and sentence-pattern penalties are applied on top of the overall similarity. Experimental results show that the method effectively extracts the semantically independent components of sentences and computes similarity at the semantic level, addressing information loss and mismatched sentence constituents and improving the accuracy and robustness of sentence similarity computation.
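The scoring scheme can be sketched as a weighted sum of per-role partial similarities with a sentiment penalty (the role labels, weights, and penalty value below are hypothetical, not taken from the paper):

```python
def sentence_similarity(role_sims, role_weights, sentiment_a, sentiment_b,
                        sentiment_penalty=0.2):
    """Weighted sum of per-role partial similarities, reduced when the
    two sentences carry different sentiment."""
    total = sum(w * role_sims.get(role, 0.0)
                for role, w in role_weights.items())
    if sentiment_a != sentiment_b:
        total *= 1.0 - sentiment_penalty
    return total
```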
16.
A reading comprehension system analyzes and understands a natural-language text and automatically extracts or generates answers to questions a user asks about that text. This paper proposes an English reading comprehension extraction method that uses shallow semantic information. The semantic role labeling results of the question and of all candidate sentences are first represented as tree structures, and a tree kernel is used to compute the semantic-structure similarity between the question and each candidate sentence. This similarity score is combined with the word-match count obtained by a bag-of-words method, and the highest-scoring candidate is selected as the final answer sentence. On the Remedia test corpus, the method achieves 43.3% HumSent accuracy.
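The score-fusion step can be sketched as follows, with the tree-kernel structural similarities supplied as precomputed numbers (computing a real tree kernel is beyond this toy, and the fusion weight `alpha` is an assumption):

```python
def pick_answer(question_tokens, candidates, structure_sims, alpha=0.5):
    """Fuse structural similarity with bag-of-words overlap and return
    the index of the highest-scoring candidate sentence."""
    q = set(question_tokens)

    def score(i):
        overlap = len(q & set(candidates[i])) / max(len(q), 1)
        return alpha * structure_sims[i] + (1 - alpha) * overlap

    return max(range(len(candidates)), key=score)
```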
17.
Opinionated-sentence recognition, which identifies the subjective sentences in a document that carry sentiment orientation, is an important part of text sentiment analysis. The orientation of a Chinese sentence depends not only on opinion words but also on syntactic and semantic factors, so it cannot be determined simply from word-level polarity statistics. This paper proposes a Chinese opinionated-sentence recognition and classification algorithm based on an N-gram super kernel. The algorithm constructs an N-gram super kernel function from syntactic, semantic, and other sentence features, and uses a support vector machine classifier based on this kernel to recognize Chinese opinionated sentences. Experimental results show that, compared with single-kernel functions such as the polynomial kernel and the N-gram kernel, the algorithm recognizes opinionated sentences more effectively.
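A single N-gram kernel, the building block the paper's "super kernel" combines with syntactic and semantic features, can be sketched as a character n-gram spectrum kernel (this is a generic formulation, not necessarily the paper's exact kernel):

```python
from collections import Counter

def ngram_kernel(s, t, n=2):
    """Character n-gram spectrum kernel: the inner product of the two
    strings' n-gram count vectors."""
    def grams(x):
        return Counter(x[i:i + n] for i in range(len(x) - n + 1))
    cs, ct = grams(s), grams(t)
    return sum(c * ct[g] for g, c in cs.items())
```

A "super kernel" would then be a (weighted) combination of several such valid kernels, which is itself a valid kernel for an SVM.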
18.
We have developed a broadcasting agent system, public opinion channel (POC) caster, which generates understandable conversational form from text-based documents. The POC caster circulates the opinions of community members by using conversational form in a broadcasting system on the Internet. We evaluated its transformation rules in two experiments. In experiment 1, we examined our transformation rules for conversational form in relation to sentence length: twenty-four participants listened to two types of sentence (long and short) presented in conversational form or as single speech. In experiment 2, we investigated the relationship between conversational form and the user's knowledge level: forty-two participants (21 with a high knowledge level and 21 with a low knowledge level) were selected with a knowledge task and listened to two kinds of sentence (about a well-known topic or about an unfamiliar topic). Our results indicate that conversational form aided comprehension, especially for long sentences and when users had little knowledge of the topic. We explore possible explanations and implications of these results with regard to human cognition and text comprehension.
19.
This paper proposes IRMatch, an image-sentence matching model based on entailment relations, aimed at the non-equivalent matching between the semantics of the two modalities of images and sentences. Building on convolutional neural networks that map images and sentences into a common semantic space, the IRMatch model mines entailment relations between images and sentences by introducing a maximum-soft-margin learning strategy, strengthening the proximity of related image-sentence pairs in the common semantic space and improving the plausibility of image-sentence matching scores. Based on the IRMatch model, a bidirectional image-text retrieval method is implemented and compared on the Flickr8k, Flickr30k, and Microsoft COCO datasets with bidirectional retrieval methods based on existing image-sentence matching models. Experimental results show that the IRMatch-based retrieval method outperforms methods based on existing models in R@1, R@5, R@10, and Med r on all three datasets.
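The margin-based training objective behind such matching models can be sketched as a plain hinge ranking loss over matching scores (the paper's maximum-soft-margin variant differs in detail; this shows only the general idea):

```python
def ranking_loss(scores, positive_index, margin=1.0):
    """Max-margin ranking loss: every mismatched image-sentence pair
    should score at least `margin` below the matched pair."""
    pos = scores[positive_index]
    return sum(max(0.0, margin - pos + s)
               for i, s in enumerate(scores) if i != positive_index)
```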
20.
Sentence similarity based on semantic nets and corpus statistics
Li Y., McLean D., Bandar Z.A., O'Shea J.D., Crockett K. IEEE Transactions on Knowledge and Data Engineering, 2006, 18(8): 1138-1150
Sentence similarity measures play an increasingly important role in text-related research and applications in areas such as text mining, Web page retrieval, and dialogue systems. Existing methods for computing sentence similarity have been adopted from approaches used for long text documents. These methods process sentences in a very high-dimensional space and are consequently inefficient, require human input, and are not adaptable to some application domains. This paper focuses directly on computing the similarity between very short texts of sentence length. It presents an algorithm that takes account of semantic information and word order information implied in the sentences. The semantic similarity of two sentences is calculated using information from a structured lexical database and from corpus statistics. The use of a lexical database enables our method to model human common sense knowledge and the incorporation of corpus statistics allows our method to be adaptable to different domains. The proposed method can be used in a variety of applications that involve text knowledge representation and discovery. Experiments on two sets of selected sentence pairs demonstrate that the proposed method provides a similarity measure that shows a significant correlation to human intuition.
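The combination of semantic and word-order similarity described above can be sketched as follows; the word-order formula matches the abstract's general scheme, while the weight `delta` and the input vectors are illustrative assumptions.

```python
from math import sqrt

def word_order_similarity(r1, r2):
    """Similarity of two word-order vectors: 1 - |r1 - r2| / |r1 + r2|."""
    diff = sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))
    total = sqrt(sum((a + b) ** 2 for a, b in zip(r1, r2)))
    return 1.0 - diff / total if total else 1.0

def sentence_similarity(semantic_sim, order_sim, delta=0.85):
    """Convex combination of semantic and word-order similarity;
    delta > 0.5 reflects the greater weight given to semantics."""
    return delta * semantic_sim + (1 - delta) * order_sim
```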