Similar Documents
 20 similar documents found (search time: 187 ms)
1.
The Cross-Language Evaluation Forum has encouraged research in text retrieval methods for numerous European languages and has developed durable test suites that allow language-specific techniques to be investigated and compared. The labor associated with crafting a retrieval system that takes advantage of sophisticated linguistic methods is daunting. We examine whether language-neutral methods can achieve accuracy comparable to language-specific methods with less concomitant software complexity. Using the CLEF 2002 test set we demonstrate empirically how overlapping character n-gram tokenization can provide retrieval accuracy that rivals the best current language-specific approaches for European languages. We show that n = 4 is a good choice for those languages, and document the increased storage and time requirements of the technique. We report on the benefits of and challenges posed by n-grams, and explain peculiarities attendant to bilingual retrieval. Our findings demonstrate clearly that accuracy using n-gram indexing rivals or exceeds accuracy using unnormalized words, for both monolingual and bilingual retrieval.
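As a minimal illustration of the overlapping character n-gram tokenization described above (the function name and the word-boundary padding convention are assumptions, not from the paper):

```python
def char_ngrams(text, n=4):
    """Overlapping character n-grams, the language-neutral tokenization
    evaluated above (n = 4 worked well for European languages).
    Underscore padding marks word edges; that detail is an assumption."""
    text = f"_{text.lower()}_"
    return [text[i:i + n] for i in range(len(text) - n + 1)]
```

Indexing these n-grams in place of words avoids language-specific stemming or decompounding at the cost of a larger index.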

2.
Most current methods for automatic text categorization are based on supervised learning techniques and therefore face the problem of requiring a great number of training instances to construct an accurate classifier. To tackle this problem, this paper proposes a new semi-supervised method for text categorization, which considers the automatic extraction of unlabeled examples from the Web and the application of an enriched self-training approach for the construction of the classifier. This method, although language independent, is most pertinent for scenarios where large sets of labeled resources do not exist, as is the case for several application domains in non-English languages such as Spanish. The experimental evaluation of the method was carried out on three different tasks and in two different languages. The results demonstrate the applicability and usefulness of the proposed method.

3.
Cross-language information retrieval (CLIR) has so far been studied with the assumption that some rich linguistic resources such as bilingual dictionaries or parallel corpora are available. But creation of such high quality resources is labor-intensive and they are not always at hand. In this paper we investigate the feasibility of using only comparable corpora for CLIR, without relying on other linguistic resources. Comparable corpora are text documents in different languages that cover similar topics and are often naturally attainable (e.g., news articles published in different languages at the same time period). We adapt an existing cross-lingual word association mining method and incorporate it into a language modeling approach to cross-language retrieval. We investigate different strategies for estimating the target query language models. Our evaluation results on the TREC Arabic–English cross-lingual data show that the proposed method is effective for the CLIR task, demonstrating that it is feasible to perform cross-lingual information retrieval with just comparable corpora.
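The cross-lingual word association mining step can be sketched as a simple co-occurrence statistic over comparable document pairs: words that repeatedly appear in documents published together get linked. This is a bare-bones stand-in for the method the paper adapts, with all names illustrative:

```python
from collections import defaultdict

def cooccurrence_assoc(pairs):
    """Estimate P(target word | source word) from comparable document
    pairs by counting document-level co-occurrence. A toy stand-in for
    the association mining method the paper adapts."""
    count = defaultdict(int)
    src_count = defaultdict(int)
    for src_doc, tgt_doc in pairs:
        for s in set(src_doc):
            src_count[s] += 1
            for t in set(tgt_doc):
                count[(s, t)] += 1
    return {st: c / src_count[st[0]] for st, c in count.items()}
```

Such association scores can then feed the target-side query language model in a language-modeling retrieval framework.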

4.
Multilingual retrieval (querying of multiple document collections, each in a different language) can be achieved by combining several individual techniques which enhance retrieval: machine translation to cross the language barrier, relevance feedback to add words to the initial query, decompounding for languages with complex term structure, and data fusion to combine monolingual retrieval results from different languages. Using the CLEF 2001 and CLEF 2002 topics and document collections, this paper evaluates these techniques within the context of a monolingual document ranking formula based upon logistic regression. Each individual technique yields improved performance over runs which do not utilize that technique. Moreover, the techniques are complementary, in that combining the best techniques outperforms individual technique performance. An approximate but fast document translation using bilingual wordlists created from machine translation systems is presented and evaluated. The fast document translation is as effective as query translation in multilingual retrieval. Furthermore, when fast document translation is combined with query translation in multilingual retrieval, the performance is significantly better than that of query translation or fast document translation alone.
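The data-fusion step can be illustrated with a CombSUM-style merge of per-language result lists; the abstract does not give the exact fusion formula, so this is a hedged sketch under that assumption:

```python
def combsum(ranked_lists):
    """Merge per-language result lists by summing min-max normalized
    scores (CombSUM-style fusion; the paper's exact formula may differ).
    Each element of ranked_lists maps doc_id -> retrieval score."""
    fused = {}
    for results in ranked_lists:
        lo, hi = min(results.values()), max(results.values())
        span = (hi - lo) or 1.0          # guard against constant scores
        for doc, score in results.items():
            fused[doc] = fused.get(doc, 0.0) + (score - lo) / span
    return sorted(fused, key=fused.get, reverse=True)
```

Normalizing before summing keeps one language's score scale from dominating the merged ranking.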

5.
[Purpose/Significance] Shifting opinion extraction from massive volumes of academic text from manual work to machines improves efficiency while preserving the accuracy and objectivity of the extracted opinions. [Method/Process] The UniLM unified pre-trained language model is used, fine-tuned during training, with machine learning on a manually annotated dataset. An academic abstract is treated as a text sequence of length a; through machine learning, a sentence sequence of length b is generated (a ≥ b) and output as the opinion sentences of the paper. [Result/Conclusion] The results show that the UniLM model achieves opinion-generation accuracy of 94.36%, 77.27%, and 57.43% on structured, semi-structured, and unstructured abstracts respectively, performing best on structured abstracts. Applying a machine learning model to opinion generation from long text offers a new approach to generating the opinion sentences of academic papers. A limitation is that the model depends on the structural regularity of the abstract, so its performance on unstructured abstracts is weaker.

6.
赵洪 《情报学报》2020,(3):330-344
Automatic summarization is one of the main tasks of text mining. Compared with extractive summarization, abstractive summarization is conceptually closer to the process of human summarization and is therefore of significant research interest. In recent years, with the development of deep learning, abstractive summarization based on deep neural network models has made remarkable progress. To give a fuller picture of the ideas behind these methods and the state of the research, this paper starts from the task definition of abstractive summarization and surveys five categories of deep learning approaches: models based on RNNs (recurrent neural networks), models based on CNNs (convolutional neural networks), RNN+CNN hybrid models, models incorporating attention mechanisms, and models incorporating reinforcement learning. These methods show that, with deep neural network training, and especially after incorporating attention mechanisms and reinforcement learning, summarization quality improves markedly. In future research on abstractive summarization, beyond the continued application and improvement of deep learning methods themselves, attention is needed on summarization grounded in discourse-level semantic understanding, summarization tailored to the characteristics of different text types, and automatic evaluation of summarization results. In addition, how to combine mature techniques from traditional summarization research to further improve summary quality is also a valuable research direction.

7.
[Purpose/Significance] Statistical analysis of sentence length and vocabulary distribution, from multiple dimensions, on a word-segmented People's Daily corpus of the new era helps characterize the linguistic features of contemporary Chinese text and supports subsequent research in natural language processing and text mining. [Method/Process] Based on the word-segmented People's Daily corpus of January 2018, combined with the corpus of January 1998, six sentence categories were defined for the statistics; sentence-length distributions were computed and analyzed in units of both characters and words, and the static distribution of vocabulary was examined via Zipf's law. [Result/Conclusion] Judging from the sentence-length distributions at the character and word levels and the Zipfian distribution of vocabulary, the distributions of sentence length and vocabulary both changed between the 1998 and 2018 corpora over time, but the changes are continuous and correlated.
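Zipf's law, used above to characterize the static vocabulary distribution, can be checked on any token list by tabulating rank times frequency, which under the law stays roughly constant across ranks:

```python
from collections import Counter

def zipf_table(tokens):
    """Rank-frequency pairs for a token list. Under Zipf's law the
    product rank * frequency is roughly constant across ranks."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    return [(rank, f, rank * f) for rank, f in enumerate(freqs, start=1)]
```

Comparing these tables for the 1998 and 2018 corpora is one way to quantify how the vocabulary distribution shifted.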

8.
This paper systematically studies query-focused opinion summarization, aiming to build a framework of query-focused opinion summarization models and to examine how different summarization methods affect summary quality. By jointly considering sentiment orientation and sentence similarity, sentences to be summarized are extracted from the retrieved documents, and summaries are then generated by combining neural networks with word-embedding techniques, yielding a query-focused opinion summarization framework. An experimental dataset of topics and arguments was built by crawling Debatepedia, and the proposed methods were applied to it to compare the models. The results show that, on this dataset, the purely extractive method produced the highest-quality opinion summaries, achieving the best average ROUGE, deep semantic similarity, and sentiment scores, which exceed the abstractive method by 6.58%, 1.79%, and 11.52%, and the hybrid method by 8.33%, 2.80%, and 13.86%, respectively. The proposed deep semantic sentence-similarity and sentiment-score metrics also help to better evaluate query-focused opinion summarization models. These findings are significant for improving query-focused opinion summarization and for promoting the application of opinion summarization models in information science.

9.
This paper deals with a supervised learning method devoted to producing categorization models of text documents. The goal of the method is to use a suitable numerical measurement of example similarity to find centroids describing different categories of examples. The centroids are not abstract or statistical models, but rather consist of bits of examples. The centroid-learning method is based on a Genetic Algorithm for Texts (GAT). The categorization system using this genetic algorithm infers a model by applying the genetic algorithm to each set of preclassified documents belonging to a category. The models thus obtained are the category centroids that are used to predict the category of a test document. The experimental results validate the utility of this approach for classifying incoming documents.
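The prediction step of a centroid-based categorizer can be sketched as nearest-centroid assignment under cosine similarity; note that in the paper the centroids are learned with a genetic algorithm (GAT) rather than computed directly, so the vectors below merely stand in for GAT-learned centroids:

```python
def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def predict(doc_vec, centroids):
    """Assign a document vector to the category whose centroid it is
    most similar to (centroids: {label: vector})."""
    return max(centroids, key=lambda c: cosine(doc_vec, centroids[c]))
```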

10.
Technical term translations are important for cross-lingual information retrieval. In many languages, new technical terms have a common origin rendered with different spellings of the underlying sounds, also known as cross-lingual spelling variants (CLSV). To find the best CLSV in a text database index, we formulate the problem in a probabilistic framework and implement it as an instance of the general edit distance using weighted finite-state transducers. Some training data is required to estimate the costs for the general edit distance. We demonstrate that after some basic training our new multilingual model is robust and requires little or no adaptation to cover additional languages, as the model takes advantage of language-independent transliteration patterns. We train the model with medical terms in seven languages and test it with terms from varied domains in six languages. Two test languages are not in the training data. Against a large text database index, we achieve 64–78% precision at the point of 100% recall, a relative improvement of 22% over the simple edit distance.
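The general edit distance with learned costs can be sketched with a standard dynamic program; here a hand-written substitution-cost function stands in for the costs the paper estimates from training data, and the weighted finite-state transducer machinery is omitted:

```python
def weighted_edit_distance(s, t, sub_cost):
    """General edit distance with custom substitution costs
    (insertions and deletions cost 1). sub_cost(a, b) should return 0
    for identical characters and a small value for known cross-lingual
    letter correspondences such as k ~ c."""
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i
    for j in range(1, n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,              # delete s[i-1]
                          d[i][j - 1] + 1,              # insert t[j-1]
                          d[i - 1][j - 1] + sub_cost(s[i - 1], t[j - 1]))
    return d[m][n]
```

With trained costs, a known transliteration pair such as Finnish "kysta" vs. "cysta" scores much closer than under the unit-cost edit distance.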

11.
We augment naive Bayes models with statistical n-gram language models to address shortcomings of the standard naive Bayes text classifier. The result is a generalized naive Bayes classifier which allows for a local Markov dependence among observations; we refer to this model as the Chain Augmented Naive Bayes (CAN) classifier. CAN models have two advantages over standard naive Bayes classifiers. First, they relax some of the independence assumptions of naive Bayes, allowing a local Markov chain dependence in the observed variables, while still permitting efficient inference and learning. Second, they permit straightforward application of sophisticated smoothing techniques from statistical language modeling, which yields better parameter estimates than the standard Laplace smoothing used in naive Bayes classification. In this paper, we introduce CAN models and apply them to various text classification problems. To demonstrate the language-independent and task-independent nature of these classifiers, we present experimental results on several text classification problems (authorship attribution, text genre classification, and topic detection) in several languages: Greek, English, Japanese, and Chinese. We then systematically study the key factors in the CAN model that influence classification performance, and analyze the strengths and weaknesses of the model.
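A minimal sketch of the CAN idea: one character-bigram language model per class, with plain Laplace smoothing (the paper evaluates richer n-gram orders and more sophisticated smoothing methods, so treat this only as the skeleton of the technique):

```python
import math
from collections import defaultdict

class ChainAugmentedNB:
    """Chain-augmented naive Bayes sketch: each class is modeled by a
    Laplace-smoothed character-bigram language model, i.e. a first-order
    Markov chain over the observed characters. Class priors are assumed
    uniform for brevity."""
    def __init__(self):
        self.bigrams = defaultdict(lambda: defaultdict(int))
        self.unigrams = defaultdict(lambda: defaultdict(int))
        self.alphabet = set()

    def train(self, text, label):
        for a, b in zip(text, text[1:]):
            self.bigrams[label][(a, b)] += 1
            self.unigrams[label][a] += 1
            self.alphabet.update((a, b))

    def predict(self, text):
        v = len(self.alphabet)
        def score(label):
            return sum(math.log((self.bigrams[label][(a, b)] + 1) /
                                (self.unigrams[label][a] + v))
                       for a, b in zip(text, text[1:]))
        return max(self.bigrams, key=score)
```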

12.
Significant progress has been made in information retrieval, covering text semantic indexing and multilingual analysis. However, developments in Arabic information retrieval have not followed the extraordinary growth of Arabic usage on the Web during the last ten years. For tasks involving semantic analysis, it is preferable to deal directly with texts in their original language. Studies of topic models, which provide a good way to handle the semantics embedded in texts automatically, are not complete enough to assess the effectiveness of the approach on Arabic texts. This paper investigates several text stemming methods for Arabic topic modeling. A new lemma-based stemmer is described and applied to newspaper articles. The Latent Dirichlet Allocation model is used to extract latent topics from three real-world Arabic corpora. For supervised classification in the topic space, experiments show an improvement over classification in the full word space or with a root-based stemming approach. In addition, topic modeling with lemma-based stemming allows us to discover interesting subjects in the press articles published during the 2007–2009 period.

13.
Sentiment summarization aims to express the core sentiment content of an article accurately and concisely. To address the influence of differing document structures and content characteristics on summarization results, a topic-based SE-TextRank sentiment summarization method is proposed. Text topics are obtained automatically with an LDA model after convergence; sentences are grouped by topic using cosine distance; and the central sentences of each group are extracted with traditional multi-feature fusion combined with the SE-TextRank sentiment summarization algorithm to produce the final summary. Experiments show that this method obtains news summarization results more efficiently.
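The graph-ranking core of this family of methods can be illustrated with plain TextRank over a sentence-similarity matrix; the sentiment features that distinguish SE-TextRank, and the LDA topic grouping, are omitted in this sketch:

```python
def textrank(sim, d=0.85, iters=50):
    """Plain TextRank over a symmetric sentence-similarity matrix.
    Returns one centrality score per sentence; higher means more
    central. SE-TextRank additionally folds sentiment features into
    the edge weights."""
    n = len(sim)
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                if i == j or sim[j][i] == 0:
                    continue
                out = sum(sim[j][k] for k in range(n) if k != j)
                rank += sim[j][i] / out * scores[j]
            new.append((1 - d) / n + d * rank)
        scores = new
    return scores
```

The top-scoring sentence of each topic group would then be taken as that group's central sentence.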

14.
Automatic keyword indexing is a technique for identifying meaningful and representative fragments or terms. It can support applications such as automatic summarization, classification, clustering, and machine translation. This paper improves the lexical-chain construction algorithm with a HowNet-based word semantic relatedness measure, and combines it with statistical information such as term frequency and term position to perform automatic keyword indexing. Experiments show that the method indexes keywords effectively.

15.
In recent years graph-ranking based algorithms have been proposed for single document summarization and generic multi-document summarization. The algorithms make use of the “votings” or “recommendations” between sentences to evaluate the importance of the sentences in the documents. This study aims to differentiate the cross-document and within-document relationships between sentences for generic multi-document summarization and adapt the graph-ranking based algorithm for topic-focused summarization. The contributions of this study are two-fold: (1) For generic multi-document summarization, we apply the graph-based ranking algorithm based on each kind of sentence relationship and explore their relative importance for summarization performance. (2) For topic-focused multi-document summarization, we propose to integrate the relevance of the sentences to the specified topic into the graph-ranking based method. Each individual kind of sentence relationship is also differentiated and investigated in the algorithm. Experimental results on DUC 2002–DUC 2005 data demonstrate the great importance of the cross-document relationships between sentences for both generic and topic-focused multi-document summarizations. Even the approach based only on the cross-document relationships can perform better than or at least as well as the approaches based on both kinds of relationships between sentences.

16.
Employing effective methods of sentence retrieval is essential for many tasks in Information Retrieval, such as summarization, novelty detection and question answering. The best performing sentence retrieval techniques attempt to perform matching directly between the sentences and the query. However, in this paper, we posit that the local context of a sentence can provide crucial additional evidence to further improve sentence retrieval. Using a Language Modeling Framework, we propose a novel reformulation of the sentence retrieval problem that extends previous approaches so that the local context is seamlessly incorporated within the retrieval models. In a series of comprehensive experiments, we show that localized smoothing and the prior importance of a sentence can improve retrieval effectiveness. The proposed models significantly and substantially outperform the state of the art and other competitive sentence retrieval baselines on recall-oriented measures, while remaining competitive on precision-oriented measures. This research demonstrates that local context plays an important role in estimating the relevance of a sentence, and that existing sentence retrieval language models can be extended to utilize this evidence effectively.
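One way to realize the localized-smoothing idea is Dirichlet-smoothed query likelihood in which a sentence's language model is smoothed with its surrounding local context rather than (or before) the whole collection; the interpolation details below, including the add-one guard against zero probabilities, are assumptions, not the paper's exact model:

```python
import math
from collections import Counter

def sentence_score(query, sentence, context, mu=10.0):
    """Score a sentence (list of terms) against a query by Dirichlet-
    smoothed query likelihood, backing off to the sentence's local
    context instead of a global collection model. A hedged sketch of
    localized smoothing, not the paper's exact formulation."""
    s, c = Counter(sentence), Counter(context)
    s_len, c_len = len(sentence), max(len(context), 1)
    score = 0.0
    for w in query:
        p = (s[w] + mu * (c[w] + 1) / (c_len + 1)) / (s_len + mu)
        score += math.log(p)
    return score
```

Under this scoring, a sentence whose neighbors also mention a query term is preferred over one in an unrelated context.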

17.
[Purpose/Significance] The structural function of academic text summarizes the structure and section-level functions of academic documents. Addressing the facts that current research rarely fuses evidence from the multiple structural levels of academic text, and that traditional methods rely on manually crafted rules or features, this paper parses the hierarchical structure of academic text and builds a multi-level fusion model for recognizing its structural functions. [Method/Process] Experiments were conducted on a ScienceDirect dataset. The model first recognizes structural functions at different text levels with deep learning methods, and then fuses the recognition results of the different levels and models by voting. [Result/Conclusion] The results show that the ensembled result at each level improves on any single model to varying degrees; the overall precision, recall, and F1 of the combined result reach 86%, 84%, and 84% respectively, and the deep learning algorithms outperform the traditional machine learning algorithm SVM on this academic text classification task. Finally, misclassification cases of structural function are analyzed, and potential application areas and next research steps are identified.

18.
Based on an understanding of the automatic summarization pipeline and a survey of important research at home and abroad, this paper reviews several basic problems in automatic summarization and their main solutions, covering word segmentation, redundancy control, quality evaluation, summarization of short texts, and multilingual and cross-language summarization, and offers an outlook on the development of some of these research topics, in the hope of providing a useful reference for future research in automatic summarization and natural language processing.

19.
曹洋  成颖  裴雷 《图书情报工作》2014,58(18):122-130
This paper discusses the main stages of machine-learning-based automatic summarization: feature selection, algorithm choice, model training, summary extraction, and model evaluation. It focuses on three major machine learning algorithms, naive Bayes, hidden Markov models, and conditional random fields, explaining the basic ideas of each and, on the basis of a systematic review of the related research, offering the authors' reflections. Common issues of the three algorithms concerning training methods, co-training and active learning, class imbalance, and vocabulary distribution are discussed in depth, and the main directions for future research are proposed.

20.
Efficient algorithms for ranking with SVMs
RankSVM (Herbrich et al. in Advances in large margin classifiers. MIT Press, Cambridge, MA, 2000; Joachims in Proceedings of the ACM conference on knowledge discovery and data mining (KDD), 2002) is a pairwise method for designing ranking models. SVMLight is the only publicly available software for RankSVM. It is slow and, due to incomplete training with it, previous evaluations show RankSVM to have inferior ranking performance. We propose new methods based on the primal Newton method to speed up RankSVM training, and show that they are five orders of magnitude faster than SVMLight. Evaluation on the Letor benchmark datasets after complete training with these methods shows that the performance of RankSVM is excellent.
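The pairwise objective behind RankSVM can be illustrated with a toy subgradient-descent trainer on the hinge loss max(0, 1 - w·(x_pos - x_neg)); the paper's contribution, a primal Newton method, is far faster on large data but longer to sketch:

```python
def ranksvm_sgd(pairs, dim, lr=0.1, lam=0.01, epochs=100):
    """Train a linear ranking function on preference pairs
    (x_pos should rank above x_neg) by subgradient descent on the
    regularized pairwise hinge loss. A toy stand-in for the primal
    Newton training the paper proposes."""
    w = [0.0] * dim
    for _ in range(epochs):
        for x_pos, x_neg in pairs:
            diff = [a - b for a, b in zip(x_pos, x_neg)]
            margin = sum(wi * di for wi, di in zip(w, diff))
            for i in range(dim):
                grad = lam * w[i] - (diff[i] if margin < 1 else 0.0)
                w[i] -= lr * grad
    return w

def rank_score(w, x):
    """Linear ranking score w . x."""
    return sum(wi * xi for wi, xi in zip(w, x))
```

After training, documents are sorted by `rank_score`; the pairwise loss only constrains relative order, not absolute scores.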


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)    京ICP备09084417号-23
