Similar Documents
20 similar documents found.
1.
Statistical approaches in speech technology, whether used for statistical language models, trees, hidden Markov models or neural networks, represent the driving forces for the creation of language resources (LR), e.g., text corpora, pronunciation and morphology lexicons, and speech databases. This paper presents a system architecture for the rapid construction of morphologic and phonetic lexicons, two of the most important written language resources for the development of ASR (automatic speech recognition) and TTS (text-to-speech) systems. The presented architecture is modular and is particularly suitable for the development of written language resources for inflectional languages. In this paper an implementation is presented for the Slovenian language. The integrated graphical user interface focuses on the morphological and phonetic aspects of language and allows experts to work productively during analysis. In multilingual TTS systems, many extensive external written language resources are used, especially in the text-processing part. It is therefore very important that the representation of these resources is time and space efficient, and that language resources for new languages can be easily incorporated into the system without modifying the common algorithms developed for multiple languages. In this regard the use of large external language resources (e.g., morphology and phonetic lexicons) represents an important problem because of the required space and slow look-up time. This paper presents a method, and its results, for compiling large lexicons into corresponding finite-state transducers (FSTs), using the German phonetic and morphology lexicons (CISLEX) and the Slovenian phonetic (SIflex) and morphology (SImlex) lexicons as examples. The German lexicons consisted of about 300,000 words, SIflex of about 60,000 words and SImlex of about 600,000 words (of which 40,000 were used for the finite-state-transducer representation). Representing large lexicons as finite-state transducers is mainly motivated by considerations of space and time efficiency, and a great reduction in size and optimal access time was achieved for all lexicons. The starting size was 12.53 MB for the German phonetic lexicon, 18.49 MB for the German morphology lexicon, 1.8 MB for the Slovenian phonetic lexicon and 1.4 MB for the Slovenian morphology lexicon. The final size of the corresponding FSTs was 2.78 MB for the German phonetic lexicon, 6.33 MB for the German morphology lexicon, 253 KB for SIflex and 662 KB for SImlex. The achieved look-up time is optimal, since it depends only on the length of the input word and not on the size of the lexicon. With such representations, integrating lexicons for new languages into the multilingual TTS system is easy and does not require any changes to the algorithms that use them.
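As a rough illustration of why transducer-style representations give look-up time that depends only on word length, here is a minimal Python sketch (not the authors' implementation) that compiles a tiny pronunciation lexicon into a prefix tree, the simplest special case of an FST; the toy entries are invented.

```python
# Minimal sketch: a pronunciation lexicon compiled into a deterministic
# prefix tree. Look-up walks one arc per input character, so the cost
# depends only on the word length, never on the lexicon size.

def compile_lexicon(entries):
    """entries: dict mapping orthographic word -> phonetic transcription."""
    root = {}
    for word, phones in entries.items():
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["#"] = phones          # end-of-word arc carrying the output
    return root

def lookup(trie, word):
    node = trie
    for ch in word:                 # one transition per character
        if ch not in node:
            return None
        node = node[ch]
    return node.get("#")

# Hypothetical toy entries standing in for SIflex-style data.
fst = compile_lexicon({"miza": "'mi:za", "mizar": "'mi:zar"})
print(lookup(fst, "miza"))          # 'mi:za
```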

2.
Berkeley FrameNet is a lexico-semantic resource for English based on the theory of frame semantics. It has been exploited in a range of natural language processing applications and has inspired the development of framenets for many languages. We present a methodological approach to the extraction and generation of a computational multilingual FrameNet-based grammar and lexicon. The approach leverages FrameNet-annotated corpora to automatically extract a set of cross-lingual semantico-syntactic valence patterns. Based on data from Berkeley FrameNet and Swedish FrameNet, the proposed approach has been implemented in Grammatical Framework (GF), a categorial grammar formalism specialized for multilingual grammars. The implementation of the grammar and lexicon is supported by the design of FrameNet, providing a frame-semantic abstraction layer, an interlingual semantic application programming interface (API), over the interlingual syntactic API already provided by the GF Resource Grammar Library. The evaluation of the acquired grammar and lexicon shows the feasibility of the approach. Additionally, we illustrate how the FrameNet-based grammar and lexicon are exploited in two distinct multilingual controlled natural language applications. The produced resources are available under an open-source license.
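To make the notion of a cross-lingual valence pattern concrete, the following hypothetical Python sketch groups FrameNet-style annotations by frame and frame-element/grammatical-function sequence; the field names and the two toy annotations are assumptions, not the resources' actual data model.

```python
from collections import defaultdict

annotations = [  # hypothetical FrameNet-style annotated examples
    {"frame": "Self_motion", "lang": "eng", "lu": "walk.v",
     "fes": [("Self_mover", "NP.Ext"), ("Goal", "PP[to].Dep")]},
    {"frame": "Self_motion", "lang": "swe", "lu": "gå.v",
     "fes": [("Self_mover", "NP.Ext"), ("Goal", "PP[till].Dep")]},
]

# A valence pattern here is the frame plus the ordered frame elements and
# their grammatical functions; lexical units from different languages that
# realise the same pattern are grouped together.
patterns = defaultdict(set)
for a in annotations:
    key = (a["frame"], tuple((fe, gf.split(".")[-1]) for fe, gf in a["fes"]))
    patterns[key].add((a["lang"], a["lu"]))

for pattern, units in patterns.items():
    print(pattern, "->", sorted(units))
```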

3.
Wordnets have been created in many languages, revealing both their lexical commonalities and diversity. The next challenge is to make multilingual wordnets fully interoperable. The EuroWordNet experience revealed the shortcomings of an interlingua based on a natural language. Instead, we propose a model based on the division of the lexicon and a language-independent, formal ontology that serves as the hub interlinking the language-specific lexicons. The ontology avoids the idiosyncrasies of the lexicon and furthermore allows formal reasoning about the concepts it contains. We address the division of labor between ontology and lexicon. Finally, we illustrate our model in the context of a domain-specific multilingual information system based on a central ontology and interconnected wordnets in seven languages.

4.
5.

This paper proposes a multilingual audio information management system based on semantic knowledge in complex environments. The complex environment is defined by the limited resources (financial, material, human, and audio resources); the poor quality of the audio signal taken from an internet radio channel; the multilingual context (Spanish, French, and Basque, the last of which is under-resourced in some areas); and the regular appearance of cross-lingual elements among the three languages. In addition, the system is constrained by the requirements of the local multilingual industrial sector. We present the first evolutionary system, built on a scalable architecture, that fulfills these specifications through automatic adaptation using semantic speech recognition, folksonomies, automatic configuration selection, machine learning, neural computing methodologies, and collaborative networks. The initial goals have been accomplished, and the usability of the final application has been tested successfully, even with inexperienced users.


6.
This paper proposes SMIPP, a semantics-based multilingual information processing platform model oriented toward natural language processing. The model consists of an application/user interface layer, text input and output layers, an information processing service layer, a corpus layer, a multilingual code system (SemaCode) layer, and a language ontology layer. The platform represents the text of every language uniformly with self-describing SemaCode, uses the language ontology to express word semantics and the relations between words across languages, and provides corpus-based text information processing functions in the form of services, making it an entirely new model for multilingual information processing.

7.
A Survey of Automatic Sentiment Lexicon Construction Methods
王科  夏睿 《自动化学报》2016,42(4):495-511
Sentiment lexicons are an important tool for determining the sentiment orientation of words and texts, and their automatic construction has become an important research topic in sentiment analysis and opinion mining. This paper surveys existing Chinese and English sentiment lexicon resources and summarizes current construction methods for English and Chinese sentiment lexicons from the perspectives of knowledge bases, corpora, and combinations of the two, analyzing the advantages and disadvantages of each method and summarizing several difficult problems in sentiment lexicon construction. We then review methods for evaluating sentiment lexicon quality and related evaluation campaigns. Finally, we discuss the outlook for sentiment lexicon construction and some problems that urgently need to be solved.

8.
Language Resources and Evaluation - This paper describes the development of a multilingual, manually annotated dataset for three under-resourced Dravidian languages generated from social media...

9.
Many tasks related to sentiment analysis rely on sentiment lexicons, lexical resources containing information about the emotional implications of words (e.g., whether a word's sentiment orientation is positive or negative). In this work, we present an automatic method for building lemma-level sentiment lexicons, which has been applied to obtain lexicons for English, Spanish, and three other official languages of Spain. Our lexicons are multi-layered, allowing applications to trade off between the number of available words and the accuracy of the estimations. Our evaluations show high accuracy values in all cases. As a preliminary step toward the lemma-level lexicons, we have built a synset-level lexicon for English similar to SentiWordNet 3.0, one of the most widely used sentiment lexicons today. We have made several improvements to the original SentiWordNet 3.0 building method, yielding significantly better estimations of positivity and negativity according to our evaluations. The resource containing all the lexicons, ML-SentiCon, is publicly available.
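The layering idea can be illustrated with a small, hypothetical Python sketch: synset-level scores are averaged per lemma, and lemmas are assigned to layers according to how consistent the underlying evidence is. The numbers, thresholds, and layer scheme below are invented, not ML-SentiCon's.

```python
# Aggregate hypothetical synset-level scores to lemma level, then group
# lemmas into layers by how consistent the evidence is (coverage/accuracy
# trade-off). Layer 1 = most reliable, layer 3 = broadest.
from statistics import mean, pstdev

synset_scores = {            # synset id -> (positivity, negativity)
    "good.a.01": (0.75, 0.0),
    "good.a.02": (0.50, 0.0),
    "estimable.a.02": (0.625, 0.0),
}
lemma_to_synsets = {"good": ["good.a.01", "good.a.02", "estimable.a.02"]}

layers = {1: {}, 2: {}, 3: {}}
for lemma, synsets in lemma_to_synsets.items():
    polarities = [synset_scores[s][0] - synset_scores[s][1] for s in synsets]
    score, spread = mean(polarities), pstdev(polarities)
    layer = 1 if spread < 0.05 else 2 if spread < 0.15 else 3
    layers[layer][lemma] = round(score, 3)

print(layers)
```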

10.
Parallel corpora encode extremely valuable linguistic knowledge about paired languages, both in terms of vocabulary and syntax. A professional translation of a text represents a series of linguistic decisions made by the translator in order to convey as faithfully as possible the meaning of the original text and to produce a natural text from the perspective of a native speaker of the target language. The naturalness of a translation implies not only the grammaticality of the translated text, but also style and cultural or social specificity. We describe a program that exploits the knowledge embedded in parallel corpora and produces a set of translation equivalents (a translation lexicon). The program uses almost no linguistic knowledge, relying on statistical evidence and some simplifying assumptions. Our experiments were conducted on the MULTEXT-EAST multilingual parallel corpus (Orwell's 1984), and the evaluation of system performance is presented in some detail in terms of precision, recall and processing time. We conclude by briefly mentioning some applications of the automatically extracted lexicons for text and speech processing.
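A minimal sketch of the general idea, statistical association over sentence-aligned text rather than the paper's actual algorithm, using the Dice coefficient on invented toy data:

```python
# Score candidate translation pairs from a sentence-aligned corpus with a
# simple association measure (Dice coefficient) and keep the best pair.
from collections import Counter
from itertools import product

aligned = [  # hypothetical sentence pairs
    (["big", "brother", "is", "watching"], ["fratele", "cel", "mare", "priveste"]),
    (["brother"], ["fratele"]),
]

src_freq, tgt_freq, pair_freq = Counter(), Counter(), Counter()
for src, tgt in aligned:
    for w in set(src):
        src_freq[w] += 1
    for w in set(tgt):
        tgt_freq[w] += 1
    for s, t in product(set(src), set(tgt)):
        pair_freq[(s, t)] += 1

def dice(s, t):
    return 2 * pair_freq[(s, t)] / (src_freq[s] + tgt_freq[t])

best = max(pair_freq, key=lambda p: dice(*p))
print(best, round(dice(*best), 2))   # ('brother', 'fratele') scores highest
```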

11.
To address the problem that sentiment lexicon construction usually reflects only linguistic knowledge and lacks pragmatic knowledge, this paper proposes a semi-supervised method for building a Chinese positive/negative lexicon that acquires word co-occurrence relations from real corpora and combines them with synonym relations and morpheme features. Pointwise mutual information is used to build a relevance matrix between sentiment words and evaluation targets from the corpus; non-negative matrix factorization then decomposes this matrix into a co-occurrence matrix among sentiment words and a new sentiment-word/evaluation-target relation matrix. The relation matrix, combined with synonym and morpheme features, is fed into a label propagation algorithm to classify words as positive or negative. Experimental results show that, on the same dataset, the method improves precision and recall over lexicons that consider only morpheme and semantic features.
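A rough sketch of the first two steps (a PMI-style relevance matrix, then non-negative matrix factorization) on invented counts; positive PMI is used here because scikit-learn's NMF requires non-negative input, and the label-propagation step over synonym and morpheme features is not shown.

```python
import numpy as np
from sklearn.decomposition import NMF

# Co-occurrence counts: rows = sentiment words, columns = evaluation targets.
counts = np.array([[8, 1, 0],
                   [7, 2, 1],
                   [0, 1, 9]], dtype=float)

total = counts.sum()
p_xy = counts / total
p_x = counts.sum(axis=1, keepdims=True) / total
p_y = counts.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore"):
    pmi = np.log(p_xy / (p_x * p_y))
ppmi = np.maximum(pmi, 0)           # clip negatives so NMF can be applied

# Factorise the word-target relevance matrix into two low-rank factors.
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(ppmi)       # sentiment-word factor
H = model.components_               # evaluation-target factor
print(W.round(2), H.round(2), sep="\n")
```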

12.
This paper addresses the problem of automatic acquisition of lexical knowledge for rapid construction of engines for machine translation and embedded multilingual applications. We describe new techniques for large-scale construction of a Chinese–English verb lexicon and we evaluate the coverage and effectiveness of the resulting lexicon. Leveraging an existing Chinese conceptual database called HowNet and a large, semantically rich English verb database, we use thematic-role information to create links between Chinese concepts and English classes. We apply the metrics of recall and precision to evaluate the coverage and effectiveness of the linguistic resources. The results of this work indicate that: (a) we are able to obtain reliable Chinese–English entries both with and without pre-existing semantic links between the two languages; (b) if we have pre-existing semantic links, we are able to produce a more robust lexical resource by merging these with our semantically rich English database; (c) in our comparisons with manual lexicon creation, our automatic techniques achieved 62% precision, compared to a much lower precision of 10% for arbitrary assignment of semantic links.
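The role-based linking can be pictured with a toy sketch: a Chinese concept is mapped to the English verb class whose thematic-role set overlaps most. The role inventories and class labels below are invented for illustration and are not taken from HowNet or the English verb database.

```python
# Link each Chinese concept to the English verb class with the highest
# Jaccard overlap between thematic-role sets (toy data throughout).
hownet_concepts = {
    "买|buy": {"agent", "possession", "source"},
    "跑|run": {"agent"},
}
english_classes = {
    "get-class":  {"agent", "possession", "source"},
    "run-class":  {"agent", "theme"},
}

def best_link(roles):
    return max(english_classes,
               key=lambda c: len(roles & english_classes[c]) /
                             len(roles | english_classes[c]))

for concept, roles in hownet_concepts.items():
    print(concept, "->", best_link(roles))
```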

13.
Language Resources and Evaluation - Comparable corpora can benefit the development of Neural Machine Translation models, in particular for under-resourced languages. We present a case study centred...

14.
Context-based methods are among the most popular approaches to building bilingual lexicons from comparable corpora. In particular, the dependency-context model extracts a word's context features from the dependency trees of sentences; because dependency relations better capture the co-occurrence behaviour of words, this improves the performance of bilingual lexicon construction. Building on this, the paper further proposes a dependency-relation mapping model, which constructs the bilingual lexicon by simultaneously matching context words, dependency relation types, and relation directions in the dependency trees. Experiments on the FBIS corpus show that the method performs well on bilingual lexicon construction in both the Chinese-to-English and English-to-Chinese directions, demonstrating the effectiveness of the dependency-relation mapping model.
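A toy illustration of matching dependency contexts across languages, where each feature is a (translated context word, relation type, direction) triple and candidates are ranked by cosine similarity; the seed dictionary and counts are invented, and this is not the paper's exact model.

```python
from collections import Counter
import math

seed_dict = {"经济": "economy", "发展": "development"}

# Dependency-context features of the Chinese word "增长" (hypothetical counts).
zh_context = Counter({("经济", "nsubj", "head->dep"): 5,
                      ("发展", "conj", "dep->head"): 2})

# Dependency-context features of two English candidates (hypothetical counts).
en_contexts = {
    "growth": Counter({("economy", "nsubj", "head->dep"): 6,
                       ("development", "conj", "dep->head"): 3}),
    "table":  Counter({("kitchen", "nmod", "head->dep"): 4}),
}

def project(ctx):
    """Map source-side context words through the seed dictionary."""
    return Counter({(seed_dict.get(w, w), rel, direc): c
                    for (w, rel, direc), c in ctx.items()})

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

projected = project(zh_context)
ranking = sorted(en_contexts, key=lambda w: cosine(projected, en_contexts[w]),
                 reverse=True)
print(ranking)   # 'growth' should rank above 'table'
```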

15.
A Learned Lexicon-Driven Paradigm for Interactive Video Retrieval
Effective video retrieval is the result of interplay between interactive query selection, advanced visualization of results, and a goal-oriented human user. Traditional interactive video retrieval approaches emphasize paradigms such as query-by-keyword and query-by-example to aid the user in the search for relevant footage. However, recent results in automatic indexing indicate that query-by-concept is also becoming a viable resource for interactive retrieval. We propose in this paper a new video retrieval paradigm. The core of the paradigm is formed by first detecting a large lexicon of semantic concepts. From there, we combine query-by-concept, query-by-example, query-by-keyword, and user interaction into the MediaMill semantic video search engine. To measure the impact of increasing lexicon size on interactive video retrieval performance, we performed two experiments against the 2004 and 2005 NIST TRECVID benchmarks, using lexicons containing 32 and 101 concepts, respectively. The results suggest that, of all the factors that play a role in interactive retrieval, a large lexicon of semantic concepts matters most. Indeed, by exploiting large lexicons, many video search questions are solvable without using query-by-keyword or query-by-example. In addition, we show that the lexicon-driven search engine outperforms all state-of-the-art video retrieval systems in both TRECVID 2004 and 2005.

16.
17.
Dictionary-Based Recognition of Nominal Metaphors
Metaphor, a linguistic expression in which one thing is described in terms of another, is pervasive in natural language, so metaphor processing cannot be avoided if natural language understanding is to be achieved. Targeting the most basic type of metaphor, the nominal metaphor, this paper proposes a dictionary-based recognition method. It combines semantic distance computed from the Tongyici Cilin thesaurus with semantic relations from HowNet to recognize metaphors, and examines the association between metaphor, semantic distance, and semantic relations.
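A toy sketch of the underlying intuition: a copular "A是B" ("A is B") noun pair is flagged as a metaphor candidate when the two nouns are taxonomically distant. The miniature hierarchy below merely stands in for Tongyici Cilin and HowNet lookups and is not the paper's actual method.

```python
# Path-based distance over a tiny hand-written taxonomy (hypothetical).
taxonomy = {                     # child -> parent
    "教师": "人", "园丁": "人", "人": "生物",
    "时间": "抽象事物", "金钱": "财物", "财物": "具体事物",
}

def ancestors(word):
    chain = [word]
    while chain[-1] in taxonomy:
        chain.append(taxonomy[chain[-1]])
    return chain

def semantic_distance(a, b):
    pa, pb = ancestors(a), ancestors(b)
    shared = set(pa) & set(pb)
    if not shared:
        return float("inf")      # no common ancestor at all
    lca = min(shared, key=lambda n: pa.index(n) + pb.index(n))
    return pa.index(lca) + pb.index(lca)

def is_metaphor_candidate(tenor, vehicle, threshold=2):
    return semantic_distance(tenor, vehicle) > threshold

print(is_metaphor_candidate("教师", "园丁"))   # False: literal comparison
print(is_metaphor_candidate("时间", "金钱"))   # True:  "时间就是金钱"
```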

18.
Sentiment analysis is an active research area owing to the abundance of opinionated data on online social networks. Semantic detection is a sub-category of sentiment analysis that deals with the identification of sentiment orientation in a text. Many sentiment applications rely on lexicons to supply features to a model. Various machine learning algorithms and sentiment lexicons have been proposed in research in order to improve sentiment categorization. Supervised machine learning algorithms and domain-specific sentiment lexicons generally perform better than unsupervised or semi-supervised domain-independent lexicon-based approaches. The core hindrance to the application of supervised algorithms or domain-specific sentiment lexicons is the unavailability of sentiment-labeled training datasets for every domain. On the other hand, the performance of algorithms based on general-purpose sentiment lexicons needs improvement. This research is focused on building a general-purpose sentiment lexicon in a semi-supervised manner. The proposed lexicon defines word semantics based on the Expected Likelihood Estimate Smoothed Odds Ratio, which is then incorporated into a supervised machine learning based model selection approach. A comprehensive performance comparison verifies the superiority of our proposed approach.
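One plausible reading of the scoring scheme (the paper's exact formula may differ) is an odds ratio over class-conditional word probabilities with Expected Likelihood Estimate (add-0.5) smoothing; the counts below are hypothetical.

```python
import math

# Hypothetical counts of a word in positive vs. negative training documents.
counts = {"excellent": (120, 4), "terrible": (3, 95), "movie": (210, 190)}
n_pos, n_neg = 5000, 5000          # total tokens per class (hypothetical)

def ele_odds_ratio(pos, neg):
    """Log odds ratio with add-0.5 (Expected Likelihood Estimate) smoothing."""
    p_pos = (pos + 0.5) / (n_pos + 1.0)
    p_neg = (neg + 0.5) / (n_neg + 1.0)
    return math.log((p_pos / (1 - p_pos)) / (p_neg / (1 - p_neg)))

for word, (pos, neg) in counts.items():
    print(f"{word:10s} {ele_odds_ratio(pos, neg):+.2f}")
```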

19.
On the dependence of handwritten word recognizers on lexicons
The performance of any word recognizer depends on the lexicon presented. Usually, large lexicons or lexicons containing similar entries pose difficulty for recognizers. However, the literature lacks any quantitative methodology for capturing the precise dependence between word recognizers and lexicons. This paper presents a performance model that views word recognition as a function of character recognition and statistically "discovers" the relation between a word recognizer and the lexicon. It uses model parameters that capture a recognizer's ability to distinguish characters (of the alphabet) and its sensitivity to lexicon size. These parameters are determined by a multiple regression model derived from the performance model. Such a model is very useful for comparing word recognizers by predicting their performance based on the lexicon presented. We demonstrate the performance model with extensive experiments on five different word recognizers, thousands of images, and tens of lexicons. The results show that the model is a good fit not only on the training data but also in predicting the recognizers' performance on testing data.
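The regression view can be sketched with synthetic numbers: word-recognition accuracy is regressed on (log) lexicon size and a character-confusion parameter, and the fitted model is then used to predict performance for an unseen lexicon. The data and the choice of variables below are illustrative assumptions, not the paper's model parameters.

```python
import numpy as np

# Columns: log lexicon size, character-confusion parameter; target: accuracy.
X = np.array([[np.log(10),   0.05],
              [np.log(100),  0.05],
              [np.log(1000), 0.05],
              [np.log(10),   0.20],
              [np.log(100),  0.20],
              [np.log(1000), 0.20]])
y = np.array([0.98, 0.95, 0.90, 0.92, 0.85, 0.74])

# Ordinary least squares with an intercept term.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("slope(log size), slope(confusion), intercept:", coef.round(3))

# Predict accuracy for an unseen lexicon size / confusion level.
print("predicted accuracy:", float(np.array([np.log(500), 0.1, 1.0]) @ coef))
```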

20.
Lexicons     
The three lexicons used by KBMT-89 are described: A concept lexicon constitutes the sublanguage domain model for specifying semantic information; it is maintained by Ontos, a knowledge-acquisition and maintenance system. An analysis lexicon is a dictionary containing syntactic information and mapping rules required for semantic parsing. And a generation lexicon, similar to the analysis lexicon, is employed in the generation phase.
