Similar Documents
20 similar documents found.
1.
Spelling speech recognition can be applied for several purposes, including enhancement of speech recognition systems and implementation of name retrieval systems. This paper presents a Thai spelling analysis used to develop a Thai spelling speech recognizer. The Thai phonetic characteristics, alphabet system and spelling methods are analyzed. As training resources, two alternative corpora, a small spelling speech corpus and an existing large continuous speech corpus, are used to train hidden Markov models (HMMs), and their recognition results are compared. To address the difference in utterance speed between spelling utterances and continuous speech, an utterance-speed adjustment is applied. Two alternative language models, bigram and trigram, are used to investigate spelling speech recognition performance. Our approach achieves up to a 98.0% letter correction rate, 97.9% letter accuracy and an 82.8% utterance correction rate when the language model is a trigram and the acoustic model is trained from the small spelling speech corpus with eight Gaussian mixtures.
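The abstract above compares bigram and trigram language models over spelled letters. A minimal sketch of maximum-likelihood trigram estimation over letter sequences (the training data and symbols below are illustrative, not the paper's corpus):

```python
from collections import defaultdict

def train_trigram(sequences):
    """Count letter trigrams and their bigram contexts over spelling
    sequences, padding with start/end symbols so initial letters are scored."""
    tri = defaultdict(int)
    bi = defaultdict(int)
    for seq in sequences:
        padded = ["<s>", "<s>"] + list(seq) + ["</s>"]
        for i in range(2, len(padded)):
            tri[(padded[i - 2], padded[i - 1], padded[i])] += 1
            bi[(padded[i - 2], padded[i - 1])] += 1
    return tri, bi

def trigram_prob(tri, bi, w1, w2, w3):
    """Maximum-likelihood P(w3 | w1, w2); 0.0 if the context is unseen."""
    denom = bi[(w1, w2)]
    return tri[(w1, w2, w3)] / denom if denom else 0.0

tri, bi = train_trigram(["ab", "ac", "ab"])
# 2 of the 3 toy sequences continue the context "<s> a" with "b"
p = trigram_prob(tri, bi, "<s>", "a", "b")
```

A real recognizer would smooth these counts; this sketch only shows the raw estimation.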

2.
This paper presents work on developing speech corpora and recognition tools for Turkish by porting SONIC, a speech recognition tool initially developed for English at the Center for Spoken Language Research of the University of Colorado at Boulder. The work had two objectives. The first was to collect a standard phonetically balanced Turkish microphone speech corpus for general research use. A 193-speaker, triphone-balanced audio corpus and a pronunciation lexicon for Turkish have been developed. The corpus was accepted for distribution by the Linguistic Data Consortium (LDC) of the University of Pennsylvania in October 2005 and will serve as a standard corpus for Turkish speech researchers. The second objective was to develop speech recognition tools (a phonetic aligner and a phone recognizer) for Turkish, which provided a starting point for obtaining a multilingual speech recognizer by porting SONIC to Turkish. This was the first port of this particular recognizer to a language other than English; subsequently, SONIC has been ported to over 15 languages. Using the phonetic aligner, the audio corpus has been provided with word-, phone- and HMM-state-level alignments. For the phonetic aligner, 92.6% of the automatically labeled phone boundaries are placed within 20 ms of manually labeled locations for the Turkish audio corpus. Finally, a phone recognition error rate of 29.2% is demonstrated for the phone recognizer.
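The 20 ms boundary tolerance quoted above is a common alignment metric. A small sketch of how such a score can be computed (boundary times below are invented):

```python
def boundary_accuracy(auto, manual, tol=0.020):
    """Fraction of automatically placed phone boundaries lying within
    tol seconds of the corresponding manually labeled boundary."""
    hits = sum(abs(a - m) <= tol for a, m in zip(auto, manual))
    return hits / len(manual)

# Toy example: 0.10 vs 0.11 is a hit, 0.55 vs 0.50 misses the 20 ms window
score = boundary_accuracy([0.10, 0.55, 1.00], [0.11, 0.50, 1.00])
```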

3.
We compared the performance of an automatic speech recognition system using n-gram language models, HMM acoustic models, and combinations of the two, with the word recognition performance of human subjects who had access to only acoustic information, only local linguistic context, or a combination of both. All speech recordings used were taken from Japanese narration and spontaneous speech corpora. Humans have difficulty recognizing isolated words taken out of context, especially when taken from spontaneous speech, partly due to word-boundary coarticulation. Human recognition performance improves dramatically when one or two preceding words are added. Short words in Japanese mainly consist of post-positional particles (i.e. wa, ga, wo, ni, etc.), which are function words located just after content words such as nouns and verbs. The predictability of short words is therefore very high within the context of the one or two preceding words, and their recognition improves drastically. Providing even more context further improves human prediction performance under text-only conditions (without acoustic signals). It also improves speech recognition, but the improvement is relatively small. Recognition experiments using an automatic speech recognizer were conducted under conditions almost identical to the experiments with humans. The performance of the acoustic models without any language model, or with only a unigram language model, was greatly inferior to human recognition performance with no context. In contrast, prediction performance using a trigram language model was superior or comparable to human performance when given a preceding and a succeeding word. These results suggest that we must improve our acoustic models rather than our language models to make automatic speech recognizers comparable to humans under conditions where the recognizer has limited linguistic context.

4.
We present MARS (Multilingual Automatic tRanslation System), a research prototype speech-to-speech translation system. MARS is aimed at two-way conversational spoken language translation between English and Mandarin Chinese for limited domains, such as air travel reservations. In MARS, machine translation is embedded within a complex speech processing task, and translation performance is strongly affected by the performance of other components, such as the recognizer and the semantic parser. All components in the proposed system are statistically trained using an appropriate training corpus. The speech signal is first recognized by an automatic speech recognizer (ASR). Next, the ASR-transcribed text is analyzed by a semantic parser, which uses a statistical decision-tree model that does not require hand-crafted grammars or rules. Furthermore, the parser provides semantic information that helps re-score the speech recognition hypotheses. The semantic content extracted by the parser is formatted into a language-independent tree structure, which is used for an interlingua-based translation. A maximum-entropy-based sentence-level natural language generation (NLG) approach is used to generate sentences in the target language from the semantic tree representations. Finally, the generated target sentence is synthesized into speech by a speech synthesizer. Many new features and innovations have been incorporated into MARS: the translation is based on understanding the meaning of the sentence; the semantic parser uses a statistical model and is trained from a semantically annotated corpus; the output of the semantic parser is used to select a more specific language model to refine speech recognition performance; and the NLG component uses a statistical model and is also trained from the same annotated corpus. These features give MARS the advantages of robustness to speech disfluencies and recognition errors, tighter integration of semantic information into speech recognition, and portability to new languages and domains. These advantages are verified by our experimental results.

5.
In speech recognition research, a recognition system must be constructed for each language, given the variety of languages. A dialect speech recognition system in particular must handle many special words and oral-language features, and dialect speech data is very scarce, so constructing such a system is difficult. This paper constructs a speech recognition system for Sichuan dialect by combining a hidden Markov model (HMM) and a deep long short-term memory (LSTM) network. Using the HMM-LSTM architecture, we created a Sichuan dialect dataset and implemented a speech recognition system for it. Compared with a deep neural network (DNN), the LSTM network overcomes the limitation that the DNN captures only a fixed amount of context. Moreover, to accurately identify polyphones and vocabulary with special pronunciations in Sichuan dialect, we collected all the characters in the dataset and their common phoneme sequences to form a lexicon. The system yields an 11.34% character error rate on the Sichuan dialect evaluation dataset, which is, to the best of our knowledge, the best performance reported for this corpus to date.
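The lexicon described above maps characters, including polyphones, to their common phoneme sequences. A toy sketch of such a lookup, with invented symbols in place of real Sichuan-dialect pronunciations:

```python
# Hypothetical lexicon: a character may map to several phoneme
# sequences (polyphones); all common pronunciations are kept.
lexicon = {
    "A": ["a1"],
    "B": ["b1", "b2"],   # polyphone: two attested pronunciations
}

def pronunciations(word):
    """Enumerate phoneme-sequence candidates for a word, one per
    combination of per-character pronunciations; unknown characters
    fall back to an <unk> symbol."""
    seqs = [[]]
    for ch in word:
        seqs = [s + [p] for s in seqs for p in lexicon.get(ch, ["<unk>"])]
    return [" ".join(s) for s in seqs]
```

In decoding, each candidate sequence would be scored against the acoustics, letting the recognizer pick the correct reading of a polyphone from context.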

6.
This paper presents a new technique to enhance the performance of the input interface of spoken dialogue systems, based on a procedure that combines during speech recognition the advantages of prompt-dependent language models with those of a language model independent of the prompts generated by the dialogue system. The technique creates a new speech recognizer, termed a contextual speech recognizer, that uses a prompt-independent language model to recognize any kind of sentence permitted in the application domain, while at the same time using contextual information (in the form of prompt-dependent language models) to reflect that some sentences are more likely than others at a particular moment of the dialogue. The experiments show the technique clearly enhances the performance of the input interface of a previously developed dialogue system based exclusively on prompt-dependent language models. More importantly, compared with a standard speech recognizer that uses just one prompt-independent language model without contextual information, the proposed recognizer increases the word accuracy and sentence understanding rates by 4.09% and 4.19% absolute, respectively. These scores are slightly better than those obtained using linear interpolation of the prompt-independent and prompt-dependent language models used in the experiments.
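One simple way to combine a prompt-dependent model with a prompt-independent one, in the spirit of the interpolation baseline mentioned above, is linear interpolation. The prompts, words and probabilities below are invented for illustration:

```python
# Hypothetical prompt-dependent unigram LMs keyed by dialogue prompt,
# plus one prompt-independent LM covering the whole domain.
prompt_lms = {
    "ask_date": {"monday": 0.4, "tuesday": 0.4, "yes": 0.0},
    "confirm":  {"yes": 0.6, "no": 0.3, "monday": 0.0},
}
general_lm = {"monday": 0.2, "tuesday": 0.2, "yes": 0.2, "no": 0.2}

def contextual_prob(word, prompt, lam=0.7):
    """Weight the prompt-dependent estimate with the general one, so
    likely-in-context words get a boost while any in-domain word keeps
    nonzero probability."""
    p_ctx = prompt_lms.get(prompt, {}).get(word, 0.0)
    p_gen = general_lm.get(word, 0.0)
    return lam * p_ctx + (1.0 - lam) * p_gen
```

After the "ask_date" prompt, "monday" scores far above "yes", yet "yes" is still recognizable thanks to the general model.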

7.
Statistical approaches in speech technology, whether used for statistical language models, trees, hidden Markov models or neural networks, are the driving forces behind the creation of language resources (LR), e.g., text corpora, pronunciation and morphology lexicons, and speech databases. This paper presents a system architecture for the rapid construction of morphologic and phonetic lexicons, two of the most important written language resources for the development of ASR (automatic speech recognition) and TTS (text-to-speech) systems. The architecture is modular and is particularly suitable for developing written language resources for inflectional languages. An implementation is presented for the Slovenian language. The integrated graphical user interface focuses on the morphological and phonetic aspects of language and allows experts to work productively during analysis. Multilingual TTS systems use many extensive external written language resources, especially in the text processing part, so it is very important that these resources be represented time- and space-efficiently, and that language resources for new languages can be incorporated into the system without modifying the common algorithms developed for multiple languages. In this regard, large external language resources (e.g., morphology and phonetic lexicons) pose an important problem because of the required space and slow look-up time. This paper presents a method, and its results, for compiling large lexicons into corresponding finite-state transducers (FSTs), using as examples German phonetic and morphology lexicons (CISLEX), and Slovenian phonetic (SIflex) and morphology (SImlex) lexicons. The German lexicons consisted of about 300,000 words, SIflex of about 60,000 and SImlex of about 600,000 words (of which 40,000 words were used for the finite-state-transducer representation). Representing large lexicons as finite-state transducers is mainly motivated by space and time efficiency. A great reduction in size and optimal access time were achieved for all lexicons. The starting size was 12.53 MB for the German phonetic lexicon and 18.49 MB for the German morphology lexicon; the Slovenian phonetic lexicon started at 1.8 MB and the morphology lexicon at 1.4 MB. The final sizes of the corresponding FSTs were 2.78 MB for the German phonetic lexicon, 6.33 MB for the German morphology lexicon, 253 KB for SIflex and 662 KB for SImlex. The achieved look-up time is optimal, since it depends only on the length of the input word and not on the size of the lexicon. With such representations, lexicons for new languages are easy to integrate into the multilingual TTS system and require no changes in the algorithms that use them.
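The optimal look-up property (time depending only on word length, not lexicon size) can be illustrated with a plain trie. This is a simplification, not the FST compilation the paper uses, and the entries are invented:

```python
class TrieLexicon:
    """Lexicon whose look-up cost depends only on the length of the
    query word, mirroring the FST property described above
    (simplified to a character trie)."""

    def __init__(self):
        self.root = {}

    def add(self, word, pron):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["#"] = pron   # terminal marker stores the phonetic output

    def lookup(self, word):
        node = self.root
        for ch in word:          # one step per character, regardless of size
            if ch not in node:
                return None
            node = node[ch]
        return node.get("#")
```

A real FST additionally shares suffixes and composes with other transducers, which is where the large size reductions come from.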

8.
We extract dynamic high-level linguistic features to build an improved language-dependent, joint GMM-LM language identification scheme. The scheme reduces the correlation between the Gaussian mixture models and language models of different languages and lowers training complexity. We also propose a language identification system based on fusion at the feature-extraction and decision levels. By exploiting the contributions of different feature types to discriminating languages, the system increases the differences between utterances of different languages while reducing the differences among utterances of the same language. Experiments show that the designed language identification system scales well, and that fusion at the feature-extraction and decision levels effectively improves the recognition rate.

9.
This paper describes the use of a neural network language model for large vocabulary continuous speech recognition. The underlying idea of this approach is to attack the data sparseness problem by performing the language model probability estimation in a continuous space. Highly efficient learning algorithms are described that enable the use of training corpora of several hundred million words. It is also shown that this approach can be incorporated into a large vocabulary continuous speech recognizer using a lattice rescoring framework at very little additional processing time. The neural network language model was thoroughly evaluated in a state-of-the-art large vocabulary continuous speech recognizer on several international benchmark tasks, in particular the NIST evaluations on broadcast news and conversational speech recognition. The new approach is compared to four-gram back-off language models trained with modified Kneser–Ney smoothing, often reported to be the best known smoothing method. Usually the neural network language model is interpolated with the back-off language model; in that way, consistent word error rate reductions were achieved for all considered tasks and languages, ranging from 0.4% to almost 1% absolute.
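The interpolation of a neural-network LM with a back-off LM, followed by rescoring of competing hypotheses, can be sketched as below. The hypotheses and probabilities are invented, and a real system works on full lattices rather than a short n-best list:

```python
import math

def interpolate(p_nn, p_backoff, lam=0.5):
    """Linearly interpolate neural-network and back-off LM probabilities."""
    return lam * p_nn + (1.0 - lam) * p_backoff

def rescore(hypotheses, lam=0.5):
    """Pick the hypothesis with the best total interpolated log-probability.
    Each hypothesis is (text, [(p_nn, p_backoff) per word]); illustrative."""
    def score(h):
        return sum(math.log(interpolate(pn, pb, lam)) for pn, pb in h[1])
    return max(hypotheses, key=score)[0]
```

Acoustic scores would normally be added to each hypothesis score as well; they are omitted here for brevity.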

10.
In resource-scarce areas of automatic speech recognition (ASR), such as recognition systems for telephone conversations, statistical language models (LMs) suffer from severe data sparseness. This paper proposes a sampled-corpus generation algorithm based on equal-probability events, which automatically generates domain-related text to reinforce statistical language model training. Experimental results show that adding the sampled corpus generated by this algorithm alleviates the sparseness of the language model and thus improves the performance of the whole speech recognition system. On the development set, language model perplexity drops by 7.5% relative and the character error rate (CER) by 0.2 points absolute; on the test set, perplexity drops by 6% relative and CER by 0.4 points absolute.
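The abstract above reports relative perplexity reductions and absolute CER reductions. How such figures are computed can be sketched as follows (the numbers in the example are illustrative, not the paper's):

```python
import math

def perplexity(logprobs):
    """Perplexity from per-token natural-log probabilities."""
    return math.exp(-sum(logprobs) / len(logprobs))

def relative_reduction(before, after):
    """Relative drop: e.g. perplexity 200 -> 185 is a 7.5% relative reduction."""
    return (before - after) / before
```

Absolute reductions (as used for CER above) are simply `before - after`, so a drop from 10.2% to 10.0% CER is "0.2 points absolute".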

11.
Most conventional approaches to Chinese character recognition attempt to recognize unknown Chinese characters per se, so the discriminative power of the features employed plays a crucial role in determining the ultimate recognition rate. However, high-level linguistic knowledge, such as the strong word-context effects present in the Chinese language and the different frequencies of words in daily use of the language, is not utilized in machine recognition of Chinese characters. In this paper, a word-oriented recognizer using the Interactive Activation and Competition model (IAC model) is proposed. Such a recognizer is tolerant to noise, size and font variations, and it also possesses self-learning capability.

12.
Application of a Multi-Layer DGMM Recognizer to Chinese Sign Language Recognition
吴江琴, 高文, 陈熙霖, 马继涌. 《软件学报》 2000, 11(11): 1430-1439
Sign language, the language of the deaf, is a relatively stable expressive system composed of hand movements supplemented by facial expressions and postures; it is a language communicated through motion and vision. The goal of sign language recognition research is to enable machines to "understand" the language of the deaf. Combined with sign language synthesis, recognition forms a human-machine sign language translation system that eases communication between deaf people and their surroundings. Sign language recognition is the recognition of dynamic gesture signals. For real-time performance and recognition efficiency, the system uses a Cyberglove data glove as the sign language input device and DGMM (dynamic Gaussian mixture model) as the recognition technique. Based on the characteristics of Chinese Sign Language, a multi-layer recognizer is adopted in the recognition module; it recognizes 274 entries of the Chinese Sign Language dictionary with a recognition rate of 97.4%. Compared with a recognition system based on a single DGMM, this model achieves essentially the same accuracy while recognizing markedly faster.

13.
14.
Multilingual Identification of Broadcast Speech Based on Sub-band GMM-UBM
We propose a probabilistic, language-content-independent language identification method that can identify dozens of languages without requiring specialist linguistic knowledge of each. To cope with the heavy noise interference in broadcast speech, GMM-UBM models serve as the language models, improving the noise robustness of the system. Because the background noise of broadcast speech is not simple full-band additive white noise, we build a language identification system composed of multiple subsystems based on sub-band GMM-UBM models, with a neural network performing system-level fusion at the back end. Identification experiments on 37 languages and dialects demonstrate the effectiveness of the sub-band GMM-UBM method.
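A GMM-UBM language score is typically a log-likelihood ratio between a language-specific GMM and the universal background model. A minimal one-dimensional sketch (all parameters and frames below are invented):

```python
import math

def gauss(x, mean, var):
    """Density of a univariate Gaussian at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gmm_loglik(frames, weights, means, variances):
    """Average log-likelihood of 1-D feature frames under a GMM."""
    total = 0.0
    for x in frames:
        total += math.log(sum(w * gauss(x, m, v)
                              for w, m, v in zip(weights, means, variances)))
    return total / len(frames)

def llr_score(frames, lang_gmm, ubm):
    """GMM-UBM score: log-likelihood ratio of the language-specific
    model against the universal background model."""
    return gmm_loglik(frames, *lang_gmm) - gmm_loglik(frames, *ubm)
```

In the sub-band variant described above, one such scorer per frequency sub-band would be trained, and the per-band scores fused by a back-end classifier.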

15.
There is a significant lexical difference (in words and word usage) between spontaneous/colloquial language and the written language. This difference affects the performance of spoken language recognition systems that use statistical language models or context-free grammars, because these models are based on the written language rather than the spoken form. Many filler phrases and colloquial phrases appear solely, or more often, in spontaneous and colloquial speech. Chinese languages perhaps exemplify this difference, as many colloquial forms of the language, such as Cantonese, exist strictly in spoken form and differ from written standard Chinese, which is based on Mandarin. A conventional way of dealing with this issue is to add colloquial terms manually to the lexicon. However, this is time-consuming and expensive. Meanwhile, supervised learning requires manual tagging of large corpora, which is also time-consuming. We propose an unsupervised learning method to find colloquial terms and classify filler and content phrases in spontaneous and colloquial Chinese, including Cantonese. We propose using frequency, strength, and spread measures of character pairs and groups to automatically extract frequent, out-of-vocabulary colloquial terms to add to a standard Chinese lexicon. An unsegmented and unannotated corpus is segmented with the augmented lexicon. We then propose a Markov classifier to classify Chinese characters into either content or filler phrases in an iterative training method. This method is task-independent and can extract even mixed-language terms. We show the effectiveness of our method on both a natural language query processing task and an adaptive Cantonese language-modeling task. The precision for content phrase extraction and classification is around 80%, with a recall of 99%, and the precision for filler phrase extraction and classification is around 99.5%, with a recall of approximately 89%. The web search precision using these extracted content words is comparable to that of search results with content phrases selected by humans. We adapt a language model trained from written texts with the Hong Kong Newsgroup corpus. It outperforms both the standard Chinese language model and the Cantonese language model, and it also performs better than a language model trained simply by concatenating the standard and colloquial text sets.
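The frequency and spread measures mentioned above can be sketched for adjacent character pairs as follows. The threshold and corpus are invented, and the paper's actual measures and character groups are richer than this:

```python
from collections import Counter

def frequent_pairs(corpus, min_count=2):
    """Candidate colloquial terms: adjacent character pairs that occur
    often overall (frequency) and in many different utterances (spread)."""
    freq = Counter()
    spread = Counter()
    for utt in corpus:
        # spread counts each pair at most once per utterance
        for p in {utt[i:i + 2] for i in range(len(utt) - 1)}:
            spread[p] += 1
        for i in range(len(utt) - 1):
            freq[utt[i:i + 2]] += 1
    return [p for p in freq if freq[p] >= min_count and spread[p] >= min_count]
```

Pairs passing both thresholds would then be added to the lexicon before re-segmenting the corpus.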

16.
Chinese Part-of-Speech Tagging Based on the Maximum Entropy Method
Maximum entropy models have attracted attention in natural language processing. Using the contextual information of part-of-speech tags in a corpus, we build a Chinese part-of-speech tagging system based on the maximum entropy method. The research focuses on feature selection: Chinese differs from other languages and has its own particularities, so the features chosen differ from those used for English. Experimental results show that the model is effective, reaching a tagging accuracy of 97.34%.
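A maximum entropy tagger is driven by contextual features extracted for each word. A sketch of hypothetical feature templates (not the paper's actual feature set):

```python
def pos_features(words, tags, i):
    """Context features for tagging word i: the word itself, the previous
    tag, and the neighboring words. Templates here are illustrative; a
    maximum entropy model learns one weight per (feature, tag) pair."""
    return [
        "w=" + words[i],
        "prev_tag=" + (tags[i - 1] if i > 0 else "<s>"),
        "prev_w=" + (words[i - 1] if i > 0 else "<s>"),
        "next_w=" + (words[i + 1] if i + 1 < len(words) else "</s>"),
    ]
```

At tagging time the model scores each candidate tag by the exponentiated sum of the weights of its active features, normalized over all tags.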

17.
We are developing a human–computer spoken language system capable of processing English, Cantonese and Putonghua (two dialects of Chinese). Such biliteracy and trilingualism characterize Hong Kong's language environment. Users can simply converse with the system to access real-time information in the foreign exchange domain. The system supports user calls with fixed-line as well as cellular phones, both of which have high penetration in Hong Kong. As such, we strive towards universal usability in developing speech and language interface technologies by (i) supporting multiple languages for user diversity; (ii) requiring no specialized training for system usage to avoid user knowledge gaps; and (iii) supporting device variety in telephone-based information access.

18.
Robustness of Speech Recognition and the Construction of a Speech Corpus
Chinese speech recognition has made great progress over the past decade or so; some systems have entered practical use and initial commercialization. However, the robustness of some systems is poor, and this problem will be a major task of future speech recognition research. To this end, we built a Chinese speech corpus suited to research on the robustness of speech recognition, and we describe its composition, characteristics and experimental results in detail.

19.
Language model (LM) data augmentation methods based on maximum likelihood estimation (MLE) suffer from exposure bias and therefore cannot generate sampled data with long-range semantic information. This paper proposes an LM data augmentation method based on an adversarial training strategy: an auxiliary convolutional neural network discriminator judges whether generated data is genuine, thereby guiding a recurrent neural network generator to learn the distribution of real data. LM data augmentation is essentially a discrete sequence generation problem. When the generator outputs discrete values, the error from the discriminator cannot be back-propagated to the generator. To solve this, we formulate discrete sequence generation as a reinforcement learning problem and use the discriminator's output as a reward to optimize the generator. In addition, since the discriminator can evaluate only complete generated sequences, a Monte Carlo search algorithm is used to evaluate the intermediate states of generated sequences. N-best rescoring experiments for speech recognition show that, under limited text data, the proposed method further reduces the character error rate (CER) as the amount of training data grows and consistently outperforms MLE-based data augmentation. With 6M words of training data, the proposed method reduces CER by 5.0% relative on the THCHS30 dataset and by 7.1% relative on the AISHELL dataset compared with the baseline system.
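The Monte Carlo evaluation of intermediate states described above can be sketched as averaging discriminator scores over completed rollouts. The rollout policy and discriminator below are stand-ins supplied by the caller, not the paper's networks:

```python
def mc_reward(prefix, rollout, discriminator, n=16):
    """Estimate the reward of a partial generated sequence: complete it
    n times with the rollout policy and average the discriminator's
    scores of the full sequences."""
    return sum(discriminator(rollout(prefix)) for _ in range(n)) / n
```

In training, this per-prefix reward would weight the policy-gradient update of the generator at each generation step.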

20.
Text-to-speech (TTS) conversion is a hot topic in Chinese information processing and a key technology for human-machine speech communication. This article gives a preliminary analysis of the whole process of Chinese text-to-speech conversion and presents a speech-database-based TTS method and its implementation. It describes the construction of the speech database and analyzes the content and technical difficulties of each processing stage: text input, word segmentation, text normalization, phonetic annotation, prosody processing and speech synthesis.
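The stages listed above form a fixed pipeline whose order can be sketched as follows. The stage functions are placeholders supplied by the caller; only the sequencing reflects the abstract:

```python
def tts_pipeline(text, segment, normalize, phonetize, prosody, synth):
    """Chain the stages named in the abstract: normalization and word
    segmentation of the input text, per-word phonetic annotation, prosody
    processing, then synthesis from the speech database."""
    words = segment(normalize(text))
    phones = [phonetize(w) for w in words]
    return synth(prosody(phones))
```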
