Similar Literature (20 results)
1.
This paper presents a Chinese text classification system that represents text with an n-gram language model and classifies it with a chain-augmented naive Bayes classifier. It describes how text is represented with n-gram language models, explains the advantages of combining the chain-augmented naive Bayes classifier with n-gram language models, analyzes the choice of n-gram model parameters, discusses several important design issues of the classification system, and studies the effect of training-set size and quality on classification. The system was evaluated against the test standard, training set, and test set provided by the 863 Program text classification evaluation group; the experimental results show that it achieves good classification performance.
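The n-gram-plus-naive-Bayes pipeline this entry describes can be sketched in miniature. This is an illustrative assumption, not the paper's system: the chain-augmented variant and Chinese tokenization are omitted, plain character bigrams with a multinomial naive Bayes classifier stand in, and the toy English corpus and class names are invented.

```python
from collections import Counter, defaultdict
import math

def char_ngrams(text, n=2):
    """Extract overlapping character n-grams from a string."""
    return [text[i:i+n] for i in range(len(text) - n + 1)]

class NgramNaiveBayes:
    def __init__(self, n=2):
        self.n = n
        self.class_counts = Counter()                # documents per class
        self.feature_counts = defaultdict(Counter)   # n-gram counts per class
        self.vocab = set()

    def fit(self, docs, labels):
        for doc, label in zip(docs, labels):
            self.class_counts[label] += 1
            for g in char_ngrams(doc, self.n):
                self.feature_counts[label][g] += 1
                self.vocab.add(g)

    def predict(self, doc):
        total_docs = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for label in self.class_counts:
            lp = math.log(self.class_counts[label] / total_docs)
            denom = sum(self.feature_counts[label].values()) + len(self.vocab)
            for g in char_ngrams(doc, self.n):
                # Laplace smoothing handles n-grams unseen in this class
                lp += math.log((self.feature_counts[label][g] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

clf = NgramNaiveBayes()
clf.fit(["the stock market rose", "shares and stock fell",
         "the team won the match", "a great match for the team"],
        ["finance", "finance", "sports", "sports"])
print(clf.predict("stock shares rose"))
```

The character-level features are what make the approach vocabulary-free, which is part of the appeal of n-gram representations for Chinese text.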

2.
A Language Model Combining Statistical and Rule-Based Methods for Speech Recognition   Cited by: 2 (self-citations: 1, other citations: 1)
王轩, 王晓龙, 张凯. 《自动化学报》 (Acta Automatica Sinica), 1999, 25(3): 309-315
Analyzing the rule-based and statistical language models used in speech recognition systems, this paper proposes a composite language model in which rules are quantified. The model avoids the inability of rule-based methods to scale to large volumes of real text, while also improving the statistical model's handling of long-distance constraints and recursive phenomena in language. With the composite model, a speaker-independent isolated-word speech recognition system covering a 60,000-word vocabulary improved accuracy over a word trigram model alone by 4.9% (male speakers) and 3.5% (female speakers).

3.
This paper describes the use of a neural network language model for large vocabulary continuous speech recognition. The underlying idea of this approach is to attack the data sparseness problem by performing the language model probability estimation in a continuous space. Highly efficient learning algorithms are described that enable the use of training corpora of several hundred million words. It is also shown that this approach can be incorporated into a large vocabulary continuous speech recognizer using a lattice rescoring framework with very low additional processing time. The neural network language model was thoroughly evaluated in a state-of-the-art large vocabulary continuous speech recognizer on several international benchmark tasks, in particular the NIST evaluations on broadcast news and conversational speech recognition. The new approach is compared to four-gram back-off language models trained with modified Kneser–Ney smoothing, which has often been reported to be the best known smoothing method. Usually the neural network language model is interpolated with the back-off language model. In that way, consistent word error rate reductions were achieved for all considered tasks and languages, ranging from 0.4% to almost 1% absolute.
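The interpolation step this abstract mentions is a simple per-word linear mix of the two models' probabilities, applied before rescoring a hypothesis. A minimal sketch, with made-up probabilities standing in for real model outputs and an assumed interpolation weight:

```python
import math

def interpolate(p_nn, p_backoff, lam=0.6):
    """Linear interpolation of a neural-network LM probability with a
    back-off n-gram probability for the same word in the same context."""
    return lam * p_nn + (1.0 - lam) * p_backoff

# Hypothetical per-word probabilities for a three-word hypothesis.
p_nn      = [0.20, 0.05, 0.30]   # from the neural network LM
p_backoff = [0.10, 0.08, 0.25]   # from the four-gram back-off LM

mixed = [interpolate(a, b) for a, b in zip(p_nn, p_backoff)]
# Hypothesis log-probability, as used when rescoring a lattice entry.
log_prob = sum(math.log(p) for p in mixed)
print(mixed, log_prob)
```

The weight `lam` would in practice be tuned on held-out data; the rescoring framework only needs the combined per-word probabilities.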

4.
Statistical n-gram language modeling is popular for speech recognition and many other applications. The conventional n-gram suffers from an inability to model long-distance language dependencies. This paper presents a novel approach that mines long-distance word associations and incorporates these features into language models using linear interpolation and maximum entropy (ME) principles. We highlight the discovery of associations among multiple distant words in the training corpus. A mining algorithm recursively merges frequent word subsets to efficiently construct the set of association patterns. By combining the features of association patterns with n-gram models, association pattern n-grams are estimated, with a special case, the trigger-pair n-gram, in which only associations between two distant words are considered. In experiments on Chinese language modeling, we find that incorporating association patterns significantly reduces the perplexity of n-gram models. Incorporation via ME outperforms linear interpolation, the association pattern n-gram is superior to the trigger-pair n-gram, and perplexity is further reduced using more association steps. Moreover, the proposed association pattern n-grams not only raise document classification accuracy but also improve speech recognition rates.
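The trigger-pair special case, counting pairs of words that co-occur at a distance and keeping the frequent ones, can be sketched as follows. The windowing scheme, thresholds, and toy corpus are illustrative assumptions, not the paper's exact mining algorithm:

```python
from collections import Counter

def mine_trigger_pairs(sentences, window=5, min_count=2):
    """Count ordered word pairs co-occurring within a distance window,
    keeping only pairs frequent enough to serve as trigger features."""
    pair_counts = Counter()
    for sent in sentences:
        words = sent.split()
        for i, w in enumerate(words):
            # look ahead up to `window` positions for a distant partner
            for j in range(i + 1, min(i + 1 + window, len(words))):
                if words[j] != w:
                    pair_counts[(w, words[j])] += 1
    return {p: c for p, c in pair_counts.items() if c >= min_count}

corpus = ["the doctor examined the patient",
          "the doctor treated another patient",
          "a nurse helped the doctor"]
pairs = mine_trigger_pairs(corpus, window=4, min_count=2)
print(("doctor", "patient") in pairs)
```

The full association-pattern algorithm in the paper generalizes this from pairs to recursively merged sets of distant words.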

5.
在目前的电视台采访和录音中,有大量的文本任务需要使用语音识别软件进行从语音向文字的转换。如今语音识别的准确率虽然已经足够出色,但对于电视台等严谨的专业领域效果一般,其结果还不能完全信任。由于缺少自动有效地对识别结果进行校对的方法,电视台需要花费大量的人力和物力进行人工校对。因此,本文希望设计并开发一个录音采访文字校对软件来解决此问题。该软件开发的主要工作是构建通用领域和专业领域的语言模型、融合基于统计方法的N-Gram模型和基于特征与学习的Seq2Seq模型相结合的查错纠错算法、构建新闻播报和电视台录音采访等专业领域的查错规则库。  相似文献   

6.
n-Gram Statistics for Natural Language Understanding and Text Processing   Cited by: 1 (self-citations: 0, other citations: 1)
n-gram (n = 1 to 5) statistics and other properties of the English language were derived for applications in natural language understanding and text processing. They were computed from a well-known corpus of one million word samples. Similar properties were also derived from the 1000 most frequent words of three other corpora. The positional distributions of n-grams obtained in the present study are discussed. Statistical studies of word length and of trends in n-gram frequency versus vocabulary size are presented. In addition to a survey of n-gram statistics found in the literature, a collection of n-gram statistics obtained by other researchers is reviewed and compared.
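Deriving word n-gram counts for n = 1 to 5 from a tokenized corpus, as described above, is mechanically simple; a minimal sketch on a toy token stream:

```python
from collections import Counter

def ngram_counts(tokens, max_n=5):
    """Return {n: Counter of n-gram tuples} for n = 1..max_n."""
    stats = {}
    for n in range(1, max_n + 1):
        stats[n] = Counter(tuple(tokens[i:i+n])
                           for i in range(len(tokens) - n + 1))
    return stats

tokens = "to be or not to be".split()
stats = ngram_counts(tokens)
print(stats[1][("to",)])        # unigram count of "to"
print(stats[2][("to", "be")])   # bigram count of "to be"
```

On a million-word corpus the same loop works unchanged; the engineering work is in normalization and storage, not in the counting itself.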

7.
To convert a stretch of speech into the correct character sequence when its language is unknown, language identification (LID) was integrated with speech recognition to build a Chinese/English large vocabulary continuous speech recognition (LVCSR) system. So that the language of the speech can be decided as early as possible during recognition, thereby reducing decoding computation, language pruning within the LID process was studied. The results show that a reasonable language pruning threshold effectively reduces computation and recognition time without degrading system performance.

8.
申广忠. 《微计算机信息》 (Microcomputer Information), 2007, 23(12): 251-252
Research on Mongolian speech recognition is still essentially untouched, so developing a Mongolian speech recognition system is of real significance, and choosing the language model is one of the most important steps in building such a system. Drawing on the author's own practice, this paper experimentally determines a suitable language model for a large-vocabulary Mongolian speech recognition system.

9.
10.
South African English is currently considered an under-resourced variety of English. Extensive speech resources are, however, available for North American (US) English. In this paper we consider the use of these US resources in the development of a South African large vocabulary speech recognition system. Specifically we consider two research questions. Firstly, we determine the performance penalties that are incurred when using US instead of South African language models, pronunciation dictionaries and acoustic models. Secondly, we determine whether US acoustic and language modelling data can be used in addition to the much more limited South African resources to improve speech recognition performance. In the first case we find that using a US pronunciation dictionary or a US language model in a South African system results in fairly small penalties. However, a substantial penalty is incurred when using a US acoustic model. In the second investigation we find that small but consistent improvements over a baseline South African system can be obtained by the additional use of US acoustic data. Larger improvements are obtained when complementing the South African language modelling data with US and/or UK material. We conclude that, when developing resources for an under-resourced variety of English, the compilation of acoustic data should be prioritised; language modelling data has a weaker effect on performance, and the pronunciation dictionary the smallest.

11.
12.
N-gram models are the most widely used language models in large vocabulary continuous speech recognition. Since the size of the model grows rapidly with the model order and the amount of training data, many methods have been proposed for pruning the least relevant n-grams from the model. However, correct smoothing of the n-gram probability distributions is important, and performance may degrade significantly if pruning conflicts with smoothing. In this paper, we show that some commonly used pruning methods do not take into account how removing an n-gram should modify the backoff distributions in state-of-the-art Kneser–Ney smoothing. To solve this problem, we present two new algorithms: one for pruning Kneser–Ney smoothed models, and one for growing them incrementally. Experiments on Finnish and English text corpora show that the proposed pruning algorithm provides considerable improvements over previous pruning algorithms on Kneser–Ney smoothed models and also outperforms the baseline entropy-pruned Good–Turing smoothed models. The models created by the growing algorithm provide a good starting point for our pruning algorithm, leading to further improvements. The improvements in Finnish speech recognition over the other Kneser–Ney smoothed models are statistically significant as well.
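For reference, interpolated Kneser–Ney smoothing for bigrams, the distribution such pruning must remain consistent with, can be sketched as follows. A fixed discount D is assumed here; real systems estimate it from count-of-count statistics, and the paper's setting of course uses higher orders:

```python
from collections import Counter

def kneser_ney_bigram(tokens, D=0.75):
    """Return an interpolated Kneser-Ney bigram probability function
    P(w | v) estimated from a token sequence, with fixed discount D."""
    bigrams = list(zip(tokens, tokens[1:]))
    big_c = Counter(bigrams)
    hist_c = Counter(tokens[:-1])                 # counts of v as a history
    # Continuation counts: in how many distinct contexts does w appear?
    cont = Counter(w for (_, w) in set(bigrams))
    total_types = len(set(bigrams))
    followers = Counter(v for (v, _) in set(bigrams))

    def prob(v, w):
        p_cont = cont[w] / total_types            # lower-order KN distribution
        if hist_c[v] == 0:                        # unseen history: back off fully
            return p_cont
        discounted = max(big_c[(v, w)] - D, 0) / hist_c[v]
        lam = D * followers[v] / hist_c[v]        # mass freed by discounting
        return discounted + lam * p_cont
    return prob

p = kneser_ney_bigram("a b a b a c".split())
print(p("a", "b"))
```

The coupling the paper points to is visible here: dropping a bigram changes both `big_c` and the continuation counts, so the backoff distribution must be updated consistently when pruning.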

13.
In speech-recognition-based smart homes, the training corpus is incomplete and the application scenarios are complex, so the false acceptance rate of natural-language speech recognition is far higher than that of small-vocabulary recognition. While designing and implementing a natural-language speech recognition smart home system, the authors studied in depth the roles of the MAP and MLLR algorithms in adapting HMM acoustic model parameters, proposed a combined adaptation method, and fully implemented the system on the open-source speech recognition toolkit CMU Sphinx. The results show that the proposed adaptation algorithm is feasible and effective, clearly improving system performance across different scenarios.

14.
Identifying the language of short texts is an important prerequisite for natural language processing on social media, and a challenging research topic. Because of out-of-vocabulary words and interference from words shared across languages, traditional n-gram-based short-text language identification methods (such as Textcat, LIGA, and logLIGA) perform very differently from one dataset to another and lack robustness. This paper proposes an improved n-gram-frequency-based language identification method that automatically determines the weights of language-specific and shared words according to the characteristics of the training data, strengthening the robustness of the identification model across datasets. Experimental results demonstrate the effectiveness of the method.
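A baseline in the Textcat family, which the entry above improves on, ranks character n-grams by frequency and compares a text's profile to per-language profiles by an out-of-place distance. A rough sketch; the profile size, penalty, and the toy training texts are illustrative assumptions:

```python
from collections import Counter

def profile(text, max_n=3, top_k=50):
    """Rank the top_k most frequent character n-grams (n = 1..max_n)."""
    grams = Counter()
    padded = " " + text.lower() + " "
    for n in range(1, max_n + 1):
        for i in range(len(padded) - n + 1):
            grams[padded[i:i+n]] += 1
    ranked = [g for g, _ in grams.most_common(top_k)]
    return {g: r for r, g in enumerate(ranked)}

def distance(text_profile, lang_profile, penalty=1000):
    """Sum of rank differences; n-grams absent from the language profile
    pay a fixed out-of-place penalty."""
    return sum(abs(r - lang_profile.get(g, penalty))
               for g, r in text_profile.items())

langs = {"en": profile("the quick brown fox jumps over the lazy dog the end"),
         "de": profile("der schnelle braune fuchs springt über den faulen hund")}
text = "the fox and the dog"
best = min(langs, key=lambda L: distance(profile(text), langs[L]))
print(best)
```

The paper's improvement amounts to replacing the uniform ranks and fixed penalty with weights learned from the training data for language-specific versus shared n-grams.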

15.
Uyghur's rich morphology produces an enormous number of word forms, causing an out-of-vocabulary (OOV) problem. To quantitatively study the effect of OOV on Uyghur speech recognition, experiments were conducted on test sets with different OOV rates, using an algorithm that controls the OOV rate of the corpus test set together with an optimal text selection algorithm, both implemented in Python. These algorithms were applied to the text transcription of a telephone speech corpus, and a Uyghur telephone speech corpus was constructed. The experimental results show that this method of controlling the test-set OOV rate effectively improves the Uyghur speech recognition rate.

16.
We describe a system for highly accurate large-vocabulary Mandarin speech recognition. The prevailing hidden Markov model based technologies are essentially language independent and constitute the backbone of our system; these include minimum-phone-error discriminative training and maximum-likelihood linear regression adaptation, among others. Additionally, careful consideration is given to Mandarin-specific issues, including lexical word segmentation, tone modeling, phone set design, and automatic acoustic segmentation. Our system comprises two sets of acoustic models for the purpose of cross adaptation. The subsystems are designed to be complementary in their errors but similar in overall accuracy, achieved by using different phone sets and different combinations of discriminative learning. The outputs of the two subsystems are then rescored by an adapted n-gram language model. Final confusion network combination yielded a 9.1% character error rate on the DARPA GALE 2007 official evaluation, the best Mandarin recognition result that year.

17.
Taking Uyghur as an example, this paper studies continuous speech recognition for minority languages that lack natural corpora. HTK is used to generate seed models from a small amount of manually annotated speech, which then bootstrap acoustic model construction on a large speech dataset; a statistical language model is generated with the palmkit toolkit, and continuous speech recognition is performed with Julius. In the experiments, a monophone acoustic model was built from 6,400 short free-speech utterances by 64 native Uyghur speakers, and a class-based 3-gram language model was generated from 100 MB of text and a 60,000-word lexicon. Test results show a recognition rate of 72.5%, 4.2 percentage points higher than using HTK alone.

18.
19.
We present a new generative model of natural language, the latent words language model. This model uses a latent variable for every word in a text, representing synonyms or related words in the given context. We develop novel methods to train this model and to find the expected values of these latent variables for a given unseen text. The learned word similarities help reduce the sparseness problems of traditional n-gram language models. We show that the model significantly outperforms interpolated Kneser–Ney smoothing and class-based language models on three different corpora. Furthermore, the latent variables are useful features for information extraction: for both semantic role labeling and word sense disambiguation, the performance of a supervised classifier increases when these variables are incorporated as extra features. The improvement is especially large when only a small annotated corpus is used for training.

20.
In recent years, end-to-end speech recognition systems have been widely adopted for their structural simplicity and ease of training compared with traditional hybrid models, and have achieved notable results for major languages such as Chinese and English. This paper combines the self-attention mechanism with the connectionist temporal classification (CTC) loss and applies the resulting end-to-end model to Uyghur speech recognition. Since Uyghur is a typical agglutinative language whose rich word-formation processes make its vocabulary exceptionally large, the byte-pair encoding algorithm is introduced to generate suitable output units for end-to-end modeling. On the King-ASR450 Uyghur dataset, the proposed method clearly outperforms both a classic HMM-based hybrid system and an end-to-end model based on bidirectional long short-term memory networks, reaching a final word accuracy of 91.35%.
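The byte-pair encoding step mentioned above repeatedly merges the most frequent adjacent symbol pair in the training vocabulary, turning characters into subword units. A toy sketch; the example word frequencies are invented, and real BPE implementations add word-boundary markers:

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """words: {word: frequency}. Return the list of learned merge pairs."""
    vocab = {tuple(w): f for w, f in words.items()}   # words as symbol tuples
    merges = []
    for _ in range(num_merges):
        # count adjacent symbol pairs, weighted by word frequency
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # apply the merge everywhere it occurs
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i+1]) == best:
                    out.append(symbols[i] + symbols[i+1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

merges = learn_bpe({"lower": 5, "lowest": 3, "newer": 2}, num_merges=3)
print(merges)
```

For an agglutinative language like Uyghur, the merge count directly controls the size of the end-to-end model's output layer, which is why BPE is a natural fit here.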


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)    京ICP备09084417号-23
