Similar Literature
1.
A Voice Endpoint Detection Algorithm Fusing Wavelet Analysis and Support Vector Machines   Cited by: 1 (self-citations: 0, citations by others: 1)
To improve the adaptability and robustness of voice endpoint detection, a detection algorithm based on wavelet analysis and support vector machines (SVM) is proposed. The wavelet transform first extracts feature quantities from the speech signal; these features are then used as SVM inputs for training and modeling, and the class of the signal is finally decided. Simulation experiments show that, compared with traditional endpoint detection algorithms, the combined wavelet/SVM detector improves detection accuracy, effectively lowers the false-detection and missed-detection rates, offers better adaptability and robustness, and detects well at different signal-to-noise ratios.
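A minimal sketch of this pipeline, assuming per-frame DWT subband log-energies as the wavelet features (the abstract does not fix the feature set) and scikit-learn's SVC as the classifier:

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(frame, wavelet="db4", level=3):
    """Log-energy of each DWT subband of one fixed-length frame."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-10) for c in coeffs])

def train_vad(frames, labels):
    """frames: list of fixed-length arrays; labels: 1 = speech, 0 = non-speech."""
    X = np.vstack([wavelet_features(f) for f in frames])
    return SVC(kernel="rbf", gamma="scale").fit(X, labels)

# Per-frame decision: clf.predict(wavelet_features(frame)[None, :])
```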

2.
王忠民  刘戈  宋辉 《计算机工程》2019,45(8):248-254
Extracting Mel-frequency cepstral coefficients (MFCC) for speech emotion recognition discards spectral detail, which lowers recognition accuracy. To address this, a speech emotion recognition method combining MFCC and spectrogram features is proposed. MFCC features are extracted from the audio signal, the signal is converted into a spectrogram, and a convolutional neural network extracts image features from it. A multiple kernel learning algorithm then fuses the audio features, and the resulting kernel is used in a support vector machine for emotion classification. Experiments on two speech emotion datasets show that, compared with single-feature classifiers, the method reaches a speech emotion recognition accuracy of up to 96%.
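The exact CNN and multiple-kernel formulation are not given in the abstract; the sketch below approximates the fusion step with a fixed convex combination of RBF kernels over the two feature streams, using a mean-pooled log-mel spectrogram as a stand-in for the CNN embedding:

```python
import numpy as np
import librosa
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def mfcc_vec(y, sr):
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

def spec_vec(y, sr):
    """Stand-in for the CNN embedding: mean-pooled log-mel spectrogram."""
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    return librosa.power_to_db(S).mean(axis=1)

def fused_kernel(A1, B1, A2, B2, w=0.5):
    """Convex combination of per-stream RBF kernels; real MKL would learn w."""
    return w * rbf_kernel(A1, B1) + (1 - w) * rbf_kernel(A2, B2)

# Training: clf = SVC(kernel="precomputed").fit(fused_kernel(X1, X1, X2, X2), y)
# Testing:  clf.predict(fused_kernel(T1, X1, T2, X2))
```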

3.
Based on an analysis of how tampering changes audio features, a passive speech forensics method is proposed. A model is built from Mel-cepstral parameters and their dynamic feature parameters together with statistical moments in the wavelet domain, and a support vector machine (SVM) is chosen as the classifier to find the optimal separating hyperplane, enabling blind verification of the authenticity of a suspect speech signal. Experimental results show the method achieves high detection accuracy for tampering operations that alter the content of speech, such as deletion, splicing, and substitution.
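One plausible reading of the feature vector, sketched below: MFCC and delta means plus wavelet-domain statistical moments (coefficient orders, decomposition depth, and moment choices are assumptions):

```python
import numpy as np
import librosa
import pywt
from scipy.stats import kurtosis, skew

def forensic_features(y, sr):
    """MFCC + delta means plus wavelet-domain statistical moments."""
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    d = librosa.feature.delta(m)
    cepstral = np.concatenate([m.mean(axis=1), d.mean(axis=1)])
    moments = []
    for c in pywt.wavedec(y, "db4", level=4):
        moments += [np.mean(c), np.std(c), skew(c), kurtosis(c)]
    return np.concatenate([cepstral, np.array(moments)])

# An SVM trained on these vectors (1 = tampered, 0 = original) gives the
# blind authenticity decision described above.
```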

4.
Speech produced under different emotions is markedly non-stationary; conventional MFCC reflects only the static characteristics of the signal, whereas empirical mode decomposition (EMD) can finely characterize its non-stationarity. To extract the non-stationary features of emotional speech, EMD decomposes the emotional speech signal into a series of intrinsic mode function (IMF) components; these are passed through Mel filters, their log energies are taken, and an inverse DCT yields an improved MFCC used as a new feature for emotion recognition. A support vector machine classifies four emotions: happiness, anger, boredom, and fear. Simulation results show the improved MFCC reaches a 77.17% recognition rate, with improvements of up to 3.26% at various signal-to-noise ratios.
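A rough sketch of the improved-MFCC computation using the PyEMD package; frame length, filter-bank size, and the cap on IMF count are assumptions, since the abstract leaves them open:

```python
import numpy as np
import librosa
from scipy.fftpack import dct
from PyEMD import EMD  # pip install EMD-signal

def emd_mfcc(frame, sr, n_mels=26, n_coeff=13, max_imfs=5):
    """Improved MFCC of one frame: EMD -> per-IMF Mel log-energies -> DCT."""
    imfs = EMD().emd(frame)[:max_imfs]           # IMF count varies; cap it
    mel_fb = librosa.filters.mel(sr=sr, n_fft=512, n_mels=n_mels)
    feats = []
    for imf in imfs:
        power = np.abs(np.fft.rfft(imf, n=512)) ** 2
        logmel = np.log(mel_fb @ power + 1e-10)  # Mel filtering + log energy
        feats.append(dct(logmel, norm="ortho")[:n_coeff])
    return np.concatenate(feats)
```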

5.
Building on linear prediction (LP) residual conversion under a sinusoidal excitation model, a voice conversion method that improves the conversion of speech features is proposed. Within an LP analysis-synthesis framework, the method extracts the source speaker's linear predictive coding (LPC) cepstral envelope with a spectral-envelope-estimation vocoder and converts that envelope with a bilinear transform function; in parallel, the LP residual is modeled and decomposed by a harmonic sinusoidal model, and a pitch-frequency transformation converts the source speaker's residual into an approximation of the target speaker's. The modified residual finally excites a time-varying filter to produce the converted speech, with the filter parameters updated in real time from the converted LPC cepstral envelope. Experiments show good subjective and objective results: the method effectively converts speaker voice characteristics and produces converted speech highly similar to the target.
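The bilinear cepstral warping can be illustrated with the standard all-pass frequency-warping recursion (the same recursion SPTK's freqt implements); the warp factor below is illustrative, whereas the paper would fit it from source-target training data:

```python
import numpy as np

def warp_cepstrum(c, alpha, order):
    """Warp cepstral coefficients through the all-pass bilinear map
    z**-1 -> (z**-1 - alpha) / (1 - alpha * z**-1) (Oppenheim recursion)."""
    out = np.zeros(order + 1)
    for ci in reversed(c):
        prev = out.copy()
        out[0] = ci + alpha * prev[0]
        if order >= 1:
            out[1] = (1 - alpha ** 2) * prev[0] + alpha * prev[1]
        for k in range(2, order + 1):
            out[k] = prev[k - 1] + alpha * (prev[k] - out[k - 1])
    return out

c_src = np.random.randn(20)                      # stand-in LPC cepstrum
c_converted = warp_cepstrum(c_src, alpha=0.1, order=19)
```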

6.
Based on a study of voice conversion, a method for converting source-speaker characteristics into target-speaker characteristics is proposed. The conversion features fall into two classes: (1) spectral feature parameters; (2) pitch and intonation patterns. The signal model and the conversion method are described for each. Spectral features are modeled with phoneme-based two-dimensional HMMs, and the F0 trajectory represents pitch and intonation. Pitch-synchronous overlap-add (PSOLA) transforms the pitch period, intonation, and speaking rate.
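PSOLA needs pitch marks and is not in mainstream Python libraries; as a stand-in, librosa's phase-vocoder effects show the same two prosodic operations (note these are not PSOLA itself, which operates pitch-synchronously in the time domain):

```python
import librosa

y, sr = librosa.load("speech.wav", sr=None)  # placeholder path, mono speech

# The two prosodic modifications the abstract applies with PSOLA:
y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=2.0)  # +2 semitones
y_slower = librosa.effects.time_stretch(y_shifted, rate=0.9)    # 10% slower
```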

7.
Considering the characteristics of the wavelet transform and of speech signals, the paper combines wavelet transforms with morphological filtering to extract the pitch period of speech, and on that basis combines the wavelet transform with the speaker's vocal-tract feature parameters to extract vocal-tract characteristics. Building on this work, a speech monitoring system for police investigation and forensic identification is designed.
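A crude sketch of the pitch-extraction idea: smooth a low-band wavelet approximation with grey-scale morphological opening and closing, then read the period off the peak spacing. Structuring-element sizes and the peak-distance bound are assumptions:

```python
import numpy as np
import pywt
from scipy.ndimage import grey_closing, grey_opening
from scipy.signal import find_peaks

def pitch_period(frame, sr, wavelet="db4", level=3):
    """Pitch period (seconds) from a morphologically smoothed low band."""
    approx = pywt.wavedec(frame, wavelet, level=level)[0]  # low-band signal
    fs_low = sr / 2 ** level                               # its sample rate
    smooth = grey_closing(grey_opening(approx, size=5), size=5)
    peaks, _ = find_peaks(smooth, distance=max(1, int(fs_low / 400)))
    if len(peaks) < 2:
        return None                                        # unvoiced frame
    return float(np.mean(np.diff(peaks))) / fs_low
```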

8.
严斌峰  朱小燕  张智江  张范 《软件学报》2006,17(12):2547-2553
A support-vector-machine-based decision method for speech recognition verification that combines multiple confidence features is proposed. Segment-wise posterior probabilities and linear-predictive-coding recognition-result confidence features are extracted from the utterance to be verified, with the posterior probabilities approximated via a garbage model; an SVM classifier is designed to combine the confidence features and produce the final verification result. Experimental results show that the proposed confidence features and SVM classifier achieve good verification performance.
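The final decision reduces to a two-class SVM over the per-segment confidence features; a toy sketch, where the feature values are illustrative placeholders rather than the paper's data:

```python
import numpy as np
from sklearn.svm import SVC

# Toy rows: [segment posterior prob. (garbage-model approximation),
#            LPC-based recognition confidence]; 1 = accept, 0 = reject.
X_train = np.array([[0.91, 0.80], [0.35, 0.42], [0.88, 0.75], [0.20, 0.30]])
y_train = np.array([1, 0, 1, 0])

verifier = SVC(kernel="rbf").fit(X_train, y_train)
accept = bool(verifier.predict([[0.85, 0.70]])[0])  # final accept/reject
```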

9.
To overcome the shortcomings of computing common vectors with preset parameters, a common-vector-based speaker recognition algorithm for abnormal speech is proposed. First, the parameters for computing the common vectors are adaptively adjusted according to the system recognition rate; the parameters giving the highest recognition rate are taken as optimal and used to extract common vectors for the test speech, and an SVM classifier then performs speaker classification on the abnormal speech. Experimental results show that, for speakers with a mild cold, the extracted common vectors achieve an 85.4% recognition rate, improvements of 16.9%, 15.2%, and 3.2% over a GMM on unprocessed features, an SVM, and a GMM combined with common vectors, respectively.
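The common vector of a class can be computed by projecting out the span of within-class difference vectors; a sketch via QR, which assumes fewer samples than feature dimensions (the adaptive parameter search the abstract describes is omitted):

```python
import numpy as np

def common_vector(X):
    """X: (n_samples, dim) features of one speaker, n_samples - 1 < dim.
    Removes the span of within-class difference vectors from a sample."""
    diffs = (X[1:] - X[0]).T                 # difference subspace, (dim, n-1)
    Q, _ = np.linalg.qr(diffs)               # orthonormal basis of that span
    return X[0] - Q @ (Q.T @ X[0])           # the invariant common component

# One common vector per speaker (or per utterance, depending on protocol)
# then feeds the SVM classifier described in the abstract.
```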

10.
Statistical Approach for Voice Personality Transformation   Cited by: 1 (self-citations: 0, citations by others: 1)
A voice transformation method which changes the source speaker's utterances so as to sound similar to those of a target speaker is described. Speaker individuality transformation is achieved by altering the LPC cepstrum, average pitch period and average speaking rate. The main objective of the work is to build a nonlinear relationship between the parameters of the acoustical features of two speakers, based on a probabilistic model. The conversion rules involve probabilistic classification and a cross-correlation probability between the acoustic features of the two speakers. The parameters of the conversion rules are obtained by maximum-likelihood estimation on the training data. To obtain transformed speech signals that are perceptually closer to the target speaker's voice, prosody modification is also applied, achieved by scaling the excitation spectrum and by time-scale modification with appropriate modification factors. An evaluation by objective tests and informal listening tests clearly indicated the effectiveness of the proposed transformation method. We also confirmed that the proposed method leads to smoothly evolving spectral contours over time, which, from a perceptual standpoint, produced results superior to conventional vector quantization (VQ)-based methods.
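The probabilistic conversion rules are close in spirit to the classic joint-GMM mapping; a sketch of that mapping with scikit-learn's ML-trained mixture (this is the textbook formulation, not necessarily the paper's exact rule):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(src, tgt, n_mix=8):
    """src, tgt: time-aligned (n_frames, d) cepstral features."""
    return GaussianMixture(n_mix, covariance_type="full").fit(
        np.hstack([src, tgt]))

def _gauss(x, mu, S):
    diff = x - mu
    return np.exp(-0.5 * diff @ np.linalg.solve(S, diff)) / np.sqrt(
        (2 * np.pi) ** len(mu) * np.linalg.det(S))

def convert(gmm, x, d):
    """MMSE mapping E[y | x] under the joint GMM, one source frame x."""
    mu_x, mu_y = gmm.means_[:, :d], gmm.means_[:, d:]
    S_xx, S_yx = gmm.covariances_[:, :d, :d], gmm.covariances_[:, d:, :d]
    resp = np.array([w * _gauss(x, m, S)
                     for w, m, S in zip(gmm.weights_, mu_x, S_xx)])
    resp /= resp.sum()                        # mixture responsibilities
    return sum(r * (my + Syx @ np.linalg.solve(Sxx, x - mx))
               for r, mx, my, Syx, Sxx in zip(resp, mu_x, mu_y, S_yx, S_xx))
```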

11.
Electronic voice disguise means using a voice changer or speech-processing software to alter a speaker's individual characteristics in order to deliberately conceal the speaker's identity. Restoring electronically disguised speech means converting the disguised speech back to the original voice by technical means, which is important for voice-based identity verification. This paper casts the restoration of frequency-domain and time-domain disguised speech as the problem of estimating the disguise factor, estimates that factor with i-vector-based automatic speaker verification, and introduces a symmetric transformation to further improve the estimate. Exploiting the noise robustness of i-vectors, the method raises the accuracy of disguise-factor estimation in realistic noisy scenes and thereby improves the restoration of electronically disguised speech under noise. With the i-vector extractor trained on the clean TIMIT corpus and the method tested on the noisy VoxCeleb1 corpus, the disguise-factor estimation error rate drops from 9.19% for the baseline system to 4.49%, and the restored speech also improves in automatic-speaker-verification equal error rate and in perceptual quality.
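A sketch of the estimation step as a grid search over candidate restoration factors scored by speaker similarity; a mean-MFCC embedding stands in for the i-vector extractor the paper trains, and librosa's pitch shift stands in for inverting the frequency-domain disguise:

```python
import numpy as np
import librosa

def embed(y, sr):
    """Stand-in speaker embedding (mean MFCC); the paper uses i-vectors."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def estimate_disguise_factor(y_disguised, sr, enroll_emb):
    """Grid-search the semitone shift whose inverse best restores the
    claimed speaker, judged by similarity to an enrollment embedding."""
    candidates = np.arange(-8.0, 8.5, 0.5)
    scores = [cosine(embed(librosa.effects.pitch_shift(
                  y_disguised, sr=sr, n_steps=-s), sr), enroll_emb)
              for s in candidates]
    return candidates[int(np.argmax(scores))]
```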

12.
In the last two decades, non-invasive methods based on acoustic analysis of the voice signal have proved to be an excellent and reliable tool for diagnosing vocal fold pathologies. This paper proposes a new feature vector based on the wavelet packet transform and singular value decomposition for the detection of vocal fold pathology. k-means clustering based feature weighting is proposed to increase the discriminating power of the features. Two databases are used: the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database and the MAPACI speech pathology database. Four supervised classifiers are employed to test the proposed features: k-nearest neighbour (k-NN), least-squares support vector machine, probabilistic neural network and general regression neural network. The experimental results show that the proposed features give a very promising classification accuracy of 100% on both the MEEI and MAPACI databases.
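A sketch of the proposed descriptor: singular values of the terminal-node wavelet-packet coefficient matrix, via PyWavelets (the wavelet and decomposition depth here are assumptions):

```python
import numpy as np
import pywt

def wp_svd_features(y, wavelet="db4", maxlevel=3):
    """Singular values of the terminal-node wavelet-packet coefficient
    matrix as a fixed-length descriptor of one voice recording."""
    wp = pywt.WaveletPacket(data=y, wavelet=wavelet, maxlevel=maxlevel)
    nodes = [n.data for n in wp.get_level(maxlevel, order="natural")]
    length = min(len(d) for d in nodes)
    M = np.vstack([d[:length] for d in nodes])   # (2**maxlevel, length)
    return np.linalg.svd(M, compute_uv=False)    # 2**maxlevel values

# The paper additionally weights features by how well each dimension
# separates k-means clusters of the training set before classification.
```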

13.
李燕萍  曹盼  左宇涛  张燕  钱博 《自动化学报》2022,48(7):1824-1833
A voice conversion method based on i-vectors and a variational-autoencoder relativistic generative adversarial network is proposed, achieving high-quality many-to-many voice conversion under non-parallel text conditions. A good voice conversion system must both preserve the naturalness of the reconstructed speech and accurately capture the target speaker's individual characteristics. First, to improve the naturalness of the synthesized speech, a relativistic GAN with better generative performance replaces the Wasserstein GAN in the VAE-GAN model: by constructing a relativistic discriminator whose output depends on the relative value between real and generated samples, the instability and slow convergence of the Wasserstein GAN are overcome. Second, to raise the speaker similarity of the converted speech, i-vectors rich in speaker information are introduced at the decoding stage so the model fully learns personalized speaker characteristics. Objective and subjective experiments show the converted speech lowers the average mel-cepstral distortion by 4.80% relative to the baseline model, raises the mean opinion score by 5.12%, and raises the ABX score by 8.60%, confirming significant improvements in both naturalness and speaker similarity and demonstrating high-quality voice conversion.
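The relativistic discriminator can be made concrete as the relativistic average GAN losses, sketched here in PyTorch; the surrounding VAE and the i-vector conditioning of the decoder are omitted:

```python
import torch
import torch.nn.functional as F

def ragan_d_loss(d_real, d_fake):
    """Relativistic average discriminator loss: each real (fake) logit is
    judged relative to the mean fake (real) logit."""
    return 0.5 * (
        F.binary_cross_entropy_with_logits(
            d_real - d_fake.mean(), torch.ones_like(d_real))
        + F.binary_cross_entropy_with_logits(
            d_fake - d_real.mean(), torch.zeros_like(d_fake)))

def ragan_g_loss(d_real, d_fake):
    """Generator loss: the mirror image, pushing fakes above the mean real."""
    return 0.5 * (
        F.binary_cross_entropy_with_logits(
            d_fake - d_real.mean(), torch.ones_like(d_fake))
        + F.binary_cross_entropy_with_logits(
            d_real - d_fake.mean(), torch.zeros_like(d_real)))
```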

14.
A LabVIEW-based voice identity authentication system is designed. Using LabVIEW 2009 as the development platform, an improved Mel-frequency cepstral coefficient method extracts speech features, and a vector quantization model performs recognition, achieving text- and gender-independent voiceprint recognition. Experimental results show the system effectively withstands environmental noise and variation in the speaker's voice.
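Outside LabVIEW, the VQ model can be sketched as a per-speaker k-means codebook over MFCC frames, with average quantization distortion as the matching score (codebook size is an assumption):

```python
import librosa
from sklearn.cluster import KMeans

def train_codebook(y, sr, size=32):
    frames = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # (n_frames, 13)
    return KMeans(n_clusters=size, n_init=10).fit(frames)

def avg_distortion(codebook, y, sr):
    frames = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T
    return codebook.transform(frames).min(axis=1).mean()

# Enrollment: one codebook per speaker. Verification: accept the claimed
# identity if its codebook yields the lowest (or below-threshold) distortion.
```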

15.
The paper presents a novel automatic speaker age and gender identification approach that combines seven different methods at both acoustic and prosodic levels to improve on the baseline performance. The three baseline subsystems are (1) a Gaussian mixture model (GMM) based on mel-frequency cepstral coefficient (MFCC) features, (2) a support vector machine (SVM) based on GMM mean supervectors and (3) an SVM based on 450-dimensional utterance-level features including acoustic, prosodic and voice quality information. In addition, we propose four subsystems: (1) an SVM based on UBM weight posterior-probability supervectors using the Bhattacharyya probability product kernel, (2) sparse representation based on UBM weight posterior-probability supervectors, (3) an SVM based on GMM maximum likelihood linear regression (MLLR) matrix supervectors and (4) an SVM based on the polynomial expansion coefficients of syllable-level prosodic feature contours in voiced speech segments. Contours of pitch, time-domain energy, frequency-domain harmonic structure energy and formants for each syllable (segmented using energy information in the voiced segment) are analyzed in subsystem (4). The four proposed subsystems are shown to be effective, achieving competitive results in classifying different age and gender groups. To further improve the overall classification performance, weighted-summation fusion of the seven subsystems at the score level is applied. Experimental results are reported on the development and test sets of the 2010 Interspeech Paralinguistic Challenge aGender database. Compared to the SVM baseline system (3), the baseline suggested by the challenge committee, the proposed fusion system achieves 5.6% absolute improvement in unweighted accuracy for the age task and 4.2% for the gender task on the development set. On the final test set, we obtain 3.1% and 3.8% absolute improvement, respectively.
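The score-level fusion itself is a weighted sum; a minimal sketch, where the weights are placeholders (the paper tunes them on the development set):

```python
import numpy as np

def fuse_scores(score_mats, weights):
    """score_mats: one (n_utterances, n_classes) matrix per subsystem,
    each score-normalized; returns fused class decisions."""
    fused = sum(w * s for w, s in zip(weights, score_mats))
    return fused.argmax(axis=1)

# With the paper's seven subsystems:
# preds = fuse_scores([s1, s2, s3, s4, s5, s6, s7], weights)
```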

16.
The dynamic use of voice qualities in spoken language can reveal useful information about a speaker's attitude, mood and affective state. This information may be very desirable for a range of speech technology applications, both input and output. However, voice quality annotation of speech signals frequently produces far from consistent labeling: groups of annotators may disagree on the perceived voice quality, but whom should one trust, or is the truth somewhere in between? This study first describes a voice quality feature set suitable for differentiating voice qualities along a tense-to-breathy dimension. It then uses these features as inputs to a fuzzy-input fuzzy-output support vector machine (F2SVM) algorithm, which can softly categorize voice quality recordings. In a thorough analysis the F2SVM is compared to standard crisp approaches and shows promising results, outperforming, for example, standard support vector machines, the sole difference being that the F2SVM receives fuzzy label information during training. Overall, accuracies of around 90% are achievable for both speaker-dependent (cross-validation) and speaker-independent (leave-one-speaker-out) experiments. Additionally, the F2SVM reaches 82% accuracy in a frame-wise cross-corpus experiment (training and testing on entirely different recording conditions) and around 97% after temporally integrating over full sentences. Furthermore, the fuzzy outputs came close to the performance of human annotators.
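F2SVM itself is not available in mainstream libraries; a crisp approximation trains scikit-learn's SVC with annotator agreement as sample weights, which keeps the fuzzy inputs but drops the fuzzy output that the paper shows matters:

```python
import numpy as np
from sklearn.svm import SVC

def fit_soft_labels(X, fuzzy_labels):
    """X: frame-wise voice-quality features; fuzzy_labels[i, c] = fraction
    of annotators assigning frame i to quality class c (tense..breathy)."""
    hard = fuzzy_labels.argmax(axis=1)    # most-agreed class as crisp label
    agreement = fuzzy_labels.max(axis=1)  # annotator agreement as weight
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, hard, sample_weight=agreement)
    return clf  # predict_proba gives a soft output, loosely like F2SVM
```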

17.
Wu  Xing  Ji  Sihui  Wang  Jianjia  Guo  Yike 《Applied Intelligence》2022,52(13):14839-14852

Human beings can imagine a person's voice from his or her appearance, because different people have different voice characteristics. Although researchers have made great progress in single-view speech synthesis, there are few studies on multi-view speech synthesis, especially synthesis from face images. On the basis of the implicit relationship between a speaker's face image and his or her voice, we propose a multi-view speech synthesis method called SSFE (Speech Synthesis with Face Embeddings). The proposed SSFE consists of three parts: a voice encoder, a face encoder and an improved multi-speaker text-to-speech (TTS) engine. The voice encoder generates voice embeddings from the speaker's speech, while the face encoder extracts voice features from the speaker's face as f-voice embeddings; the multi-speaker TTS engine then synthesizes speech from either kind of embedding. Extensive experiments evaluating the synthesized speech quality and the face-voice matching degree give the SSFE a Mean Opinion Score above 3.7 and a matching degree of about 1.7. The results show that SSFE outperforms state-of-the-art methods in both speech quality and face-voice matching degree.

