Similar Literature
1.
The great majority of current voice technology applications rely on acoustic features, such as the widely used MFCC or LP parameters, which characterize the vocal tract response. Nonetheless, the major source of excitation, namely the glottal flow, is expected to convey useful complementary information. The glottal flow is the airflow passing through the vocal folds at the glottis. Unfortunately, glottal flow analysis from speech recordings requires specific and complex processing operations, which explains why it has generally been avoided. This paper gives a comprehensive overview of techniques for glottal source processing. Starting from analysis tools for pitch tracking, detection of glottal closure instants, and estimation and modeling of the glottal flow, it discusses how these tools and techniques can be properly integrated into various voice technology applications.
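The pitch-tracking step mentioned above can be sketched with a plain autocorrelation estimator. The frame length, search band and sampling rate below are illustrative choices, not values from the paper:

```python
import numpy as np

def autocorr_pitch(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate F0 of a voiced frame by locating the autocorrelation peak."""
    frame = frame - frame.mean()
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(fs / fmax)          # shortest admissible lag
    hi = int(fs / fmin)          # longest admissible lag
    lag = lo + np.argmax(r[lo:hi])
    return fs / lag

# usage: a 200 Hz sine should come back as ~200 Hz
fs = 8000
t = np.arange(int(0.04 * fs)) / fs            # one 40 ms frame
f0 = autocorr_pitch(np.sin(2 * np.pi * 200 * t), fs)   # ≈ 200.0
```

For real speech one would add a voicing decision and median smoothing of the contour; a pure tone suffices to show the idea.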

2.
To provide a basis for selecting feature parameters in pathological voice recognition, an asymmetric mechanical model of the vocal folds is proposed to simulate and analyze diseased vocal folds. Based on the layered structure and tissue properties of the vocal folds, a mechanical model is established and coupled with the glottal airflow, and the glottal source excitation waveform output by the model is computed. A genetic particle swarm optimization algorithm combined with the quasi-Newton method (GPSO-QN) is used to match the model's glottal source output to the actual target glottal waveform and to extract optimized model parameters. Simulation results show that the vocal fold model can produce glottal waveforms consistent with actual glottal sources, and also confirm that asymmetry between the physiological tissues of the left and right vocal folds is an important cause of pathological voice.

3.
This study used actual laryngeal videostroboscopy videos taken by physicians in clinical practice as the samples for experimental analysis. The samples were dynamic vocal fold videos. Image processing technology was used to automatically capture the frame with the largest glottal area from each video to obtain the physiological data of the vocal folds. An automatic vocal fold disease identification system was designed that derives physiological parameters for normal vocal folds, vocal fold paralysis and vocal nodules from image processing according to the pathological features. A decision tree algorithm was used as the classifier of vocal fold diseases. The identification rate was 92.6%, and with an image recognition improvement procedure applied after classification it rose to 98.7%. Hence, the proposed system has value in clinical practice.
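The largest-glottal-area frame selection described above can be approximated by simple thresholding; the threshold value, the dark-gap assumption and the toy frames below are illustrative, not the paper's method:

```python
import numpy as np

def largest_glottal_area_frame(frames, threshold=0.3):
    """Pick the frame with the largest segmented glottal area.

    frames: (T, H, W) grayscale array in [0, 1]; the glottal gap is assumed
    to appear as the darkest region, so pixels below `threshold` are counted.
    """
    areas = (frames < threshold).sum(axis=(1, 2))   # dark pixels per frame
    idx = int(np.argmax(areas))
    return idx, int(areas[idx])

# toy stack of three 4x4 frames: frame 1 has the widest "dark gap"
frames = np.ones((3, 4, 4))
frames[0, 1:3, 1] = 0.0      # narrow gap (2 px)
frames[1, 1:3, 1:3] = 0.0    # widest gap (4 px)
frames[2, 2, 1] = 0.0        # smallest gap (1 px)
idx, area = largest_glottal_area_frame(frames)     # idx == 1, area == 4
```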

4.
This paper describes a robust glottal source estimation method based on a joint source-filter separation technique. The Liljencrants-Fant (LF) model, which models the glottal flow derivative, is integrated into a time-varying ARX speech production model. These two models are estimated in a joint optimization procedure, in which a Kalman filtering process is embedded to adaptively identify the vocal tract parameters. Since the formulated joint estimation problem is a multiparameter nonlinear optimization, the procedure is separated into two passes. The first pass initializes the glottal source and vocal tract models by solving a quasi-convex approximate optimization problem. Given these robust initial values, the joint estimation procedure then refines the models with a trust-region descent optimization algorithm. Experiments with synthetic and real voice signals show that the proposed method estimates glottal source parameters robustly and with a high degree of accuracy.
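The LF glottal flow derivative that the method fits can be sketched as follows. This is a simplified parameterization (the growth factor `alpha` is fixed rather than solved from the LF area-balance constraint, so net flow is not exactly zero); it illustrates the waveform shape, not the paper's estimator:

```python
import numpy as np

def lf_derivative(fs, f0=100.0, Ee=1.0, tp=0.45, te=0.6, ta=0.03):
    """Simplified LF glottal flow derivative over one period.

    tp, te, ta are fractions of the period T; Ee is the excitation strength
    (magnitude of the negative peak at glottal closure).
    """
    T = 1.0 / f0
    tp, te, ta = tp * T, te * T, ta * T
    tc = T
    alpha = 3.0 / T                        # assumed growth factor (not solved)
    # scale the open phase so it ends exactly at -Ee
    E0 = -Ee / (np.exp(alpha * te) * np.sin(np.pi * te / tp))
    # solve eps*ta = 1 - exp(-eps*(tc - te)) by fixed-point iteration
    eps = 1.0 / ta
    for _ in range(50):
        eps = (1.0 - np.exp(-eps * (tc - te))) / ta
    t = np.arange(int(T * fs)) / fs
    e = np.where(
        t <= te,
        E0 * np.exp(alpha * t) * np.sin(np.pi * t / tp),     # open phase
        -(Ee / (eps * ta))                                    # return phase
        * (np.exp(-eps * (t - te)) - np.exp(-eps * (tc - te))),
    )
    return t, e

t, e = lf_derivative(fs=16000)   # minimum is -Ee, reached at t = te
```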

5.
《Advanced Robotics》2013,27(1-2):105-120
We developed a three-dimensional mechanical vocal cord model for Waseda Talker No. 7 (WT-7), an anthropomorphic talking robot, for generating speech sounds with various voice qualities. The vocal cord model is a cover model with two thin folds made of thermoplastic material. The model self-oscillates under the airflow exhausted from the lung model and generates the glottal sound source, which is fed into the vocal tract for generating the speech sound. Using the vocal cord model, breathy and creaky voices, as well as the modal (normal) voice, were produced in a manner similar to human laryngeal control. The breathy voice is characterized by a noisy component mixed with the periodic glottal sound source, and the creaky voice by an extremely low-pitch vibration. The breathy voice was produced by adjusting the glottal opening and generating turbulence noise with the airflow just above the glottis. The creaky voice was produced by adjusting the vocal cord tension, the sub-glottal pressure and the vibrating mass so as to generate a double-pitch vibration with a long pitch interval. The vocal cord model was evaluated in terms of the vibration pattern measured by a high-speed camera, the glottal airflow and the acoustic characteristics of the glottal sound source, as compared with human data.

6.
Glottal stop sounds in Amharic are produced by abrupt closure of the glottis, without any significant gesture of the accompanying articulatory organs in the vocal tract system. It is difficult to observe the features of the glottal stop through spectral analysis, as spectral features mostly emphasize the characteristics of the vocal tract system. To spot glottal stop sounds in continuous speech, it is therefore necessary to extract features of the excitation source as well, which may require non-spectral methods of analysis. In this paper the linear prediction (LP) residual is used as an approximation to the excitation source signal, and the excitation features are extracted from the LP residual using zero frequency filtering (ZFF). The glottal closure instants (GCIs), or epochs, are identified from the ZFF signal. At each GCI, the cross-correlation coefficients of successive glottal cycles of the LP residual, the normalized jitter and the logarithm of the peak normalized excitation strength (LPNES) are calculated. Further, the parameters of Gaussian approximation models are derived from the distributions of these excitation parameters and used to identify the regions of glottal stop sounds in continuous speech. For the database used in this study, 92.89% of the glottal stop regions are identified correctly, with 8.50% false indications.
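The ZFF stage can be sketched as follows. The zero-frequency resonators are realized as cumulative sums and the trend-removal window is set from an assumed average pitch; both are common choices in the ZFF literature but not necessarily the paper's exact settings:

```python
import numpy as np

def zff_epochs(x, fs, f0_guess=100.0):
    """Locate glottal closure instants (epochs) with zero-frequency filtering.

    The signal is passed through two cascaded zero-frequency resonators
    (four integrations) and the growing trend is removed by repeated
    local-mean subtraction; epochs are the positive-going zero crossings.
    """
    x = np.diff(x, prepend=x[:1])             # remove DC offset
    y = x
    for _ in range(4):                        # 2 resonators = 4 integrations
        y = np.cumsum(y)
    w = int(1.5 * fs / f0_guess) | 1          # odd trend-removal window
    kernel = np.ones(w) / w
    for _ in range(3):                        # repeated mean subtraction
        y = y - np.convolve(y, kernel, mode="same")
    return np.where((y[:-1] < 0) & (y[1:] >= 0))[0]

# usage: impulse-train "excitation" through a damped resonance, f0 = 100 Hz
fs = 8000
n = np.arange(int(0.2 * fs))
exc = np.zeros_like(n, dtype=float)
exc[::fs // 100] = 1.0                        # true epochs every 80 samples
h = np.exp(-n[:200] / 30.0) * np.sin(2 * np.pi * 800 * n[:200] / fs)
speech = np.convolve(exc, h)[:len(n)]
epochs = zff_epochs(speech, fs)               # roughly one epoch per period
```

Edges of the filtered signal are unreliable (the trend removal distorts the first and last window), so in practice only interior crossings are trusted.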

7.
This study deals with a numerical solution of a 2D unsteady flow of a compressible viscous fluid in a channel for low inlet airflow velocity. The unsteadiness of the flow is caused by a prescribed periodic motion of a part of the channel wall with large amplitudes, nearly closing the channel during oscillations. The channel is a simplified model of the glottal space in the human vocal tract and the flow can represent a model of airflow coming from the trachea, through the glottal region with periodically vibrating vocal folds, and to the human vocal tract.

8.
Vocal fry (also called creak, creaky voice, and pulse register phonation) is a voice quality that carries important linguistic or paralinguistic information, depending on the language. We propose a set of acoustic measures and a method for automatically detecting vocal fry segments in speech utterances. A glottal pulse-synchronized method is proposed to deal with the very low fundamental frequency properties of vocal fry segments, which cause problems in the classic short-term analysis methods. The proposed acoustic measures characterize power, aperiodicity, and similarity properties of vocal fry signals. The basic idea of the proposed method is to scan for local power peaks in a “very short-term” power contour for obtaining glottal pulse candidates, check for periodicity properties, and evaluate a similarity measure between neighboring glottal pulse candidates for deciding the possibility of being vocal fry pulses. In the periodicity analysis, autocorrelation peak properties are taken into account for avoiding misdetection of periodicity in vocal fry segments. Evaluation of the proposed acoustic measures in the automatic detection resulted in 74% correct detection, with an insertion error rate of 13%.
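The "scan for local power peaks" idea can be sketched as below. The window length, minimum pulse gap and amplitude threshold are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def pulse_candidates(x, fs, win_ms=4.0, min_gap_ms=10.0):
    """Glottal pulse candidates from peaks of a very short-term power contour."""
    w = max(int(fs * win_ms / 1000), 1)
    power = np.convolve(x * x, np.ones(w) / w, mode="same")
    gap = int(fs * min_gap_ms / 1000)
    cands = []
    for i in range(1, len(power) - 1):
        # local maximum of the power contour
        if power[i] >= power[i - 1] and power[i] > power[i + 1]:
            # enforce a minimum spacing between accepted pulses
            if not cands or i - cands[-1] >= gap:
                # ignore silence-level ripples
                if power[i] > 0.1 * power.max():
                    cands.append(i)
    return np.array(cands)

# usage: sparse damped pulses at ~40 Hz, crudely mimicking vocal fry
fs = 8000
x = np.zeros(fs)
seg = np.exp(-np.arange(80) / 15.0) * np.sin(2 * np.pi * 500 * np.arange(80) / fs)
for p in range(0, fs, fs // 40):              # one pulse every 25 ms
    x[p:p + 80] += seg[:len(x[p:p + 80])]
cands = pulse_candidates(x, fs)               # roughly one candidate per pulse
```

A full detector would then apply the paper's periodicity and inter-pulse similarity checks to these candidates.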

9.
We propose a pitch synchronous approach to designing a voice conversion system that takes into account the correlation between the excitation signal and the vocal tract system characteristics of the speech production mechanism. The glottal closure instants (GCIs), also known as epochs, are used as anchor points for analysis and synthesis of the speech signal. The Gaussian mixture model (GMM) is considered the state-of-the-art method for vocal tract modification in a voice conversion framework. However, GMM based models generate overly-smooth utterances and need to be tuned according to the amount of available training data. In this paper, we propose a support vector machine multi-regressor (M-SVR) based model that requires fewer tuning parameters to capture a mapping function between the vocal tract characteristics of the source and the target speaker. The prosodic features are modified using an epoch based method and compared with the baseline pitch synchronous overlap and add (PSOLA) based method for pitch and time scale modification. The linear prediction residual (LP residual) signal corresponding to each frame of the converted vocal tract transfer function is selected from the target residual codebook using a modified cost function. The cost function is calculated from the mapped vocal tract transfer function and its dynamics, along with the minimum residual phase, pitch period and energy differences with the codebook entries. The LP residual signal corresponding to the target speaker is generated by concatenating the selected frame and its previous frame so as to retain the maximum information around the GCIs. The proposed system is also tested using a GMM based model for vocal tract modification. The average mean opinion score (MOS) and ABX test results are 3.95 and 85 for the GMM based system and 3.98 and 86 for the M-SVR based system, respectively. The subjective and objective evaluation results suggest that the proposed M-SVR based model for vocal tract modification, combined with modified residual selection and an epoch based model for prosody modification, can provide good quality synthesized target output. The results also suggest that the proposed integrated system performs slightly better than the GMM based baseline system designed using either the epoch based or the PSOLA based model for prosody modification.

10.
This paper presents a technique to transform high-effort voices into breathy voices using adaptive pre-emphasis linear prediction (APLP). The primary benefit of this technique is that it estimates a spectral emphasis filter that can be used to manipulate the perceived vocal effort. The other benefit of APLP is that it estimates a formant filter that is more consistent across varying voice qualities. This paper describes how constant pre-emphasis linear prediction (LP) estimates a voice source with a constant spectral envelope even though the spectral envelope of the true voice source varies over time. A listening experiment demonstrates how differences in vocal effort and breathiness are audible in the formant filter estimated by constant pre-emphasis LP. APLP is presented as a technique to estimate a spectral emphasis filter that captures the combined influence of the glottal source and the vocal tract upon the spectral envelope of the voice. A final listening experiment demonstrates how APLP can be used to effectively transform high-effort voices into breathy voices. The techniques presented here are relevant to researchers in voice conversion, voice quality, singing, and emotion.
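The constant pre-emphasis LP baseline that APLP is contrasted with can be sketched as follows (autocorrelation-method LP; the pre-emphasis coefficient 0.97 is a conventional choice, and the AR(2) toy signal is ours, not from the paper):

```python
import numpy as np

def lp_coefficients(x, order):
    """Autocorrelation-method LP: solve the normal equations R a = r."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    # Toeplitz autocorrelation matrix built from |i - j| lags
    R = r[np.abs(np.subtract.outer(np.arange(order), np.arange(order)))]
    return np.linalg.solve(R, r[1:order + 1])     # predictor coefficients a_k

def preemphasize(x, mu=0.97):
    """Constant pre-emphasis: y[n] = x[n] - mu * x[n-1]."""
    return np.append(x[0], x[1:] - mu * x[:-1])

# usage: recover the coefficients of a known AR(2) process
rng = np.random.default_rng(0)
e = rng.standard_normal(8000)
x = np.zeros_like(e)
for i in range(2, len(x)):
    x[i] = 0.5 * x[i - 1] - 0.3 * x[i - 2] + e[i]
a = lp_coefficients(x, 2)        # ≈ [0.5, -0.3]
```

APLP replaces the fixed `mu` with an adaptively estimated spectral emphasis filter; the sketch only shows the constant baseline.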

11.
Youngmin Bae 《Computers & Fluids》2008,37(10):1332-1343
In this study, an INS/PCE splitting method is exploited to compute the vocal sound generated within the glottis by a pulsating air jet with a maximum speed below Mach 0.1. The acoustic field is computed by solving the perturbed compressible equations (PCE), with acoustic sources acquired from the transient hydrodynamic solutions of the incompressible Navier-Stokes equations (INS). The governing equations are spatially discretized with a sixth-order compact scheme and time-integrated by a four-stage Runge-Kutta method. The computed results show that voice quality is closely related to the vortical structure in the shear layer of the pulsating jet, and that the jet characteristics are determined by its local Reynolds number, pulsating (or fundamental) frequency, and glottis closure. It is also found that the rotational motion of the glottis controls the glottal impedance by changing the flow separation points between the leading and trailing edges of the vocal folds, which increases the mechanical efficiency of the glottis as a sound generator in the phonation process.

12.
The objective of a voice conversion system is to formulate the mapping function that transforms the source speaker's characteristics to those of the target speaker. In this paper, we propose a General Regression Neural Network (GRNN) based model for voice conversion. It is a single-pass learning network, which makes the training procedure fast and comparatively less time consuming. The proposed system uses the shape of the vocal tract, the shape of the glottal pulse (excitation signal) and long-term prosodic features to carry out the voice conversion task. The shape of the vocal tract and the shape of the source excitation of a particular speaker are represented using Line Spectral Frequencies (LSFs) and the Linear Prediction (LP) residual, respectively. GRNN is used to obtain the mapping function between the source and target speakers. Direct transformation of the time-domain residual using an Artificial Neural Network (ANN) causes phase changes and generates artifacts in consecutive frames. To alleviate this, wavelet packet decomposed coefficients are used to characterize the excitation of the speech signal. The long-term prosodic parameters, namely the pitch contour (intonation) and the energy profile of the test signal, are also modified in relation to those of the target (desired) speaker using the baseline method. The relative performance of the proposed model is compared to voice conversion systems based on the state-of-the-art RBF and GMM models using objective and subjective evaluation measures. The evaluation measures show that the proposed GRNN based voice conversion system performs slightly better than the state-of-the-art models.
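A GRNN is essentially a Gaussian-kernel weighted average of the training targets, which is why it learns in a single pass and has only one smoothing parameter to tune. A minimal sketch on a toy linear mapping (the data and the value of `sigma` are illustrative):

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=0.1):
    """General Regression Neural Network prediction: a Gaussian-kernel
    weighted average of training targets (no iterative training)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)        # squared distances to x
    w = np.exp(-d2 / (2.0 * sigma ** 2))           # pattern-layer activations
    return (w[:, None] * Y_train).sum(axis=0) / w.sum()

# usage: learn a toy source-to-target mapping y = 2x + 1
X = np.linspace(0, 1, 101)[:, None]
Y = 2.0 * X + 1.0
y = grnn_predict(X, Y, np.array([0.5]), sigma=0.02)   # ≈ [2.0]
```

In a voice conversion setting, `X_train`/`Y_train` would hold aligned source/target feature vectors (e.g. LSFs) instead of scalars.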

13.
Acoustic analysis is a very promising method for diagnosing voice pathology; here, continuous wavelet analysis is used to extract voice feature parameters. This paper proposes an SVM-based algorithm for classifying pathological voices; with a radial basis function (RBF) kernel, a classification accuracy of 97% is achieved.

14.
This paper presents a new glottal inverse filtering (GIF) method that utilizes a Markov chain Monte Carlo (MCMC) algorithm. First, initial estimates of the vocal tract and glottal flow are evaluated by an existing GIF method, iterative adaptive inverse filtering (IAIF). Simultaneously, the initially estimated glottal flow is synthesized using the Rosenberg–Klatt (RK) model and filtered with the estimated vocal tract filter to create a synthetic speech frame. In the MCMC estimation process, the first few poles of the initial vocal tract model and the RK excitation parameter are refined in order to minimize the error between the synthetic and original speech signals in the time and frequency domain. MCMC approximates the posterior distribution of the parameters, and the final estimate of the vocal tract is found by averaging the parameter values of the Markov chain. Experiments with synthetic vowels produced by a physical modeling approach show that the MCMC-based GIF method gives more accurate results compared to two known reference methods.
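The MCMC refinement loop can be sketched with a plain Metropolis sampler: propose a parameter change, accept it with probability tied to how well the synthetic signal matches the observed one, then average the chain. The speech-specific vocal tract and RK models are replaced here by a toy one-parameter model, and the Gaussian error model and step size are assumptions:

```python
import numpy as np

def metropolis_refine(observed, synthesize, theta0, n_iter=4000, step=0.1,
                      noise_var=1.0, rng=None):
    """Metropolis sampling of a model parameter; the posterior mean over the
    second half of the chain is the final estimate."""
    rng = rng if rng is not None else np.random.default_rng(0)

    def log_post(th):
        err = observed - synthesize(th)
        return -0.5 * np.dot(err, err) / noise_var   # Gaussian error model

    chain = np.empty(n_iter)
    theta, lp = theta0, log_post(theta0)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal()  # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:      # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain[n_iter // 2:].mean()                # average after burn-in

# usage: recover the gain of a toy "synthesis" model y = theta * x
rng = np.random.default_rng(1)
x = rng.standard_normal(200)
observed = 0.8 * x + 0.05 * rng.standard_normal(200)
est = metropolis_refine(observed, lambda th: th * x, theta0=0.0, rng=rng)
```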

15.
Acoustic parameters extracted from recorded voice samples are actively pursued for accurate detection of vocal fold pathology. Most systems for detection of vocal fold pathology use high quality voice samples. This paper proposes a hybrid expert system approach to detect vocal fold pathology from compressed/low quality voice samples, comprising feature extraction using the wavelet packet transform, clustering based feature weighting and classification. In order to improve the robustness and discrimination ability of the wavelet packet transform based features (raw features), we propose clustering based feature weighting methods including k-means clustering (KMC), fuzzy c-means (FCM) clustering and subtractive clustering (SBC). We investigate the effectiveness of the raw and weighted features (obtained after applying the feature weighting methods) using four different classifiers: Least Squares Support Vector Machine (LS-SVM) with a radial basis kernel, k-nearest neighbor (kNN) classifier, probabilistic neural network (PNN) and classification and regression tree (CART). The proposed hybrid expert system approach gives a promising classification accuracy of 100% using the feature weighting methods, and has potential application in remote detection of vocal fold pathology.

16.
The glottal excitation signal is the source signal of speech and can be used for effective extraction of speech feature parameters. Two methods for obtaining the glottal excitation from observed speech are studied: linear prediction and the cepstral method. Computer simulation experiments on actual recorded speech compare the performance and characteristics of the two methods. The results show that the cepstral method obtains the glottal excitation, and excitation feature parameters such as the pitch period derived from it, with high accuracy, but at a relatively large computational cost. Linear prediction, owing to its efficient algorithms, not only obtains the glottal excitation quickly but also simultaneously yields important parameters such as the vocal tract model parameters and the speech power spectrum, and is therefore the commonly used method for obtaining the glottal excitation.
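Obtaining the glottal excitation as the LP residual amounts to inverse filtering the speech with the prediction-error filter A(z) = 1 - Σ a_k z^(-k). A minimal sketch (the AR(2) toy signal and its coefficients are ours, for illustration only):

```python
import numpy as np

def lp_residual(x, a):
    """Inverse-filter x with A(z) = 1 - sum_k a_k z^-k to get the LP residual."""
    inv = np.concatenate(([1.0], -np.asarray(a, dtype=float)))
    return np.convolve(x, inv)[:len(x)]

# usage: with the true AR coefficients, the residual recovers the excitation
rng = np.random.default_rng(0)
e = rng.standard_normal(5000)                 # white-noise "excitation"
x = np.zeros_like(e)
for i in range(len(x)):
    x[i] = e[i] + (0.5 * x[i - 1] if i >= 1 else 0.0) \
                - (0.3 * x[i - 2] if i >= 2 else 0.0)
res = lp_residual(x, [0.5, -0.3])             # res ≈ e sample for sample
```

In practice the coefficients come from an LP analysis of the speech frame rather than being known exactly, so the residual only approximates the excitation.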

17.
This study deals with the numerical solution of a 2D unsteady flow of a compressible viscous fluid in a channel for low inlet airflow velocity. The unsteadiness of the flow is caused by a prescribed periodic motion of a part of the channel wall with large amplitudes, nearly closing the channel during oscillations. The channel is a simplified model of the glottal space in the human vocal tract and the flow can represent a model of airflow coming from the trachea, through the glottal region with periodically vibrating vocal folds, to the human vocal tract. The flow is described by the system of Navier–Stokes equations for laminar flows. The numerical solution is implemented using the finite volume method (FVM) and the predictor–corrector MacCormack scheme with Jameson artificial viscosity on a grid of quadrilateral cells. Due to the motion of the grid, the basic system of conservation laws is considered in the Arbitrary Lagrangian–Eulerian (ALE) form. The authors present numerical simulations of flow fields in the channel, acquired from a program developed exclusively for this purpose. The numerical results for unsteady flows in the channel are presented for inlet Mach number M = 0.012, Reynolds number Re = 4.5 × 10^3, and wall motion frequencies of 20 and 100 Hz.

18.
Laryngeal high-speed videoendoscopy is a state-of-the-art technique to examine physiological vibrational patterns of the vocal folds. With sampling rates of thousands of frames per second, high-speed videoendoscopy produces a large amount of data that is difficult to analyze subjectively. In order to visualize high-speed video in a straightforward and intuitive way, many methods have been proposed to condense the three-dimensional data into a few static images that preserve characteristics of the underlying vocal fold vibratory patterns. In this paper, we propose the “glottaltopogram,” which is based on principal component analysis of changes over time in the brightness of each pixel in consecutive video images. This method reveals the overall synchronization of the vibrational patterns of the vocal folds over the entire laryngeal area. Experimental results showed that this method is effective in visualizing pathological and normal vocal fold vibratory patterns.
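The per-pixel PCA underlying the glottaltopogram can be sketched via an SVD of the mean-removed pixel-by-time matrix; the synthetic in-phase/anti-phase "video" below is ours, used only to show that the map separates the two phase groups:

```python
import numpy as np

def glottaltopogram(frames):
    """First principal component of each pixel's brightness-over-time signal.

    frames: (T, H, W). Returns an (H, W) map whose sign and magnitude show
    how strongly, and in what phase, each pixel follows the dominant vibration.
    """
    T, H, W = frames.shape
    X = frames.reshape(T, H * W).astype(float)
    X -= X.mean(axis=0)                       # remove each pixel's mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[0].reshape(H, W)                # loadings of the first PC

# usage: left half oscillates in opposite phase to the right half
t = np.arange(64)
s = np.sin(2 * np.pi * t / 16.0)
frames = np.zeros((64, 4, 8))
frames[:, :, :4] = s[:, None, None]           # left half:  +s
frames[:, :, 4:] = -s[:, None, None]          # right half: -s
topo = glottaltopogram(frames)                # halves get opposite-signed loadings
```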

19.
This paper explores excitation source features of the speech production mechanism for characterizing and recognizing emotions from the speech signal. The excitation source signal is obtained from the speech signal using linear prediction (LP) analysis, and is also known as the LP residual. The glottal volume velocity (GVV) signal, derived from the LP residual, is also used to represent the excitation source. The speech signal has a high signal-to-noise ratio around the instants of glottal closure (GC), which are also known as epochs. In this paper, the following excitation source features are proposed for characterizing and recognizing emotions: the sequence of LP residual samples and their phase information, parameters of epochs and their dynamics at syllable and utterance levels, and samples of the GVV signal and its parameters. Auto-associative neural networks (AANN) and support vector machines (SVM) are used for developing the emotion recognition models. Telugu and Berlin emotion speech corpora are used to evaluate the developed models. Anger, disgust, fear, happiness, neutral and sadness are the six emotions considered in this study. Average emotion recognition performance of about 42% to 63% is observed using the different excitation source features, and combining excitation source and spectral features is shown to improve performance up to 84%.

20.
The phonetogram represents an area limited by piano and forte contours of the sound pressure levels along the vocal range. Phonetograms are used in the diagnosis of voice status. Apart from reference phonetograms and the extraction of single parameters, phonetograms have not been evaluated quantitatively in medical practice. The present computer program divided the phonetogram into subareas which were approximated by simple patterns (ellipses). The ellipse parameters were used to evaluate voice efficiency, to recognize voice categories, and to derive diagnostic comments.
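The paper's exact subarea ellipse fitting is not specified in the abstract, so the sketch below uses a covariance (principal-axis) ellipse of an (F0, SPL) point cloud as a stand-in for approximating one phonetogram subarea; the toy data and 1-sigma convention are assumptions:

```python
import numpy as np

def covariance_ellipse(f0, spl):
    """Approximate a phonetogram region by its covariance ellipse:
    returns the center, the semi-axis lengths (1-sigma) and the
    orientation of the major axis in radians."""
    pts = np.column_stack([f0, spl])
    center = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = evals.argsort()[::-1]              # largest eigenvalue first
    evals, evecs = evals[order], evecs[:, order]
    angle = np.arctan2(evecs[1, 0], evecs[0, 0])
    return center, np.sqrt(evals), angle

# usage: an elongated, tilted cloud of (semitone, dB) points
rng = np.random.default_rng(0)
f0 = rng.normal(0.0, 5.0, 500)
spl = 60.0 + 1.5 * f0 + rng.normal(0.0, 2.0, 500)
center, axes, angle = covariance_ellipse(f0, spl)   # axes[0] > axes[1]
```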
