Similar Documents
20 similar documents found (search time: 93 ms)
1.
In the age of digital information, audio data has become an important part of many modern computer applications. Audio classification and indexing have become a focus of research in audio processing and pattern recognition. In this paper, we propose effective algorithms to automatically classify audio clips into one of six classes: music, news, sports, advertisement, cartoon and movie. For these categories, a number of acoustic features, including linear predictive coefficients, linear predictive cepstral coefficients and mel-frequency cepstral coefficients, are extracted to characterize the audio content. The autoassociative neural network (AANN) model is used to capture the distribution of the acoustic feature vectors. The proposed method then uses a Gaussian mixture model (GMM)-based classifier, in which the feature vectors from each class are used to train a GMM for that class. During testing, the likelihood of a test sample under each model is computed and the sample is assigned to the class whose model produces the highest likelihood. Audio clip extraction, feature extraction, index creation and retrieval of the query clip are the major issues in automatic audio indexing and retrieval. A method for indexing the classified audio using LPCC features and the k-means clustering algorithm is proposed.
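The maximum-likelihood decision rule this abstract describes — train one GMM per class, then assign a test vector to the class whose model scores it highest — can be sketched as follows. This is a minimal illustration with synthetic 13-dimensional vectors standing in for real MFCC features; the class names and model sizes are assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 13-dimensional "MFCC-like" vectors for two of the six classes.
train = {
    "music": rng.normal(0.0, 1.0, size=(200, 13)),
    "news":  rng.normal(5.0, 1.0, size=(200, 13)),
}

# One GMM per class, trained only on that class's feature vectors.
models = {name: GaussianMixture(n_components=2, random_state=0).fit(X)
          for name, X in train.items()}

def classify(x):
    # Assign x to the class whose GMM yields the highest log-likelihood.
    return max(models, key=lambda name: models[name].score(x.reshape(1, -1)))
```

A test vector drawn near the "news" distribution is then assigned to "news" by the maximum-likelihood rule; in a real system the per-clip decision would typically aggregate likelihoods over all frames of the clip.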

2.
Today, digital audio applications are part of our everyday lives. Audio classification can provide powerful tools for content management: if an audio clip can be classified automatically, it can be stored in an organised database, which dramatically improves the management of audio. In this paper, we propose effective algorithms to automatically classify audio clips into one of six classes: music, news, sports, advertisement, cartoon and movie. For these categories, a number of acoustic features, including linear predictive coefficients, linear predictive cepstral coefficients and mel-frequency cepstral coefficients, are extracted to characterize the audio content. The autoassociative neural network (AANN) model is used to capture the distribution of the acoustic features of each class, with the backpropagation learning algorithm adjusting the network weights to minimize the mean square error for each feature vector. The proposed method also compares the performance of the AANN with a Gaussian mixture model (GMM), in which the feature vectors from each class are used to train a GMM for that class. During testing, the likelihood of a test sample under each model is computed and the sample is assigned to the class whose model produces the highest likelihood.

3.
Recently, much research has been directed towards natural language processing. However, the baby's cry, which serves as the primary means of communication for infants, has not yet been extensively explored, because it is not a language that can be easily understood. Since cry signals carry information about a baby's wellbeing and can, to an extent, be understood by experienced parents and experts, recognition and analysis of an infant's cry is not only possible but also has profound medical and societal applications. In this paper, we obtain and analyze audio features of infant cry signals in the time and frequency domains. Based on these features, we classify given cry signals into specific cry meanings for cry language recognition. The features extracted from the audio feature space include linear predictive coding (LPC), linear predictive cepstral coefficients (LPCC), Bark frequency cepstral coefficients (BFCC), and Mel frequency cepstral coefficients (MFCC). A compressed sensing technique is used for classification, and practical data are used to design and verify the proposed approaches. Experiments show that the proposed infant cry recognition approaches offer accurate and promising results.
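Of the features listed above, the LPC coefficients are the most self-contained to reproduce. A standard autocorrelation-method formulation with the Levinson-Durbin recursion — a generic textbook sketch, not the paper's implementation — looks like this:

```python
import numpy as np

def lpc(signal, order):
    """LPC prediction-error filter coefficients a[0..order] (a[0] = 1),
    computed by the Levinson-Durbin recursion on the autocorrelation."""
    n = len(signal)
    # Biased autocorrelation r[0..order].
    r = np.array([signal[:n - k] @ signal[k:] for k in range(order + 1)])
    a, err = np.zeros(order + 1), r[0]
    a[0] = 1.0
    for i in range(1, order + 1):
        # Reflection coefficient for order i.
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        new_a = a.copy()
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        a, err = new_a, err * (1.0 - k * k)
    return a
```

For a stable AR(2) signal x[t] = 0.5·x[t-1] − 0.3·x[t-2] + e[t], the recovered coefficients approach (1, −0.5, 0.3), matching the prediction-error filter of the generating process.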

4.
Machine-learning based classification of speech and music
The need to classify audio into categories such as speech or music is an important aspect of many multimedia document retrieval systems. In this paper, we investigate audio features that have not previously been used in music-speech classification, such as the mean and variance of the discrete wavelet transform, the variance of Mel-frequency cepstral coefficients, the root mean square of a lowpass signal, and the difference of the maximum and minimum zero-crossings. We then employ fuzzy C-means clustering to select a viable set of features that enables better classification accuracy. Three different classification frameworks have been studied: Multi-Layer Perceptron (MLP) neural networks, radial basis function (RBF) neural networks, and the Hidden Markov Model (HMM); results for each framework are reported and compared. Our extensive experimentation has identified a subset of features that contributes most to accurate classification, and has shown that MLP networks are the most suitable classification framework for the problem at hand.
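The zero-crossing feature named above — the difference between the maximum and minimum per-frame zero-crossing counts — is easy to state concretely. A rough sketch, with illustrative framing parameters rather than the paper's settings:

```python
import numpy as np

def zcr_range(signal, frame_len=256, hop=128):
    """Difference of the maximum and minimum per-frame zero-crossing counts."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    counts = []
    for f in frames:
        s = np.signbit(f)
        # A zero crossing is a sign change between adjacent samples.
        counts.append(int(np.count_nonzero(s[1:] != s[:-1])))
    return max(counts) - min(counts)
```

A steady tone has a nearly constant per-frame count, so its range is small, while a signal alternating between tonal and noisy stretches — typical of speech — has a much larger range, which is what makes this feature discriminative for music versus speech.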

5.
Because it is a non-invasive, easy-to-apply and reliable technique, transcranial Doppler (TCD) study of the adult intracerebral circulation has grown enormously in the last 10 years. In this study, a biomedical system has been implemented to classify the TCD signals recorded from the temporal region of the brain of 82 patients and of 24 healthy people. The diseases investigated were cerebral aneurysm, brain haemorrhage, cerebral oedema and brain tumour. The system is composed essentially of feature extraction and classification stages. In the feature extraction stage, linear predictive coding analysis and cepstral analysis were applied to extract the cepstral and delta-cepstral coefficients at the frame level as feature vectors. In the classification stage, methods based on the discrete hidden Markov model (DHMM) were used. To avoid losing information through vector quantization and to increase classification performance, a similarity measure based on a fuzzy approach was applied in implementing the DHMM. The proposed fuzzy DHMM (FDHMM) was compared with methods such as the DHMM, artificial neural networks (ANN) and neuro-fuzzy approaches, and obtained better classification performance than these methods.

6.
Feature extraction is an important component of pattern classification and speech recognition. Extracted features should discriminate classes from each other while being robust to environmental conditions such as noise. For this purpose, several feature transformations have been proposed, which can be divided into two main categories: data-dependent and classifier-dependent transformations. The drawback of data-dependent transformations is that their optimization criteria differ from the measure of classification error, which can potentially degrade the classifier's performance. In this paper, we propose a framework to optimize data-dependent feature transformations such as PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis) and HLDA (Heteroscedastic LDA) using minimum classification error (MCE) as the main objective. The classifier itself is based on the Hidden Markov Model (HMM). In our proposed HMM minimum classification error technique, the transformation matrices are modified to minimize the classification error for the mapped features, and the dimension of the feature vector is not changed. To evaluate the proposed methods, we conducted several experiments on the TIMIT phone recognition and Aurora2 isolated word recognition tasks. The experimental results show that the proposed methods improve the performance of the PCA, LDA and HLDA transformations for mapping Mel-frequency cepstral coefficients (MFCC).

7.
The main objective of this paper is to develop a speaker identification system. Speaker identification is a technology that allows a computer to automatically identify the person who is speaking, based on information received from the speech signal. One of the most difficult problems in speaker recognition is dealing with noise: the performance of speaker recognition using a close speaking microphone (CSM) degrades in background noise. To overcome this problem, a throat microphone (TM), which has a transducer held at the throat and therefore yields a clean signal unaffected by background noise, is used. Acoustic features, namely linear prediction coefficients, linear prediction cepstral coefficients, Mel frequency cepstral coefficients and relative spectral transform-perceptual linear prediction, are extracted. These features are classified using RBFNN and AANN and their performance is analyzed. A new method is proposed for identifying speakers in clean and noisy environments using the CSM and TM in combination. The identification performance of the combined system is higher than that of either individual system, owing to the complementary nature of the CSM and TM.

8.
Feature analysis for automatic speech/music classification
This paper comprehensively analyses the features that distinguish speech from music, including perceptual features such as pitch, brightness and harmonicity together with MFCC (Mel-Frequency Cepstral Coefficients). A left-right DHMM (Discrete Hidden Markov Model) classifier with a maximum-likelihood decision rule is proposed for classifying speech, music and mixtures of the two, and the classification performance of the above feature set with this classifier is examined. Experimental results show that the proposed audio features are effective and well chosen, and that the classification performance is good.

9.
Acoustic analysis is a noninvasive technique based on digital processing of the speech signal. Techniques based on acoustic analysis are an effective tool to support screening for vocal and voice diseases, especially their early detection and diagnosis. Modern lifestyles have increased the risk of pathological voice problems. This work focuses on a robust, rapid and accurate system for automatic detection of normal and pathological speech, and also for detecting the type of pathology. The system employs non-invasive, inexpensive and fully automated measures of vocal tract characteristics and excitation information. Mel-frequency cepstral coefficients and linear prediction cepstral coefficients are used as acoustic features. The system uses Gaussian mixture model and hidden Markov model classifiers. Cerebral palsy, dysarthria, hearing impairments, laryngectomy, mental retardation, left-side paralysis, quadriparesis, stammering, stroke, and tumour in the vocal tract are the types of pathology considered in our experiments. The experimental results show that, for classifying normal versus pathological voice, the hidden Markov model with Mel-frequency cepstral coefficients plus delta and acceleration coefficients achieves 94.44% accuracy. Likewise, for identifying the type of pathology, the Gaussian mixture model with Mel-frequency cepstral coefficients plus delta and acceleration coefficients achieves 95.74% accuracy.

10.
Content-based audio signal classification into broad categories such as speech, music, or speech with noise is the first step before any further processing such as speech recognition, content-based indexing, or surveillance. In this paper, we propose an efficient content-based approach to classifying audio signals into broad genres using the fuzzy c-means (FCM) algorithm. We analyze different characteristic features of audio signals in the time, frequency, and coefficient domains and select the optimal feature vector by applying a novel analytical scoring method to each feature. We utilize an FCM-based classification scheme and apply it to the extracted, normalized optimal feature vector to achieve an efficient classification result. Experimental results demonstrate that the proposed approach outperforms existing state-of-the-art audio classification systems by more than 11% in classification performance.
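The FCM step at the core of this approach can be sketched with Bezdek's standard alternating update of memberships and centres. This is a generic implementation rather than the paper's tuned pipeline; the fuzzifier m = 2 is the conventional default:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: alternate centre and membership updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)          # random fuzzy memberships
    for _ in range(iters):
        w = u ** m
        # Centres are membership-weighted means of the data.
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        # Memberships are inversely proportional to d^(2/(m-1)).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```

A hard label can then be read off as the argmax membership per sample; on well-separated feature clusters this recovers the underlying grouping.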

11.
Airborne imaging spectroscopy data (AISA Eagle and HyMap) were applied to classify the sediments of a sandy beach into seven sand-type classes. On the AISA-Eagle data, several classification strategies were tried and compared with each other. The best classification results were obtained by applying a linear discriminant classifier (LDC) in combination with feature selection based on sequential floating forward search (SFFS). The statistical LDC was used in a multiple binary approach. In the first step, the original bands were used in the classification, but transforming the bands to wavelet coefficients enhanced the accuracy obtained. The combination of LDC with SFFS resulted in an overall accuracy of 82% (using three wavelet coefficients). Replacing the LDC with the non-statistical SAM algorithm reduced the overall accuracy to 74% (using all bands or wavelet coefficients). When applying the LDC, the optimal number of bands/wavelet coefficients was determined: using more than two bands or three wavelet coefficients did not yield higher classification accuracy. Finally, the HyMap data, featuring 126 bands in the VNIR-SWIR range, were used to demonstrate that the VNIR range outperforms the SWIR range for this application.

12.
In this paper, a curve fitting space (CFS) is presented to map non-linearly separable data to linearly separable data. A linear or quadratic transformation can map data into a new space where classification is easier, if the transformation is properly chosen. The CFS can be of high or low dimensionality, but its dimensionality is generally low, equal to the number of classes. The CFS method is based on fitting a hyperplane or curve to the training data of each class, or enclosing them in a hypersurface; the fitted hyperplanes, curves, or enclosing hypersurfaces become the axes of the new space. In the new space, a linear multi-class support vector machine classifier is applied to classify the training data.

13.
Content-based music genre classification is a key component of next-generation multimedia search agents. This paper introduces an audio classification technique based on audio content analysis. Artificial neural networks (ANNs), specifically multi-layer perceptrons (MLPs), are implemented to perform the classification task. Windowed audio files of finite length are analyzed to generate multiple feature sets, which are used as input vectors to a parallel neural architecture that performs the classification. This paper examines a combination of linear predictive coding (LPC), mel frequency cepstrum coefficients (MFCCs), Haar wavelet, Daubechies wavelet and Symlet coefficients as feature sets for the proposed audio classifier. In parallel with the MLP, a Gaussian radial basis function (GRBF) based ANN is also implemented and analyzed. The obtained prediction accuracy of 87.3% in determining the audio genre demonstrates the efficiency of the proposed architecture. The ANN prediction values are processed by a rule-based inference engine (IE) that presents the final decision.
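The core MLP classification step — feature vectors in, genre label out — can be sketched with an off-the-shelf multi-layer perceptron. The feature vectors here are synthetic stand-ins for the LPC/MFCC/wavelet features described above, and the single small network is a simplification of the paper's parallel architecture and inference engine:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Synthetic 20-dimensional per-window feature vectors for two genres
# (in the real system these would be extracted from audio).
X = np.vstack([rng.normal(0.0, 1.0, (150, 20)),
               rng.normal(3.0, 1.0, (150, 20))])
y = np.array([0] * 150 + [1] * 150)

# One hidden layer; the paper's system runs several such networks in
# parallel, one per feature set, before a rule-based inference engine.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X, y)
```

On these easily separable synthetic features the training accuracy is essentially perfect; real genre features overlap far more, which is why the paper combines several feature sets.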

14.
A blind audio watermarking algorithm based on chaotic spread spectrum and energy-coefficient difference
孔华锋, 《计算机工程》 (Computer Engineering), 2010, 36(5): 131-133
This paper proposes a blind digital audio watermarking algorithm in the wavelet domain based on chaotic spread spectrum and the energy difference of coefficients. A chaotic spread-spectrum sequence is used to encrypt the original watermark, a wavelet transform decomposes the original audio signal, and a psychoacoustic model is introduced to select an appropriate threshold. Embedding, extraction and blind detection of the watermark are realised according to the relation between this threshold and the energy difference between the high-frequency and low-frequency coefficient components. During extraction, a linear-scaling recovery method is applied to remove the effect of linear stretching along the time axis. Experimental results show that the algorithm is robust against a variety of audio-file operations and attacks.

15.
Over the past decade, frog biodiversity has declined rapidly due to many problems, including habitat loss and degradation, introduced invasive species, and environmental pollution. Frogs are an important part of the global ecosystem, and it is increasingly necessary to monitor frog biodiversity. One way to do so is to record audio of frog calls, and various methods have been developed to classify these calls. However, to the best of our knowledge, no paper has yet reviewed and summarized the methods developed so far. This survey gives a quantitative and detailed analysis of frog call classification. Specifically, a frog call classification system consists of signal pre-processing, feature extraction, and classification. Signal pre-processing is made up of signal processing, noise reduction, and syllable segmentation. Following signal pre-processing, the next step is feature extraction, which is the most crucial step for improving classification performance. Features used for frog call classification fall into four types: (1) time domain and frequency domain features (treated as one type because they are often combined to achieve higher classification accuracy), (2) time-frequency features, (3) cepstral features, and (4) other features. For the classification step, the different classifiers and evaluation criteria used for frog call classification are investigated. In conclusion, we discuss future work in frog call classification.
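The syllable-segmentation step in the pre-processing stage is commonly done by thresholding short-time energy. A minimal sketch of that idea — the frame sizes and relative threshold are illustrative, not values from any surveyed paper:

```python
import numpy as np

def segment_syllables(signal, frame_len=256, hop=128, rel_thresh=0.1):
    """Energy-threshold syllable segmentation.

    Returns (start_frame, end_frame) pairs for runs of frames whose
    short-time energy exceeds rel_thresh times the maximum frame energy.
    """
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    energy = np.array([float(f @ f) for f in frames])
    active = energy > rel_thresh * energy.max()
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                      # syllable onset
        elif not a and start is not None:
            segments.append((start, i))    # syllable offset
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments
```

Applied to a recording with two call bursts separated by silence, this yields two segments; real recordings need the noise-reduction step first, since background noise raises the energy floor.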

16.
To improve the efficiency of identifying target material from the acoustic signals reflected by targets of a high-pressure water jet, four targets commonly encountered in landmine detection (landmines, stones, bricks and wood blocks) are identified using different feature extraction methods. Building on the principles of Mel-frequency cepstral coefficients and wavelet packet transform cepstral coefficients, and taking into account the characteristics of the reflected acoustic signals, a feature extraction method fusing Mel-frequency cepstral and wavelet packet transform cepstral features is proposed: the wavelet packet transform divides the original reflected signal into several sub-bands, one of which is selected as the dividing point between the low- and high-frequency parts; Mel-frequency cepstral coefficients are extracted as features from the low-frequency part and wavelet packet transform cepstral coefficients from the high-frequency part, and the two feature sets are linearly concatenated into a new feature vector for identifying the target material. A least-squares support vector machine is used to build the multi-class model, and the recognition rates of the single-feature and fused-feature extraction methods are verified. The experimental results show that, at the best division point between low and high frequencies, the fused-feature method achieves an average recognition rate of 82.8125%, which is 10.3125 and 7.8125 percentage points higher, respectively, than using Mel-frequency cepstral coefficients or wavelet packet transform cepstral coefficients alone as the feature vector.

17.
Many data mining applications involve building a model for predictive classification, whose goal is to classify data instances into classes or categories of the same type. The use of variables not related to the classes can reduce the accuracy and reliability of a classification or prediction model, and superfluous variables can also increase the cost of building a model, particularly on large datasets. The feature selection and hyper-parameter optimization problem can be solved either by an exhaustive search over all parameter values or by an optimization procedure that explores only a finite subset of the possible values. The objective of this research is to optimize the hyper-parameters and the feature subset simultaneously, without degrading the generalization performance of the induction algorithm. We present a global optimization approach based on the Cross-Entropy Method to solve this kind of problem.
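The Cross-Entropy Method named above iterates two steps: sample candidate solutions from a parametrized distribution, then refit the distribution to the elite (best-scoring) samples. A toy sketch for continuous minimization — applied here to a test function rather than to the paper's hyper-parameter/feature-subset search, with illustrative population and elite sizes:

```python
import numpy as np

def cross_entropy_minimize(f, dim, iters=50, pop=100, elite=10, seed=0):
    """Cross-Entropy Method with a diagonal Gaussian sampling distribution."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.full(dim, 5.0)
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([f(s) for s in samples])
        # Refit the Gaussian to the elite (lowest-cost) samples.
        best = samples[np.argsort(scores)[:elite]]
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-6
    return mu
```

On a convex test function such as f(x) = ||x − 3||², the sampling distribution contracts onto the optimum; for the mixed discrete/continuous search in the paper, the Gaussian would be replaced by, e.g., Bernoulli parameters for feature-inclusion bits.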

18.
Joint scene classification and segmentation based on hidden Markov model
Scene classification and segmentation are fundamental steps for efficiently accessing, retrieving and browsing large amounts of video data. We have developed a scene classification scheme using a Hidden Markov Model (HMM)-based classifier. By exploiting the temporal behaviors of different scene classes, the HMM classifier can effectively classify presegmented clips into one of the predefined scene classes. In this paper, we describe three approaches for joint classification and segmentation based on HMMs, which search for the most likely class transition path using dynamic programming. All these approaches use audio and visual information simultaneously. The first two approaches search for the optimal scene class transition based on likelihood values computed for short video segments belonging to a particular class, but with different search constraints. The third approach searches for the optimal path in a super HMM formed by concatenating the HMMs for the different scene classes.
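The dynamic-programming path search described here can be illustrated with a simplified Viterbi-style recursion over per-segment class log-likelihoods. The uniform switching penalty below is an illustrative stand-in for the HMM transition log-probabilities, not the paper's actual model:

```python
import numpy as np

def best_class_path(loglik, switch_penalty=2.0):
    """Most likely scene-class sequence for T segments and C classes.

    loglik[t, c] is the log-likelihood of segment t under class c;
    changing class between adjacent segments costs switch_penalty.
    """
    T, C = loglik.shape
    score = loglik[0].copy()
    back = np.zeros((T, C), dtype=int)
    for t in range(1, T):
        # trans[i, j]: best score ending in class i, then moving to j.
        trans = score[:, None] - switch_penalty * (1.0 - np.eye(C))
        back[t] = trans.argmax(axis=0)
        score = trans.max(axis=0) + loglik[t]
    # Backtrack from the best final class.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

Because switching is penalized, a single noisy segment in the middle of a long run is smoothed over rather than producing a spurious scene boundary, which is exactly the benefit of joint classification and segmentation over per-segment decisions.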

19.
We are interested in recovering aspects of the vocal tract's geometry and dynamics from speech, a problem referred to as speech inversion. Traditional audio-only speech inversion techniques are inherently ill-posed, since the same speech acoustics can be produced by multiple articulatory configurations. To alleviate the ill-posedness of the audio-only inversion process, we propose an inversion scheme that also exploits visual information from the speaker's face. The complex audiovisual-to-articulatory mapping is approximated by an adaptive piecewise linear model. Model switching is governed by a Markovian discrete process which captures articulatory dynamic information. Each constituent linear mapping is effectively estimated via canonical correlation analysis. In the described multimodal context, we investigate alternative fusion schemes which allow interaction between the audio and visual modalities at various synchronization levels. For facial analysis, we employ active appearance models (AAMs) and demonstrate fully automatic face tracking and visual feature extraction. Using the AAM features in conjunction with audio features such as Mel frequency cepstral coefficients (MFCCs) or line spectral frequencies (LSFs) leads to effective estimation of the trajectories followed by certain points of interest in the speech production system. We report experiments on the QSMT and MOCHA databases, which contain audio, video, and electromagnetic articulography data recorded in parallel. The results show that exploiting both audio and visual modalities in a multistream hidden Markov model based scheme clearly improves performance relative to either audio-only or visual-only estimation.

20.
Audio pattern classification is a particular statistical classification task and includes, for example, speaker recognition, language recognition, emotion recognition, speech recognition and, recently, video genre classification. The features used in all these tasks are generally based on a short-term cepstral representation. The cepstral vectors contain both useful information and nuisance variability, which are difficult to separate in this domain. Recently, in the context of GMM-based recognizers, a novel approach using the Factor Analysis (FA) paradigm has been proposed for decomposing the target model into a useful information component and a session variability component. This approach is called Joint Factor Analysis (JFA), since it jointly models the nuisance variability and the useful information using the FA statistical method. The JFA approach has even been combined with Support Vector Machines, known for their discriminative power. In this article, we successfully apply this paradigm to three automatic audio processing applications: speaker verification, language recognition and video genre classification. This is done by applying the same process and using the same free software toolkit. We show that this approach allows a relative error reduction of over 50% in all the aforementioned audio processing tasks.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)    京ICP备09084417号-23

京公网安备 11010802026262号