Similar Documents
20 similar documents found
1.
To better model the correlations between auditory and visual emotional information, a triple-stream hybrid dynamic Bayesian network emotion recognition model (T_AsyDBN) is proposed. MFCC features and local prosodic features based on fundamental frequency and short-time energy serve as the auditory input streams, which are synchronized at the state level. Facial geometric features and facial animation parameter features serve as the visual input stream, which is asynchronous with the auditory streams at the state level. Experimental results show that the model outperforms the audio-visual two-stream DBN model with state-asynchrony constraints, raising the average recognition rate over six emotions from 52.14% to 63.71%.
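For a concrete starting point, the sketch below extracts the kind of audio-stream features the abstract names (MFCC plus pitch- and short-time-energy prosody), assuming the librosa library; the frame sizes, pitch range and file path are illustrative choices, not the paper's settings.

```python
# Sketch of MFCC + prosodic (pitch, short-time energy) audio-stream features.
# Assumes librosa; frame parameters are illustrative.
import numpy as np
import librosa

def audio_stream_features(wav_path, sr=16000, frame_len=400, hop=160):
    y, sr = librosa.load(wav_path, sr=sr)
    # 13 MFCCs per frame
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=frame_len, hop_length=hop)
    # Short-time energy (RMS) per frame
    energy = librosa.feature.rms(y=y, frame_length=frame_len, hop_length=hop)
    # Fundamental frequency contour via the YIN estimator (larger analysis frame)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr,
                     frame_length=1024, hop_length=hop)
    # Stack into one observation matrix: frames x (13 + 1 + 1)
    n = min(mfcc.shape[1], energy.shape[1], len(f0))
    return np.vstack([mfcc[:, :n], energy[:, :n], f0[None, :n]]).T
```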

2.
Emotion tagging is an important area of affective computing, and several studies on emotion tagging of audio, images, and multimedia content have been published. To analyze the importance of the audio signal in an EEG-based, brain-encoded multimedia emotion tagging framework, the public affective computing database DEAP was used as the benchmark. From the multimedia stimuli in DEAP, audio features and three categories of video features were extracted. The tagging task was first performed within this framework using video features alone, and then repeated using audio and video features jointly. Experimental results show that, compared with using video features only, the joint audio-visual features improve tagging accuracy, with no performance loss caused by the increased feature dimensionality.

3.
In recent years, emotion recognition from multimodal data has become an important research direction in natural human-computer interaction and artificial intelligence. Work that exploits the visual modality usually focuses on facial features and rarely considers body-movement features or multimodal features that incorporate them. Although movement and emotion are closely linked, extracting effective movement information from the visual modality for emotion recognition is difficult. Taking the relationship between movement and emotion as the starting point, this work introduces visual body-movement data into the classic MELD multimodal emotion recognition dataset, extracts body-movement features with an ST-GCN network, and uses these features for unimodal emotion recognition with an LSTM network. Adding the body-movement features to the existing MELD text and audio features improves the accuracy of the LSTM-based multimodal fusion model, and combining text and body-movement features also improves the text-only accuracy of a contextual memory model. The experiments show that although body-movement features alone cannot match traditional text and audio features for unimodal recognition, they play an important role in multimodal emotion recognition. The unimodal and multimodal experiments confirm that human movement carries emotional information and that body-movement features hold significant potential for multimodal emotion recognition.
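As a rough illustration of the unimodal stage described above, here is a minimal PyTorch sketch of an LSTM classifier over a sequence of per-frame body-movement feature vectors; the feature dimension, sequence length and the seven MELD emotion labels are assumptions for illustration, and the ST-GCN feature extractor itself is not reproduced.

```python
# Minimal LSTM classifier over per-frame body-movement feature vectors
# (e.g. ST-GCN embeddings); dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ActionLSTM(nn.Module):
    def __init__(self, feat_dim=256, hidden=128, n_classes=7):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, feat_dim)
        _, (h, _) = self.lstm(x)     # h: (1, batch, hidden)
        return self.fc(h[-1])        # logits: (batch, n_classes)

if __name__ == "__main__":
    model = ActionLSTM()
    dummy = torch.randn(4, 50, 256)  # 4 clips, 50 frames, 256-dim features
    print(model(dummy).shape)        # torch.Size([4, 7])
```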

4.
The speech signal carries not only linguistic information but also paralinguistic information such as emotion. Modern automatic speech recognition systems achieve high performance on neutral-style speech, but they cannot maintain their high recognition rate on spontaneous speech, so emotion recognition is an important step toward emotional speech recognition. The accuracy of an emotion recognition system depends on several factors, such as the type and number of emotional states, the selected features, and the type of classifier. In this paper, a modular neural-support vector machine (SVM) classifier is proposed, and its performance in emotion recognition is compared to Gaussian mixture model, multi-layer perceptron neural network, and C5.0-based classifiers. The most efficient features are also selected using the analysis of variations method. The proposed modular scheme is derived from a comparative study of different features and the characteristics of each individual emotional state, with the aim of improving recognition performance. Empirical results show that even after discarding 22% of the features, the average emotion recognition accuracy improves by 2.2%. The proposed modular neural-SVM classifier also improves recognition accuracy by at least 8% compared to the simulated monolithic classifiers.
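The feature-selection-plus-SVM idea can be sketched with scikit-learn as below; the ANOVA F-test (SelectKBest with f_classif) stands in for the paper's analysis-of-variations step, and the synthetic data and the 22% discard ratio are purely illustrative.

```python
# Variance-analysis feature selection followed by an SVM (scikit-learn sketch).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))           # 300 utterances, 50 acoustic features
y = rng.integers(0, 6, size=300)         # 6 hypothetical emotion labels

k = int(round(X.shape[1] * 0.78))        # keep ~78%, i.e. discard ~22%
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=k),
                    SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X, y)
print(clf.score(X, y))
```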

5.
Recognizing Human Emotional State From Audiovisual Signals
Machine recognition of human emotional state is an important component of efficient human-computer interaction. The majority of existing works address this problem using audio signals alone or visual information only. In this paper, we explore a systematic approach to recognizing human emotional state from audiovisual signals. The audio characteristics of emotional speech are represented by prosodic, Mel-frequency cepstral coefficient (MFCC), and formant frequency features. A face detection scheme based on the HSV color model is used to detect the face against the background, and the visual information is represented by Gabor wavelet features. We perform feature selection using a stepwise method based on the Mahalanobis distance. The selected audiovisual features are used to classify the data into their corresponding emotions. Based on a comparative study of different classification algorithms and the specific characteristics of individual emotions, a novel multiclassifier scheme is proposed to boost recognition performance. The feasibility of the proposed system is tested on a database that incorporates human subjects from different languages and cultural backgrounds. Experimental results demonstrate the effectiveness of the proposed system; the multiclassifier scheme achieves the best overall recognition rate of 82.14%.
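The Mahalanobis-distance criterion behind the stepwise selection can be illustrated as follows; the code scores how far apart two emotion classes are under a pooled covariance, on synthetic placeholder features rather than the paper's audiovisual data.

```python
# Mahalanobis distance between class means, the kind of criterion a stepwise
# feature-selection loop can maximize. Data are synthetic placeholders.
import numpy as np
from scipy.spatial.distance import mahalanobis

def class_separation(Xa, Xb):
    """Mahalanobis distance between the mean vectors of two classes."""
    pooled_cov = np.cov(np.vstack([Xa, Xb]).T)
    VI = np.linalg.pinv(pooled_cov)              # inverse covariance
    return mahalanobis(Xa.mean(axis=0), Xb.mean(axis=0), VI)

rng = np.random.default_rng(1)
happy = rng.normal(0.0, 1.0, size=(100, 12))     # 12-dim audiovisual features
angry = rng.normal(0.8, 1.0, size=(100, 12))
print(class_separation(happy, angry))
```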

6.
The recognition of emotion in human speech has gained increasing attention in recent years due to the wide variety of applications that benefit from such technology. Detecting emotion from speech can be viewed as a classification task: assigning an emotion category, e.g. happiness or anger, from a fixed set to a speech utterance. In this paper, we tackle two emotions, happiness and anger. The parameters extracted from the speech signal depend on the speaker, the spoken word, and the emotion; to isolate the emotion, we keep the spoken utterance and the speaker constant and vary only the emotion. Different features are extracted to identify the parameters responsible for emotion, and the wavelet packet transform (WPT) is found to be emotion specific. We perform experiments using three methods: the first uses the WPT and compares the number of coefficients greater than a threshold in different bands, the second compares the energy ratios of different bands computed from the WPT, and the third is a conventional method using MFCC. The results obtained using the WPT for the angry, happy and neutral modes are 85%, 65% and 80% respectively, compared with 75%, 45% and 60% using MFCC. Based on the WPT features, a model is proposed for emotion conversion, namely neutral to angry and neutral to happy.
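A minimal sketch of the band energy-ratio features from a wavelet packet transform (the second method above), assuming PyWavelets; the wavelet family, decomposition depth and the random test signal are illustrative choices.

```python
# Band energy ratios from a wavelet packet transform (PyWavelets sketch).
import numpy as np
import pywt

def wpt_energy_ratios(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    bands = [node.data for node in wp.get_level(level, order="freq")]
    energies = np.array([np.sum(b ** 2) for b in bands])
    return energies / energies.sum()     # one ratio per sub-band

x = np.random.randn(16000)               # 1 s of audio at 16 kHz (placeholder)
print(wpt_energy_ratios(x))              # 2**3 = 8 band energy ratios
```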

7.
Hand gesture recognition provides an alternative to many devices for human-computer interaction. In this work, we have developed a classifier-fusion-based dynamic free-air hand gesture recognition system to identify isolated gestures. Different users gesticulate at different speeds for the same gesture, so when comparing different samples of the same gesture, variations due to gesturing speed should not contribute to the dissimilarity score. We therefore introduce a two-level speed normalization procedure using DTW and Euclidean-distance-based techniques. Three features, 'orientation between consecutive points', 'speed' and 'orientation between the first and every trajectory point', were used for the speed normalization. In the feature extraction stage, 44 features were selected from the existing literature. Using the full feature set could lead to overfitting and information redundancy and may increase computational complexity due to the higher dimension, so we select an optimal set of features using analysis of variance and incremental feature selection techniques. The performance of the system was evaluated with this optimal feature set for individual classifiers such as ANN, SVM, k-NN and Naïve Bayes, and the decisions of the individual classifiers were then combined using a classifier fusion model. Based on the experimental results, classifier fusion provides satisfactory results compared to the individual classifiers: an accuracy of 94.78% was achieved with the classifier fusion technique, compared to baseline CRF (85.07%) and HCRF (89.91%) models.
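The DTW step of the speed normalization can be sketched as a plain dynamic program; the implementation below is a generic O(NM) DTW over per-point feature vectors with a Euclidean local cost, not the paper's exact two-level procedure.

```python
# Plain dynamic-time-warping distance between two gesture trajectories.
import numpy as np

def dtw_distance(a, b):
    """a: (N, d) trajectory features, b: (M, d) trajectory features."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # Euclidean local cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

slow = np.cumsum(np.random.randn(60, 2), axis=0)   # same gesture, more points
fast = slow[::2]                                    # gestured twice as fast
print(dtw_distance(slow, fast))
```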

8.
Multi-modal emotion recognition lacks an explicit mapping between emotion states and audio and image features, so extracting effective emotion information from audio/visual data remains a challenging issue. In addition, noise and data redundancy are not modeled well, so emotion recognition models often suffer from low efficiency. Deep neural networks (DNN) perform excellently at feature extraction and highly non-linear feature fusion, and cross-modal noise modeling has great potential for addressing data pollution and data redundancy. Inspired by this, our paper proposes a deep weighted fusion method for audio-visual emotion recognition. First, we conduct cross-modal noise modeling on the audio and video data, which eliminates most of the data pollution in the audio channel and the data redundancy in the visual channel: the noise modeling is implemented by voice activity detection (VAD), and the redundancy in the visual data is removed by aligning the speech regions in the audio and visual data. Then, we extract audio emotion features and visual expression features via two feature extractors. The audio feature extractor, audio-net, is a 2D CNN that accepts image-like Mel-spectrograms as input; the facial expression feature extractor, visual-net, is a 3D CNN that is fed facial expression image sequences. To train the two convolutional neural networks efficiently on a small dataset, we adopt a transfer learning strategy. Next, we employ a deep belief network (DBN) for highly non-linear fusion of the multi-modal emotion features, training the feature extractors and the fusion network synchronously. Finally, emotion classification is performed by a support vector machine on the output of the fusion network. With cross-modal feature fusion, denoising and redundancy removal taken into account, our fusion method shows excellent performance on the selected dataset.
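A hedged sketch of the audio-side preprocessing described above: trim non-speech regions with a simple energy-based stand-in for VAD, then convert the remaining speech to a log-Mel spectrogram image for a 2D CNN. It assumes librosa, and the threshold and mel resolution are illustrative.

```python
# Energy-based trimming (a simple stand-in for VAD) + log-Mel spectrogram
# as the image-like input for a 2D CNN. Assumes librosa.
import numpy as np
import librosa

def speech_logmel(wav_path, sr=16000, n_mels=64, top_db=30):
    y, sr = librosa.load(wav_path, sr=sr)
    # Keep only intervals whose energy is within top_db of the maximum
    intervals = librosa.effects.split(y, top_db=top_db)
    speech = np.concatenate([y[s:e] for s, e in intervals])
    mel = librosa.feature.melspectrogram(y=speech, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)     # image-like 2D array
```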

9.
To date, most human emotion recognition systems are intended to sense emotions and their dominance individually. This paper discusses a fuzzy model for multilevel affective computing based on the dominance dimensional model of emotions; the model can detect other possible emotions simultaneously at the time of recognition. One hundred and thirty volunteers of various races, cultural backgrounds and geographical locations were selected to record their emotional states. Twenty-seven different emotions, each rated for strength on a 5-point scale, were questioned through a survey. The recorded emotions were analyzed together with the other possible emotions and their levels of dominance to build the fuzzy model, which was then integrated into a fuzzy emotion recognition system using three input devices: mouse, keyboard and touch screen display. A support vector machine classifier detected the other possible emotions of the users along with the directly sensed emotion. The binary (non-fuzzy) system sensed emotions with an accuracy of 93%, but it could only sense a limited set of emotions. By integrating this model, the system was able to detect more possible emotions at a time with a slightly lower recognition accuracy of 86%, and the false positive rate of the model for four emotions was measured at 16.7%. The resulting accuracy and false positive rate are among the top three most accurate human emotion recognition (affective computing) systems.

10.
This paper addresses a model-based audio content analysis for classification of speech-music mixed audio signals into speech and music. A set of new features is presented and evaluated based on sinusoidal modeling of audio signals. The new feature set, including variance of the birth frequencies and duration of the longest frequency track in sinusoidal model, as a measure of the harmony and signal continuity, is introduced and discussed in detail. These features are used and compared to typical features as inputs to an audio classifier. Performance of these sinusoidal model features is evaluated through classification of audio into speech and music using both the GMM (Gaussian Mixture Model) and the SVM (Support Vector Machine) classifiers. Experimental results show that the proposed features are quite successful in speech/music discrimination. By using only a set of two sinusoidal model features, extracted from 1-s segments of the signal, we achieved 96.84% accuracy in the audio classification. Experimental comparisons also confirm superiority of the sinusoidal model features to the popular time domain and frequency domain features in audio classification.
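The classification stage can be sketched with scikit-learn as one Gaussian mixture model per class scored by log-likelihood; the two-dimensional synthetic features below are placeholders for the paper's sinusoidal-model features.

```python
# One GMM per class (speech, music), decided by log-likelihood comparison.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
speech_feats = rng.normal(0.0, 1.0, size=(500, 2))   # 2 features per 1-s segment
music_feats = rng.normal(2.0, 1.0, size=(500, 2))

gmm_speech = GaussianMixture(n_components=4, random_state=0).fit(speech_feats)
gmm_music = GaussianMixture(n_components=4, random_state=0).fit(music_feats)

def classify(segment_feats):
    s = segment_feats.reshape(1, -1)
    return "speech" if gmm_speech.score(s) > gmm_music.score(s) else "music"

print(classify(np.array([0.1, -0.2])))
```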

11.
To address the poor adaptability of traditional two-class audio steganalysis to unknown steganographic methods, an audio steganalysis method based on fuzzy C-means (FCM) clustering and one-class support vector machines (OC-SVM) is proposed. During training, features are first extracted from the training audio, including statistical features of the short-time Fourier transform (STFT) spectrum and features based on audio quality measures; the extracted features are then grouped into C clusters by FCM clustering and finally fed to multiple hypersphere OC-SVM classifiers for training. During detection, features are extracted from the test audio, which is then judged against the boundaries of the multiple hypersphere OC-SVM classifiers. Experimental results show that the method detects several typical audio steganographic methods fairly reliably, achieving an overall detection rate of 85.1% on the test audio under full-capacity embedding and improving detection accuracy by at least 2% over K-means clustering. The method is more general than two-class steganalysis and better suited to detecting stego audio when the steganographic method is not known in advance.
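The "cluster the cover features, then train one hypersphere-style one-class SVM per cluster" idea can be sketched as below. scikit-learn has no fuzzy C-means, so KMeans stands in for the clustering step purely for illustration (the paper itself uses FCM), and the feature matrix is a random placeholder.

```python
# Per-cluster one-class SVMs over cover-audio features (illustrative sketch;
# KMeans is a stand-in for FCM, which scikit-learn does not provide).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

def train_per_cluster_ocsvm(cover_features, n_clusters=4, nu=0.1):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(cover_features)
    models = [OneClassSVM(kernel="rbf", nu=nu, gamma="scale")
              .fit(cover_features[labels == c]) for c in range(n_clusters)]
    return km, models

def is_cover(x, models):
    """Flag a test sample as cover if any cluster boundary accepts it."""
    return any(m.predict(x.reshape(1, -1))[0] == 1 for m in models)

X_cover = np.random.randn(400, 30)      # placeholder cover-audio features
km, models = train_per_cluster_ocsvm(X_cover)
print(is_cover(np.random.randn(30), models))
```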

12.
13.
We present a novel, real-time, markerless vision-based tracking system, employing a rigid orthogonal configuration of two pairs of opposing cameras. Our system uses optical flow over sparse features to overcome the limitation of vision-based systems that require markers or a pre-loaded model of the physical environment. We show how opposing cameras enable cancellation of common components of optical flow leading to an efficient tracking algorithm that captures five degrees of freedom including direction of translation and angular velocity. Experiments comparing our device with an electromagnetic tracker show that its average tracking accuracy is 80 % over 185 frames, and it is able to track large range motions even in outdoor settings. We also present how our tracking system can be used for gesture recognition by combining it with a simple linear classifier over a set of 15 gestures. Experimental results show that we are able to achieve 86.7 % gesture recognition accuracy.

14.
In recent years, emotion recognition research has moved beyond facial expressions and speech, and recognition based on physiological signals such as EEG has become increasingly popular. However, incomplete feature extraction and ill-suited classification models often lead to poor recognition performance. This paper therefore proposes a hybrid model (DE-CNN-GRU) that combines differential entropy (DE), a convolutional neural network (CNN), and a gated recurrent unit (GRU) for EEG-based emotion recognition. The preprocessed EEG signals are divided into five frequency bands, whose DE features are extracted as preliminary features, fed into the CNN-GRU model for deep feature extraction, and classified with Softmax. On the SEED dataset, the hybrid model achieves an average accuracy 5.57% and 13.82% higher than using CNN or GRU alone, respectively.
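The band-wise differential entropy features can be sketched directly: band-pass each EEG channel into the five classical bands and take DE = 0.5 ln(2πe·σ²), the closed form for a Gaussian signal. The band edges, filter order and sampling rate below are common conventions, not necessarily the paper's exact settings.

```python
# Band-wise differential entropy (DE) features from multi-channel EEG.
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def de_features(eeg, fs=200):
    """eeg: (channels, samples) -> DE feature matrix (channels, 5 bands)."""
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        band = filtfilt(b, a, eeg, axis=-1)
        var = np.var(band, axis=-1)
        feats.append(0.5 * np.log(2 * np.pi * np.e * var))   # Gaussian DE
    return np.stack(feats, axis=-1)

print(de_features(np.random.randn(62, 400)).shape)   # (62, 5)
```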

15.
Automatic audio content recognition has attracted increasing attention for developing multimedia systems, where the most popular approaches combine frame-based features with statistical models or discriminative classifiers. Existing methods are effective for clean single-source event detection but may not perform well for unstructured environmental sounds, which have a broad noise-like flat spectrum and a diverse variety of compositions. We present an automatic acoustic scene understanding framework that detects audio events through two hierarchies, acoustic scene recognition and audio event recognition: the former proceeds by following dominant audio sources and in turn helps infer non-dominant audio events within the same scene by modeling their occurrence correlations. On the scene recognition hierarchy, we perform adaptive segmentation and feature extraction for every input acoustic scene stream through Eigen-audiospace and an optimized feature subspace, respectively. After filtering the background, scene streams are recognized by modeling the observation density of dominant features with a two-level hidden Markov model. On the audio event recognition hierarchy, scene knowledge is characterized by an audio context model that describes the occurrence correlations of dominant and non-dominant audio events within the scene; Monte Carlo integration and gradient descent techniques are employed to maximize the likelihood and correctly tag each audio event. To the best of our knowledge, this is the first work that models event correlations as scene context for robust audio event detection in complex and noisy environments. According to a recent report, the mean accuracy of human listeners on the acoustic scene classification task is only around 71% on data collected in office environments from the DCASE dataset; none of the existing methods performs well on all scene categories, and the average of the best performances of 11 recent methods is 53.8%, whereas the proposed method achieves an average accuracy of 62.3% on the same dataset. Additionally, we create a 10-CASE dataset by manually collecting 5,250 audio clips of 10 scene types and 21 event categories. Our experimental results on 10-CASE show that the proposed method achieves an improved average accuracy of 78.3%, and that the accuracy of audio event recognition can be effectively improved by capturing dominant audio sources and reasoning about non-dominant events from the dominant ones through acoustic context modeling. Future work will explore the interactions between acoustic scene recognition and audio event detection and incorporate other modalities to further improve accuracy.

16.
In this paper, a novel wavelet-based contourlet transform for texture extraction is presented. Visual quality recognition of nonwovens based on image processing can be considered a special case of applying computer vision and pattern recognition to industrial inspection. Concretely, the proposed method has two stages: feature extraction using the wavelet-based contourlet transform, followed by grade recognition with a support vector machine (SVM). For texture analysis, we propose a wavelet-based contourlet transform that can be viewed as a simplified version of the transform introduced by Eslami, yet is theoretically sufficient for texture analysis of nonwoven images. In the experiments, nonwoven images of five visual quality grades are first decomposed with the wavelet-based contourlet transform at 3 levels, using the 'PKVA' filter as the default filter of the Laplacian Pyramid (LP) and Directional Filter Bank (DFB). Two energy-based features, the L1 norm and the L2 norm, are then calculated from the wavelet coefficients at the first level and from the contourlet coefficients of each high-frequency subband. Finally, an SVM classifier is trained and tested with samples selected from the feature set. Experimental results indicate that when the nonwoven images are decomposed at 3 levels and 16 L2 features are extracted, with 500 samples used to train the SVM, the average recognition accuracy on the test set is 99.2%, which is superior to a comparative method using wavelet texture analysis.
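Energy-style texture features (L1 and L2 norms of sub-band coefficients) can be sketched with a plain 2-D wavelet decomposition, assuming PyWavelets; a full wavelet-based contourlet transform (LP plus DFB) is beyond this snippet, so it only illustrates the norm computation.

```python
# L1/L2 norms of high-frequency sub-bands from a 2-D wavelet decomposition.
import numpy as np
import pywt

def subband_norms(image, wavelet="db1", level=3):
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    feats = []
    for detail in coeffs[1:]:                    # (cH, cV, cD) per level
        for band in detail:
            feats.append(np.sum(np.abs(band)))        # L1 norm
            feats.append(np.sqrt(np.sum(band ** 2)))  # L2 norm
    return np.array(feats)

img = np.random.rand(256, 256)                   # placeholder nonwoven image
print(subband_norms(img).shape)                  # 3 levels x 3 bands x 2 = (18,)
```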

17.
A system for the detection, segmentation and recognition of multi-class hand postures against complex natural backgrounds is presented. Visual attention, the cognitive process of selectively concentrating on a region of interest in the visual field, helps humans recognize objects in cluttered natural scenes. The proposed system utilizes a Bayesian model of visual attention to generate a saliency map and to detect and identify the hand region. Feature-based visual attention is implemented using a combination of high-level (shape, texture) and low-level (color) image features. The shape and texture features are extracted from a skin similarity map using a computational model of the ventral stream of the visual cortex. The skin similarity map, which represents the similarity of each pixel to human skin color in the HSI color space, enhances the edges and shapes within skin-colored regions. The color features used are the discretized chrominance components in the HSI and YCbCr color spaces and the skin similarity map. The hand postures are classified from the shape and texture features with a support vector machine classifier. A new 10-class complex-background hand posture dataset, the NUS hand posture dataset-II, is developed for testing the proposed algorithm (40 subjects of different ethnicities, various hand sizes, 2750 hand postures and 2000 background images). The algorithm is tested for hand detection and hand posture recognition using 10-fold cross-validation. The experimental results show that the algorithm is person independent and reliable against variations in hand size and complex backgrounds, providing a recognition rate of 94.36%. A comparison with other existing methods evidences its better performance.

18.
With the growing demand for understanding human emotional behavior and for human-machine interaction in recent electronic applications, speaker emotion recognition is a key component that has attracted a great deal of attention among researchers. Even though a handful of works on speaker emotion classification are available in the literature, important challenges such as distinct emotions, low-quality recordings, and independent affective states still need to be addressed with a good classifier and discriminative features. Accordingly, a new classifier, called the fractional deep belief network (FDBN), is developed by combining a deep belief network (DBN) with fractional calculus. This classifier is trained on multiple features, namely tonal power ratio, spectral flux, pitch chroma and Mel-frequency cepstral coefficients (MFCC), to make the emotional classes more separable through their spectral characteristics. The proposed FDBN classifier with the integrated feature vectors is tested on two databases, the Berlin database of emotional speech and a real-time Telugu database. The performance of the proposed FDBN and the existing DBN classifiers is validated using the false acceptance rate (FAR), false rejection rate (FRR) and accuracy. The proposed FDBN achieves accuracies of 98.39% and 95.88% on the Berlin and Telugu databases, respectively.
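Two of the spectral descriptors named above can be sketched with librosa as below; the onset-strength envelope is used as a common surrogate for spectral flux, which is an assumption of this sketch rather than the paper's implementation, and the summary statistics are illustrative.

```python
# Spectral-flux-like envelope and pitch chroma summary features (librosa sketch).
import numpy as np
import librosa

def flux_and_chroma(wav_path, sr=16000):
    y, sr = librosa.load(wav_path, sr=sr)
    flux = librosa.onset.onset_strength(y=y, sr=sr)    # flux-like per-frame envelope
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)   # 12 pitch-class bins per frame
    return np.concatenate([[flux.mean(), flux.std()],
                           chroma.mean(axis=1), chroma.std(axis=1)])
```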

19.
Human emotion recognition using brain signals is an active research topic in the field of affective computing, and music is considered a powerful tool for arousing emotions in human beings. This study recognized happy, sad, love and anger emotions in response to audio music tracks from the electronic, rap, metal, rock and hip-hop genres. Participants were asked to listen to a one-minute audio music track for each genre in a noise-free environment. The main objectives were to determine the effect of different music genres on human emotions and to identify the age group most responsive to music. Thirty men and women from three age groups (15-25, 26-35 and 36-50 years) underwent the experiment, which also included a self-reported emotional state after listening to each type of music. Features from three domains, i.e. time, frequency and wavelet, were extracted from the recorded EEG signals and used by the classifier to recognize human emotions. The results show that an MLP gives the best accuracy for recognizing human emotion from hybrid brain-signal features in response to audio music tracks. It is also observed that the rock and rap genres generated happy and sad emotions, respectively, in the subjects under study, and that the brain signals of the 26-35 year age group gave the best emotion recognition accuracy in accordance with the self-reported emotions.
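One slice of the hybrid feature set, frequency-domain band powers feeding an MLP, can be sketched with SciPy and scikit-learn; the band edges, sampling rate, network size and random data are illustrative assumptions.

```python
# Per-band EEG power features (Welch periodogram) feeding an MLP classifier.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

def band_powers(eeg, fs=128, bands=((4, 8), (8, 13), (13, 30), (30, 45))):
    """eeg: (trials, channels, samples) -> (trials, channels*len(bands))."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs, axis=-1)
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in bands]
    return np.stack(feats, axis=-1).reshape(eeg.shape[0], -1)

X = band_powers(np.random.randn(120, 14, 1280))       # 120 trials, 14 channels
y = np.random.randint(0, 4, size=120)                 # happy/sad/love/anger
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
print(clf.score(X, y))
```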

20.