Similar Documents
20 similar documents retrieved.
1.
Machine emotion is realized by embedding agents endowed with affective capabilities. Although the field of human-computer interaction has produced a large body of results, research on affective computing for agents is still at an early stage, and pursuing it in depth has significant scientific and practical value for advancing the field. This survey selects representative literature retrieved from the Scopus database and focuses on the bidirectional flow of emotion between agent and user, analyzing and summarizing work from two angles: the agent's perception of the user's emotion and its regulation of the user's emotion. It first reviews methods for recognizing user emotion, i.e., analyzing the user's emotional state from multi-channel information such as facial expressions, speech, posture, physiological signals, and text, and summarizes machine learning methods used in emotion recognition. It then analyzes, from a user-experience perspective, how emotionally expressive agents affect users, summarizes techniques for generating and expressing agent emotion, and points out that besides facial expressions, agents can express emotion through non-verbal behaviors such as gaze, posture, head movement, and gestures. Typical agent emotion architectures are reviewed, with examples of the role of reinforcement learning in agent emotion design. To verify model accuracy, existing affective evaluation methods and metrics are compared. Finally, the problems that agent affective computing urgently needs to solve are identified. Summarizing the existing research, agent affective computing is a promising research direction, and this survey is intended as a reference for further work.

2.
A growing body of research suggests that affective computing has many valuable applications in enterprise systems research and e-business. This paper explores affective computing techniques for a vital sub-area of enterprise systems: consumer satisfaction measurement. We propose a linguistic-based emotion analysis and recognition method for measuring consumer satisfaction. Using an annotated emotion corpus (Ren-CECps), we first present a general evaluation of customer satisfaction by comparing the linguistic characteristics of emotional expressions of positive and negative attitudes. Associations among four negative emotions are further investigated. We then build a fine-grained emotion recognition system based on machine learning algorithms for measuring customer satisfaction; it can detect and recognize multiple emotions in customers' words or comments. The results indicate that blended emotion recognition can gain rich feedback data from customers, which enables more appropriate follow-up in customer relationship management.
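As an illustration of the kind of fine-grained, multi-label recognizer the abstract describes, here is a minimal sketch using TF-IDF features and one-vs-rest logistic regression; the toy corpus, label set, and classifier choice are stand-ins for the authors' actual setup (the paper works with Ren-CECps).

```python
# Illustrative multi-label text emotion recognizer (not the authors' exact system).
# Assumes a corpus of (text, emotion-label-set) pairs, e.g. drawn from Ren-CECps.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

texts = ["the service was wonderful", "slow delivery and rude staff"]  # toy data
labels = [{"joy"}, {"anger", "anxiety"}]                               # blended emotions

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                    # one binary column per emotion
vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(texts)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
# Each comment may receive several emotions at once (blended-emotion output).
print(mlb.inverse_transform(clf.predict(vec.transform(["rude and slow"]))))
```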

3.
This research applies an innovative way to measure and identify a user's emotion from different color components. Finding an intuitive way to understand human emotion is the key aim of the work. The RGB color system, widely used across all forms of computer systems, is an additive color system in which red, green, and blue light are combined to produce the full range of colors. The study is based on Thayer's emotion model, which classifies emotions along two axes, valence and arousal, and gathers users' emotion-color associations in RGB as input for estimating and forecasting the user's emotion. In the experiment, 320 samples divided into four emotion groups were used to train the weights of a neural network, and 160 samples were used to verify its accuracy. The results show that the model can reliably estimate emotion from the color responses users give. The experiment also found that the trends of the individual color components, plotted in a Cartesian coordinate system, reveal distinct intensities within the RGB color system. Building on this emotion detection model, the authors intend to design an affective computing intelligence framework with the emotion component embedded in it.
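A minimal sketch of the idea under one assumption, namely that the task is framed as classifying an RGB triple into one of Thayer's four valence/arousal quadrants: a small neural network trained on 320 samples and tested on 160, with synthetic data standing in for the collected responses.

```python
# Sketch of the paper's idea: a small neural network mapping an RGB colour
# choice to one of Thayer's four valence/arousal quadrants. The data here are
# random stand-ins for the 320 training / 160 test user responses.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.random((320, 3))              # RGB components scaled to [0, 1]
y_train = rng.integers(0, 4, size=320)      # 4 quadrant labels (emotion groups)
X_test = rng.random((160, 3))
y_test = rng.integers(0, 4, size=160)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("quadrant accuracy:", net.score(X_test, y_test))  # ~chance on random data
```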

4.
Text Affective Computing from a Cognitive Perspective   (Cited: 1; self-citations: 0; external: 1)
徐琳宏, 林鸿飞 《计算机科学》, 2010, 37(12): 182-185
This paper introduces knowledge from cognitive pragmatics and emotion psychology into text affective computing. Against the theoretical background of Lazarus's cognitive-appraisal theory and the cognitive-context notion of cognitive pragmatics, it proposes a new cognitive model of text emotion. Starting from the mechanisms by which emotion arises, the model builds on multiple emotion schemata and accounts for factors such as the polarity of emotion under negation, improving the accuracy of text emotion recognition. The cognitive model studies emotion recognition from a new perspective, broadening the dimensions and theoretical background of the research and offering a new line of inquiry. Experiments confirm that the model is effective.

5.
This paper reviews recent progress and future directions in speech emotion recognition, with particular attention to practical speech emotion recognition tied to real applications. Its main contents are: a review of the history of affective computing and a discussion of its practical applications; a summary of general methods for speech emotion recognition, including emotion modeling, construction of emotion databases, emotion feature extraction, and recognition algorithms; a focused analysis and discussion of practical speech emotion recognition methods driven by the needs of specific application domains; and an analysis of the difficulties encountered in practice. For application-oriented emotions such as irritation, it summarizes methods for building practical emotional speech corpora, feature analysis, and practical speech emotion modeling. Finally, it looks ahead to future directions of practical speech emotion recognition research, analyzing the problems likely to arise and ways to address them.

6.
To address the inability of vision-based emotion recognition to capture the influence of a person's surroundings and their interactions with nearby people, the limited descriptive power of a single emotion category, and the lack of reasonable prediction of future emotion, this paper proposes a visual emotion recognition and prediction method that fuses background context features. The method consists of an emotion recognition model that fuses background context features (Context-ER) and an emotion prediction model based on a GRU and the continuous Valence-Arousal emotion dimensions (GRU-mapVA). Context-ER jointly uses facial expression, body posture, and background context (the surrounding scene and interaction with nearby people) to perform multi-label classification over 26 discrete emotion categories and regression over 3 continuous emotion dimensions. GRU-mapVA projects the predicted Valence-Arousal values onto an improved Valence-Arousal model according to the proposed mapping rules, making between-class differences in emotion prediction more pronounced. Context-ER was tested on the Emotic dataset, improving the average precision of emotion recognition by more than 4% over the best existing method; GRU-mapVA was tested on three video samples, showing a large improvement in emotion prediction over existing methods.
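A hedged sketch of the multi-task output structure the abstract describes for Context-ER: a shared layer feeding a 26-way multi-label classification head and a 3-dimension regression head. Feature dimensions, layer sizes, and loss choices are assumptions; the published model's backbone and fusion details are not reproduced here.

```python
# Minimal sketch (not the published Context-ER weights): a shared layer with
# two heads, one for 26-way multi-label emotion classification and one for
# regression of 3 continuous affect dimensions, as the abstract describes.
import torch
import torch.nn as nn

class ContextERHead(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.discrete = nn.Linear(256, 26)    # multi-label logits
        self.continuous = nn.Linear(256, 3)   # continuous affect dimensions

    def forward(self, fused_features):        # fused face/body/context features
        h = self.shared(fused_features)
        return self.discrete(h), self.continuous(h)

model = ContextERHead()
logits, dims = model(torch.randn(4, 512))     # batch of 4 fused feature vectors
loss = (nn.BCEWithLogitsLoss()(logits, torch.rand(4, 26).round())
        + nn.MSELoss()(dims, torch.rand(4, 3)))
```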

7.
Emotion plays a central role in a range of intelligent activities including perception, decision-making, logical reasoning, and social interaction, and is a key element in realizing human-computer interaction and machine intelligence. In recent years, with the explosive growth of multimedia data and the rapid development of artificial intelligence, affective computing and understanding has attracted broad attention. It aims to endow computer systems with the ability to recognize, understand, express, and adapt to human emotion, building a harmonious human-machine environment and giving computers higher and more comprehensive intelligence. Depending on the input signal, affective computing and understanding comprises different research directions. This paper comprehensively reviews progress over the past decades in multimodal emotion recognition, emotion recognition in autism, affective image content analysis, and facial expression recognition, and looks ahead to future trends. For each direction, it first introduces the research background, problem definition, and significance; it then surveys the international and domestic state of the art from multiple angles, including emotion data annotation, feature extraction, learning algorithms, performance comparison and analysis of representative methods, and representative research groups; it then systematically compares domestic and international research, analyzing the strengths and weaknesses of domestic work; finally, it discusses open problems and future trends, such as accounting for individual differences in emotional expression and for user privacy.

8.
Extracting and understanding emotion is highly important for the interaction between humans and machine communication systems. The most expressive channel for displaying human emotion is facial expression. This paper proposes a multiple emotion recognition system that can recognize combinations of up to three different emotions using an active appearance model (AAM), a proposed classification standard, and a k-nearest neighbor (k-NN) classifier in mobile environments. The AAM captures expression variations, which are scored by the proposed classification standard as human expressions change in real time. The proposed k-NN classifies the basic emotions (normal, happy, sad, angry, surprise) as well as more ambiguous emotions by combining the basic emotions in real time, and each recognized emotion that can be subdivided carries a strength. Whereas most previous emotion recognition methods recognize various kinds of a single emotion, this work recognizes a variety of emotions as combinations of the five basic emotions. For ease of understanding, the recognized result is presented in three ways on a mobile camera screen. The experiments yielded an average recognition rate of 85 %, with about 40 % of results exhibiting optimized (combined) emotions. The implemented system also serves as an example of augmented reality, displaying a combination of real face video and virtual animation with the user's avatar.
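A toy sketch of the k-NN blending idea, under the assumption that neighbour shares serve as the emotion "strengths" the abstract mentions: the k nearest training expressions vote, and every basic emotion among them is reported with its share. Random vectors stand in for AAM output.

```python
# Toy illustration of the paper's k-NN idea: inspect the k nearest training
# expressions and report every basic emotion among the neighbours, with the
# neighbour share as its "strength". Random features stand in for AAM output.
import numpy as np
from collections import Counter
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X = rng.random((100, 20))                                  # AAM-like feature vectors
labels = rng.choice(["normal", "happy", "sad", "angry", "surprise"], 100)

nn_index = NearestNeighbors(n_neighbors=5).fit(X)
_, idx = nn_index.kneighbors(rng.random((1, 20)))          # query expression
votes = Counter(labels[idx[0]])
blend = {emo: n / 5 for emo, n in votes.most_common(3)}    # up to 3 emotions
print(blend)                                               # e.g. {'sad': 0.4, ...}
```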

9.
An Affective Interaction System Based on Facial Expression Features   (Cited: 1; self-citations: 1; external: 0)
徐红, 彭力 《计算机应用研究》, 2012, 29(3): 1111-1115
This paper designs an affective interaction system (an affective virtual human) based on facial expression features, whose key techniques fall into three parts: emotion recognition, affective computing, and emotion synthesis and output. In the emotion recognition part, static facial expression images are first preprocessed with a feature-block method, features are then extracted using two-dimensional principal component analysis (2DPCA), and finally a multi-level quantum neural network classifier performs classification into seven expression categories. In the affective computing part, a hidden Markov emotion model (HMM) is built, with its parameters estimated by an improved genetic algorithm. In the synthesis and output stage, a 3D face mesh model is first built with an algorithm combining NURBS surfaces and patches, and keyframe techniques then produce continuous expression animation consistent with human behavior. The design of the complete facial-expression-based affective interaction system is thus accomplished.

10.
Automatically recognizing human emotions from spontaneous and non-prototypical real-life data is currently one of the most challenging tasks in the field of affective computing. This article presents our recent advances in assessing dimensional representations of emotion, such as arousal, expectation, power, and valence, in an audiovisual human–computer interaction scenario. Building on previous studies which demonstrate that long-range context modeling tends to increase accuracies of emotion recognition, we propose a fully automatic audiovisual recognition approach based on Long Short-Term Memory (LSTM) modeling of word-level audio and video features. LSTM networks are able to incorporate knowledge about how emotions typically evolve over time so that the inferred emotion estimates are produced under consideration of an optimal amount of context. Extensive evaluations on the Audiovisual Sub-Challenge of the 2011 Audio/Visual Emotion Challenge show how acoustic, linguistic, and visual features contribute to the recognition of different affective dimensions as annotated in the SEMAINE database. We apply the same acoustic features as used in the challenge baseline system whereas visual features are computed via a novel facial movement feature extractor. Comparing our results with the recognition scores of all Audiovisual Sub-Challenge participants, we find that the proposed LSTM-based technique leads to the best average recognition performance that has been reported for this task so far.
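A minimal sketch in the spirit of the approach (not the authors' implementation): an LSTM consumes a sequence of word-level audiovisual feature vectors and emits per-step estimates of the four affective dimensions. Feature and hidden sizes are assumptions.

```python
# Minimal LSTM regressor in the spirit of the approach: a sequence of word-level
# audiovisual feature vectors is mapped to continuous affect estimates per step.
# Dimensions and data are illustrative, not those of the SEMAINE setup.
import torch
import torch.nn as nn

class AffectLSTM(nn.Module):
    def __init__(self, feat_dim=120, hidden=64, n_dims=4):  # arousal, expectation, power, valence
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_dims)

    def forward(self, x):                 # x: (batch, words, feat_dim)
        h, _ = self.lstm(x)               # context accumulates across the utterance
        return self.out(h)                # per-word affect estimates

model = AffectLSTM()
preds = model(torch.randn(2, 30, 120))    # 2 sequences of 30 word-level frames
```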

11.
An important trend in the development of intelligent tutoring systems (ITSs) has been providing the student with a more personalized and friendly environment for learning. Many researchers now feel strongly that ITSs would perform significantly better if they could adapt to the affective state of the learner. This idea has spawned the developing field of affective tutoring systems (ATSs): ATSs are ITSs that are able to adapt to the affective state of students. However, ATSs are not widely employed in the tutoring system market. In this paper, a survey was conducted to investigate the critical factors affecting learner satisfaction with ATSs, based on an ATS we developed. The results revealed that the learner's attitude toward affective computing, the agent tutor's expressiveness, emotion recognition accuracy, the number of emotions recognized by the agent tutor, pedagogical action, and ease of use of the system significantly influence learner satisfaction. The results indicate to institutions how to further strengthen the implementation of ATSs.

12.
Human emotion recognition using brain signals is an active research topic in the field of affective computing. Music is considered a powerful tool for arousing emotions in human beings. This study recognized happy, sad, love, and anger emotions in response to audio music tracks from the electronic, rap, metal, rock, and hip-hop genres. Participants listened to a 1-minute audio track for each genre in a noise-free environment. The main objectives of this study were to determine the effect of different genres of music on human emotions and to identify the age group most responsive to music. Thirty men and women from three age groups (15–25 years, 26–35 years, and 36–50 years) underwent the experiment, which also included self-reported emotional state after listening to each type of music. Features from three domains, i.e., time, frequency, and wavelet, were extracted from the recorded EEG signals and fed to classifiers to recognize human emotions. The results show that an MLP gives the best accuracy for recognizing human emotion in response to audio music tracks when using hybrid features of brain signals. It was also observed that the rock and rap genres generated happy and sad emotions, respectively, in the subjects under study. The brain signals of the 26–35-year age group gave the best emotion recognition accuracy relative to the self-reported emotions.
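A hedged sketch of hybrid (time-, frequency-, and wavelet-domain) feature extraction for a single EEG channel; the concrete features, bands, and wavelet used in the study are not specified in the abstract, so the choices below are illustrative.

```python
# Illustrative hybrid feature extraction for one EEG channel, roughly following
# the abstract's three domains (time, frequency, wavelet). Requires numpy,
# scipy, and pywt; bands and wavelet are assumptions.
import numpy as np
import pywt
from scipy.signal import welch

def hybrid_features(sig, fs=128):
    time_feats = [sig.mean(), sig.std(), np.ptp(sig)]       # time domain
    f, pxx = welch(sig, fs=fs, nperseg=fs)                  # frequency domain
    bands = [(4, 8), (8, 13), (13, 30)]                     # theta, alpha, beta
    freq_feats = [pxx[(f >= lo) & (f < hi)].sum() for lo, hi in bands]
    coeffs = pywt.wavedec(sig, 'db4', level=4)              # wavelet domain
    wav_feats = [np.sum(c ** 2) for c in coeffs]            # sub-band energies
    return np.array(time_feats + freq_feats + wav_feats)

X = np.stack([hybrid_features(np.random.randn(1280)) for _ in range(8)])
# X would then be fed to a classifier such as sklearn's MLPClassifier.
```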

13.
How we design and evaluate for emotions depends crucially on what we take emotions to be. In affective computing, affect is often taken to be another kind of information—discrete units or states internal to an individual that can be transmitted in a loss-free manner from people to computational systems and back. While affective computing explicitly challenges the primacy of rationality in cognitivist accounts of human activity, at a deeper level it often relies on and reproduces the same information-processing model of cognition. Drawing on cultural, social, and interactional critiques of cognition which have arisen in human–computer interaction (HCI), as well as anthropological and historical accounts of emotion, we explore an alternative perspective on emotion as interaction: dynamic, culturally mediated, and socially constructed and experienced. We demonstrate how this model leads to new goals for affective systems—instead of sensing and transmitting emotion, systems should support human users in understanding, interpreting, and experiencing emotion in its full complexity and ambiguity. In developing from emotion as objective, externally measurable unit to emotion as experience, evaluation, too, alters focus from externally tracking the circulation of emotional information to co-interpreting emotions as they are made in interaction.

14.
During the COVID-19 pandemic, many individuals around the world suffered from suicidal ideation. Social distancing and quarantine affect patients emotionally. Affective computing is the study of recognizing human feelings and emotions, and this technology can be used effectively during a pandemic for facial expression recognition, which automatically extracts features from the human face. A monitoring system plays a very important role in detecting the patient's condition and recognizing patterns of expression from a safe distance. In this paper, a new method is proposed for emotion recognition and suicidal-ideation detection in COVID patients; it alerts the nurse when the patient's emotion is fear, crying, or sadness. The research introduces image processing technology for emotional analysis of patients using a machine learning algorithm. The proposed convolutional neural network (CNN) architecture with DnCNN preprocessing enhances recognition performance. The system can analyze the mood of patients either in real time or from video files recorded by CCTV cameras. The proposed method achieves higher accuracy than comparable methods and detects the likelihood of a suicide attempt based on stress level and emotion recognition.
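A minimal sketch of the recognition stage, assuming a small CNN over grayscale face crops; the DnCNN denoising step is represented only by a placeholder function, since the abstract gives neither its weights nor its exact architecture.

```python
# Sketch of a small facial-expression CNN; the paper's DnCNN denoising stage is
# represented here only by the placeholder `denoise`, an assumption for
# illustration.
import numpy as np
import tensorflow as tf

def denoise(frames):                 # placeholder for DnCNN preprocessing
    return frames                    # in the paper, a trained DnCNN runs here

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(48, 48, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(7, activation='softmax'),   # e.g. 7 emotion classes
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

frames = denoise(np.random.rand(4, 48, 48, 1).astype('float32'))
probs = model.predict(frames)        # emotions like fear/sad would raise an alert
```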

15.
In this paper, we present human emotion recognition systems based on audio and spatio-temporal visual features. The proposed system has been tested on an audiovisual emotion data set with different subjects of both genders. Mel-frequency cepstral coefficient (MFCC) and prosodic features are first identified and then extracted from emotional speech. For facial expressions, spatio-temporal features are extracted from the visual streams. Principal component analysis (PCA) is applied for dimensionality reduction of the visual features, capturing 97 % of the variance. A codebook is constructed for both audio and visual features using Euclidean distance. Histogram occurrences are then fed to state-of-the-art SVM classifiers to obtain the judgment of each classifier. Finally, the judgments from the classifiers are combined using the Bayes sum rule (BSR) as the decision step. The proposed system is tested on a public data set for recognizing human emotions. Experimental results show that using visual features alone yields on average 74.15 % accuracy, while using audio features alone gives an average recognition accuracy of 67.39 %; combining both audio and visual features significantly improves the overall system accuracy, up to 80.27 %.
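The final fusion step is simple enough to show directly: under the Bayes sum rule (with equal class priors), the per-class posteriors from the audio and visual classifiers are added and the argmax is taken. The numbers below are illustrative, not taken from the paper.

```python
# Toy illustration of the final fusion step: the Bayes sum rule adds the
# per-class posteriors from the audio and visual SVMs and picks the argmax.
import numpy as np

classes = ["happy", "sad", "angry", "neutral"]
p_audio = np.array([0.50, 0.20, 0.20, 0.10])   # posterior from the audio SVM
p_video = np.array([0.30, 0.40, 0.15, 0.15])   # posterior from the visual SVM

fused = p_audio + p_video                      # sum rule (prior terms cancel
decision = classes[int(np.argmax(fused))]      # for equal class priors)
print(decision)                                # -> 'happy'
```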

16.
The recognition of emotion in human speech has gained increasing attention in recent years due to the wide variety of applications that benefit from such technology. Detecting emotion from speech can be viewed as a classification task: assigning an emotion category from a fixed set, e.g. happiness or anger, to a speech utterance. In this paper, we tackle two emotions, namely happiness and anger. The parameters extracted from the speech signal depend on the speaker and the spoken word as well as on the emotion. To detect the emotion, we keep the spoken utterance and the speaker constant and vary only the emotion. Different features are extracted to identify the parameters responsible for emotion; the wavelet packet transform (WPT) is found to be emotion-specific. We performed the experiments using three methods. The first method uses the WPT and compares the number of coefficients greater than a threshold in different bands. The second method uses the WPT to compute and compare energy ratios in different bands. The third is a conventional method using MFCC. The results obtained using the WPT for angry, happy, and neutral modes are 85 %, 65 %, and 80 % respectively, compared with 75 %, 45 %, and 60 % respectively for the three emotions using MFCC. Based on WPT features, a model is proposed for emotion conversion, namely neutral-to-angry and neutral-to-happy.
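A sketch of the second method under stated assumptions (a db4 wavelet and depth-4 decomposition, neither of which the abstract specifies): decompose the utterance with a wavelet packet transform and compute per-band energy ratios.

```python
# Sketch of the second method: decompose an utterance with a wavelet packet
# transform and compare per-band energy ratios. Wavelet choice and depth are
# assumptions.
import numpy as np
import pywt

signal = np.random.randn(4096)                          # stand-in for an utterance
wp = pywt.WaveletPacket(data=signal, wavelet='db4', maxlevel=4)
bands = wp.get_level(4, order='freq')                   # 16 frequency-ordered bands

energies = np.array([np.sum(node.data ** 2) for node in bands])
ratios = energies / energies.sum()                      # band energy ratios
# Emotion-specific bands would show systematically different ratios across
# angry / happy / neutral utterances.
```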

17.
This paper presents a fuzzy relational approach to human emotion recognition from facial expressions and its control. The proposed scheme uses external stimulus to excite specific emotions in human subjects whose facial expressions are analyzed by segmenting and localizing the individual frames into regions of interest. Selected facial features such as eye opening, mouth opening, and the length of eyebrow constriction are extracted from the localized regions, fuzzified, and mapped onto an emotion space by employing Mamdani-type relational models. A scheme for the validation of the system parameters is also presented. This paper also provides a fuzzy scheme for controlling the transition of emotion dynamics toward a desired state. Experimental results and computer simulations indicate that the proposed scheme for emotion recognition and control is simple and robust, with good accuracy.
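To make the Mamdani-style mapping concrete, here is a hand-rolled sketch with one made-up rule and illustrative membership functions; the paper's actual rule base, feature set, and parameters are not reproduced.

```python
# Hand-rolled Mamdani-style sketch with assumed membership functions and a
# single illustrative rule, showing the shape of the mapping from facial
# features to an emotion score.
import numpy as np

def trimf(x, a, b, c):                        # triangular membership function
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

eye_open, mouth_open = 0.7, 0.8               # normalized facial measurements
mu_eye_wide = trimf(eye_open, 0.5, 1.0, 1.5)
mu_mouth_wide = trimf(mouth_open, 0.5, 1.0, 1.5)

# Rule: IF eye opening is wide AND mouth opening is wide THEN surprise is high.
fire = min(mu_eye_wide, mu_mouth_wide)        # Mamdani AND = min

y = np.linspace(0, 1, 101)                    # "surprise" output universe
clipped = np.minimum(trimf(y, 0.5, 1.0, 1.5), fire)   # implication = min
surprise = np.sum(y * clipped) / np.sum(clipped)      # centroid defuzzification
print(round(float(surprise), 2))
```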

18.
In recent years, affective computing has become a research hotspot in natural language processing and artificial intelligence, and text sentiment analysis is an important part of it. This paper proposes a multi-label sentiment classification method that fuses topic features with three-way decision theory. A topic-based emotion recognition model first judges the multi-label emotion categories of sentences; combining this with three-way decision theory then yields multi-label sentiment classification of whole documents. Experimental results show that the method achieves satisfactory results in recognizing multi-label emotion categories at the document level.
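A toy sketch of the three-way decision step, assuming per-label probabilities from the topic-based recognizer and illustrative thresholds alpha and beta: each label is accepted, rejected, or deferred to a boundary region for further analysis.

```python
# Toy sketch of three-way decisions over per-label emotion probabilities.
# Thresholds and probabilities are illustrative assumptions.
import numpy as np

alpha, beta = 0.7, 0.3
labels = ["joy", "sadness", "anger", "fear"]
probs = np.array([0.85, 0.55, 0.10, 0.40])     # P(label | document)

for label, p in zip(labels, probs):
    if p >= alpha:
        decision = "accept"                    # positive region: assign the label
    elif p <= beta:
        decision = "reject"                    # negative region: drop the label
    else:
        decision = "defer"                     # boundary region: examine further
    print(f"{label}: {decision}")
```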

19.
Academic emotions influence and regulate learners' attention, memory, thinking, and other cognitive activities, and automatic emotion recognition is the basis of affective interaction and instructional decision-making in smart learning environments. Current emotion recognition research focuses mainly on discrete emotions, which are non-continuous along the time axis and cannot precisely characterize how students' academic emotions evolve. To address this, a dimensional affect dataset of secondary school students in a real online learning context was built via crowdsourcing, and a deep learning model for continuous dimensional emotion prediction was designed. In the experiment, learning materials that trigger academic emotions were chosen according to students' learning styles, and 32 participants were recruited for self-directed online learning; facial images of the participants were captured in real time, yielding 157 videos of students' academic emotions. Each video was annotated on the two affect dimensions of Arousal and Valence, producing a dimensional database containing 2,178 student facial expressions. A dimensional affect model based on a ConvLSTM network was built; on this dataset for secondary school students it achieved a mean concordance correlation coefficient (CCC) of 0.581, and on the public Aff-Wild dataset a mean CCC of 0.222. Experiments show that the proposed dimensional affect model improves the CCC metric by 7.6%-43.0% in dimensional emotion recognition on the public Aff-Wild dataset.
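The evaluation metric used above, the concordance correlation coefficient (CCC), can be computed in a few lines; it rewards predictions that are both correlated with and close in value to the gold annotations.

```python
# Concordance correlation coefficient (CCC):
# CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
import numpy as np

def ccc(x, y):
    vx, vy = x.var(), y.var()
    cov = np.cov(x, y, bias=True)[0, 1]
    return 2 * cov / (vx + vy + (x.mean() - y.mean()) ** 2)

gold = np.array([0.1, 0.4, 0.3, 0.8, 0.6])      # e.g. per-frame valence labels
pred = np.array([0.2, 0.35, 0.3, 0.7, 0.55])    # model predictions
print(round(float(ccc(gold, pred)), 3))
```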

20.

Autism spectrum disorder (ASD), which since 2013 has been considered an umbrella term for several disorders such as autistic disorder, Asperger's disorder, and pervasive developmental disorder, is characterized, among other aspects, by deficits in social-emotional reciprocity. This deficit manifests itself as reduced sharing of emotions and increased difficulty in interpreting the emotions other people are feeling, which ultimately leads to greater impairments in social communication. Since it is possible to help a person with ASD (especially children) to improve their ability to understand and detect emotions, we have developed a proposal that integrates emotion recognition technologies, often used in the field of HCI, to try to overcome this difficulty. In this paper, we present a novel software application developed as a serious game to teach children with autism spectrum disorder (ASD) to identify and express emotions. The system incorporates cutting-edge technology to support novel interaction mechanisms based on tangible user interfaces (TUIs) and emotion recognition from facial expressions. In this way, children interact with the system in a natural way, simply by grasping objects with their hands and using their faces. The system has been assessed on the premises of an association for children with ASD. The outcomes of the evaluation are very positive and support the validity of the proposal.
