Similar Literature
20 similar documents found
1.
In the current era of the internet, people use online media for conversation, discussion, chatting, and other similar purposes. Analysis of such material, where more than one person is involved, poses a separate challenge compared to other text analysis tasks. There are several approaches for identifying users’ emotions from conversational text in the English language; however, regional and low-resource languages have been neglected. The Urdu language is one of them, and despite it being used by millions of users across the globe, to the best of our knowledge there exists no work on dialogue analysis in the Urdu language. Therefore, in this paper, we propose a model which utilizes deep learning and machine learning approaches for the classification of users’ emotions from text. To accomplish this task, we first created a dataset for the Urdu language with the help of existing English-language datasets for dialogue analysis. After that, we preprocessed the data and selected dialogues with common emotions. Once the dataset was prepared, we used different deep learning and machine learning techniques for emotion classification and tuned the algorithms to the Urdu-language datasets. The experimental evaluation has shown encouraging results, with 67% accuracy on the Urdu dialogue dataset, in which more than 10,000 dialogues are classified into five emotions, i.e., joy, fear, anger, sadness, and neutral. We believe that this is the first effort at emotion detection from conversational text in the Urdu language domain.
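As a rough, hedged sketch of the classical machine-learning side of such a pipeline (not the authors' actual model or data), the snippet below trains a TF-IDF plus logistic-regression classifier over character n-grams, which sidesteps the need for an Urdu-specific tokenizer; the dialogue turns and labels are placeholders.

```python
# Minimal sketch: emotion classification of dialogue turns with scikit-learn.
# The texts and labels are placeholders, not the Urdu dataset from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

dialogues = [
    "placeholder Urdu dialogue turn 1", "placeholder Urdu dialogue turn 2",
    "placeholder Urdu dialogue turn 3", "placeholder Urdu dialogue turn 4",
    "placeholder Urdu dialogue turn 5",
]
labels = ["joy", "fear", "anger", "sadness", "neutral"]

model = Pipeline([
    # Character n-grams avoid dependence on a language-specific tokenizer.
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(dialogues, labels)
print(model.predict(["placeholder Urdu dialogue turn to classify"]))
```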

2.
In this paper we investigate information-theoretic image coding techniques that assign longer codes to improbable, imprecise, and non-distinct intensities in the image. When applied to cropped facial images of subjects with different facial expressions, these variable-length coding techniques highlight the set of low-probability intensities that characterize the facial expression, such as the creases in the forehead, the widening of the eyes, and the opening and closing of the mouth. A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities, when used as a mask, decode the facial expression correctly, providing an effective platform for future emotion categorization experiments.
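The underlying information-theoretic idea, that rarer intensities deserve longer codewords, can be sketched in a few lines of NumPy; the snippet below estimates the intensity distribution of a grayscale image, computes the ideal code length -log2 p(i) per intensity, and masks the improbable pixels. It illustrates the principle only and is not the maximum-entropy partitioning scheme proposed in the paper.

```python
import numpy as np

def improbable_intensity_mask(image: np.ndarray, prob_threshold: float = 0.005):
    """Mask pixels whose intensity is improbable in this image.

    image: 2-D uint8 grayscale array. Returns (mask, code_lengths) where
    code_lengths[i] = -log2 p(i) is the ideal codeword length of intensity i.
    """
    counts = np.bincount(image.ravel(), minlength=256)
    probs = counts / counts.sum()
    # Ideal (Shannon) code length per intensity; rare intensities get long codes.
    with np.errstate(divide="ignore"):
        code_lengths = -np.log2(probs)
    mask = probs[image] < prob_threshold          # True where the intensity is rare
    return mask, code_lengths

# Example on a synthetic image: mostly mid-gray with a few bright "crease" pixels.
img = np.full((64, 64), 128, dtype=np.uint8)
img[10, 10:20] = 250
mask, lengths = improbable_intensity_mask(img)
print(mask.sum(), "improbable pixels; code length of intensity 250 =", lengths[250])
```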

3.
In this article, an attention-based mechanism that enhances a biologically inspired network for emotion recognition is proposed. Existing bio-inspired models use a multiscale and multiorientation architecture to gain discriminative power and to extract meticulous visual features. The prevailing HMAX model represents its S2 layer with prototype patches randomly selected from training samples, which increases computational complexity and degrades discriminative ability. As the eye and mouth regions are the most powerful and reliable cues for determining facial emotions, they serve as the prototype patches for the S2 layer in the HMAX model. An audio codebook is constructed from mel-frequency cepstral coefficients and temporal and spectral features processed by principal component analysis. Audio and video features are fused to train a support vector machine classifier. The results attained on the eNTERFACE, Surrey Audio-Visual Expressed Emotion, and Acted Facial Expressions in the Wild datasets confirm the efficiency of the proposed architecture for emotion recognition.
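A hedged sketch of the audio branch and the fused classifier is shown below, assuming librosa for MFCC extraction and treating the video descriptors as an already-computed vector; the HMAX prototype-patch processing is not reproduced, and all arrays are placeholders rather than eNTERFACE data.

```python
import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def audio_features(wav_path: str, n_mfcc: int = 13) -> np.ndarray:
    """Mean-pooled MFCCs as a fixed-length audio descriptor (placeholder for
    the paper's full MFCC + temporal/spectral feature set)."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical precomputed descriptors for N clips (placeholders, not real data).
N = 20
audio_feats = np.random.randn(N, 13)      # would come from audio_features(...)
video_feats = np.random.randn(N, 64)      # e.g., pooled visual responses
labels = np.random.randint(0, 6, size=N)  # six emotion classes

# PCA on the audio block, then early fusion with the video block.
audio_pca = PCA(n_components=8).fit_transform(audio_feats)
fused = np.concatenate([audio_pca, video_feats], axis=1)
clf = SVC(kernel="rbf").fit(fused, labels)
print(clf.predict(fused[:3]))
```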

4.
As a key link in human-computer interaction, emotion recognition can enable robots to correctly perceive user emotions and provide dynamic, adjustable services according to the emotional needs of different users, which is key to improving the cognitive level of robot services. Emotion recognition based on facial expression and electrocardiogram has numerous industrial applications. First, a three-dimensional convolutional neural network deep learning architecture is utilized to extract the spa...

5.
Objective: With the continuous development of robot interaction technology, researchers have begun to explore emotional communication between robots and humans, and robot facial expression design has become a research focus. To promote human-robot interaction, researchers need to construct robot facial expressions that people can accept. However, to date there has been no relatively systematic analysis and summary of the mechanisms for evaluating human perception of robot facial expressions. Therefore, by analyzing and summarizing rating scales for robot facial expression evaluation, this work aims to provide guidance for designers in selecting appropriate evaluation methods in different contexts. Methods: A literature survey was adopted; fifty-one relevant papers were reviewed and analyzed to identify scales applicable to robot facial expression evaluation. The literature keywords included robot facial expression design, robot facial expression evaluation, and robot emotion rating scales; the papers were mainly drawn from databases such as ACM, Springer, and IEEE and were published between 1977 and 2019. Through this literature analysis, six of the most important types of robot facial expression rating scales were extracted, including the empathy scale, the three-dimensional mental state scale, the robot anxiety scale, the positive-negative self-report scale, scales measuring individuals' perception of robots, and Likert scales; the specific forms of each type of scale and their variants are described in detail. The characteristics of the six types of scales are analyzed from three aspects: design points, source fields, and evaluation perspectives. Finally, recommendations on the applicability of robot facial expression rating scales are formed. Conclusion: The rating scales for robot facial expression evaluation are categorized and summarized, and the applicable stage and context of each scale are proposed.

6.
谢启思  陆定邦 《包装工程》2023,44(24):84-90
Objective: From the analytical perspective of the user journey map, to study how robots can express complex social emotions through motion. Methods: First, the definition and content of social emotions are analyzed, and social emotions are decomposed into an individual's own emotions and emotions directed toward others. Second, the implementation method of the user journey map is reviewed, and the elements of event, user, touchpoint, stage, and step are extracted and mapped by analogy to robot emotional expression, yielding the research elements of role, scenario, stimulus, emotional expression process, and emotional expression steps. Finally, based on the above, a method for robots to express social emotions through motion is generated, and its recognizability is verified through an emotional expression task experiment with a pet robot. Conclusion: The proposed method for expressing robot social emotions through motion has the advantages of modular expression, systematic design, and process-based analysis: it can decompose social emotions into simple basic emotion modules, simplifying the problem of complex robot emotional expression; it can present the process of emotional change through systematic design, demonstrating the robot's interaction intent; and it can refine the emotional change process into concrete combined motions through process-based analysis, providing guidance for the design of robot emotional expression motions.

7.
Studies on user emotion as a vital part of the user experience have been increasing over the last decade. These studies indicate that it is important to take user emotions into account when developing usable interfaces. Previous studies demonstrated that a user's positive emotion towards a system can lead to a good user experience and system success. Although users' emotions toward Information Systems (IS) have been studied widely in the literature, this is not the case for Student Information Systems (SIS). For these reasons, the emotions of 324 undergraduate students toward a Turkish university's SIS were collected through the Turkish version of the Emotion Words Prompt List (EWPL-TR) to investigate whether students' emotions differ based on their demographic information, the features they liked or disliked in the SIS, or their recommendations for increasing the usability of the SIS. Analysing users' emotional variations across different user characteristics is valuable because focusing on these variations has the potential to raise the quality of the user experience and of the learning experience with an IS. The results of this study will be important for developers who want to support their users by increasing the quality of the users' experience with a system, particularly in a learning environment.

8.
In computer vision, emotion recognition using facial expression images is considered an important research issue. Deep learning advances in recent years have aided in attaining improved results on this issue. According to recent studies, multiple facial expressions may be included in facial photographs representing a particular type of emotion. It is feasible and useful to convert face photos into collections of visual words and carry out global expression recognition. The main contribution of this paper is to propose a facial expression recognition model (FERM) based on an optimized Support Vector Machine (SVM). To test the performance of the proposed model, AffectNet is used. AffectNet uses 1,250 emotion-related keywords in six different languages to query three major search engines and collect over 1,000,000 facial photos online. FERM is composed of three main phases: (i) data preparation, (ii) grid search for optimization, and (iii) categorization. Linear discriminant analysis (LDA) is used to categorize the data into eight labels (neutral, happy, sad, surprised, fear, disgust, angry, and contempt). Using LDA markedly enhances the categorization performance of the SVM. Grid search is used to find the optimal values for the SVM hyperparameters (C and gamma). The proposed optimized SVM algorithm achieves an accuracy of 99% and a 98% F1-score.
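The grid-search step maps directly onto scikit-learn; the sketch below tunes C and gamma of an RBF SVM on top of an LDA transform, using placeholder feature vectors and the eight emotion labels as class indices. It shows the general pattern rather than the FERM preprocessing applied to AffectNet.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Placeholder face-descriptor matrix and 8-class labels (neutral, happy, sad,
# surprised, fear, disgust, angry, contempt); not the AffectNet data itself.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 128))
y = rng.integers(0, 8, size=400)

pipe = Pipeline([
    ("lda", LinearDiscriminantAnalysis(n_components=7)),  # at most n_classes - 1
    ("svm", SVC(kernel="rbf")),
])

param_grid = {
    "svm__C": [0.1, 1, 10, 100],
    "svm__gamma": [1e-3, 1e-2, 1e-1, "scale"],
}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="f1_macro", n_jobs=-1)
search.fit(X, y)
print("best params:", search.best_params_, "best macro-F1:", search.best_score_)
```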

9.
陈颖  肖仲喆 《声学技术》2018,37(4):380-387
A Chinese emotional speech database that combines discrete emotion labels with a dimensional emotion space is established. The database was recorded in an acted style by 16 native Chinese speakers. The speech samples were collected according to seven discrete emotion labels (neutral, pleasure, happiness, frustration, anger, grief, and sadness), with 336 speech samples per speaker. Three annotators then annotated each speech sample in the dimensional space. Finally, based on the annotated data, the distribution of the seven emotions in the dimensional space is studied, and the emotions are analyzed in terms of consistency, concentration, and distinctiveness. In addition, the emotion recognition rates of the seven emotions are calculated. The results show that the inter-annotator consistency of the three annotators exceeds 80%, the emotions are well separable, and the recognition rates of all seven emotions are above the baseline level. Therefore, the database has good emotional quality and can provide an important research basis for mapping discrete emotion labels to a dimensional emotion space.

10.
People interact with each other through conversation. In particular, we communicate through dialogue and exchange emotions and information through it. Emotions are essential characteristics of natural language. Conversational artificial intelligence is an integral part of all the technologies that allow computers to communicate like humans. For a computer to interact like a human being, it must understand the emotions inherent in the conversation and generate the appropriate responses. However, existing dialogue systems focus only on improving the quality of natural language understanding or natural language generation, excluding emotions. We propose a chatbot based on emotion, an essential element of conversation. EP-Bot (an Empathetic PolarisX-based chatbot) is an empathetic chatbot that can better understand a person's utterance by utilizing PolarisX, an auto-growing knowledge graph. PolarisX extracts new relationship information and expands the knowledge graph automatically, which helps computers understand a person's common sense. The proposed EP-Bot extracts knowledge graph embeddings using PolarisX and detects the emotion and dialogue act of the utterance. It then generates the next utterance using the embeddings. EP-Bot can understand and create conversation that reflects a person's common sense, emotion, and intention. We verify the novelty and accuracy of EP-Bot through experiments.

11.
Due to the widespread usage of social media in our daily lives, sentiment analysis has become an important field in pattern recognition and Natural Language Processing (NLP). In this field, users' feedback data on a specific issue are evaluated and analyzed. Detecting emotions within text is therefore considered one of the important challenges of current NLP research. Emotions have been widely studied in psychology and behavioral science, as they are an integral part of human nature. Emotions describe a state of mind associated with distinct behaviors, feelings, thoughts, and experiences. The main objective of this paper is to propose a new model, named BERT-CNN, to detect emotions from text. This model combines Bidirectional Encoder Representations from Transformers (BERT) with Convolutional Neural Networks (CNN) for text classification. It uses BERT to train the word semantic representation language model. According to the word context, the semantic vector is dynamically generated and then fed into the CNN to predict the output. The results of a comparative study show that the BERT-CNN model surpasses the state-of-the-art baseline performance produced by different models in the literature on the SemEval-2019 Task 3 and ISEAR datasets. The BERT-CNN model achieves an accuracy of 94.7% and an F1-score of 94% on the SemEval-2019 Task 3 dataset, and an accuracy of 75.8% and an F1-score of 76% on the ISEAR dataset.
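A minimal PyTorch sketch of the general BERT-plus-CNN pattern (contextual token embeddings fed into a 1-D convolutional classifier) is given below; the bert-base-uncased checkpoint, filter sizes, and four-class output are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertCNN(nn.Module):
    """Contextual BERT token embeddings -> 1-D convolution -> emotion logits."""

    def __init__(self, num_classes: int, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size            # 768 for bert-base
        self.conv = nn.Conv1d(hidden, 128, kernel_size=3, padding=1)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        tokens = out.last_hidden_state.transpose(1, 2)    # (batch, hidden, seq_len)
        feats = torch.relu(self.conv(tokens)).max(dim=2).values  # max-pool over time
        return self.fc(feats)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertCNN(num_classes=4)    # assumption; SemEval-2019 Task 3 has four classes
batch = tokenizer(["I am so happy today!"], return_tensors="pt",
                  padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)               # torch.Size([1, 4])
```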

12.
In this paper, a simple and computationally efficient approach is proposed for person-independent facial emotion recognition. The proposed approach is based on significant features of an image, i.e., the collection of its few largest eigenvalues (LE). Further, a Levenberg–Marquardt algorithm-based neural network (LMNN) is applied for multiclass emotion classification. This leads to a new facial emotion recognition approach (LE-LMNN), which is systematically examined on the JAFFE and Cohn–Kanade databases. Experimental results illustrate that the LE-LMNN approach is effective and computationally efficient for facial emotion recognition. The robustness of the proposed approach is also tested on low-resolution facial emotion images. The performance of the proposed approach is found to be superior to various existing methods.
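As a hedged stand-in for the LE-LMNN pipeline, the sketch below uses the top singular values of each grayscale image as spectral features (a proxy for the largest-eigenvalue features) and a scikit-learn MLP in place of a Levenberg–Marquardt-trained network, which scikit-learn does not provide; the images and labels are random placeholders, not JAFFE or Cohn–Kanade data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def top_spectral_features(image: np.ndarray, k: int = 10) -> np.ndarray:
    """Top-k singular values of a grayscale image, used here as a stand-in
    for the paper's largest-eigenvalue (LE) features."""
    s = np.linalg.svd(image.astype(float), compute_uv=False)
    return s[:k]

# Placeholder "face images" and emotion labels.
rng = np.random.default_rng(1)
images = rng.random(size=(60, 48, 48))
labels = rng.integers(0, 6, size=60)           # six basic emotions

X = np.stack([top_spectral_features(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```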

13.
毕强  赵锋  陈金亮 《包装工程》2018,39(8):80-83
Objective: To study the general process of interaction design based on scenario theory, explore the application of emotion regulation theory and user scenario theory in interaction design, and explore new interaction design principles and methods. Methods: The concepts and usage of objective scenarios, design scenarios, and test scenarios in scenario-based interaction design are elaborated in detail, and users' real needs are obtained by constructing user scenarios at each stage. On this basis, emotion is introduced as an important element to gain insight into users' emotional needs; taking Gross's emotion regulation theory as the design criterion, different emotion regulation triggers are designed into the product so that users' emotions stay at a medium arousal level, ensuring smooth interaction between the user and the product. Conclusion: Based on Carroll's scenario theory, the scenario-based interaction design process is enriched, and a user-scenario interaction design framework based on emotion regulation theory is proposed, providing new ideas and methods for interaction design.

14.
A Research Framework for Emotion Types and Emotional Value of Product Styling
赵丹华 《包装工程》2016,37(20):1-8
Objective: To help designers effectively handle the relationship between emotional design and user cognition. Methods: The project takes emotional elements as the analytical basis for emotion types and emotional benchmarks as the criteria for judging emotional value, proposing a product design method that satisfies users' emotional expectations; emotional ambiguity and typicality are the difficulties of the research. Conclusion: First, based on the three elements of "feature-role-semantics", styling features, emotional roles, and emotional semantics are studied, feature information on product styling emotions is extracted, and a product emotional element set is obtained; emotion categories and emotional contexts are then studied to determine the emotion types of product styling; finally, emotional evaluation benchmarks, emotional value judgment, and iterative design solving are studied, and a design method satisfying users' emotional expectations is proposed, helping designers effectively handle the relationship between emotional design and user cognition. Theoretical innovations are achieved in the completeness of emotional elements, emotional value judgment, and emotional value trade-offs.

15.
Machine analysis of facial emotion recognition is a challenging and innovative research topic in human–computer interaction. Though a face displays different facial expressions, which can be immediately recognized by human eyes, it is very hard for a computer to extract and use the information content of these expressions. This paper proposes an approach for emotion recognition based on facial components. The local features are extracted in each frame using Gabor wavelets with selected scales and orientations. These features are passed on to an ensemble classifier for detecting the location of the face region. From the signature of each pixel on the face, the eye and mouth regions are detected using the ensemble classifier. The eye and mouth features are extracted using normalized semi-local binary patterns. The multiclass AdaBoost algorithm is used to select and classify these discriminative features for recognizing the emotion of the face. The developed methods are evaluated on the RML, CK, and CMU-MIT databases, and they exhibit significant performance improvement owing to their novel features when compared with existing techniques.
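A compressed sketch of part of this feature chain appears below: uniform local binary patterns plus a single Gabor response computed on cropped eye/mouth patches, classified with multiclass AdaBoost. Face/eye/mouth localization and the normalized semi-local binary patterns used in the paper are omitted, and the patches are random placeholders.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import gabor
from sklearn.ensemble import AdaBoostClassifier

def patch_descriptor(patch: np.ndarray) -> np.ndarray:
    """Gabor energy + uniform-LBP histogram for one eye/mouth patch."""
    real, imag = gabor(patch, frequency=0.3)          # one of several scales/orientations
    gabor_energy = np.array([np.hypot(real, imag).mean()])
    lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([gabor_energy, hist])

# Placeholder eye/mouth patches and emotion labels.
rng = np.random.default_rng(2)
patches = rng.random(size=(40, 32, 32))
labels = rng.integers(0, 6, size=40)

X = np.stack([patch_descriptor(p) for p in patches])
clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)  # multiclass boosting
print(clf.predict(X[:5]))
```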

16.
The recent increase in the number of digital photos on content-sharing and social-networking websites has created an endless demand for techniques to analyze, navigate, and summarize these images. In this paper, we focus on image collection summarization. Earlier methods of image collection summarization consider representativeness and diversity criteria, while recent ones also consider other criteria such as image quality, aesthetics, or appeal. In this paper, we propose a multi-criteria, context-sensitive approach for social image collection summarization. In the proposed method, two different sets of features are combined, each addressing different criteria for image collection summarization: social attractiveness features and semantic features. The first feature set considers different aspects that make an image appealing, such as image quality, aesthetics, and emotion, to create an attractiveness score for input images, while the second covers the semantic content of images and assigns a semantic score to them. We use the social network infrastructure to identify attractiveness features and a domain ontology for extracting ontology features. The final summary is produced by integrating the attractiveness and semantic features of the input images. Experimental results on a collection of human-generated summaries of a set of Flickr images demonstrate the effectiveness of the proposed image collection summarization approach.
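The final selection step, combining an attractiveness score with a semantic score while keeping the summary diverse, can be sketched as a simple greedy procedure; the weights, scores, and embeddings below are placeholders, and the redundancy penalty is an illustrative choice rather than the paper's method.

```python
import numpy as np

def summarize(attractiveness, semantic, embeddings, k=5, alpha=0.5, beta=0.5):
    """Greedy multi-criteria selection: high combined score, low redundancy.

    attractiveness, semantic: per-image scores in [0, 1].
    embeddings: per-image feature vectors used to penalize near-duplicates.
    """
    combined = alpha * np.asarray(attractiveness) + beta * np.asarray(semantic)
    emb = np.asarray(embeddings, dtype=float)
    emb /= np.linalg.norm(emb, axis=1, keepdims=True)
    selected = []
    for _ in range(k):
        penalty = np.zeros(len(combined))
        if selected:
            # cosine similarity to the closest already-selected image
            penalty = np.max(emb @ emb[selected].T, axis=1)
        scores = combined - penalty
        scores[selected] = -np.inf                 # never re-pick an image
        selected.append(int(np.argmax(scores)))
    return selected

# Placeholder scores and embeddings for 20 images.
rng = np.random.default_rng(3)
print(summarize(rng.random(20), rng.random(20), rng.random((20, 16))))
```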

17.
Emotion detection from text is a challenging problem in text analytics. Opinion mining experts are focusing on the development of emotion detection applications, as these have received considerable attention from the online community, including users and business organizations, for collecting and interpreting public emotions. However, most of the existing works on emotion detection used less efficient machine learning classifiers with limited datasets, resulting in performance degradation. To overcome this issue, this work evaluates the performance of different machine learning classifiers on a benchmark emotion dataset. The experimental results show the performance of the different machine learning classifiers in terms of evaluation metrics such as precision, recall, and f-measure. Finally, the classifier with the best performance is recommended for emotion classification.
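Such a comparison is straightforward with scikit-learn's cross_validate; the sketch below evaluates several standard classifiers with macro-averaged precision, recall, and F1 on a tiny placeholder corpus standing in for the benchmark emotion dataset.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus and emotion labels (stand-ins for the benchmark dataset).
texts = ["i am thrilled", "this is scary", "so annoying", "feeling very low"] * 10
labels = ["joy", "fear", "anger", "sadness"] * 10

classifiers = {
    "NaiveBayes": MultinomialNB(),
    "LogReg": LogisticRegression(max_iter=1000),
    "LinearSVM": LinearSVC(),
    "RandomForest": RandomForestClassifier(n_estimators=200),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_validate(pipe, texts, labels, cv=5,
                            scoring=("precision_macro", "recall_macro", "f1_macro"))
    print(f"{name}: P={scores['test_precision_macro'].mean():.2f} "
          f"R={scores['test_recall_macro'].mean():.2f} "
          f"F1={scores['test_f1_macro'].mean():.2f}")
```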

18.
In social networks, user attention affects users' decision-making, which in turn alters the performance of recommendation systems. Existing systems make recommendations mainly according to users' preferences, with a particular focus on items. However, the significance of users' attention and the differences in the influence of different users and items are often ignored. Thus, this paper proposes an attention-based multi-layer friend recommendation model to mitigate information overload in social networks. We first constructed the basic user and item matrices via convolutional neural networks (CNN). Then, we obtained user preferences by using the relationships between users and items, which were subsequently input into our model to learn the preferences between friends. The error performance of the proposed method was compared with traditional solutions based on collaborative filtering. A comprehensive performance evaluation was also conducted using large-scale real-world datasets collected from three popular location-based social networks. The experimental results reveal that our proposal outperforms the traditional methods in terms of recommendation performance.

19.
This paper proposes a novel, efficient, and affordable approach to detecting students' engagement levels in an e-learning environment using webcams. Our method analyzes spatiotemporal features of e-learners' micro body gestures, which are mapped to emotions and appropriate engagement states. The proposed engagement detection model uses a three-dimensional convolutional neural network to analyze both temporal and spatial information across video frames. We follow a transfer learning approach by using the C3D model trained on the Sports-1M dataset. The adopted C3D model was used in two different ways: as a feature extractor with linear classifiers, and as a classifier after fine-tuning the pre-trained model. Our model was tested, and its performance was evaluated and compared to existing models. It proved its effectiveness and superiority over the other existing methods with an accuracy of 94%. The results of this work will contribute to the development of smart and interactive e-learning systems with adaptive responses based on users' engagement levels.
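Pretrained C3D weights are not bundled with common libraries, so the sketch below uses torchvision's Kinetics-pretrained r3d_18 video model as a stand-in to show the two usage modes mentioned above (frozen feature extractor with a linear head versus full fine-tuning); the four engagement levels are an assumption.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

NUM_ENGAGEMENT_LEVELS = 4   # assumption; e.g. disengaged ... highly engaged

# Stand-in for the Sports-1M-pretrained C3D backbone used in the paper.
backbone = r3d_18(weights=R3D_18_Weights.DEFAULT)

# Mode 1: frozen feature extractor + a linear classifier on top.
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_ENGAGEMENT_LEVELS)

# Mode 2 (fine-tuning) would instead leave requires_grad=True on all layers
# and train the whole network with a small learning rate.

clip = torch.randn(2, 3, 16, 112, 112)   # (batch, channels, frames, height, width)
logits = backbone(clip)
print(logits.shape)                       # torch.Size([2, 4])
```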

20.
王海宁  胡家丽 《包装工程》2020,41(4):153-159
Objective: To study the effects of the loading duration, interaction animation, and loading animation of the pull-to-refresh loading page in mobile internet applications on time perception and emotional experience. Methods: To investigate the influence of the different factors in greater depth, two experiments were conducted on the three factors of the pull-to-refresh loading page: loading duration, interaction animation, and loading animation. Experiment 1 used a 3 (loading duration: 2 s, 5 s, 10 s) × 3 (interaction animation type: A, B, C) within-subjects design; Experiment 2 used a single-factor (loading animation: conceptual loading model, delightful animated graphic, interactive scene) within-subjects design. The two experiments showed that loading duration, interaction animation, and loading animation all have significant effects on time perception and emotional experience. Conclusion: Effectively controlling the loading duration, combined with the type-C interaction animation, which can relieve users' negative emotions and shorten their estimates of elapsed time, and the type-C interactive-scene loading animation design, yields a pull-to-refresh loading page that is an effective means of improving the user's waiting experience.
