Similar Documents
20 similar documents found (search time: 0 ms)
1.
Manifold learning has been successfully applied to facial expression recognition by modeling different expressions as a smooth manifold embedded in a high-dimensional space. However, the single-manifold assumption is still arguable and therefore does not necessarily guarantee the best classification accuracy. In this paper, a generalized framework for modeling and recognizing facial expressions on multiple manifolds is presented, which assumes that different expressions may reside on different manifolds of possibly different dimensionalities. The intrinsic features of each expression are first learned separately, and a genetic algorithm (GA) is then employed to obtain the nearly optimal dimensionality of each expression manifold from the classification viewpoint. Classification is performed under a newly defined criterion based on the minimum reconstruction error on the manifolds. Extensive experiments on both the Cohn-Kanade and Feedtum databases show the effectiveness of the proposed multiple-manifold approach.
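The minimum-reconstruction-error criterion described above can be sketched with per-class linear (PCA) subspaces standing in for the learned manifolds — a simplification of the paper's method, with all function names illustrative:

```python
import numpy as np

def fit_class_subspaces(X, y, dims):
    """Fit a separate PCA subspace per class; dims maps label -> subspace dimension."""
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        # principal directions via SVD of the centered class data
        _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
        models[c] = (mu, Vt[:dims[c]])
    return models

def classify(x, models):
    """Assign x to the class whose subspace reconstructs it with minimum error."""
    best, best_err = None, np.inf
    for c, (mu, V) in models.items():
        z = (x - mu) @ V.T                      # project onto the class subspace
        err = np.linalg.norm(x - (mu + z @ V))  # reconstruction error
        if err < best_err:
            best, best_err = c, err
    return best
```

In the paper the per-class dimensionalities in `dims` are chosen by the GA; here they would simply be supplied by hand.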

2.
3.
In this paper, an efficient method for human facial expression recognition is presented. We first propose a representation model for facial expressions, the spatially maximum occurrence model (SMOM), which is based on the statistical characteristics of training facial images and has strong representation capability. The elastic shape-texture matching (ESTM) algorithm is then used to measure the similarity between images based on shape and texture information. Combining SMOM and ESTM yields the SMOM-ESTM algorithm, which achieves higher recognition performance. The recognition rates of the SMOM-ESTM algorithm on the AR database and the Yale database are 94.5% and 94.7%, respectively.

4.
For effective interaction between humans and socially adept, intelligent service robots, a key capability required by this class of sociable robots is the successful interpretation of visual data. In addition to crucial techniques like human face detection and recognition, an important next step for enabling intelligence and empathy within social robots is emotion recognition. In this paper, an automated and interactive computer vision system is investigated for human facial expression recognition and tracking based on facial structure features and movement information. Twenty facial features are adopted, since they are informative and prominent enough to reduce ambiguity during classification. An unsupervised learning algorithm, distributed locally linear embedding (DLLE), is introduced to recover the inherent properties of scattered data lying on a manifold embedded in high-dimensional input facial images. The selected person-dependent facial expression images in a video are classified using DLLE. In addition, facial expression motion energy is introduced to describe facial muscle tension during expressions for person-independent recognition. This method takes advantage of optical flow, which tracks the movement information of the feature points. Finally, experimental results show that our approach is able to separate different expressions successfully.

5.
Ruicong, Qiuqi. Neurocomputing, 2008, 71(7-9): 1730-1734
In this paper, a novel method called two-dimensional discriminant locality preserving projections (2D-DLPP) is proposed. By introducing a between-class scatter constraint and label information into the two-dimensional locality preserving projections (2D-LPP) algorithm, 2D-DLPP successfully finds the subspace that best discriminates different pattern classes. The subspace obtained by 2D-DLPP therefore has more discriminant power than that of 2D-LPP and is more suitable for recognition tasks. The proposed method was applied to facial expression recognition on the JAFFE and Cohn-Kanade databases and compared with three other widely used two-dimensional methods: 2D-PCA, 2D-LDA, and 2D-LPP. The high recognition rates show the effectiveness of the proposed algorithm.
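The 2D-PCA baseline that this family of two-dimensional methods builds on can be sketched as follows: the image scatter matrix is computed directly on image matrices (no vectorization), and its top eigenvectors form the projection axes. A minimal sketch, not the paper's 2D-DLPP itself:

```python
import numpy as np

def twod_pca(images, d):
    """2D-PCA: top-d eigenvectors of the image scatter matrix G."""
    A = np.stack(images).astype(float)
    Abar = A.mean(axis=0)
    # image scatter matrix, built from (m x n) images without flattening
    G = sum((Ai - Abar).T @ (Ai - Abar) for Ai in A) / len(A)
    vals, vecs = np.linalg.eigh(G)   # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :d]         # top-d projection axes, orthonormal columns
    return W, Abar

def project(img, W):
    """Feature matrix of an image: one projected row per image row."""
    return img @ W
```

2D-DLPP replaces this global scatter objective with a locality-preserving one plus a between-class constraint, but the projection step `img @ W` has the same shape.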

6.
Emotion is an important driver of human decision-making and communication. With the recent rise of human-computer interaction, affective computing has become a trending research topic, aiming to develop computational systems that can understand human emotions and respond to them. A systematic review was conducted to fill gaps left by previous reviews of machine-enabled automated visual emotion recognition, which neglect important methodological aspects such as emotion models and hardware usage. 467 relevant papers were initially found and examined; after screening with specific inclusion and exclusion criteria, 30 papers were selected. Methodological aspects of the selected studies, including emotion models, devices, architectures, and classification techniques, were analyzed, and the most popular techniques and current trends in visual emotion recognition were identified. This review not only offers a comprehensive and up-to-date overview of the topic but also provides researchers with insights into methodological aspects such as the emotion models employed, devices used, and classification techniques for automated visual emotion recognition. By identifying current trends, such as the increased use of deep learning algorithms and the need for further study of body gestures, this review advocates the advantages of emotion recognition from visual data and builds a solid foundation for applying the relevant techniques in different fields.

7.
Facial Expression Recognition (FER) is an important subject in human-computer interaction and has long been a research area of great interest. Accurate Facial Expression Sequence Interception (FESI) and discriminative expression feature extraction are two major challenges for video-based FER. This paper proposes an FER framework for intercepted video sequences using feature-point movement trends and feature-block texture variation. First, the feature points are marked by an Active Appearance Model (AAM) and the 24 most representative points are selected. Second, the facial expression sequence is intercepted from the face video by determining two key frames whose emotional intensities are minimum and maximum, respectively. Third, the trend curve representing the Euclidean-distance variation between any two selected feature points is fitted, and the slopes at specific points on the trend curve are calculated. Finally, the Slope Set composed of the calculated slopes is combined with the proposed Feature Block Texture Difference (FBTD), which measures the texture variation of facial patches, to form the final expression features, which are input to a one-dimensional convolutional neural network (1DCNN) for FER. Five experiments were conducted, and average FER rates of 95.2%, 96.5%, and 97% on the Beihang University (BHU) facial expression database, the MMI facial expression database, and the combination of the two, respectively, show significant advantages of the proposed method over existing ones.

8.
Automatic facial expression analysis aims to analyse human facial expressions and classify them into discrete categories. Existing methods rely on extracting information from video sequences and employ either some form of subjective thresholding of dynamic information or attempt to identify the particular individual frames in which the expected behaviour occurs. These methods are inefficient: they require additional subjective information or tedious manual work, or they fail to exploit the information contained in the dynamic signature of facial movements for expression recognition.

9.
Facial expression recognition based on classifier combination
A facial expression recognition method based on classifier combination is proposed. First, the CKFD algorithm extracts two kinds of expression features in a dual decision subspace and fuses them; three sub-classifiers (nearest neighbor, minimum distance, and a neural network) then perform recognition separately; finally, fuzzy integrals fuse the recognition results of the sub-classifiers. Experimental results on JAFFE show that it is an effective expression recognition method.

10.
To address the drawbacks of the traditional LBP feature extraction method, which is sensitive to non-monotonic illumination changes and cannot sparsely represent global features, a feature extraction method combining adaptively weighted Local Gray Code Patterns (LGCP) with fast sparse representation is proposed. An edge-detection operator is first applied to the original image to maximize edge values, overcoming the effect of illumination changes on the feature description. LGCP encoding yields an eight-bit Gray code that is converted to decimal; the image is then divided into blocks whose histograms are weighted and concatenated, so that the descriptor optimally characterizes local features. Meanwhile, to obtain a better sparse representation of global features, the concatenated histogram descriptors are used as atoms to construct a dictionary. Finally, a fast sparse representation method serves as the classifier. Multiple experiments on the extended Cohn-Kanade (CK+) expression dataset show that the method is faster, with a recognition rate of up to 94%.
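The plain LBP operator that LGCP and the later CLBP/CBP variants in this list extend can be sketched in a few lines: each interior pixel is encoded by thresholding its eight neighbours against the center. A minimal sketch for illustration only:

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbour, radius-1 LBP code for each interior pixel."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                      # center pixels
    # neighbours clockwise starting from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)   # set bit where neighbour >= center
    return code
```

LGCP differs in that the thresholded bits are re-encoded as a Gray code before the histogramming step, but the neighbour-comparison core is the same.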

11.
For effective extraction of the spatio-temporal features of facial expressions, this paper proposes a new facial expression recognition method combining CBP-TOP (Centralized Binary Patterns from Three Orthogonal Panels) features with an SVM classifier. The original image sequences are first preprocessed (face detection, image cropping, and scale normalization); the CBP-TOP operator then extracts block-wise features from the sequences; finally, an SVM classifier performs expression recognition. Experimental results show that the method extracts the motion features and dynamic texture information of image sequences more effectively and improves recognition accuracy. Compared with VLBP features, CBP-TOP features achieve a higher recognition rate and faster recognition in expression recognition.

12.
Facial expression recognition using the Adaboost algorithm
Adaboost is an effective classifier-combination method that can improve the performance of weak classifiers. Adaboost is applied here to facial expression recognition, and several schemes for combining Adaboost with principal component analysis are explored. Simulation results show that these schemes are feasible and give good recognition results.

13.
Facial expression recognition with adaptively weighted compound local binary patterns
To extract local and global features effectively and thereby improve expression recognition performance, a facial expression recognition algorithm based on the Adaptively Weighted Compound Local Binary Pattern (AWCLBP) is proposed. The facial expression image is first preprocessed to separate the expression subregions, and a Contribution Map (CM) of the subregions is generated at the same time. A compound local binary pattern transform is then applied to the subregions and the whole expression image to extract three kinds of features (the sign feature CLBP_S, the magnitude feature CLBP_M, and the center-pixel feature CLBP_C), which are concatenated into cascaded histograms; according to the CM, the cascaded histograms of the subregions are weighted and fused with the histogram of the whole image. Finally, classification is performed with the chi-square distance and a nearest-neighbor rule. Experiments on the JAFFE database, with comparisons against LBP, Gabor wavelets, and active appearance models, verify the effectiveness of the algorithm.

14.
Objective: Large amounts of labeled data and deep learning methods have greatly improved image recognition performance. However, labeled data for expression recognition are scarce, so trained deep models overfit easily. Research shows that using a network pre-trained for face recognition can alleviate this problem, but such a pre-trained face network may retain a large amount of identity information, which is unhelpful for expression recognition. This paper explores how to use pre-trained face recognition networks effectively to improve expression recognition performance. Method: The idea of continual learning is introduced, using the connection between face recognition and expression recognition to guide expression recognition. The method observes that the parameters contributing most to reducing the overall face recognition loss are associated with capturing common facial features; these are important parameters for expression recognition, helping the network perceive facial features. The method consists of two stages: first, a face recognition network is trained while the importance of each parameter is computed and recorded; then the pre-trained model is trained for expression recognition, with changes to the important parameters constrained so that the model retains its strong perception of facial features, while unimportant parameters may change with larger magnitude to learn more expression-specific information. This method is called parameter-importance regularization. Results: The method was evaluated on three datasets: RAF-DB (real-world affective faces database), CK+ (the extended Cohn-Kanade database), and Oulu-CASIA. On the mainstream RAF-DB dataset, the method reaches 88.04% accuracy, an improvement of 1.83% over directly fine-tuning the pre-trained network. Experimental results on the other datasets also show the effectiveness of the method. Conclusion: The proposed parameter-importance regularization exploits the connection between face recognition and expression recognition to make full use of pre-trained face recognition models, yielding a more robust expression recognition model.

15.
To address partially missing facial features in expression recognition, a facial expression recognition model based on hidden semi-Markov models (HSMM) is proposed. The model allows each state to produce multiple observations and permits missing observations, so it can recognize facial expressions whose features are lost through partial occlusion or other causes. Experimental results show that the model improves expression recognition for partially occluded faces and also improves recognition for unoccluded faces.

16.
This study presents a facial expression recognition system that separates the non-rigid facial expression motion from the rigid head rotation and estimates the 3D rigid head rotation angle in real time. The extracted trajectories of the feature points contain both rigid head motion components and non-rigid facial expression motion components. A 3D virtual face model is used to obtain an accurate estimate of the head rotation angle so that the non-rigid motion components can be precisely separated to enhance facial expression recognition performance. The separation performance of the proposed system is further improved through a restoration mechanism designed to recover feature points lost during large pan rotations. Having separated the rigid and non-rigid motions, hidden Markov models (HMMs) are employed to recognize a prescribed set of facial expressions defined in terms of facial action coding system (FACS) action units (AUs).

17.
A new method for video facial expression recognition is proposed. The recognition process is divided into two parts: facial expression feature extraction and classification. First, an active shape model (ASM) based on point tracking extracts geometric expression features from the face video; then a new local support vector machine classifier classifies the expressions. Comparative experiments with four classifiers (KNN, SVM, KNN-SVM, and LSVM) on the Cohn-Kanade database verify the effectiveness of the proposed method.

18.
A facial expression feature extraction method combining the Gabor transform with FastICA is proposed. Gabor wavelets have good space-frequency localization and multi-orientation selectivity, and are therefore well suited to extracting fine expression details. FastICA removes higher-order statistical redundancy between signals. The image is Gabor-transformed, the resulting coefficients are arranged into a Gabor feature vector, FastICA extracts features from the Gabor feature vector, and a K-nearest-neighbor classifier performs classification. Experiments on the JAFFE expression database demonstrate the effectiveness of the method.
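The Gabor transform step above amounts to convolving the image with a bank of oriented, scaled Gabor kernels. A minimal sketch of such a kernel and a typical 8-orientation bank; all parameter values are illustrative, not the paper's:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam, psi=0.0, gamma=0.5):
    """Real part of a 2D Gabor kernel: Gaussian envelope times a cosine carrier."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates by orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam + psi))

# an 8-orientation, 5-wavelength bank (40 kernels), as commonly used for expressions
bank = [gabor_kernel(31, sigma=4.0, theta=k * np.pi / 8, lam=l)
        for k in range(8) for l in (4, 6, 8, 12, 16)]
```

Convolving an image with each kernel and stacking the magnitudes gives the Gabor feature vector that FastICA would then decorrelate.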

19.
To address the lack of discriminability in dictionaries built by traditional sparse representation methods, the construction of discriminative dictionaries and the classification solution are studied on the basis of the K-SVD algorithm, and an expression recognition method based on hierarchically structured dictionary learning is proposed. Training samples are first segmented into three parts (eyebrows, cheeks, and mouth); the K-SVD algorithm is applied to each part to obtain block dictionary vectors, and the weight-assignment scheme of the analytic hierarchy process is used to compute the weight of each block dictionary vector, forming the class-specific sub-dictionaries. All sub-dictionaries are then combined and solved with a structured dictionary learning algorithm. A test sample is assigned to the class whose solution yields the best reconstruction. Experiments on the JAFFE and CK expression databases show that the algorithm keeps the dictionary discriminative while achieving a high recognition rate.

20.
In facial expression feature extraction, a convolutional neural network (CNN) and local binary patterns (LBP) each extract only a single type of feature from the expression image, making it difficult to obtain precise features that are highly correlated with facial changes. To solve this problem, a deep-learning-based feature-fusion expression recognition method is proposed. The method combines LBP features and features extracted by the CNN convolutional layers through weighting in the connection layer of a modified VGG-16 network, and finally feeds the fused features into a Softmax classifier to obtain the class probabilities, completing classification of the six basic expressions. Experimental results show that the proposed method achieves average recognition accuracies of 97.5% and 97.62% on the CK+ and JAFFE datasets, respectively, and that recognition with the fused features is clearly better than recognition with a single feature. Compared with other methods, the proposed method effectively improves expression recognition accuracy and is more robust to illumination changes.
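The weighted combination of the two feature streams can be sketched as normalize-then-concatenate with scalar weights; the weight values and function name here are illustrative, not the paper's:

```python
import numpy as np

def fuse_features(f_lbp, f_cnn, w_lbp=0.4, w_cnn=0.6):
    """L2-normalize each feature vector, then concatenate with scalar weights."""
    a = f_lbp / (np.linalg.norm(f_lbp) + 1e-12)   # unit-norm LBP histogram features
    b = f_cnn / (np.linalg.norm(f_cnn) + 1e-12)   # unit-norm CNN features
    return np.concatenate([w_lbp * a, w_cnn * b])
```

In the paper's architecture this fusion happens inside the network's connection layer, so the weights would be learned end to end rather than fixed.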


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)    京ICP备09084417号-23
