Similar Documents
20 similar documents retrieved (search time: 46 ms)
1.
3D face recognition under expression variation based on facial feature points
Objective: To overcome the effect of expression variation on 3D face recognition, a 3D face recognition method is proposed that extracts local-region features around facial feature points. Method: First, the 2D ASM (active shape model) algorithm is applied to the depth map to coarsely locate the facial feature points, which are then precisely located on the facial point cloud using the shape index feature. Next, a series of iso-geodesic contour lines centered at the nose is extracted to characterize the facial shape; pose-invariant Procrustean vector features (distances and angles) are then extracted as recognition features; finally, the classification results of the individual iso-geodesic contours are compared and fused at the decision level. Results: Landmark localization and recognition experiments on the FRGC V2.0 face database give a mean localization error below 2.36 mm and a rank-1 recognition rate of 98.35%. Conclusion: By extracting features around landmarks in the approximately rigid regions of the face, the method effectively avoids the mouth region, which is strongly affected by expression. Experiments show that the method achieves high recognition accuracy and is reasonably robust to pose and expression variation.
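The abstract does not spell out the Procrustean distance/angle features; the minimal sketch below (all names illustrative, each contour assumed to be an (N, 3) array of 3D points resampled to a fixed length) shows one plausible way to compute pose-invariant distances and angles for a single iso-geodesic contour:

```python
import numpy as np

def procrustean_features(contour, n_samples=64):
    """Hypothetical sketch: pose-invariant distance/angle features
    for one iso-geodesic contour given as an (N, 3) array."""
    # Resample the contour to a fixed number of points.
    idx = np.linspace(0, len(contour) - 1, n_samples).astype(int)
    pts = contour[idx]
    # Center on the contour centroid to remove translation.
    centered = pts - pts.mean(axis=0)
    # Distance of each point from the centroid.
    dists = np.linalg.norm(centered, axis=1)
    # Angle between consecutive point vectors (rotation-invariant).
    a, b = centered, np.roll(centered, -1, axis=0)
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    angles = np.arccos(np.clip(cos, -1.0, 1.0))
    return np.concatenate([dists, angles])
```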

2.
Anthropometric 3D Face Recognition
We present a novel anthropometric three dimensional (Anthroface 3D) face recognition algorithm, which is based on a systematically selected set of discriminatory structural characteristics of the human face derived from the existing scientific literature on facial anthropometry. We propose a novel technique for automatically detecting 10 anthropometric facial fiducial points that are associated with these discriminatory anthropometric features. We isolate and employ unique textural and/or structural characteristics of these fiducial points, along with the established anthropometric facial proportions of the human face, for detecting them. Lastly, we develop a completely automatic face recognition algorithm that employs facial 3D Euclidean and geodesic distances between these 10 automatically located anthropometric facial fiducial points and a linear discriminant classifier. On a database of 1149 facial images of 118 subjects, we show that the standard deviation of the Euclidean distance of each automatically detected fiducial point from its manually identified position is less than 2.54 mm. We further show that the proposed Anthroface 3D recognition algorithm performs well (equal error rate of 1.98% and a rank-1 recognition rate of 96.8%), outperforms three of the existing benchmark 3D face recognition algorithms, and is robust to the observed fiducial point localization errors.
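As a rough illustration of the recognition stage, the sketch below pairs pairwise Euclidean distances between fiducial points with scikit-learn's linear discriminant classifier; geodesic distances would additionally require the face mesh, and all names here are illustrative rather than the authors' implementation:

```python
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def distance_features(fiducials):
    """Pairwise Euclidean distances between fiducial points.
    `fiducials` is a (10, 3) array, giving a 45-D feature vector."""
    return np.array([np.linalg.norm(fiducials[i] - fiducials[j])
                     for i, j in combinations(range(len(fiducials)), 2)])

# Hypothetical usage: rows of X_train are 45-D distance vectors, y_train subject IDs.
# clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
# pred = clf.predict(distance_features(test_fiducials).reshape(1, -1))
```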

3.
Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions [action units (AUs)] that compound expressions. AUs are agnostic, leaving the inference about conveyed intent to higher-order decision making (e.g., emotion recognition). The proposed fully automatic method not only allows the recognition of 22 AUs but also explicitly models their temporal characteristics (i.e., sequences of temporal segments: neutral, onset, apex, and offset). To do so, it uses a facial point detector based on Gabor-feature-based boosted classifiers to automatically localize 20 facial fiducial points. These points are tracked through a sequence of images using a method called particle filtering with factorized likelihoods. To encode AUs and their temporal activation models based on the tracking data, it applies a combination of GentleBoost, support vector machines, and hidden Markov models. We attain an average AU recognition rate of 95.3% when tested on a benchmark set of deliberately displayed facial expressions and 72% when tested on spontaneous expressions.
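As a toy illustration of the temporal-modeling stage, the sketch below fits a four-state Gaussian HMM to per-frame scores and decodes neutral/onset/apex/offset segments; the use of the hmmlearn package and the unsupervised fit are assumptions for the sketch, whereas the paper trains AU-specific models on GentleBoost/SVM outputs:

```python
import numpy as np
from hmmlearn import hmm  # assumed available; not named in the abstract

SEGMENTS = ["neutral", "onset", "apex", "offset"]

# Model one AU's temporal activation as a 4-state HMM over per-frame scores.
model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
scores = np.random.randn(200, 1)   # stand-in for real per-frame SVM scores
model.fit(scores)                  # unsupervised here, purely for illustration
states = model.predict(scores)     # per-frame temporal-segment labels
print([SEGMENTS[s] for s in states[:10]])
```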

4.
Generating discriminating cartoon faces using interacting snakes
As a computational bridge between the high-level a priori knowledge of object shape and the low-level image data, active contours (or snakes) are useful models for the extraction of deformable objects. We propose an approach for manipulating multiple snakes iteratively, called interacting snakes, that minimizes the attraction energy functionals on both contours and enclosed regions of individual snakes and the repulsion energy functionals among multiple snakes that interact with each other. We implement the interacting snakes through an explicit curve (parametric active contour) representation in the domain of face recognition. We represent human faces semantically via facial components such as eyes, mouth, face outline, and hair outline. Each facial component is encoded by a closed (or open) snake that is drawn from a 3D generic face model. A collection of semantic facial components forms a hypergraph, called a semantic face graph, which employs interacting snakes to align the general facial topology onto the sensed face images. Experimental results show that a successful interaction among multiple snakes associated with facial components makes the semantic face graph a useful model for face representation, including cartoon faces and caricatures, and recognition.
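For orientation, here is a minimal single-snake update step using the standard parametric active contour internal forces; the paper's interacting-snakes formulation adds region terms and repulsion among snakes, which are omitted in this sketch:

```python
import numpy as np

def snake_step(pts, ext_force, alpha=0.1, beta=0.01, step=0.5):
    """One gradient step for a closed parametric snake (a classic-snake
    sketch, not the paper's interacting-snakes energy). `pts` is (N, 2);
    `ext_force` maps points to image forces, e.g. an edge-map gradient."""
    # Internal forces: elasticity (2nd difference) and rigidity (4th difference).
    d2 = np.roll(pts, -1, 0) - 2 * pts + np.roll(pts, 1, 0)
    d4 = (np.roll(pts, -2, 0) - 4 * np.roll(pts, -1, 0) + 6 * pts
          - 4 * np.roll(pts, 1, 0) + np.roll(pts, 2, 0))
    return pts + step * (alpha * d2 - beta * d4 + ext_force(pts))

# e.g. snake_step(pts, lambda p: np.zeros_like(p)) applies only the smoothing forces.
```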

5.
Most of the research on sign language recognition concentrates on recognizing only manual signs (hand gestures and shapes), discarding a very important component: the non-manual signals (facial expressions and head/shoulder motion). We address the recognition of signs with both manual and non-manual components using a sequential belief-based fusion technique. The manual components, which carry information of primary importance, are utilized in the first stage. The second stage, which makes use of non-manual components, is only employed if there is hesitation in the decision of the first stage. We employ belief formalism both to model the hesitation and to determine the sign clusters within which the discrimination takes place in the second stage. We have implemented this technique in a sign tutor application. Our results on the eNTERFACE’06 ASL database show an improvement over the baseline system which uses parallel or feature fusion of manual and non-manual features: we achieve an accuracy of 81.6%.
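A schematic of the sequential fusion logic, with a simple score margin standing in for the paper's belief-based hesitation measure (all names hypothetical):

```python
def two_stage_classify(manual_scores, nonmanual_classify, hesitation=0.15):
    """Illustrative sketch of sequential fusion. `manual_scores` maps sign
    labels to first-stage scores; `nonmanual_classify` is a callable that
    discriminates within a cluster using non-manual cues."""
    ranked = sorted(manual_scores.items(), key=lambda kv: -kv[1])
    best_label, best_score = ranked[0]
    if best_score - ranked[1][1] >= hesitation:
        return best_label                      # manual stage is confident
    # Otherwise restrict to the cluster of near-ties and let the
    # non-manual stage (facial expression, head motion) decide.
    cluster = [lab for lab, s in ranked if best_score - s < hesitation]
    return nonmanual_classify(cluster)
```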

6.
A system that could automatically analyze facial actions in real time has applications in a wide range of different fields. However, developing such a system is always challenging due to the richness, ambiguity, and dynamic nature of facial actions. Although a number of research groups attempt to recognize facial action units (AUs) by improving either facial feature extraction techniques or AU classification techniques, these methods often recognize AUs or certain AU combinations individually and statically, ignoring the semantic relationships among AUs and the dynamics of AUs. Hence, these approaches cannot always recognize AUs reliably, robustly, and consistently. In this paper, we propose a novel approach that systematically accounts for the relationships among AUs and their temporal evolutions for AU recognition. Specifically, we use a dynamic Bayesian network (DBN) to model the relationships among different AUs. The DBN provides a coherent and unified hierarchical probabilistic framework to represent probabilistic relationships among various AUs and to account for the temporal changes in facial action development. Within our system, robust computer vision techniques are used to obtain AU measurements, and such AU measurements are then applied as evidence to the DBN for inferring various AUs. The experiments show that the integration of AU relationships and AU dynamics with AU measurements yields significant improvement in AU recognition, especially for spontaneous facial expressions and under more realistic environments including illumination variation, face pose variation, and occlusion.
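The full DBN spans many AUs and their semantic links; as a drastically simplified stand-in, the sketch below runs forward filtering for a single binary AU, combining a temporal transition model with per-frame measurement likelihoods:

```python
import numpy as np

def filter_au(meas_liks, trans, prior):
    """Minimal forward filtering for one binary AU (a toy stand-in for the
    paper's full DBN). meas_liks: (T, 2) likelihoods P(measurement | AU
    off/on); trans: (2, 2) row-stochastic transition matrix; prior: (2,)."""
    belief = np.asarray(prior, float)
    out = []
    for lik in meas_liks:
        belief = trans.T @ belief          # temporal prediction
        belief = belief * lik              # incorporate measurement evidence
        belief /= belief.sum()             # normalize
        out.append(belief.copy())
    return np.array(out)
```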

7.
We propose an efficient algorithm for recognizing facial expressions using biologically plausible features: contours of the face and its components with a radial encoding strategy. A self-organizing network (SON) is applied to check the homogeneity of the encoded contours, and then different classifiers, such as SON, multi-layer perceptron, and K-nearest neighbor, are used for recognizing expressions from contours. Experimental results show that the recognition accuracy of our algorithm is comparable to that of other algorithms in the literature on the Japanese Female Facial Expression database. We also apply our algorithm to the Taiwanese facial expression image database to demonstrate its efficiency in recognizing facial expressions.
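The exact radial encoding is not given in the abstract; one plausible form, sketched below with illustrative parameters, records the normalized maximum contour radius in each angular bin around the centroid:

```python
import numpy as np

def radial_encode(contour, n_bins=36):
    """Sketch of a radial contour encoding (assumed form). `contour` is an
    (N, 2) array of contour points; returns an n_bins-D descriptor."""
    c = contour - contour.mean(axis=0)
    ang = np.arctan2(c[:, 1], c[:, 0])                 # angle of each point
    rad = np.linalg.norm(c, axis=1)                    # radius of each point
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    code = np.zeros(n_bins)
    for b, r in zip(bins, rad):
        code[b] = max(code[b], r)                      # farthest point per bin
    return code / (code.max() + 1e-9)                  # scale-normalize
```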

8.
The performance of an automatic facial expression recognition system can be significantly improved by modeling the reliability of different streams of facial expression information utilizing multistream hidden Markov models (HMMs). In this paper, we present an automatic multistream HMM facial expression recognition system and analyze its performance. The proposed system utilizes facial animation parameters (FAPs), supported by the MPEG-4 standard, as features for facial expression classification. Specifically, the FAPs describing the movement of the outer-lip contours and eyebrows are used as observations. Experiments are first performed employing single-stream HMMs under several different scenarios, utilizing outer-lip and eyebrow FAPs individually and jointly. A multistream HMM approach is proposed for introducing facial expression and FAP group dependent stream reliability weights. The stream weights are determined based on the facial expression recognition results obtained when the FAP streams are utilized individually. The proposed multistream HMM facial expression system, which utilizes stream reliability weights, achieves a relative reduction of 44% in facial expression recognition error compared to the single-stream HMM system.
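The core of the multistream idea is a reliability-weighted combination of per-stream HMM log-likelihoods; a minimal sketch follows, with hypothetical scores and weights:

```python
import numpy as np

def multistream_decision(stream_logliks, weights):
    """Sketch of multistream HMM scoring: per-stream log-likelihoods (e.g.,
    outer-lip and eyebrow FAP streams) are combined with reliability weights,
    and the expression with the best weighted score wins.
    stream_logliks: dict expression -> [loglik_stream1, loglik_stream2]."""
    w = np.asarray(weights, float)
    scored = {expr: float(np.dot(w, lls)) for expr, lls in stream_logliks.items()}
    return max(scored, key=scored.get)

# Hypothetical weights favoring a more reliable lip stream:
# best = multistream_decision({"joy": [-40.2, -55.0], "anger": [-44.1, -50.3]}, [0.7, 0.3])
```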

9.
Automatic analysis of human facial expression is a challenging problem with many applications. Most of the existing automated systems for facial expression analysis attempt to recognize a few prototypic emotional expressions, such as anger and happiness. Instead of representing another approach to machine analysis of prototypic facial expressions of emotion, the method presented in this paper attempts to handle a large range of human facial behavior by recognizing the facial muscle actions that produce expressions. Virtually all of the existing vision systems for facial muscle action detection deal only with frontal-view face images and cannot handle the temporal dynamics of facial actions. In this paper, we present a system for automatic recognition of facial action units (AUs) and their temporal models from long, profile-view face image sequences. We exploit particle filtering to track 15 facial points in an input face-profile sequence, and we introduce facial-action-dynamics recognition from continuous video input using temporal rules. The algorithm performs both automatic segmentation of an input video into the facial expressions pictured and recognition of temporal segments (i.e., onset, apex, offset) of 27 AUs occurring alone or in combination in the input face-profile video. A recognition rate of 87% is achieved.
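As a toy analogue of the temporal rules, the sketch below labels an AU intensity trajectory frame by frame; the thresholds and the rules themselves are illustrative, not the paper's rule set over tracked profile points:

```python
def temporal_segments(intensity, rise=0.02, fall=-0.02):
    """Toy temporal-rule labeling of a per-frame AU intensity trajectory
    into neutral/onset/apex/offset segments."""
    labels = []
    for t in range(len(intensity)):
        delta = intensity[t] - intensity[t - 1] if t else 0.0
        if intensity[t] < 0.1:
            labels.append("neutral")   # low intensity: AU not active
        elif delta > rise:
            labels.append("onset")     # intensity rising
        elif delta < fall:
            labels.append("offset")    # intensity falling
        else:
            labels.append("apex")      # high, roughly stable intensity
    return labels
```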

10.
International Journal of Computer Vision - The locations of the fiducial facial landmark points around facial components and facial contour capture the rigid and non-rigid facial deformations due...

11.
Most studies use the facial expression to recognize a user’s emotion; however, gestures such as nodding, shaking the head, or stillness can also be indicators of the user’s emotion. In our research, we use both facial expressions and gestures to detect and recognize a user’s emotion. The pervasive Microsoft Kinect sensor captures video data, from which several features representing facial expressions and gestures are extracted. An in-house extensible markup language-based genetic programming engine (XGP) evolves the emotion recognition module of our system. To improve the computational performance of the recognition module, we implemented and compared several approaches, including directed evolution, collaborative filtering via canonical voting, and a genetic algorithm, for an automated voting system. The experimental results indicate that XGP is feasible for evolving emotion classifiers. In addition, the obtained results verify that collaborative filtering improves the generality of recognition. From a psychological viewpoint, the results suggest that different people may express their emotions differently, as the emotion classifiers evolved for particular users might not apply successfully to other users.

12.
In this paper, we investigate the usefulness of action unit (AU) detection for automatic emotion recognition. We propose and compare two emotion detectors: the first works directly on a high-dimensional feature space, and the second projects facial images into the low-dimensional space of AU intensities before recognizing emotion. In both approaches, facial images are coded by local Gabor binary pattern (LGBP) histogram differences. These features reduce the sensitivity to subject identity by computing differences between two LGBP histograms: one computed on an expressive image and the other synthesized to approximate the one that would be computed on a neutral face of the same subject. As classifiers, we test support vector machines with different kernels. A new kernel is proposed, the histogram difference intersection kernel, which improves classification performance and is well suited to the proposed histogram differences. Thorough experiments on three challenging databases (the Cohn-Kanade, MMI, and Bosphorus databases, respectively) show the accuracy of our AU and emotion detectors. They lead to significant conclusions on three critical issues: (1) the benefit of combining different training databases labeled by different AU coders, (2) the influence of each AU, according to its type and detection accuracy, on emotion recognition, and (3) the sensitivity to identity variations.
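One plausible reading of the histogram difference intersection kernel is sketched below, under the assumption that the intersection is applied separately to the positive and negative parts of the two difference vectors (the paper defines the exact form):

```python
import numpy as np

def hist_diff_intersection(d1, d2):
    """Sketch of a histogram-difference intersection kernel over two LGBP
    histogram-difference vectors (assumed form, not the paper's definition)."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    pos = np.minimum(np.maximum(d1, 0), np.maximum(d2, 0)).sum()
    neg = np.minimum(np.maximum(-d1, 0), np.maximum(-d2, 0)).sum()
    return pos + neg

# Usable as a precomputed kernel with scikit-learn's SVC:
# from sklearn.svm import SVC
# gram = np.array([[hist_diff_intersection(a, b) for b in X] for a in X])
# SVC(kernel="precomputed").fit(gram, y)
```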

13.
孙劲光, 孟凡宇. 《智能系统学报》(CAAI Transactions on Intelligent Systems), 2015, 10(6): 912-920
To address the low recognition accuracy of traditional face recognition algorithms under unconstrained conditions, a feature-weighted fusion face recognition method (DLWF+) is proposed. Based on the positions of five facial organs (left eye, right eye, nose, mouth, and chin), the face image is divided into five local sampling regions; these five regions and the whole face image are each fed into a corresponding neural network whose weights are tuned to build the sub-networks; softmax regression then produces six similarity vectors, which form a similarity matrix that is multiplied by a weight vector to obtain the final recognition result. In experiments on the ORL and WFL face databases, the recognition accuracy reaches 97% and 91.63%, respectively. The results show that the algorithm effectively improves face recognition and achieves higher accuracy than traditional algorithms under both constrained and unconstrained conditions.
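The fusion step reduces to a weighted combination of the six per-region softmax similarity vectors; a minimal sketch with illustrative names:

```python
import numpy as np

def fuse_similarities(sim_vectors, weights):
    """Sketch of the decision fusion step: stack the six per-region softmax
    similarity vectors (whole face plus five regions) into a matrix, take a
    weighted combination, and return the index of the best-scoring subject."""
    S = np.stack(sim_vectors)              # shape (6, n_subjects)
    fused = np.asarray(weights, float) @ S # weight vector of shape (6,)
    return int(np.argmax(fused))
```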

14.
15.
A novel approach is proposed for the recognition of moving hand gestures based on the representation of hand motions as contour-based similarity images (CBSIs). A CBSI is constructed by calculating the similarity between hand contours in different frames. The input CBSI is then matched against the CBSIs in the database to recognize the hand gesture. The proposed continuous hand gesture recognition algorithm can simultaneously divide continuous gestures into disjointed gestures and recognize them. No restrictive assumptions are made about the motion of the hand between the disjointed gestures. The algorithm was tested on hand gestures from American Sign Language, and the results showed a recognition rate of 91.3% for disjointed gestures and 90.4% for continuous gestures. The experimental results illustrate the efficiency of the algorithm on noisy videos.
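A CBSI can be sketched as a frame-by-frame similarity matrix over per-frame contour descriptors; cosine similarity is used below as a stand-in for the paper's contour similarity measure:

```python
import numpy as np

def cbsi(contour_descriptors):
    """Sketch of a contour-based similarity image: entry (i, j) is the
    similarity between the hand contours of frames i and j, computed here
    as cosine similarity of per-frame contour descriptor vectors."""
    D = np.stack(contour_descriptors)                       # (T, d)
    D = D / (np.linalg.norm(D, axis=1, keepdims=True) + 1e-9)
    return D @ D.T                                          # (T, T) similarity image
```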

16.
This paper proposes a case-based interactive genetic algorithm for facial action unit recognition, which incorporates the user's comparative judgments into the search process to quickly retrieve the case images matching the image to be recognized, thereby achieving semi-automatic AU recognition. Since the method does not extract image features, it can recognize AUs in spontaneous facial images or image sequences captured under uncontrolled imaging conditions, and it shows good robustness and practicality. In experiments on 16 simple images collected under controlled imaging conditions, the average recognition rate of individual AUs reaches 77.5% and the average similarity of AU combinations is 82.8%; on 10 complex images with distractions collected under uncontrolled imaging conditions, the average recognition rate of individual AUs is 82.8% and the average similarity of AU combinations is 93.1%. Compared with the eigenface algorithm, the proposed algorithm achieves considerably higher average recognition rates and similarities.

17.
A survey of automatic facial action unit recognition
Automatic recognition of facial action units (AUs) has applications in behavioral science, human-computer interaction, security, medical diagnosis, and many other fields, and has attracted wide attention in recent years. This paper describes the basic concepts and general pipeline of automatic AU recognition together with its main feature extraction and classification methods, introduces representative AU-coded facial expression databases, and reviews the state of research on recognizing individual AUs and AU combinations and on analyzing AU intensity and AU dynamics. Finally, it summarizes the main difficulties in current AU recognition research and discusses future directions.

18.
The Facial Action Coding System defines, from the perspective of facial anatomy, a set of facial action units (AUs) that precisely characterize facial expression changes. Each AU describes the appearance change produced by a group of facial muscle movements, and combinations of AUs can express any facial expression. AU detection is a multi-label classification problem whose challenges include insufficient annotated data, head-pose interference, individual differences, and class imbalance across AUs. To summarize recent progress in AU detection, this paper systematically reviews representative methods since 2016, grouped by input modality into static-image-based, video-based, and other-modality-based AU detection methods, and discusses the weakly supervised AU detection methods introduced under different modalities to reduce dependence on labeled data. For static images, it further covers AU detection based on local feature learning, AU relationship modeling, multi-task learning, and weakly supervised learning. For video, it focuses on AU detection based on temporal features and self-supervised AU feature learning. Finally, the paper compares and summarizes the strengths and weaknesses of the representative methods, and on that basis discusses the challenges and future trends of facial AU detection.

19.
This study presents a facial expression recognition system which separates the non-rigid facial expression from the rigid head rotation and estimates the 3D rigid head rotation angle in real time. The extracted trajectories of the feature points contain both rigid head motion components and non-rigid facial expression motion components. A 3D virtual face model is used to obtain accurate estimation of the head rotation angle such that the non-rigid motion components can be precisely separated to enhance the facial expression recognition performance. The separation performance of the proposed system is further improved through the use of a restoration mechanism designed to recover feature points lost during large pan rotations. Having separated the rigid and non-rigid motions, hidden Markov models (HMMs) are employed to recognize a prescribed set of facial expressions defined in terms of facial action coding system (FACS) action units (AUs).
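As a stand-in for the paper's 3D-model-based rigid motion estimation, the sketch below uses the Kabsch algorithm to recover the head rotation that best maps a neutral landmark set onto the observed one, leaving the non-rigid expression residual (centering removes translation; all names illustrative):

```python
import numpy as np

def remove_rigid_motion(pts, neutral):
    """Sketch of rigid/non-rigid separation via Kabsch alignment.
    `pts` and `neutral` are (N, 3) landmark arrays; returns the estimated
    head rotation R and the per-landmark non-rigid displacements."""
    p0 = neutral - neutral.mean(axis=0)
    p1 = pts - pts.mean(axis=0)
    H = p0.T @ p1                               # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # rotates neutral onto observed
    nonrigid = p1 - (R @ p0.T).T                # expression residual
    return R, nonrigid
```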

20.
Traditional hand-contour feature extraction cannot cope with the facial skin color, occlusion, and lighting conditions of a flight-simulation environment, and traditional Fourier descriptor features are easily disturbed by the background and by hand-pose variation while offering limited descriptive power for gestures; this work therefore improves the traditional hand segmentation and feature extraction methods. The collected dataset is first processed for skin color; 22 hand feature points are then detected with an off-the-shelf hand keypoint model, and the image is segmented with an eight-way seed filling algorithm. Next...
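An eight-way seed fill, as mentioned above, grows a region from a seed pixel through all eight neighbors; a minimal sketch on a binary skin mask:

```python
from collections import deque
import numpy as np

def seed_fill_8(mask, seed):
    """Eight-way (8-connected) seed filling: starting from `seed` (row, col),
    collect every connected True pixel of a binary skin mask."""
    h, w = mask.shape
    region = np.zeros_like(mask, dtype=bool)
    q = deque([seed])
    while q:
        y, x = q.popleft()
        if 0 <= y < h and 0 <= x < w and mask[y, x] and not region[y, x]:
            region[y, x] = True
            # Enqueue all eight neighbors, including diagonals.
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy or dx:
                        q.append((y + dy, x + dx))
    return region
```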
