Similar Documents
19 similar documents found.
1.
孙劲光, 孟凡宇. 《智能系统学报》, 2015, 10(6): 912-920.
To address the low recognition accuracy of traditional face recognition algorithms under unconstrained conditions, a feature-weighted fusion face recognition method (DLWF+) is proposed. Based on the positions of five facial organs (left eye, right eye, nose, mouth, and chin), the face image is divided into five local sampling regions. The five local sampling regions and the whole face image are each fed into a corresponding neural network for weight tuning, completing the construction of the sub-networks. Softmax regression then produces six similarity vectors, which form a similarity matrix; multiplying this matrix by a weight vector gives the final recognition result. Experiments on the ORL and LFW face databases achieve recognition accuracies of 97% and 91.63%, respectively. The results show that the algorithm effectively improves face recognition and, compared with traditional algorithms, achieves higher accuracy under both constrained and unconstrained conditions.
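A minimal sketch of the fusion step described above, with assumptions: the six sub-networks are replaced by placeholder softmax outputs, and the weight values for [left eye, right eye, nose, mouth, chin, whole face] are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes = 40                       # e.g. ORL has 40 subjects

# Placeholder for the six sub-network outputs (5 facial regions + whole face):
# each row is a softmax similarity vector over the gallery classes.
similarity_matrix = rng.dirichlet(np.ones(n_classes), size=6)   # shape (6, n_classes)

# Hypothetical learned weights for [left eye, right eye, nose, mouth, chin, whole face].
weights = np.array([0.15, 0.15, 0.10, 0.15, 0.10, 0.35])

fused = weights @ similarity_matrix          # weighted fusion, shape (n_classes,)
predicted_class = int(np.argmax(fused))      # final recognition result
print(predicted_class, fused[predicted_class])
```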

2.
Micro-expressions (MEs) involve only local facial regions and have small motion amplitude and short duration, yet irrelevant muscle movements also occur on the face while an ME is produced. Existing global-region methods for ME recognition extract the spatio-temporal patterns of these irrelevant changes, weakening the feature vector's ability to represent the ME and degrading recognition. To address this, a local-region method for ME recognition is proposed. First, according to the regions of the action units (AUs) involved when MEs occur, seven ME-related local regions are delineated using facial landmark coordinates. Then the spatio-temporal patterns of these local regions are extracted and concatenated into a feature vector for ME recognition. Leave-one-out cross-validation shows that the local-region method improves the recognition rate by 9.878% on average over the global-region method. Analysis of the confusion matrices of the per-region results further shows that the proposed method fully exploits the structural information of each local facial region and effectively excludes the influence of ME-irrelevant regions, significantly outperforming the global-region approach.
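A sketch of the local-region pipeline under stated assumptions: the paper derives seven AU-related regions from facial keypoints, which are approximated here by two hypothetical bounding boxes, and the spatio-temporal pattern extractor is a basic LBP-TOP (uniform-LBP histograms on the XY, XT, and YT planes), a common descriptor in ME recognition rather than necessarily the paper's exact choice.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_top(volume, P=8, R=1):
    """LBP histograms on the three orthogonal planes of a (T, H, W) clip."""
    n_bins = P + 2                      # number of 'uniform' LBP codes
    hists = []
    for axis in range(3):               # 0: XY planes, 1: XT planes, 2: YT planes
        h = np.zeros(n_bins)
        for plane in np.moveaxis(volume, axis, 0):
            codes = local_binary_pattern(plane, P, R, method="uniform")
            h += np.histogram(codes, bins=n_bins, range=(0, n_bins))[0]
        hists.append(h / h.sum())
    return np.concatenate(hists)

def region_feature_vector(clip, boxes):
    """Concatenate LBP-TOP features of each local region (boxes: (y0, y1, x0, x1))."""
    return np.concatenate([lbp_top(clip[:, y0:y1, x0:x1]) for y0, y1, x0, x1 in boxes])

# Toy clip and two hypothetical regions (the paper uses seven, from landmarks).
clip = np.random.default_rng(0).integers(0, 256, (20, 128, 128), dtype=np.uint8)
boxes = [(20, 50, 20, 60), (20, 50, 68, 108)]        # e.g. left/right brow regions
print(region_feature_vector(clip, boxes).shape)      # 2 regions * 3 planes * 10 bins
```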

3.
The Facial Action Coding System defines, from the perspective of facial anatomy, a set of facial action units (AUs) that precisely characterize facial expression changes. Each AU describes the appearance change produced by a group of facial muscle movements, and combinations of AUs can express any facial expression. AU detection is a multi-label classification problem whose challenges include insufficient annotated data, head-pose interference, individual differences, and class imbalance across AUs. To summarize recent progress in AU detection, this paper systematically surveys representative methods since 2016, grouped by input modality into AU detection from static images, from dynamic video, and from other modalities, and discusses the weakly supervised AU detection methods introduced under each modality to reduce dependence on annotated data. For static images, methods based on local feature learning, AU relationship modeling, multi-task learning, and weakly supervised learning are reviewed in further detail. For dynamic video, methods based on temporal features and on self-supervised AU feature learning are covered. Finally, the strengths and weaknesses of the representative methods are compared and summarized, and on that basis the challenges and future directions of facial AU detection are discussed.

4.
Under interference from complex non-face components, and with large training sets whose samples are highly similar to one another, the original sparse representation-based classification (SRC) algorithm achieves low recognition accuracy. To address this, a sparse clustering face recognition algorithm based on the active appearance model (CS-AAM) is proposed. First, the active appearance model quickly and accurately locates facial feature points and captures the main facial information. Then, K-means clustering groups highly similar training images into classes, the cluster centers are computed, and these centers are used as atoms to construct an over-complete dictionary for sparse decomposition. Finally, sparse coefficients and reconstruction residuals are computed to classify and recognize face images. The algorithm was compared with nearest neighbor (NN), support vector machine (SVM), SRC, and collaborative representation-based classification (CRC) on the ORL and Extended Yale B face databases across different sample sizes and feature dimensions; under the same sample size or the same dimensionality, CS-AAM outperforms the other algorithms. With 210 samples from ORL, CS-AAM reaches a 95.2% recognition rate at the same dimensionality; with 600 samples from Extended Yale B, it reaches 96.8%. The results show that the algorithm effectively improves face recognition accuracy.
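A sketch of the dictionary construction and classification stages under stated assumptions: the AAM landmarking step is omitted, per-class K-means centers serve as dictionary atoms, orthogonal matching pursuit stands in for the sparse decomposition, and classification uses class-wise reconstruction residuals as in standard SRC.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import orthogonal_mp

def build_dictionary(X, y, k=3):
    """Cluster each class's training vectors; use the k centers as atoms."""
    atoms, atom_labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[y == c])
        atoms.append(km.cluster_centers_)
        atom_labels += [c] * k
    D = np.vstack(atoms).T                         # columns are atoms
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    return D, np.array(atom_labels)

def src_classify(D, atom_labels, x, n_nonzero=5):
    """Sparse-code x over D, then pick the class with the smallest residual."""
    code = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)
    residuals = {c: np.linalg.norm(x - D[:, atom_labels == c] @ code[atom_labels == c])
                 for c in np.unique(atom_labels)}
    return min(residuals, key=residuals.get)

# Toy data: 4 classes of 64-dim feature vectors.
rng = np.random.default_rng(0)
X = np.repeat(rng.random((4, 64)), 30, axis=0) + 0.05 * rng.standard_normal((120, 64))
y = np.repeat(np.arange(4), 30)
D, atom_labels = build_dictionary(X, y)
print(src_classify(D, atom_labels, X[0]))          # expected: 0
```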

5.
To raise the flame recognition rate and lower the false alarm rate in real-time video surveillance, a fast flame recognition algorithm based on a multi-feature logistic regression model is proposed. First, the image is segmented according to the chrominance characteristics of flames, and candidate flame regions (CFRs) are obtained by differencing the moving object against a reference image. Then features of the candidate regions, including area change rate, circularity, number of sharp corners, and centroid displacement, are extracted to build a fast logistic regression flame recognition model. Model parameters are learned from 300 flame and non-flame images drawn from the NIST (National Institute of Standards and Technology), ICV (Inha University Computer Vision Laboratory), and VisiFire (computer-vision-based fire detection) test libraries, together with self-recorded burning candles and paper. Finally, 8 videos totaling 11,071 frames from the experimental databases were used to test the algorithm. The tests show a true positive rate (TPR) of 93%, a true negative rate (TNR) of 98%, and an average recognition time of 0.058 s per frame. The algorithm is both fast and accurate and is suitable for embedded real-time flame recognition.
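A sketch of the four named region features and the logistic regression model, assuming hypothetical per-frame measurements (areas, perimeter, corner counts, centroids) and made-up training labels; the color segmentation and differencing stages are not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cfr_features(area_prev, area_curr, perimeter, n_corners, centroid_prev, centroid_curr):
    """Feature vector for one candidate flame region (CFR)."""
    area_change_rate = abs(area_curr - area_prev) / max(area_prev, 1e-6)
    circularity = 4 * np.pi * area_curr / max(perimeter ** 2, 1e-6)  # 1.0 = perfect circle
    centroid_shift = float(np.linalg.norm(np.subtract(centroid_curr, centroid_prev)))
    return [area_change_rate, circularity, n_corners, centroid_shift]

# Made-up training samples: flames flicker (large area change, many sharp corners,
# low circularity), while rigid bright objects are stable and rounder.
X = np.array([
    cfr_features(900, 1100, 190, 7, (50, 60), (53, 64)),   # flame
    cfr_features(800, 1040, 205, 9, (40, 45), (44, 49)),   # flame
    cfr_features(1000, 1005, 115, 1, (70, 70), (70, 71)),  # headlight-like
    cfr_features(500, 502, 82, 0, (20, 25), (20, 25)),     # static lamp
])
y = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X, y)
print(model.predict_proba([cfr_features(850, 1020, 180, 6, (30, 30), (33, 34))])[:, 1])
```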

6.
To improve real-time human activity recognition on the Android platform, a method for detecting and segmenting activity changes and transitional actions is proposed. The method characterizes activities by the projection of acceleration onto the gravity direction and the magnitude of its horizontal projection, judges activity changes from trends, and segments transitional actions by combining trend change-point detection with the DTW algorithm. Time-domain acceleration features are then extracted, and a random forest classifies nine activities, achieving an average recognition rate of 97.26%, with 95.05% on transitional actions.
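A sketch of the acceleration decomposition described above, assuming gravity is estimated with a simple moving-average low-pass filter (the window size is an assumption); the resulting vertical projection and horizontal magnitude are the series that would feed the trend detection, DTW segmentation, and random-forest classifier.

```python
import numpy as np

def vertical_horizontal(acc, win=25):
    """Split 3-axis acceleration (N, 3) into gravity-direction and horizontal parts."""
    # Low-pass estimate of gravity: moving average per axis (~0.5 s at 50 Hz, assumed).
    kernel = np.ones(win) / win
    g = np.column_stack([np.convolve(acc[:, i], kernel, mode="same") for i in range(3)])
    g_hat = g / np.linalg.norm(g, axis=1, keepdims=True)      # unit gravity direction
    a_v = np.sum(acc * g_hat, axis=1)                         # projection onto gravity
    a_h = np.linalg.norm(acc - a_v[:, None] * g_hat, axis=1)  # horizontal magnitude
    return a_v, a_h

# Toy signal: standing still, then a step-like vertical oscillation.
t = np.linspace(0, 4, 200)
acc = np.zeros((200, 3))
acc[:, 2] = 9.8
acc[100:, 2] += 1.5 * np.sin(2 * np.pi * 2 * t[100:])
a_v, a_h = vertical_horizontal(acc)
print(a_v[:3], a_h[:3])
```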

7.
Objective: Differences in camera viewpoint and imaging quality cause variations in pedestrian pose, image resolution, and illumination, so the same pedestrian can look very different across surveillance videos, which makes person re-identification challenging. To improve the re-identification rate, a person re-identification algorithm based on region-block segmentation and fusion is proposed for the pose-variation problem. Method: First, according to the structural distribution of the human body, the pedestrian image is divided into three local regions. Since the regions play different roles in recognition, different combinations of GOG (Gaussian of Gaussian), LOMO (local maximal occurrence), and KCCA (kernel canonical correlation analysis) features are used as each region's features. A distance metric learning algorithm then learns the similarity between corresponding regions, an interference-block elimination algorithm removes invalid distractor blocks from the image, and the similarities of the valid region blocks are fused. Finally, the global similarity of the image pair is fused with the local region similarities to perform re-identification. Result: Extensive experiments on four benchmark datasets, VIPeR, GRID, PRID450S, and CUHK01, give Rank-1 rates (the proportion of queries whose top-ranked result is the correct person) of 62.85%, 30.56%, 71.82%, and 79.03%, and Rank-5 rates of 86.17%, 51.20%, 91.16%, and 93.60%, a significant improvement of practical value. Conclusion: The proposed region-block segmentation and fusion removes useless and distracting information while retaining and efficiently exploiting the valid pedestrian information. The method mitigates, to a degree, the appearance differences caused by pose variation and substantially raises the re-identification rate.

8.
Flower recognition in natural environments based on deep learning
Flower recognition in natural environments has become a cross-disciplinary research focus of ornamental horticulture and computer vision. The flower image dataset in this paper was photographed on site with mobile phones in natural scenes: 2,600 images of 26 ornamental flower species in the Hunan Provincial Botanical Garden, including flowers such as azaleas and tulips whose different categories within the same species are highly similar. A 20-layer deep learning model, Resnet20, composed of three residual blocks is designed. Its optimization algorithm combines Adam's efficient initialization with SGD's strong generalization, switching between them according to the training batch and learning rate. Experiments show this is 4 to 5 percentage points more accurate than Adam alone and converges faster than SGD alone. With data augmentation, the model reaches a 96.29% recognition rate on the Flower26 dataset, indicating that deep learning is a promising technique for flower recognition.
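A sketch of the Adam-to-SGD switching idea under stated assumptions: the paper adapts the switch to the training batch and learning rate, while here it simply happens at a fixed epoch; torchvision's resnet18 stands in for the custom Resnet20, and random tensors stand in for Flower26.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

model = resnet18(num_classes=26)                  # stand-in for the paper's Resnet20
criterion = nn.CrossEntropyLoss()
adam = optim.Adam(model.parameters(), lr=1e-3)    # fast early progress
sgd = optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)  # better late generalization
SWITCH_EPOCH = 10                                 # assumption; the paper's rule is adaptive

loader = DataLoader(TensorDataset(torch.randn(64, 3, 224, 224),
                                  torch.randint(0, 26, (64,))), batch_size=16)

for epoch in range(20):
    optimizer = adam if epoch < SWITCH_EPOCH else sgd
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```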

9.
Face recognition based on uniform LGBP features and sparse representation
To overcome the sharp drop in face recognition rates under unconstrained conditions (variations in illumination, occlusion, pose, etc.), a sparse representation face recognition algorithm based on uniform local binary patterns of Gabor phase and magnitude information is proposed. The face image is first filtered with Gabor filters to obtain Gabor phase and magnitude images; uniform local binary pattern histograms are then extracted block-wise; finally, sparse representation determines the class of the test image. Experiments on the AR database show that, compared with SRC and with the block-based recognition algorithm combining LBP and SRC features, the proposed algorithm achieves the highest recognition rate under unconstrained conditions.
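A sketch of the feature extraction stage under stated assumptions: a single Gabor filter is used where the paper uses a bank of scales and orientations, and block-wise uniform-LBP histograms are computed on the resulting magnitude and phase images; the sparse-representation classifier would follow, e.g. as in the residual scheme sketched under item 4.

```python
import numpy as np
from skimage.filters import gabor
from skimage.feature import local_binary_pattern

def lgbp_histograms(image, frequency=0.2, P=8, R=1, grid=4):
    """Block-wise uniform-LBP histograms of Gabor magnitude and phase images."""
    real, imag = gabor(image, frequency=frequency)
    feats = []
    for channel in (np.hypot(real, imag), np.arctan2(imag, real)):  # magnitude, phase
        # Quantize to 8-bit before LBP coding.
        q = np.uint8(255 * (channel - channel.min()) / (np.ptp(channel) + 1e-9))
        codes = local_binary_pattern(q, P, R, method="uniform")
        bh, bw = codes.shape[0] // grid, codes.shape[1] // grid
        for i in range(grid):
            for j in range(grid):
                block = codes[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
                hist, _ = np.histogram(block, bins=P + 2, range=(0, P + 2))
                feats.append(hist / hist.sum())
    return np.concatenate(feats)

face = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
print(lgbp_histograms(face).shape)   # 2 channels * 16 blocks * 10 bins = (320,)
```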

10.
Person re-identification via multi-directional saliency weight learning
Objective: To address the inconsistency of salient appearance features between matched blocks in person re-identification, a re-identification algorithm based on multi-directional saliency similarity fusion learning is proposed that is robust to viewpoint and background changes. Method: Manifold ranking first estimates the intrinsic saliency of the target, which is fused with inter-class saliency to obtain the saliency of each image block. According to the four saliency distributions of matched blocks, the visual similarity of a pair is built by multi-directional saliency-weighted fusion, while a metric learning method based on structural SVM ranking learns the weight of each saliency direction, forming a comprehensive similarity measure between image pairs. Result: Re-identification experiments on two public databases show that the algorithm obtains a more comprehensive similarity measure than comparable methods, achieves a higher re-identification rate, and is unaffected by background changes. On a VIPeR test set of 316 pedestrian image pairs, the Rank-1 rate (the proportion of queries whose top-ranked result is the correct person) is 30% and the Rank-15 rate (the proportion whose top 15 results contain the correct person) is 72%, which is of practical value. Conclusion: Multi-directional saliency-weighted fusion describes the saliency distribution of an image pair comprehensively and thus yields a comprehensive similarity measure. The algorithm enables person re-identification across wide-area, non-overlapping multi-camera scenes with high discriminability and accuracy and with strong robustness to background changes.

11.
A system that could automatically analyze facial actions in real time would have applications in a wide range of fields. However, developing such a system is challenging due to the richness, ambiguity, and dynamic nature of facial actions. Although a number of research groups attempt to recognize facial action units (AUs) by improving either facial feature extraction techniques or AU classification techniques, these methods often recognize AUs or certain AU combinations individually and statically, ignoring the semantic relationships among AUs and the dynamics of AUs. Hence, these approaches cannot always recognize AUs reliably, robustly, and consistently. In this paper, we propose a novel approach that systematically accounts for the relationships among AUs and their temporal evolutions for AU recognition. Specifically, we use a dynamic Bayesian network (DBN) to model the relationships among different AUs. The DBN provides a coherent and unified hierarchical probabilistic framework to represent probabilistic relationships among various AUs and to account for the temporal changes in facial action development. Within our system, robust computer vision techniques are used to obtain AU measurements, and these measurements are then applied as evidence to the DBN for inferring various AUs. The experiments show that integrating AU relationships and AU dynamics with AU measurements yields significant improvement in AU recognition, especially for spontaneous facial expressions and under more realistic environments including illumination variation, face pose variation, and occlusion.

12.
Automatic recognition of facial gestures (i.e., facial muscle activity) is rapidly becoming an area of intense interest in the research field of machine vision. In this paper, we present an automated system that we developed to recognize facial gestures in static, frontal- and/or profile-view color face images. A multidetector approach to facial feature localization is utilized to spatially sample the profile contour and the contours of the facial components such as the eyes and the mouth. From the extracted contours of the facial features, we extract 10 profile-contour fiducial points and 19 fiducial points on the contours of the facial components. Based on these, 32 individual facial muscle actions (AUs) occurring alone or in combination are recognized using rule-based reasoning. With each scored AU, the algorithm associates a factor denoting the certainty with which the pertinent AU has been scored. A recognition rate of 86% is achieved.

13.
Facial Action Coding System (FACS) is the de facto standard in the analysis of facial expressions. FACS describes expressions in terms of the configuration and strength of atomic units called Action Units: AUs. FACS defines 44 AUs and each AU intensity is defined on a nonlinear scale of five grades. There has been significant progress in the literature on the detection of AUs. However, the companion problem of estimating the AU strengths has not been much investigated. In this work we propose a novel AU intensity estimation scheme applied to 2D luminance and/or 3D surface geometry images. Our scheme is based on regression of selected image features. These features are either non-specific, that is, those inherited from the AU detection algorithm, or are specific in that they are selected for the sole purpose of intensity estimation. For thoroughness, various types of local 3D shape indicators have been considered, such as mean curvature, Gaussian curvature, shape index and curvedness, as well as their fusion. The feature selection from the initial plethora of Gabor moments is instrumented via a regression that optimizes the AU intensity predictions. Our AU intensity estimator is person-independent and when tested on 25 AUs that appear singly or in various combinations, it performs significantly better than the state-of-the-art method which is based on the margins of SVMs designed for AU detection. When evaluated comparatively, one can see that the 2D and 3D modalities have relative merits per upper face and lower face AUs, respectively, and that there is an overall improvement if 2D and 3D intensity estimations are used in fusion.

14.
Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions [action units (AUs)] that compound expressions. AUs are agnostic, leaving the inference about conveyed intent to higher order decision making (e.g., emotion recognition). The proposed fully automatic method not only allows the recognition of 22 AUs but also explicitly models their temporal characteristics (i.e., sequences of temporal segments: neutral, onset, apex, and offset). To do so, it uses a facial point detector based on Gabor-feature-based boosted classifiers to automatically localize 20 facial fiducial points. These points are tracked through a sequence of images using a method called particle filtering with factorized likelihoods. To encode AUs and their temporal activation models based on the tracking data, it applies a combination of GentleBoost, support vector machines, and hidden Markov models. We attain an average AU recognition rate of 95.3% when tested on a benchmark set of deliberately displayed facial expressions and 72% when tested on spontaneous expressions.
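The temporal-segment modeling can be illustrated with a small hand-rolled Viterbi decoder over the four segment labels (neutral, onset, apex, offset). The transition matrix below encodes the natural segment ordering as an assumption, and the per-frame scores, invented here, would come from frame-level classifiers (GentleBoost/SVM outputs in the paper); this is a sketch of the idea, not the paper's exact HMM.

```python
import numpy as np

STATES = ["neutral", "onset", "apex", "offset"]
# Assumed transition structure: neutral -> onset -> apex -> offset -> neutral.
A = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.80, 0.20, 0.00],
              [0.00, 0.00, 0.85, 0.15],
              [0.25, 0.00, 0.00, 0.75]])

def viterbi(frame_scores, A, start=0):
    """frame_scores: (T, 4) per-frame state likelihoods from a classifier."""
    T = len(frame_scores)
    logp = np.full((T, 4), -np.inf)
    back = np.zeros((T, 4), dtype=int)
    logp[0] = np.log(frame_scores[0] + 1e-12)
    logp[0, np.arange(4) != start] = -np.inf      # assume sequences start neutral
    with np.errstate(divide="ignore"):
        logA = np.log(A)
    for t in range(1, T):
        cand = logp[t - 1][:, None] + logA        # (from, to)
        back[t] = cand.argmax(0)
        logp[t] = cand.max(0) + np.log(frame_scores[t] + 1e-12)
    path = [int(logp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [STATES[s] for s in reversed(path)]

# Toy per-frame scores tracing one activation cycle.
scores = np.array([[.9, .1, 0, 0]] * 3 + [[.2, .7, .1, 0]] * 3 + [[0, .1, .9, 0]] * 4
                  + [[0, 0, .2, .8]] * 3 + [[.9, .1, 0, 0]] * 2)
print(viterbi(scores, A))
```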

15.
16.
Automatic analysis of human facial expression is a challenging problem with many applications. Most of the existing automated systems for facial expression analysis attempt to recognize a few prototypic emotional expressions, such as anger and happiness. Instead of representing another approach to machine analysis of prototypic facial expressions of emotion, the method presented in this paper attempts to handle a large range of human facial behavior by recognizing facial muscle actions that produce expressions. Virtually all of the existing vision systems for facial muscle action detection deal only with frontal-view face images and cannot handle temporal dynamics of facial actions. In this paper, we present a system for automatic recognition of facial action units (AUs) and their temporal models from long, profile-view face image sequences. We exploit particle filtering to track 15 facial points in an input face-profile sequence, and we introduce facial-action-dynamics recognition from continuous video input using temporal rules. The algorithm performs both automatic segmentation of an input video into facial expressions pictured and recognition of temporal segments (i.e., onset, apex, offset) of 27 AUs occurring alone or in a combination in the input face-profile video. A recognition rate of 87% is achieved.

17.
In this paper, we investigate the value of action unit (AU) detection for automatic emotion recognition. We propose and compare two emotion detectors: the first works directly in a high-dimensional feature space, and the second projects the facial image into the low-dimensional space of AU intensities before recognizing emotion. In both approaches, facial images are coded by local Gabor binary pattern (LGBP) histogram differences. These features reduce sensitivity to subject identity by computing the difference between two LGBP histograms: one computed on an expressive image and the other synthesized to approach the one we would compute on a neutral face of the same subject. As classifiers, we test support vector machines with different kernels. A new kernel is proposed, the histogram difference intersection kernel, which increases classification performance and is well suited to the proposed histogram differences. Thorough experiments on three challenging databases (the Cohn-Kanade, MMI, and Bosphorus databases) show the accuracy of our AU and emotion detectors. They lead to significant conclusions on three critical issues: (1) the benefit of combining different training databases labeled by different AU coders, (2) the influence of each AU, according to its type and detection accuracy, on emotion recognition, and (3) the sensitivity to identity variations.
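The custom-kernel idea is straightforward to reproduce: scikit-learn's SVC accepts a callable that returns the Gram matrix. The sketch below implements the classic histogram intersection kernel on toy histogram features; the paper's histogram difference intersection kernel, adapted to signed difference features, is a variant not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

def hist_intersection(X, Y):
    """Gram matrix K[i, j] = sum_k min(X[i, k], Y[j, k]) for histogram features."""
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=-1)

rng = np.random.default_rng(0)
# Toy "LGBP histogram" features for two expression classes.
X = np.vstack([rng.dirichlet(np.ones(32) * 2, 40),
               rng.dirichlet(np.ones(32) * 8, 40)])
y = np.repeat([0, 1], 40)

clf = SVC(kernel=hist_intersection).fit(X, y)
print(clf.score(X, y))
```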

18.
Facial expression manifolds based on expression similarity
续爽, 贾云得. 《软件学报》, 2009, 20(8): 2191-2198.
Within the graph embedding framework, a method is proposed that builds the adjacency weight graph from expression similarity in order to learn a facial expression subspace. The expressions of the face images in the dataset are estimated in a semi-supervised manner, and the expression similarity between face images is measured by the inner product of their fuzzy expression membership vectors, independently of differences in identity, illumination, and pose. In the resulting subspace, face images with similar expressions lie at neighboring positions on the manifold, and the semantic distribution of the expression data in the subspace clearly reveals the fuzzy, evolving nature of expressions. Experimental results on the Cohn-Kanade facial expression database and on a facial expression dataset collected in our laboratory demonstrate the effectiveness of the method, which is therefore well suited to human-computer interaction applications based on facial expression recognition.

19.
Facial expression recognition based on local binary patterns
A facial expression recognition method combining local binary patterns (LBP) with a support vector machine (SVM) is proposed. The LBP operator processes the image, and statistics over the resulting patterns form the facial expression features; linear discriminant analysis reduces the feature dimensionality; and an SVM classifies the facial expressions. The method was implemented in Matlab and tested on the Japanese Female Facial Expression (JAFFE) database, achieving a recognition rate of 70.95%.
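A sketch of the described pipeline under stated assumptions: block-wise uniform-LBP histograms as features, linear discriminant analysis for dimensionality reduction, and a linear SVM, assembled with scikit-learn on random stand-in images (JAFFE itself must be obtained separately).

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def lbp_feature(image, P=8, R=1, grid=3):
    """Concatenated uniform-LBP histograms over a grid x grid block layout."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    h, w = codes.shape[0] // grid, codes.shape[1] // grid
    hists = [np.histogram(codes[i*h:(i+1)*h, j*w:(j+1)*w],
                          bins=P + 2, range=(0, P + 2))[0]
             for i in range(grid) for j in range(grid)]
    return np.concatenate(hists).astype(float)

# Random stand-in for expression images: 7 classes, 20 images each.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, (140, 48, 48))
y = np.repeat(np.arange(7), 20)
X = np.array([lbp_feature(img) for img in images])

# LDA projects to at most n_classes - 1 = 6 dimensions; a linear SVM then classifies.
clf = make_pipeline(LinearDiscriminantAnalysis(), SVC(kernel="linear")).fit(X, y)
print(clf.score(X, y))
```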
