Similar Documents
19 similar documents found (search time: 171 ms)
1.
To address the problem that facial expression recognition rates are easily degraded by non-uniform illumination, a facial expression recognition algorithm fusing local features with a deep belief network (DBN) is proposed. First, illumination-invariant features are extracted from the facial expression image using the locality-sensitive histogram (LSH); next, local edge and texture details are extracted with the double-coded local binary pattern (DCLBP). The standard deviation of each feature set is then computed to determine adaptive fusion weights, from which the fused facial expression feature is constructed. Finally, the fused LSH and DCLBP features are used to train a DBN model, and the trained DBN performs expression recognition. Recognition experiments on the JAFFE facial expression database and a self-built Uyghur facial expression database show that, compared with four other algorithms, the recognition rate improves by at least 4.3% and 5.22% respectively, demonstrating good robustness and effectiveness.
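The adaptive weighting step above can be sketched in a few lines. The abstract does not give the exact rule, so the assumption that each feature vector's weight is proportional to its standard deviation, and the function name `fuse_features`, are illustrative only:

```python
import numpy as np

def fuse_features(f1, f2):
    """Fuse two feature vectors, weighting each by its standard deviation
    (features with more spread are assumed to carry more information)."""
    s1, s2 = np.std(f1), np.std(f2)
    w1, w2 = s1 / (s1 + s2), s2 / (s1 + s2)
    # scale each vector by its adaptive weight, then concatenate
    return np.concatenate([w1 * f1, w2 * f2])
```

Normalizing the two weights to sum to one keeps the fused feature's overall scale stable regardless of the raw magnitudes of the two descriptors.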

2.
A face recognition method based on skin color and geometric features   Total citations: 10 (self-citations: 7, by others: 3)
Correct localization and segmentation of faces in the scene is a prerequisite for face recognition, so this paper proposes a face segmentation method based on skin color and facial geometric features. First, temporal differencing between frames determines whether a moving object is present in the scene; then a BP neural network identifies skin color, which adapts to the environment better than hue-based skin detection in the HIS or LHC color spaces; finally, a scanning-projection algorithm together with the face's intrinsic geometric features localizes and segments the detail features. Experiments show the method is accurate in localization and fast in computation.

3.
Human facial expressions are the most direct reflection of psychological and emotional changes, and expressions vary considerably across individuals. Existing recognition methods distinguish expressions using global facial statistics and lack deep mining of facial detail. As psychologists' facial action coding shows, local facial details determine the meaning of an expression. This paper therefore proposes a facial expression recognition method based on multi-scale detail enhancement. Since expressions are strongly affected by image detail, a Gaussian pyramid is used to extract detail information and enhance the image, strengthening expression cues. To exploit the local nature of expressions, a hierarchical local gradient feature is computed to describe the local shape around facial landmarks. Finally, a support vector machine (SVM) classifies the expressions. Experiments on the CK+ expression database show that the method not only confirms the importance of image detail for expression recognition but also achieves very good results with small training sets, reaching an average recognition rate of 98.19%.
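The Gaussian-pyramid detail enhancement idea can be sketched as below. The abstract gives no parameters, so the scale list, the gain, and the use of a blur stack in place of a downsampled pyramid are all assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detail_enhance(img, sigmas=(1.0, 2.0, 4.0), gain=0.5):
    """Extract band-pass detail layers from a stack of Gaussian blurs
    (a pyramid without downsampling) and add them back, amplified."""
    img = img.astype(np.float64)
    blurred = [gaussian_filter(img, s) for s in sigmas]
    # finest detail: original minus first blur; coarser layers: differences of blurs
    details = [img - blurred[0]]
    details += [blurred[i - 1] - blurred[i] for i in range(1, len(blurred))]
    return img + gain * sum(details)
```

On a perfectly flat image every detail layer is zero, so enhancement leaves it unchanged; on a face image, edges and wrinkles (where expressions live) are amplified.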

4.
To improve the accuracy of facial expression recognition, a network model based on ResNet50 fused with a bilinear mixed attention mechanism is proposed. To counter the incomplete and blurred feature extraction caused by traditional pooling, an adaptive pooling-weight algorithm based on average pooling is proposed, and particle swarm optimization is used to adaptively tune the convolutional network's hyperparameters, further improving accuracy. A real-time facial expression recognition system is built on the improved model. On the FER2013 and CK+ datasets, the improved model achieves test-set accuracies of 73.51% and 99.86% respectively.

5.
Facial expressions reflect inner emotion, and recognizing students' expressions in real smart-classroom scenes to infer their learning state has long been a research hotspot and challenge. This paper preprocesses the images and feeds them into convolutional layers to extract features, which are then fused with a long short-term memory network. Finally, the outputs are combined by weighted fusion and classified by a Softmax layer. Experiments were conducted on four datasets including JAFFE, as well as smart-…

6.
Multi-model ASM and its application to facial feature point localization   Total citations: 1 (self-citations: 1, by others: 0)
To improve the accuracy of ASM-based facial landmark localization under non-uniform illumination, a multi-model ASM incorporating Log-Gabor wavelet features is proposed. Its main features are: the global shape model is initialized fairly accurately after precisely locating the irises in the target image; the local texture of each landmark is described jointly by gray values and Log-Gabor wavelet features, reducing the influence of illumination and noise; and a multi-model ASM is built that comprises a global ASM and local ASMs for salient facial regions, applied alternately to constrain landmark localization under an edge-constraint strategy. Experiments show the multi-model ASM localizes facial landmarks significantly more accurately than the conventional ASM.

7.
To address the insufficient attention that existing facial expression recognition methods pay to local features in facial details, CGNet, an expression recognition method based on facial landmarks and graph convolution, is proposed. CGNet segments the face image by facial organ into several sub-images, extracts their multi-scale features, and introduces a spatial attention mechanism to capture fine detail, increasing the network's focus on facial details. It also extracts facial landmarks and uses a graph convolutional network to capture the structural information of the face, improving the representation of high-dimensional features. Experiments show that CGNet is an efficient expression recognition algorithm that obtains more effective facial features and improves recognition accuracy.

8.
To address face detection within a full image, a neural-network-based front-end face detection system is proposed. The system first divides each training image into independent small windows and labels the windows containing faces, then trains the neural network on these labels. For detection, the input image is decomposed into a pyramid of sub-images at multiple resolutions, and face detection is run at each resolution. To improve the detection rate while keeping false alarms low, the output thresholds of multiple neural networks are traded off against each other. The system uses an adaptive algorithm and augments the training set with false detections to improve detection accuracy.

9.
Facial expression recognition is largely limited by the assumptions of identical acquisition conditions, identical subjects, and large numbers of labeled samples; performance drops sharply when these assumptions are violated. To address this, a cross-dataset facial expression recognition method based on adaptive non-negative weighted constrained low-rank sparse representation is proposed. The adaptive non-negative weighting matrix strengthens the role of important features in the data representation and suppresses useless ones. A projection-matrix constraint allows the subspace based on low-rank and sparse representation to learn robust similar images, enabling the final cross-dataset recognition. Comparative experiments on the public JAFFE and CK+ databases show that the proposed algorithm performs better for cross-dataset facial expression recognition.

10.
Huang Qi, Liu Zong'ang, Li Yibing. 《信息技术》 (Information Technology), 2009, (10): 131-133
An LBP face recognition algorithm based on global and local features is proposed. The global LBP histogram of the face image is extracted first; the image is then divided into blocks and a local LBP histogram is extracted from each block; finally, the global and local features are concatenated in a fixed order as the overall image feature, and an RBF neural network performs the classification. Experiments on the ORL face database show the algorithm achieves a high recognition rate.
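The global-plus-block LBP feature described in this entry can be sketched in plain NumPy. The basic 8-neighbour LBP variant, the 2×2 grid, and the concatenation order are assumptions for illustration; the paper's exact block layout is not stated:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP code for each interior pixel."""
    c = gray[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        # neighbour plane shifted by (dy, dx), same shape as the centre plane
        nb = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def global_local_lbp(gray, grid=(2, 2)):
    """Concatenate the global LBP histogram with per-block histograms."""
    codes = lbp_image(gray)
    feats = [np.bincount(codes.ravel(), minlength=256)]  # global histogram
    h, w = codes.shape
    bh, bw = h // grid[0], w // grid[1]
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            feats.append(np.bincount(block.ravel(), minlength=256))
    return np.concatenate(feats)
```

The global histogram captures overall texture statistics while the block histograms preserve the rough spatial layout, which is what makes the combined feature more discriminative than either alone.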

11.
Si Qin, Li Feifei, Chen Qiu. 《电子科技》 (Electronic Science and Technology), 2020, 33(4): 18-22
Convolutional neural networks perform well in face recognition, but the features they extract ignore the local structure of the face. To address this, a face recognition method based on deep learning and feature fusion is proposed. The algorithm feeds local binary pattern (LBP) information combined with the original image into the SDFVGG network, making the extracted facial features richer and more expressive; SDFVGG is a VGG network in which deep and shallow features are fused. Experiments on the CAS-PEAL-R1 face database show that fusing deep and shallow network features, and fusing LBP image information with the original image as input, are both very effective for improving accuracy, yielding a best face recognition rate of 98.58%, better than traditional algorithms and plain convolutional neural networks.

12.
The face is the window to the soul. This is what the 19th-century French doctor Duchenne de Boulogne thought. Using electric shocks to stimulate muscular contractions and induce bizarre-looking expressions, he wanted to understand how muscles produce facial expressions and reveal the most hidden human emotions. Two centuries later, this research field remains very active. We see automatic systems for recognizing emotion and facial expression being applied in medicine, security and surveillance systems, advertising and marketing, among others. However, there are still fundamental questions that scientists are trying to answer when analyzing a person’s emotional state from their facial expressions. Is it possible to reliably infer someone’s internal state based only on their facial muscles’ movements? Is there a universal facial setting to express basic emotions such as anger, disgust, fear, happiness, sadness, and surprise? In this research, we seek to address some of these questions through convolutional neural networks. Unlike most studies in the prior art, we are particularly interested in examining whether characteristics learned from one group of people can be generalized to predict another’s emotions successfully. In this sense, we adopt a cross-dataset evaluation protocol to assess the performance of the proposed methods. Our baseline is a custom-tailored model initially used in face recognition to categorize emotion. By applying data visualization techniques, we improve our baseline model, deriving two other methods. The first method aims to direct the network’s attention to regions of the face considered important in the literature but ignored by the baseline model, using patches to hide random parts of the facial image so that the network can learn discriminative characteristics in different regions. 
The second method explores a loss function that generates data representations in high-dimensional spaces so that examples of the same emotion class are close and examples of different classes are distant. Finally, we investigate the complementarity between these two methods, proposing a late-fusion technique that combines their outputs through the multiplication of probabilities. We compare our results to an extensive list of works evaluated in the same adopted datasets. In all of them, when compared to works that followed an intra-dataset protocol, our methods present competitive numbers. Under a cross-dataset protocol, we achieve state-of-the-art results, outperforming even commercial off-the-shelf solutions from well-known tech companies.
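The late-fusion step described above, combining two models' outputs by multiplying class probabilities, is essentially a one-liner; the renormalization below is an assumption, since the abstract only states the multiplication:

```python
import numpy as np

def product_fusion(p1, p2):
    """Late fusion: multiply the two models' class probabilities
    element-wise and renormalize to a valid distribution."""
    p = np.asarray(p1, dtype=float) * np.asarray(p2, dtype=float)
    return p / p.sum()
```

Multiplicative fusion rewards classes on which both models agree: a class that either model considers very unlikely is suppressed in the product even if the other model favors it.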

13.
A Micro-Expression (MiE) is an involuntary facial reaction that reflects a person's real emotions and thoughts. MiEs are very difficult even for humans to detect, since they are very fast, local facial reactions of low intensity; building an automatic system for MiE recognition is therefore a challenging task. Previous works on MiE recognition have used the whole face, yet a MiE appears in a small region of the face, which makes extracting relevant features hard. In this paper, we propose a novel deep learning approach that leverages the local nature of MiEs by learning spatio-temporal features from local facial regions using a composite architecture of a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. The proposed solution succeeds in extracting relevant local features for MiE recognition. Experimental results on benchmark datasets demonstrate the highest recognition accuracy of our solution with respect to state-of-the-art methods.

14.
A fast method for single-sample invariant facial feature extraction   Total citations: 3 (self-citations: 3, by others: 0)

15.
Automatic face recognition with learning capability   Total citations: 1 (self-citations: 1, by others: 0)
Face recognition is a difficult research topic in pattern recognition, with both theoretical significance and practical value. The traditional automatic face recognition method based on the K-L transform does not give much consideration to local facial features and recognizes with the eigenface method, achieving certain results. However, the face is a special kind of scene: facial images are affected by age, mood, viewing angle, illumination, hairstyle and other factors, so the captured images vary. The traditional K-L-based method cannot cope well with these distortions. This paper applies principal component analysis to face recognition: the various changes of the face are simulated in advance to produce a series of deformed faces; principal component analysis is then applied to the deformed faces to extract their principal components; finally, a genetic algorithm selects the optimal feature vectors to construct a subspace, yielding a face recognition method that resists a certain degree of facial variation. Experiments with this method confirm its feasibility and good resistance to distortion.
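The K-L transform / PCA step underlying the eigenface method mentioned in this entry can be sketched as follows; the Gram-matrix trick and the function name are illustrative, not the paper's code:

```python
import numpy as np

def eigenfaces(X, k):
    """PCA on a stack of flattened face images X of shape (n_samples, n_pixels).
    Uses the small (n x n) Gram matrix, standard in eigenface work, instead of
    the huge (n_pixels x n_pixels) covariance matrix."""
    mean = X.mean(axis=0)
    A = X - mean
    gram = A @ A.T                       # (n, n) surrogate for the covariance
    vals, vecs = np.linalg.eigh(gram)    # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]   # top-k principal directions
    comps = A.T @ vecs[:, order]         # back-project into pixel space
    comps /= np.linalg.norm(comps, axis=0)
    return mean, comps                   # project a face x as (x - mean) @ comps
```

A new face image is recognized by projecting it onto `comps` and comparing the resulting coefficient vector with those of the stored (deformed) training faces.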

16.
Emotion recognition is a hot research topic in modern intelligent systems, pervasively used in autonomous vehicles, remote medical services, and human–computer interaction (HCI). Traditional speech emotion recognition algorithms generalize poorly because they assume that training and testing data come from the same domain and share the same distribution. In practice, however, speech data is acquired with different devices in different recording environments, so it may differ significantly in language, emotion types, and labels. To solve this problem, we propose a bimodal fusion algorithm for emotion recognition in which facial expression and speech information are optimally fused. We first combine a CNN and an RNN to achieve facial emotion recognition. We then use MFCCs to convert the speech signal into images, so that an LSTM and a CNN can recognize speech emotion. Finally, we apply weighted decision fusion to combine the facial expression and speech predictions. Comprehensive experimental results demonstrate that bimodal emotion recognition outperforms uni-modal emotion recognition.
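The weighted decision fusion mentioned above can be sketched as a convex combination of the two modalities' class probabilities; the weight value and the simple weighted-sum rule are assumptions, since the abstract does not give the exact scheme:

```python
import numpy as np

def weighted_fusion(p_face, p_speech, w=0.6):
    """Weighted decision fusion: blend facial and speech emotion
    probabilities with modality weight w, then pick the top class."""
    fused = w * np.asarray(p_face, dtype=float) \
        + (1.0 - w) * np.asarray(p_speech, dtype=float)
    return int(np.argmax(fused))
```

Unlike multiplicative fusion, a weighted sum lets one modality dominate gracefully: setting `w` closer to 1 trusts the facial model more, which is useful when one channel is known to be noisier.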

17.
Research on music recognition in a music-and-lighting performance system   Total citations: 2 (self-citations: 1, by others: 1)
Wang Xianlan. 《电声技术》 (Audio Engineering), 2010, 34(12): 48-50, 68
This paper analyzes how MIDI music conveys emotion and how a computer can recognize it automatically. Using support vector machines, cluster analysis and other algorithms, it implements extraction of basic MIDI feature data, automatic localization of the main melody track, intelligent phrase segmentation, and music emotion recognition, achieving high accuracy in both feature extraction and emotion recognition.

18.
In this paper, we investigate feature extraction, feature selection, and classification methods for an automatic facial expression recognition (FER) system. The FER system is fully automatic and consists of the following modules: face detection, feature extraction, selection of optimal features, and classification. Face detection is based on the AdaBoost algorithm and is followed by extraction of the frame with the maximum intensity of emotion using an inter-frame mutual information criterion. The selected frames are then processed to generate characteristic features using different methods including Gabor filters, log-Gabor filters, the local binary pattern (LBP) operator, higher-order local autocorrelation (HLAC), and a recently proposed method called HLAC-like features (HLACLF). The most informative features are selected using both wrapper and filter feature selection methods. Experiments on several facial expression databases compare the different methods.

19.
A face recognition algorithm combining wavelet local features with LDA (linear discriminant analysis) is proposed. The image is first divided into blocks, and the regions containing the most image information are selected for wavelet transform and feature extraction; the low-frequency part of the wavelet decomposition is then projected with LDA to obtain the face recognition features; finally, a nearest-neighbor classifier categorizes the images. Experiments on the ORL and Yale face databases show that combining wavelet local features with LDA achieves a high recognition rate.
