Similar Literature
20 similar documents found, search time: 421 ms
1.
The main objective of this study is to explore the utility of a neural network-based approach to hand gesture recognition. The proposed system presents two recognition algorithms for a set of six specific static hand gestures: open, close, cut, paste, maximize, and minimize. The hand gesture image passes through three stages: preprocessing, feature extraction, and classification. In the first method, the hand contour is used as a feature, which handles the scaling and translation problems (in some cases). In the second, a complex moment algorithm is used to describe the hand gesture and to handle rotation in addition to scaling and translation. The back-propagation learning algorithm is employed in the multilayer neural network classifier. The second method achieves a better recognition rate than the first.
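The complex-moment feature used in the second method can be sketched as below; this is a minimal Python illustration that assumes the hand contour is already available as an (N, 2) point array, and the moment orders and normalisation are illustrative choices, not the paper's.

```python
import numpy as np

def complex_moments(contour_pts, max_order=4):
    """Complex moments c_pq = sum_k z_k^p * conj(z_k)^q of the hand contour.

    The contour is centred (translation invariance) and scale-normalised;
    a rotation only multiplies c_pq by a phase factor, so the returned
    magnitudes are rotation invariant as well.
    """
    z = contour_pts[:, 0] + 1j * contour_pts[:, 1]
    z = z - z.mean()                        # translation invariance
    z = z / (np.abs(z).mean() + 1e-12)      # scale invariance
    feats = []
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            if p + q >= 2:                  # skip trivial low-order moments
                feats.append(np.abs(np.sum(z ** p * np.conj(z) ** q)))
    return np.asarray(feats)
```

Such a vector can then be fed to a back-propagation-trained multilayer network, as the abstract describes.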

2.
Static hand gesture recognition is the core technology of gesture-driven human-computer interaction systems. To address the static gesture recognition problem, a method based on depth images is proposed. To achieve invariance to translation, rotation, and scaling during recognition, the Hu invariant moments of the hand contour are extracted and used as features to build a depth-aware neural network model for static gestures, which then classifies and recognizes them. The method was validated in a Visual Studio development environment with good results and compared against traditional template matching and a convolutional neural network-based deep learning method; the overall static gesture recognition accuracy reaches 95%, with high recognition efficiency that satisfies real-time requirements.
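A minimal sketch of the Hu-moment feature extraction this entry relies on, assuming OpenCV 4 and an 8-bit depth image in which the hand has already been segmented; the threshold and log-scaling are illustrative, not the paper's values.

```python
import cv2
import numpy as np

def hu_moment_features(depth_roi, thresh=10):
    """Seven Hu invariant moments of the segmented hand in an 8-bit depth ROI.

    Hu moments are invariant to translation, scale and rotation, which is the
    property the method above relies on.  OpenCV 4's findContours returns
    (contours, hierarchy); the largest contour is taken as the hand.
    """
    _, mask = cv2.threshold(depth_roi, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(hand)).flatten()
    # Log-scale so the seven moments have comparable magnitudes.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```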

3.
As an important mode of human-computer interaction, gesture interaction and recognition have become a research focus in computer graphics, virtual reality, and human-computer interaction because of the high degrees of freedom of the hand. Traditional gesture recognition methods that directly extract the hand contour or the positions of hand joints usually produce features that cannot accurately distinguish between gestures. To address the high degrees of freedom of different gestures and the inaccurate feature representation caused by low-resolution gesture images, cluttered backgrounds, hand occlusion, differences in finger shape and size, and individual variability, this paper proposes a new gesture feature representation and recognition method that fuses joint rotation features and fingertip distance features. First, the 3D positions of 20 hand joints are extracted from the gesture depth map using a hand template, treating the hand as a linked-segment structure. Then, quaternion joint rotation features and fingertip distance features are computed from the joint positions, forming an intrinsic representation of the gesture. Finally, one-versus-one support vector machines classify and recognize the gestures. This paper not only proposes a new gesture feature representation and extraction method that fuses joint rotation information and fingertip distance features, but also proves theoretically that this representation uniquely characterizes the positions of the gesture joints, and adopts a one-versus-one SVM multi-class strategy for gesture classification and recognition. Experiments on the ASTAR static gesture depth-map dataset, covering 8 Chinese number gestures and 21 American alphabet gestures, yield classification accuracies of 99.71% and 85.24%, respectively. The results show that the fused joint rotation and fingertip distance features represent the geometric characteristics of different gestures well and can accurately characterize and recognize static gestures.
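A rough sketch of how such a fused feature vector and a one-vs-one SVM could be wired up; the bone triples, fingertip indices and palm index are hypothetical placeholders, and scikit-learn's SVC (which already uses a one-vs-one multi-class strategy) stands in for the paper's classifier.

```python
import numpy as np
from sklearn.svm import SVC

def quat_between(u, v):
    """Unit quaternion (w, x, y, z) rotating direction u onto direction v."""
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    q = np.concatenate(([1.0 + np.dot(u, v)], np.cross(u, v)))
    return q / (np.linalg.norm(q) + 1e-12)

def gesture_features(joints, bone_triples, fingertip_ids, palm_id):
    """joints: (20, 3) array of 3D joint positions.

    For each (parent, joint, child) triple the rotation of the child bone
    relative to the parent bone is encoded as a quaternion, and the distance
    from each fingertip to the palm joint is appended.
    """
    feats = []
    for a, b, c in bone_triples:
        feats.extend(quat_between(joints[b] - joints[a], joints[c] - joints[b]))
    for tip in fingertip_ids:
        feats.append(np.linalg.norm(joints[tip] - joints[palm_id]))
    return np.asarray(feats)

# scikit-learn's SVC uses a one-vs-one strategy for multi-class problems,
# matching the one-vs-one SVM classification described above.
clf = SVC(kernel="rbf", C=10.0)
# clf.fit(np.stack([gesture_features(j, BONES, TIPS, PALM) for j in train_joints]), train_labels)
```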

4.
Static hand gesture recognition based on improved RCE and RBF neural networks
Addressing the three stages of gesture recognition, namely hand region segmentation, gesture feature extraction, and gesture classification, a new static gesture recognition method is proposed. The traditional RCE neural network is improved for hand region segmentation, giving higher running speed and stronger noise resistance. The distances from the hand edge to the palm center are extracted along the Freeman chain code directions as the gesture feature vector. This feature vector is then fed into an RBF neural network for training and classification. Experiments verify the effectiveness and feasibility of the method, which was used to implement a rock-paper-scissors game between a human and a humanoid robot.
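A minimal sketch of the edge-to-palm-centre distance feature, assuming a binary hand mask and OpenCV 4; the contour traversal order stands in for the Freeman chain-code order, the palm centre is approximated by the contour centroid, and the RBF network itself is not shown.

```python
import cv2
import numpy as np

def contour_to_palm_distances(mask, n_points=64):
    """Distances from the hand boundary to the palm centre, sampled along the
    boundary (the traversal order plays the role of the Freeman chain code),
    resampled to a fixed length and scale-normalised."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    hand = max(contours, key=cv2.contourArea)
    m = cv2.moments(hand)
    palm = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])  # centroid as palm centre
    pts = hand.reshape(-1, 2).astype(np.float64)
    dists = np.linalg.norm(pts - palm, axis=1)
    idx = np.linspace(0, len(dists) - 1, n_points).astype(int)   # fixed-length resampling
    d = dists[idx]
    return d / (d.max() + 1e-12)                                 # scale normalisation
```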

5.
A preliminary study of gesture recognition based on convolutional neural networks
A new gesture recognition algorithm is proposed that uses a convolutional neural network to recognize gestures. The algorithm avoids complex preprocessing and takes the raw gesture image directly as input. Convolutional neural networks feature local receptive fields, hierarchical structure, and integrated feature extraction and classification, and they are widely used in image recognition. Experimental results show that the method recognizes multiple gestures with high accuracy and low complexity, has good robustness, and overcomes many inherent shortcomings of traditional algorithms.
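A minimal Keras sketch of a CNN that takes raw gesture images directly as input, in the spirit of this entry; the layer sizes, input resolution, and optimizer are assumptions, not the paper's architecture.

```python
from tensorflow.keras import layers, models

def build_gesture_cnn(n_classes, input_shape=(64, 64, 1)):
    """A small convolutional classifier that takes raw gesture images directly."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 5, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(32, 5, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_gesture_cnn(n_classes=6)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```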

6.
Gesture recognition based on Fourier descriptors and HMM
陈启军  朱振娇  顾爽 《控制工程》2012,19(4):634-638
To address human-robot interaction on a home service robot platform, vision-based gesture recognition is proposed as the interaction mode between human and robot. Fourier descriptors are used to describe gesture shape, and a support vector machine and a hidden Markov model classify static and dynamic gestures respectively, realizing recognition of both. The system is based on the Kinect sensor; by combining depth information in the image segmentation stage, the gesture region can be extracted effectively with strong robustness within a certain range, and the Fourier descriptors used in the feature extraction stage make gesture recognition invariant to rotation, scaling, and translation. Tests on seven common static gestures and four dynamic gestures yield average recognition rates of 98.8% and 96.7%, respectively, showing that the system has high accuracy.
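A minimal sketch of invariant Fourier descriptors of the hand contour, assuming a binary mask from the depth-based segmentation; this is a standard formulation, and the number of coefficients is an assumption. The SVM/HMM classifiers the abstract pairs them with are not shown.

```python
import cv2
import numpy as np

def fourier_descriptors(mask, n_coeffs=16):
    """Translation-, scale- and rotation-invariant Fourier descriptors of the
    hand contour in a binary mask (a standard formulation)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2)
    z = pts[:, 0] + 1j * pts[:, 1]        # boundary as a complex sequence
    F = np.fft.fft(z)
    F[0] = 0                              # drop the DC term -> translation invariance
    mags = np.abs(F)                      # magnitudes -> rotation / start-point invariance
    mags = mags / (mags[1] + 1e-12)       # normalise by the first harmonic -> scale invariance
    return mags[1:1 + n_coeffs]
```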

7.
In human-computer interaction (HCI), vision-based gesture recognition has broad application prospects owing to its intuitiveness and efficiency. To improve on the low recognition rate and poor robustness of traditional gesture recognition algorithms, a simple and fast gesture recognition method based on OpenCV and the Keras deep learning framework is proposed as a human-computer interaction interface. The gesture image goes through three stages: preprocessing, feature extraction, and classification. In preprocessing, a YCbCr skin color model extracts the skin-colored hand region from the input image, which is then converted to a grayscale image. A convolutional neural network performs feature extraction and classification of the gesture image. Experimental results show that the proposed method achieves a high recognition rate of 99.43% and good robustness. Compared with traditional hand-crafted features, the convolutional neural network learns features effectively.
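A minimal OpenCV sketch of the YCbCr skin-colour preprocessing step described above; note that OpenCV stores the channels in YCrCb order, and the Cr/Cb bounds shown are a commonly used skin range, not the paper's values.

```python
import cv2
import numpy as np

def skin_region_ycbcr(bgr, lower=(0, 133, 77), upper=(255, 173, 127)):
    """Segment the skin-coloured hand region and return it as a grey image.

    OpenCV stores the channels in YCrCb order, so the bounds are given as
    (Y, Cr, Cb); the Cr/Cb range here is a commonly quoted skin range.
    """
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, np.array(lower, np.uint8), np.array(upper, np.uint8))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return cv2.bitwise_and(gray, gray, mask=mask)   # greyscale hand region fed to the CNN
```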

8.
To address the low recognition rate of traditional gesture recognition algorithms against complex backgrounds, depth images are acquired with the Kinect depth camera and the gesture region is segmented and preprocessed; geometric features of the gesture are extracted, a concentric circle distribution histogram feature of the depth information is proposed, and the two are fused; a random forest classifier is then trained for gesture recognition. Tests on the three common gestures "rock", "scissors", and "paper" against complex backgrounds show that the proposed method has good invariance to translation, rotation, and scaling and adapts to changes in complex environments.
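A simplified sketch of the "concentric circle distribution histogram" idea over a binary hand mask, paired with a random-forest classifier; in the paper the rings are built from depth values and fused with geometric features, which is not reproduced here, and the ring count is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def concentric_ring_histogram(mask, n_rings=10):
    """Distribution of hand pixels over concentric rings centred on the hand
    centroid, normalised so the histogram is scale invariant."""
    ys, xs = np.nonzero(mask)
    r = np.hypot(ys - ys.mean(), xs - xs.mean())
    r = r / (r.max() + 1e-12)                          # radius normalised to [0, 1]
    hist, _ = np.histogram(r, bins=n_rings, range=(0.0, 1.0))
    return hist / hist.sum()

clf = RandomForestClassifier(n_estimators=100)
# clf.fit(np.stack([concentric_ring_histogram(m) for m in train_masks]), train_labels)
```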

9.
Dynamic gesture recognition, an important direction in human-computer interaction, is widely needed across many fields. Compared with static gestures, dynamic gestures vary in more complex ways, and fully extracting and describing their features is the key to accurate recognition. To address insufficient description of dynamic gesture features, the high-precision Leap Motion sensor is used to collect 3D hand coordinates, and a feature sequence containing finger pose and palm displacement is proposed to fully describe complex dynamic gestures; a long short-term memory (LSTM) network model is then used for dynamic gesture recognition. Experimental results show that the proposed method achieves 98.50% recognition accuracy on a dataset of 16 dynamic gestures, and comparison with other feature sequences shows that the proposed feature sequence describes dynamic gesture features more fully and accurately.
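A minimal Keras sketch of an LSTM classifier over such per-frame feature sequences; the sequence length and feature dimensionality are assumptions, not the paper's settings.

```python
from tensorflow.keras import layers, models

def build_dynamic_gesture_lstm(n_classes, seq_len=60, feat_dim=18):
    """LSTM classifier over per-frame feature vectors (finger pose plus palm
    displacement), with zero-padding masked out for variable-length gestures."""
    return models.Sequential([
        layers.Input(shape=(seq_len, feat_dim)),
        layers.Masking(mask_value=0.0),
        layers.LSTM(64),
        layers.Dense(n_classes, activation="softmax"),
    ])
```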

10.
Convolutional neural networks have rich feature representation and learning capabilities, but the geometric transformations their modules can model are essentially fixed. Therefore, deformable convolution kernels are introduced to improve the VGG16 architecture, and a convolutional neural network named DCVGG is built for gesture recognition. On different datasets, this deformable-convolution-based gesture recognition method takes RGB image data directly as network input. The final output achieves an average gesture recognition rate above 97%, effectively improving network performance, increasing the network's tolerance of and adaptability to diverse sample objects, and enriching its feature representation. Compared with the traditional LeNet-5 and VGG16 architectures and with traditional hand-crafted feature extraction algorithms, it performs better: it is deeper, more robust, and more accurate, can serve as a reference for recognizing gestures against complex backgrounds, and has some capacity for extension.
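A PyTorch/torchvision sketch of a generic deformable-convolution building block of the kind that can replace standard convolutions in a VGG16 backbone; this is not the DCVGG architecture itself, and all sizes are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    """Deformable-convolution block: a small regular convolution predicts the
    2*k*k sampling offsets, and DeformConv2d applies the deformable kernel."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        return torch.relu(self.deform(x, self.offset(x)))
```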

11.
To address gesture recognition in complex environments, a method that fuses depth and infrared information is proposed. First, the depth information from a Kinect camera is used for dynamic, real-time gesture segmentation; then the infrared image is fused in to restore the gesture region. This solves the low recognition rate that occurs in real-time gesture segmentation and in recognition based on the spatial distribution of the gesture when the segmented region is incomplete or a face interferes. Experiments verify that the proposed method is unaffected by ambient lighting, can recognize gestures with small inter-class differences, and is robust to rotation, scaling, and translation; for gestures with large inter-class differences the recognition rate reaches 100%.
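A rough guess at how a depth window and an infrared threshold could be fused to repair holes in the segmented hand region; the thresholds, units, and morphology choices are illustrative assumptions, not the paper's method.

```python
import cv2
import numpy as np

def segment_hand_depth_ir(depth_mm, ir, near=400, far=800, ir_thresh=60):
    """Coarse hand mask from a depth window, with bright infrared pixels near
    the hand used to fill holes left by missing depth measurements."""
    depth_mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    _, ir_mask = cv2.threshold(ir, ir_thresh, 255, cv2.THRESH_BINARY)
    near_hand = cv2.dilate(depth_mask, np.ones((15, 15), np.uint8))   # restrict IR fill to the hand area
    fused = cv2.bitwise_or(depth_mask, cv2.bitwise_and(ir_mask, near_hand))
    return cv2.morphologyEx(fused, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
```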

12.
Human-computer interactions based on hand gestures are among the most popular natural interactive modes, and they depend heavily on real-time hand gesture recognition approaches. In this paper, a simple but effective hand feature extraction method is described, and the corresponding hand gesture recognition method is proposed. First, based on a simple tortoise model, we segment the human hand images by skin color features and tags on the wrist, and normalize them to create the training dataset. Second, feature vectors are computed by drawing concentric circular scan lines (CCSL) around the center of the palm, and the linear discriminant analysis (LDA) algorithm is applied to those vectors. Last, a weighted k-nearest neighbor (W-KNN) algorithm is presented to achieve real-time hand gesture classification and recognition. Besides efficiency and effectiveness, we make sure that the whole gesture recognition system can be easily implemented and extended. Experimental results with a user-defined hand gesture dataset and a multi-projector display system show the effectiveness and efficiency of the new approach.
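A minimal sketch of a CCSL-style feature followed by LDA and a distance-weighted k-NN; the radii, sample counts, and the scikit-learn pipeline are assumptions standing in for the paper's CCSL/LDA/W-KNN implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def ccsl_features(mask, palm_center, n_circles=12, n_samples=72):
    """Sample the binary hand mask along concentric circles around the palm
    centre; the fraction of hand pixels hit on each circle is one feature."""
    h, w = mask.shape
    cy, cx = palm_center
    max_r = min(cy, cx, h - 1 - cy, w - 1 - cx)
    angles = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)
    feats = []
    for r in np.linspace(max_r / n_circles, max_r, n_circles):
        ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, h - 1)
        xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, w - 1)
        feats.append(float((mask[ys, xs] > 0).mean()))
    return np.asarray(feats)

# LDA for dimensionality reduction followed by a distance-weighted k-NN,
# standing in for the W-KNN classifier described above.
clf = make_pipeline(LinearDiscriminantAnalysis(),
                    KNeighborsClassifier(n_neighbors=5, weights="distance"))
```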

13.
Hand gestures are a natural way for human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction. The ConvNets for all groups share parameters. To learn long-term features, outputs from all ConvNets are fed into a long short-term memory (LSTM) network, by which a final classification result is predicted. The new model has been tested with two popular hand gesture datasets, namely the Jester dataset and the Nvidia dataset. Compared with other models, our model produced very competitive results. The robustness of the new model has also been proved with an augmented dataset with enhanced diversity of hand gestures.
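A Keras sketch of the shared ConvNet plus LSTM layout described above; the frame-group count, input resolution, and the 5-channel RGB plus optical-flow packing are assumptions, not the paper's exact configuration.

```python
from tensorflow.keras import layers, models

def build_two_stream_convlstm(n_classes, n_groups=8, frame_shape=(112, 112, 5)):
    """One sampled frame per group, packed as RGB plus a 2-channel optical-flow
    snapshot (5 channels); a shared ConvNet extracts per-group features and an
    LSTM aggregates them into a final classification."""
    convnet = models.Sequential([
        layers.Input(shape=frame_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.GlobalAveragePooling2D(),
    ])
    inp = layers.Input(shape=(n_groups,) + frame_shape)
    x = layers.TimeDistributed(convnet)(inp)     # ConvNet parameters shared across groups
    x = layers.LSTM(128)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out)
```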

14.
To address the strict environmental requirements of depth cameras in complex scenes, the unnaturalness of wearable devices, and the limited recognition ability and robustness of deep learning models trained on small datasets, a gesture recognition method is proposed that combines a semantic-segmentation-based deep learning model for hand segmentation with transfer-learning-based neural network recognition. The collected image dataset is first augmented by rotation at different angles, flipping, and other operations; a segmentation model is trained to segment the hand region; a convolutional neural network with transfer learning extracts gesture feature vectors more effectively; and a Softmax function classifies the gestures. Using 10 gestures performed by 4 people against different backgrounds, experimental results show that gestures can be correctly recognized against complex backgrounds.
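A minimal Keras sketch of the augmentation plus transfer-learning part of this pipeline; the backbone choice (MobileNetV2), input size, and augmentation parameters are assumptions, and the semantic segmentation stage is not shown.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augment the small dataset by rotation and flipping, then fine-tune a
# pretrained backbone on the segmented hand images.
augment = ImageDataGenerator(rotation_range=30, horizontal_flip=True,
                             vertical_flip=True, rescale=1.0 / 255)

base = MobileNetV2(include_top=False, weights="imagenet",
                   input_shape=(96, 96, 3), pooling="avg")
base.trainable = False                              # transfer learning: freeze pretrained weights
model = models.Sequential([base, layers.Dense(10, activation="softmax")])   # 10 gesture classes
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```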

15.
The work presented in this paper aims to develop a system for automatic translation of static gestures of alphabets and signs in American Sign Language. In doing so, we have used the Hough transform and neural networks trained to recognize signs. Our system does not rely on gloves or visual markings to achieve the recognition task. Instead, it deals with images of bare hands, which allows the user to interact with the system in a natural way. An image is processed and converted to a feature vector that is compared with the feature vectors of a training set of signs. The extracted features are not affected by rotation, scaling, or translation of the gesture within the image, which makes the system more flexible. The system was implemented and tested using a data set of 300 samples of hand sign images, 15 images for each sign. Experiments revealed that our system was able to recognize selected ASL signs with an accuracy of 92.3%.
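One plausible way to summarise Hough-transform output as an invariant-leaning feature vector for a sign image; the binning, Canny thresholds, and Hough threshold are assumptions, not the paper's design.

```python
import cv2
import numpy as np

def hough_orientation_histogram(gray, n_bins=18):
    """Histogram of line orientations returned by the Hough transform on the
    hand-edge image, used as a simple shape descriptor."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 40)
    hist = np.zeros(n_bins)
    if lines is not None:
        for rho, theta in lines[:, 0]:
            hist[int(theta / np.pi * n_bins) % n_bins] += 1
    return hist / (hist.sum() + 1e-12)
```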

16.
A gesture recognition method based on RGB-D data is proposed. First, a gesture segmentation algorithm that fuses depth and color information segments the gesture region; then the circularity, convex hull points, convexity defect points, and 7 Hu moments of the static gesture contour are extracted to form the feature vector; finally, an SVM performs static gesture recognition. Experimental results show that the method effectively recognizes the five predefined static gestures and adapts well to the environment.
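A minimal OpenCV sketch of the contour features listed above (circularity, convexity defects, 7 Hu moments), assuming a binary hand mask; the depth/colour fusion segmentation and the SVM are not shown, and the defect-depth threshold is an assumption.

```python
import cv2
import numpy as np

def static_gesture_features(mask):
    """Circularity, convexity-defect count and the 7 Hu moments of the hand
    contour, concatenated into one feature vector."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea)
    area, perim = cv2.contourArea(hand), cv2.arcLength(hand, True)
    circularity = 4 * np.pi * area / (perim ** 2 + 1e-12)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    # Count only defects deeper than ~10 px (defect depths are fixed-point, scaled by 256).
    n_defects = 0 if defects is None else int((defects[:, 0, 3] > 10 * 256).sum())
    hu = cv2.HuMoments(cv2.moments(hand)).flatten()
    return np.concatenate(([circularity, n_defects], hu))
```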

17.
In static gesture recognition, traditional hand-crafted feature extraction is time-consuming and labor-intensive and yields low recognition rates, while existing convolutional neural networks that rely on a single convolution kernel do not extract features fully. A dual-channel convolutional neural network model is therefore proposed. The input gesture image passes through two independent channels for feature extraction; the two channels use convolution kernels of different sizes and thus extract features at different scales from the input image. The features are fused in the fully connected layer and finally classified by a softmax classifier. Experiments on the Thomas Moeslund and Jochen Triesch gesture databases show that the model improves the accuracy of static gesture recognition and enhances the generalization ability of the convolutional neural network.
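A Keras sketch of a dual-channel CNN with different kernel sizes fused at the fully connected layer, as described above; the filter counts and the 3x3 / 5x5 kernel pair are assumptions.

```python
from tensorflow.keras import layers, models

def build_dual_channel_cnn(n_classes, input_shape=(64, 64, 1)):
    """Two independent convolutional channels with different kernel sizes,
    fused at the fully connected layer and classified with softmax."""
    inp = layers.Input(shape=input_shape)

    def channel(x, k):
        x = layers.Conv2D(32, k, activation="relu", padding="same")(x)
        x = layers.MaxPooling2D()(x)
        x = layers.Conv2D(64, k, activation="relu", padding="same")(x)
        x = layers.MaxPooling2D()(x)
        return layers.Flatten()(x)

    merged = layers.concatenate([channel(inp, 3), channel(inp, 5)])   # multi-scale feature fusion
    x = layers.Dense(256, activation="relu")(merged)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out)
```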

18.
In this work, we consider the recognition of dynamic gestures based on representative sub-segments of a gesture, which are denoted as most discriminating segments (MDSs). The automatic extraction and recognition of such small representative segments, rather than extracting and recognizing the full gestures themselves, allows for a more discriminative classifier. A MDS is a sub-segment of a gesture that is most dissimilar to all other gesture sub-segments. Gestures are classified using an MDSLCS algorithm, which recognizes the MDSs using a modified longest common subsequence (LCS) measure. The extraction of MDSs from a data stream uses adaptive window parameters, which are driven by the successive results of multiple calls to the LCS classifier. In a preprocessing stage, gestures that have large motion variations are replaced by several forms of lesser variation. We learn these forms by adaptive clustering of a training set of gestures, where we reemploy the LCS to determine similarity between gesture trajectories. The MDSLCS classifier achieved a gesture recognition rate of 92.6% when tested using a set of pre-cut free hand digit (0–9) gestures, while hidden Markov models (HMMs) achieved an accuracy of 89.5%. When the MDSLCS was tested against a set of streamed digit gestures, an accuracy of 89.6% was obtained. At present, HMMs are considered the state-of-the-art method for classifying motion trajectories. The MDSLCS algorithm had a higher accuracy rate for pre-cut gestures, and is also more suitable for streamed gestures. MDSLCS provides a significant advantage over HMMs by not requiring data re-sampling during run-time and performing well with small training sets.
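A minimal sketch of the longest common subsequence similarity at the core of MDSLCS; the MDS extraction and adaptive windowing described above are not reproduced, and the trajectory quantisation is assumed to happen upstream.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two symbol sequences
    (e.g. direction-quantised trajectory codes)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1]
                        else max(dp[i - 1][j], dp[i][j - 1]))
    return dp[m][n]

def lcs_similarity(a, b):
    """Normalised LCS similarity in [0, 1], used to compare gesture segments."""
    return lcs_length(a, b) / max(len(a), len(b), 1)
```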

19.
Hand gestures that are performed by one or two hands can be categorized according to their applications into different categories, including conversational, controlling, manipulative, and communicative gestures. Generally, hand gesture recognition aims to identify specific human gestures and use them to convey information. The process of hand gesture recognition consists mainly of four stages: hand gesture image collection; gesture image preprocessing using techniques such as edge detection, filtering, and normalization; capturing the main characteristics of the gesture images; and the evaluation (or classification) stage, where the image is classified into its corresponding gesture class. Many methods have been used in the classification stage of hand gesture recognition, such as artificial neural networks, template matching, hidden Markov models, and dynamic time warping. This exploratory survey aims to provide a progress report on hand posture and gesture recognition technology.

20.
Gesture recognition based on adaptive-subspace online PCA
Learning in vision-based gesture recognition systems is generally performed offline, so correctly recognizing new gestures requires offline relearning, which gives the system poor real-time performance, extensibility, and robustness and makes it unsuitable for a cognitive-developmental intelligent framework. This paper proposes a gesture recognition method based on adaptive-subspace online PCA. The method updates the subspace online by computing the PCA of the sample projection coefficient vectors, and adjusts the subspace update strategy according to how much a new sample differs from already learned samples, so that the algorithm adapts to different situations, reduces computation and storage overhead, and achieves incremental online learning and recognition of gestures. Experiments show that the method can handle unknown gestures, accumulates and updates gestures online, and gradually strengthens the system's recognition ability.
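A rough sketch using scikit-learn's IncrementalPCA as a generic stand-in for the adaptive-subspace online update; it illustrates incremental subspace learning, not the paper's exact update rule, and the component count is an assumption.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

# The gesture subspace is updated incrementally as new samples arrive instead
# of being retrained offline; the first batch must contain at least
# n_components samples.
subspace = IncrementalPCA(n_components=20)

def learn_online(batch_of_gesture_vectors):
    """Update the subspace with a mini-batch of flattened gesture images."""
    subspace.partial_fit(np.asarray(batch_of_gesture_vectors))

def project(gesture_vector):
    """Projection coefficients used as the recognition feature."""
    return subspace.transform(np.asarray(gesture_vector).reshape(1, -1))[0]
```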
