Similar Literature
20 similar documents found (search time: 31 ms)
1.
In this paper, we propose a new method for recognizing hand gestures in a continuous video stream using a dynamic Bayesian network (DBN) model. DBN-based inference is preceded by skin extraction and modelling and by motion tracking. We then develop gesture models for one- and two-hand gestures, which are used to define a cyclic gesture network for modeling a continuous gesture stream, and a DP-based real-time decoding algorithm for continuous gesture recognition. In experiments with 10 isolated gestures, we obtained a cross-validated recognition rate of 99.59%. For the continuous gesture stream, the method achieved a recognition rate of 84% with a precision of 80.77% for the spotted gestures. The proposed DBN-based hand gesture model and gesture network design show strong potential for related problems such as sign language recognition, although that task is more complicated and requires analysis of hand shapes.
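The pipeline above begins with skin extraction and modelling before DBN inference. The snippet below is a minimal, generic sketch of skin-region extraction in YCrCb space with OpenCV; the channel thresholds are common illustrative values, not the paper's skin model.

```python
import cv2
import numpy as np

def extract_skin_mask(bgr_frame):
    """Return a binary mask of likely skin pixels in a BGR frame.

    The Cr/Cb thresholds are illustrative defaults, not the values used in the paper.
    """
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds (assumed)
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds (assumed)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological opening removes small speckles before motion tracking.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```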

2.
This paper presents a novel technique for hand gesture recognition in human–computer interaction based on shape analysis. The main objective of this effort is to explore the utility of a neural network-based approach to the recognition of hand gestures. A multi-layer perceptron neural network is built for classification using the back-propagation learning algorithm. The goal of static hand gesture recognition is to classify the given hand gesture data, represented by some features, into a predefined finite number of gesture classes. The proposed system presents a recognition algorithm for a set of six specific static hand gestures, namely: Open, Close, Cut, Paste, Maximize, and Minimize. The hand gesture image is passed through three stages: preprocessing, feature extraction, and classification. In the preprocessing stage, operations are applied to separate the hand gesture from its background and prepare the image for feature extraction. In the first method, the hand contour is used as a feature, which handles scaling and translation problems (in some cases). The complex moment algorithm, however, is used to describe the hand gesture and to handle the rotation problem in addition to scaling and translation. The features are fed to a multi-layer neural network classifier trained with the back-propagation learning algorithm. The results show that the first method achieves a recognition rate of 70.83%, while the second method, proposed in this article, achieves a better recognition rate of 86.38%.
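To make the classification stage concrete, the sketch below trains a multi-layer perceptron with back-propagation on placeholder feature vectors using scikit-learn; the feature dimensionality, hidden-layer size, and toy data are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data: 600 feature vectors (e.g. complex-moment descriptors)
# for the six classes Open, Close, Cut, Paste, Maximize, Minimize.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(600, 10))      # 10-D features (assumed)
y_train = rng.integers(0, 6, size=600)    # labels 0..5

# One hidden layer; solver="sgd" performs plain back-propagation with gradient descent.
mlp = MLPClassifier(hidden_layer_sizes=(32,), solver="sgd",
                    learning_rate_init=0.01, max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print(mlp.predict(X_train[:5]))
```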

3.
To address the low recognition rate of traditional gesture recognition algorithms against complex backgrounds, depth images are acquired with the Kinect depth camera, the hand region is segmented and preprocessed, geometric features of the gesture are extracted, a concentric-circle distribution histogram feature of the depth information is proposed, and the two feature types are fused. A random forest classifier is then trained for gesture recognition. Tests on the three common gestures "rock", "scissors", and "paper" under complex background conditions show that the proposed method has good translation, rotation, and scale invariance and adapts well to changes in complex environments.
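One plausible reading of the "concentric-circle distribution histogram" is to bin hand pixels by their distance from the palm centre and normalize; the sketch below pairs such a feature with a scikit-learn random forest. The bin count, palm-centre input, and toy training data are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def concentric_histogram(mask, center, n_rings=8):
    """Histogram of hand pixels falling into concentric rings around the palm centre."""
    ys, xs = np.nonzero(mask)                     # coordinates of hand pixels
    d = np.hypot(xs - center[0], ys - center[1])  # distance of each pixel to the centre
    if d.size == 0:
        return np.zeros(n_rings)
    hist, _ = np.histogram(d, bins=n_rings, range=(0.0, d.max() + 1e-6))
    return hist / hist.sum()                      # normalise for scale invariance

mask = np.zeros((64, 64), dtype=bool)
mask[20:44, 20:44] = True                         # toy "hand" region
print(concentric_histogram(mask, center=(32, 32)))

# Stand-in features/labels; the real ones would be fused geometric + histogram features.
rng = np.random.default_rng(0)
X = rng.random((90, 8))
y = rng.integers(0, 3, size=90)                   # 0 = rock, 1 = scissors, 2 = paper
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```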

4.
For gesture recognition in complex environments, a method that fuses depth and infrared information is proposed. The depth information from the Kinect camera is first used for dynamic, real-time hand segmentation, and the infrared image is then fused in to restore the hand region. This resolves the low recognition rates that occur, when real-time segmentation and spatial-distribution features are used, because the segmented hand region is incomplete or a face interferes. Experiments show that the proposed method is unaffected by ambient lighting, can recognize gestures with low inter-class separability, and is robust to rotation, scaling, and translation. For highly distinguishable gestures, the recognition rate reaches 100%.

5.
A gesture recognition method based on RGB-D data is proposed. First, a segmentation algorithm that fuses depth and color information extracts the hand region. Next, the circularity, convex hull points, convexity defect points, and seven Hu moments of the static hand contour are extracted to form a feature vector. Finally, an SVM performs static gesture recognition. Experimental results show that the method effectively recognizes five predefined static gestures and adapts well to the environment.
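To illustrate the feature stage, the sketch below computes circularity and the seven Hu moments of a hand contour with OpenCV; the convexity-defect features mentioned above are omitted, and the SVM call is left as a placeholder trained on the segmented data set.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def contour_features(binary_hand):
    """Circularity + 7 log-scaled Hu moments for the largest contour in a binary hand image."""
    contours, _ = cv2.findContours(binary_hand, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    circularity = 4.0 * np.pi * area / (perimeter ** 2 + 1e-9)
    hu = cv2.HuMoments(cv2.moments(c)).flatten()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)   # log scale so magnitudes are comparable
    return np.concatenate(([circularity], hu))          # 8-D feature vector

img = np.zeros((100, 100), dtype=np.uint8)
cv2.circle(img, (50, 50), 30, 255, -1)                  # toy segmented hand mask
print(contour_features(img))
# clf = SVC(kernel="rbf").fit(features, labels)         # trained on the segmented RGB-D images
```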

6.
A human-machine cooperative control system is designed in which an array-type surface EMG sensor worn by the subject recognizes eight hand gestures in real time and drives a self-developed six-degree-of-freedom dexterous prosthetic hand to perform synchronized motions. The gesture recognition strategy for controlling the prosthetic hand is based on a neural network: the subject only needs to repeat eight predefined gestures (relax, outward wrist rotation, inward wrist rotation, fist, open palm, the "2" gesture, the "3" gesture, and thumbs-up) during an initial training phase, after which the system can recognize, in real time, any of the eight gestures performed at random. Compared with neural networks of the same scale, the proposed random search of network parameters combined with gradient descent improves training speed and prediction accuracy. The recognition algorithm learns its weights with the TensorFlow machine learning framework, and the results are analyzed visually. An optimized gesture training protocol shortens the subject's training time while improving training proficiency. Surface EMG acquisition, training, and prediction were carried out on one subject with intact musculature: the overall prediction accuracy over the eight gestures reached 97%, and no retraining was needed when the sensor was worn again. When the subject actually controlled the prosthetic hand, a voting algorithm was applied to refine the real-time predictions, and the final motion synchronization rate of the prosthetic hand reached 99%.
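The final stage described above refines real-time predictions with a voting algorithm before driving the prosthetic hand. Below is a minimal sketch of one common form, a sliding-window majority vote; the window length and gesture labels are assumptions, not the paper's parameters.

```python
from collections import Counter, deque

class MajorityVoteSmoother:
    """Smooth a stream of per-window gesture predictions by majority vote."""

    def __init__(self, window=5):
        self.buffer = deque(maxlen=window)   # window length is an assumed value

    def update(self, predicted_label):
        self.buffer.append(predicted_label)
        label, _count = Counter(self.buffer).most_common(1)[0]
        return label                          # gesture actually sent to the prosthetic hand

smoother = MajorityVoteSmoother(window=5)
for raw in ["fist", "fist", "relax", "fist", "open_palm", "fist"]:
    print(smoother.update(raw))
```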

7.
As an important mode of human-computer interaction, hand gesture interaction and recognition has become a research focus in computer graphics, virtual reality, and human-computer interaction because of its high degrees of freedom. Traditional recognition methods that directly extract hand contours or hand joint positions usually produce features that cannot accurately distinguish different gestures. To address the high degrees of freedom of different gestures and the inaccurate feature representations caused by low image resolution, cluttered backgrounds, occlusion of the hand, varying finger shapes and sizes, and individual differences, this paper proposes a new gesture feature representation and recognition method that fuses joint rotation features and fingertip distance features. First, using a hand template and treating the hand as a linked-segment structure, the 3D positions of 20 hand joints are extracted from the gesture depth map. Quaternion joint rotation features and fingertip distance features are then derived from these joint positions, forming an intrinsic representation of the gesture. Finally, a one-versus-one support vector machine classifies the gestures. Beyond proposing this fused feature representation and extraction method, the paper proves theoretically that the representation uniquely characterizes the joint positions of a gesture, and adopts a one-versus-one SVM multi-class strategy for classification. Experiments on 8 classes of Chinese number gestures and 21 classes of American letter gestures from the ASTAR static gesture depth dataset yield classification accuracies of 99.71% and 85.24%, respectively. The results show that the fused joint-rotation and fingertip-distance features represent the geometric characteristics of different gestures well and accurately characterize static gestures for recognition.
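As a rough sketch of how quaternion joint-rotation and fingertip-distance features might be derived from 3D joint positions and fed to a one-versus-one SVM: the joint layout, the single-chain bone ordering, and the degenerate-rotation handling below are simplifying assumptions and do not reproduce the paper's 20-joint hand template.

```python
import numpy as np
from sklearn.svm import SVC

def quat_between(u, v):
    """Unit quaternion (x, y, z, w) rotating unit vector u onto unit vector v."""
    xyz = np.cross(u, v)
    w = 1.0 + float(np.dot(u, v))    # antiparallel degenerate case ignored in this sketch
    q = np.array([xyz[0], xyz[1], xyz[2], w])
    return q / np.linalg.norm(q)

def gesture_features(joints):
    """joints: (J, 3) array of 3D joint positions; joints[0] is assumed to be the palm."""
    palm = joints[0]
    bones = np.diff(joints, axis=0)                        # successive bone vectors
    bones /= np.linalg.norm(bones, axis=1, keepdims=True)
    rot = [quat_between(bones[i], bones[i + 1]) for i in range(len(bones) - 1)]
    tip_dist = np.linalg.norm(joints[1:] - palm, axis=1)   # joint-to-palm distances
    return np.concatenate([np.ravel(rot), tip_dist])

joints = np.random.default_rng(0).normal(size=(21, 3))    # stand-in skeleton
print(gesture_features(joints).shape)
# One-versus-one multi-class SVM, as named in the abstract:
# clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y)
```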

8.
谢小雨  刘喆颉 《计算机应用》2017,37(9):2700-2704
To increase the variety and ease of gesture recognition, a method for recognizing dynamic gestures based on the fusion of electromyography (EMG) and acceleration (ACC) information is proposed. First, a MYO sensor collects the EMG and ACC signals of the gesture; the ACC and EMG signals are then separately preprocessed and reduced in feature dimensionality. Finally, to reduce the number of training samples, a collaborative sparse representation classifier recognizes orientation gestures from the ACC signals, while dynamic time warping (DTW) combined with a k-nearest neighbor (KNN) classifier classifies hand-shape gestures from the EMG signals. When recognizing the ACC orientation signals with the collaborative sparse representation classifier, the optimal number of samples for building the dictionary and the reduced feature dimensionality are studied to lower the complexity of recognition. Experimental results show an average recognition rate of 99.17% for hand-shape gestures and 96.88% for the four orientation gestures (up, down, left, right), with fast computation; over all 12 dynamic gestures, the average recognition rate is 96.11%. The method thus recognizes dynamic gestures with high accuracy and fast computation.
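A minimal sketch of the DTW-plus-KNN branch used for the EMG hand-shape gestures; the toy template sequences, labels, and k value are placeholders, and the collaborative sparse representation branch is not shown.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of samples or feature vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.atleast_1d(a[i - 1] - b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_dtw_predict(query, templates, labels, k=1):
    """Classify a query EMG sequence by the labels of its k DTW-nearest templates."""
    dists = [dtw_distance(query, t) for t in templates]
    nearest = np.argsort(dists)[:k]
    vals, counts = np.unique([labels[i] for i in nearest], return_counts=True)
    return vals[np.argmax(counts)]

templates = [np.sin(np.linspace(0, 3, 40)), np.cos(np.linspace(0, 3, 50))]
print(knn_dtw_predict(np.sin(np.linspace(0, 3, 45)), templates, ["wave", "flex"]))
```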

9.
10.
谈家谱  徐文胜 《计算机应用》2015,35(6):1795-1800
To address the difficulty and low accuracy of recognizing bent fingertips in video, a gesture recognition method based on depth, skeleton, and color information is proposed. The depth and skeleton data from the Kinect camera are first used to quickly locate the approximate hand region in the color image, where a YCrCb skin-color model segments the hand. The distances from the hand contour points to the palm center are then computed to generate a distance curve, and a peak-to-valley ratio threshold on this curve determines the fingertip points. Finally, the bent-fingertip features are combined with the maximum inner contour area feature to recognize 12 common gestures. In the evaluation, six participants tested the method under relatively stable lighting, with each gesture performed 120 times; the average recognition rate over the 12 gestures reached 97.92%. The results show that the method locates the hand quickly and recognizes the 12 common gestures accurately, with a high recognition rate.
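The fingertip test described above can be illustrated with a contour-to-palm distance curve whose peaks and valleys are compared by a ratio threshold; the threshold and the minimum peak spacing below are assumed values, not the paper's parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def fingertip_count(contour, palm_center, ratio_threshold=1.6):
    """Count fingertips from the contour-to-palm distance curve.

    A contour point is accepted as a fingertip when its peak distance exceeds
    the mean valley distance by the given ratio (threshold value assumed).
    """
    contour = np.asarray(contour, dtype=float)   # (N, 2) contour points
    dist = np.linalg.norm(contour - np.asarray(palm_center, dtype=float), axis=1)
    peaks, _ = find_peaks(dist, distance=10)     # candidate fingertips
    valleys, _ = find_peaks(-dist, distance=10)  # gaps between fingers
    if len(valleys) == 0:
        return 0
    valley_mean = dist[valleys].mean()
    return int(np.sum(dist[peaks] / valley_mean > ratio_threshold))
```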

11.
With the development of mobile devices such as smartphones, gesture recognition based on MEMS inertial sensors on embedded platforms has become a research focus. A simple and effective method is proposed: by analyzing the kinematic characteristics of gestures, acceleration and angular velocity features are extracted online in real time, the gesture signal segment is clipped, a decision tree classifier performs pre-classification, and the specific gesture is then identified in real time from the variation pattern of the signal. The method achieved an average accuracy of 96% over 20 participants, with a recognition time below 0.01 s. The results show that the algorithm recognizes gestures quickly and accurately on embedded platforms and meets the requirements of real-time human-computer interaction.
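A small sketch of the pre-classification step: simple statistics of a clipped acceleration/angular-velocity segment fed to a decision tree. The feature set and the toy data are assumptions standing in for the paper's kinematic features.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def segment_features(acc, gyro):
    """Simple statistics of one clipped gesture segment (acc, gyro: (N, 3) arrays)."""
    return np.concatenate([acc.mean(0), acc.std(0), gyro.mean(0), gyro.std(0),
                           [np.abs(acc).max(), np.abs(gyro).max()]])

# Toy training data standing in for segments recorded on the embedded device.
rng = np.random.default_rng(0)
X = np.array([segment_features(rng.normal(size=(50, 3)), rng.normal(size=(50, 3)))
              for _ in range(60)])
y = rng.integers(0, 4, size=60)                  # four example gesture classes
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```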

12.
Gesture Recognition Based on Adaptive-Subspace Online PCA
Learning in vision-based gesture recognition systems is usually offline, so correctly recognizing a new gesture requires relearning offline; the systems therefore have poor real-time performance, extensibility, and robustness, and are ill suited to an intelligent framework for cognitive development. This paper proposes a gesture recognition method based on adaptive-subspace online PCA. The subspace is updated online by computing PCA on the projection coefficient vectors of the samples, and the update strategy is adjusted according to how much a new sample differs from previously learned samples, so that the algorithm adapts to different situations, reduces computation and storage, and achieves incremental online learning and recognition of gestures. Experiments show that the method can handle unknown gestures, accumulate and update gestures online, and gradually strengthen the system's recognition ability.
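One way to realize incremental subspace updates of the kind described is scikit-learn's IncrementalPCA, with a reconstruction-error test deciding when new samples should trigger an update. The threshold, dimensions, and buffering policy below are assumptions, not the paper's adaptive strategy.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
ipca = IncrementalPCA(n_components=8)

# Initial batches of flattened gesture images (64-D) learned online.
for _ in range(10):
    ipca.partial_fit(rng.normal(size=(20, 64)))

def novelty(sample):
    """Reconstruction error in the current subspace; a large value suggests a new gesture."""
    coeff = ipca.transform(sample.reshape(1, -1))
    recon = ipca.inverse_transform(coeff)
    return float(np.linalg.norm(sample - recon))

buffer = []
x = rng.normal(size=64)
if novelty(x) > 7.0:                  # threshold is an assumed placeholder
    buffer.append(x)
if len(buffer) >= 8:                  # partial_fit needs at least n_components samples
    ipca.partial_fit(np.array(buffer))
    buffer.clear()
```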

13.
To address the difficulty of recognizing gestures against complex backgrounds, where environmental interference degrades recognition, an appearance-feature-based gesture recognition algorithm suitable for natural human-computer interaction is proposed and implemented. The algorithm segments the hand region from Kinect depth images, extracts rotation- and scale-invariant appearance features such as finger arc, inter-finger arc, and finger count, and classifies quickly with a minimum-distance method. The algorithm was successfully applied to a three-finger dexterous hand platform in the laboratory with satisfactory control results. Experiments show good robustness, with an average recognition rate of 94.3% over nine common gestures.
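A minimal sketch of the minimum-distance (nearest-class-mean) classification step over appearance features; the toy data stands in for the finger-arc, inter-finger-arc, and finger-count features.

```python
import numpy as np

class MinimumDistanceClassifier:
    """Assign each sample to the class whose feature mean is nearest."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 3)) + np.repeat(np.arange(3), 30)[:, None]  # 3 separable classes
y = np.repeat(np.arange(3), 30)
print(MinimumDistanceClassifier().fit(X, y).predict(X[:5]))
```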

14.
A Mouse Terminal Based on Gesture Recognition Algorithms
A recognition algorithm based on static and dynamic gestures is proposed and combined with the mouse functions of the Windows API to implement mouse operations. First, image processing converts the raw camera image into a reliable binary image. Second, a static gesture recognition algorithm counts the extended fingers, and according to the finger count the Windows API mouse functions implement double-click and cursor movement. Finally, when five fingers are detected, a dynamic gesture recognition algorithm identifies the four directions (up, down, left, right) and, via the Windows API mouse functions, simulates left/right button press and release and wheel scrolling. Experiments show that the gesture recognition algorithm achieves a recognition rate of 94.11%. For development platforms without a mouse, or where using a mouse is inconvenient, replacing mouse input with gestures has practical research value.
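A rough, Windows-only sketch of how a recognized gesture might be mapped onto the Windows API mouse functions the abstract mentions; the gesture-to-action mapping below is an illustrative assumption, not the paper's design.

```python
import ctypes

user32 = ctypes.windll.user32                      # Windows only
MOUSEEVENTF_LEFTDOWN, MOUSEEVENTF_LEFTUP = 0x0002, 0x0004

def move_cursor(x, y):
    user32.SetCursorPos(int(x), int(y))            # absolute screen coordinates

def left_click():
    user32.mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0)
    user32.mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0)

def handle_gesture(finger_count, hand_xy):
    """Illustrative mapping from a recognized static gesture to a mouse action."""
    if finger_count == 1:
        move_cursor(*hand_xy)                      # one finger: move the cursor
    elif finger_count == 2:
        left_click()                               # two fingers: double click
        left_click()
```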

15.
Gesture recognition plays a significant role in areas such as human-computer interaction, sign language, virtual reality, and machine vision. Among the various gestures of the human body, hand gestures are the principal means of communicating nonverbally with the computer. Because a hand gesture is a pattern that is continuous in time, the hidden Markov model (HMM) is found to be the most suitable pattern recognition tool and can be modeled from the hand gesture parameters. The HMM takes speeded-up robust features (SURF) of the hand gesture and uses them to train and test the system. Conventionally, the Viterbi algorithm has been used in HMM training by discovering the shortest decoded path in the state diagram, but its recursiveness leads to computational complexity during execution. To reduce this complexity, a state sequence analysis approach is proposed for training the hand gesture model, which provides a better recognition rate and accuracy than the Viterbi algorithm. The performance of the proposed approach is evaluated on the Cambridge hand gesture data set.
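For reference, a compact log-domain Viterbi decoder of the kind the abstract contrasts with state sequence analysis; the toy initial, transition, and emission probabilities are placeholders.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable HMM state path for an observation sequence (log domain).

    pi: (S,) initial probs, A: (S, S) transitions, B: (S, O) emissions,
    obs: list of observation symbol indices.
    """
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + logA            # (prev_state, state) path scores
        back.append(np.argmax(scores, axis=0))
        delta = np.max(scores, axis=0) + logB[:, o]
    path = [int(np.argmax(delta))]
    for bp in reversed(back):                     # backtrack the best path
        path.append(int(bp[path[-1]]))
    return list(reversed(path))

A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 0, 1, 1], np.array([0.5, 0.5]), A, B))
```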

16.
Multi-Touch Gesture Recognition Based on Petri Nets and BPNN
To solve the gesture recognition problem in multi-touch technology, a framework for describing and recognizing multi-touch gestures is proposed, together with its description and recognition methods. Multi-touch gestures are divided into atomic gestures and composite gestures. In the description stage, atomic gestures are modeled with a BP neural network, the user's intent is mapped to composite gestures formed by associating atomic gestures through logical, temporal, and spatial relations, and logical, temporal, and spatial relation descriptors are introduced into a Petri net to describe the composite gestures. In the recognition stage, the BP network classifier detects atomic gestures, which trigger transitions in the composite-gesture Petri net model and thereby recognize composite gestures. Experimental results show that the method is robust to different users' operating habits and effectively solves the multi-touch gesture recognition problem.

17.
Existing gesture segmentation methods use a backward spotting scheme that first detects the end point, then traces back to the start point and sends the extracted gesture segment to the hidden Markov model (HMM) for recognition. This introduces an inevitable delay between gesture segmentation and recognition and is not appropriate for continuous gesture recognition. To solve this problem, we propose a forward spotting scheme that executes gesture segmentation and recognition simultaneously. The start and end points of gestures are determined by zero crossings from negative to positive (or from positive to negative) of a competitive differential observation probability, defined as the difference in observation probability between the maximal gesture and the non-gesture. We also propose sliding window and accumulative HMMs. The former alleviates the effect of incomplete feature extraction on the observation probability, and the latter greatly improves the gesture recognition rate by accepting all accumulated gesture segments between the start and end points and deciding the gesture type by a majority vote of all intermediate recognition results. We use a predetermined association mapping to determine the 3D articulation data, which greatly reduces the feature extraction time. We apply the proposed simultaneous gesture segmentation and recognition method to upper-body gestures for controlling the curtains and lights in a smart home environment. Experimental results show a good recognition rate of 95.42% for continuously changing gestures.
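The start/end spotting rule above can be illustrated by detecting sign changes of the competitive differential observation probability; the toy trace below is an assumed example, not data from the paper.

```python
import numpy as np

def spot_boundaries(cdop):
    """Indices where the competitive differential observation probability crosses zero:
    negative-to-positive marks a gesture start, positive-to-negative marks an end."""
    cdop = np.asarray(cdop, dtype=float)
    sign = np.sign(cdop)
    starts = np.where((sign[:-1] < 0) & (sign[1:] > 0))[0] + 1
    ends = np.where((sign[:-1] > 0) & (sign[1:] < 0))[0] + 1
    return starts, ends

# Toy trace: the gesture likelihood rises above the non-gesture model, then falls.
trace = [-0.4, -0.2, 0.1, 0.5, 0.3, -0.1, -0.3]
print(spot_boundaries(trace))   # (array([2]), array([5]))
```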

18.
In this work, we consider the recognition of dynamic gestures based on representative sub-segments of a gesture, denoted as most discriminating segments (MDSs). Automatically extracting and recognizing these small representative segments, rather than the full gestures themselves, allows for a more discriminative classifier. An MDS is a sub-segment of a gesture that is most dissimilar to all other gesture sub-segments. Gestures are classified using the MDSLCS algorithm, which recognizes MDSs with a modified longest common subsequence (LCS) measure. The extraction of MDSs from a data stream uses adaptive window parameters driven by the successive results of multiple calls to the LCS classifier. In a preprocessing stage, gestures that have large motion variations are replaced by several forms of lesser variation. We learn these forms by adaptive clustering of a training set of gestures, where we reemploy the LCS to determine similarity between gesture trajectories. The MDSLCS classifier achieved a gesture recognition rate of 92.6% when tested on a set of pre-cut free-hand digit (0-9) gestures, while hidden Markov models (HMMs) achieved an accuracy of 89.5%. When MDSLCS was tested on a set of streamed digit gestures, an accuracy of 89.6% was obtained. At present, HMMs are considered the state-of-the-art method for classifying motion trajectories. The MDSLCS algorithm had a higher accuracy rate for pre-cut gestures and is also more suitable for streamed gestures. MDSLCS provides a significant advantage over HMMs by not requiring data re-sampling at run time and by performing well with small training sets.
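A minimal sketch of the LCS measure underlying MDSLCS, applied to quantized trajectory symbols; the normalization by the shorter sequence length is one common choice and an assumption here, not the paper's modified measure.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two symbol sequences
    (e.g. quantized direction codes of a gesture trajectory)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def lcs_similarity(a, b):
    """Normalized LCS similarity in [0, 1]; 1 means one sequence contains the other."""
    return lcs_length(a, b) / max(1, min(len(a), len(b)))

print(lcs_similarity("RRDDLLUU", "RRDDDLLU"))
```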

19.
Within the context of hand gesture recognition, spatiotemporal gesture segmentation is the task of determining, in a video sequence, where the gesturing hand is located and when the gesture starts and ends. Existing gesture recognition methods typically assume either known spatial segmentation or known temporal segmentation, or both. This paper introduces a unified framework for simultaneously performing spatial segmentation, temporal segmentation, and recognition. In the proposed framework, information flows both bottom-up and top-down. A gesture can be recognized even when the hand location is highly ambiguous and when information about when the gesture begins and ends is unavailable. Thus, the method can be applied to continuous image streams where gestures are performed in front of moving, cluttered backgrounds. The proposed method consists of three novel contributions: a spatiotemporal matching algorithm that can accommodate multiple candidate hand detections in every frame, a classifier-based pruning framework that enables accurate and early rejection of poor matches to gesture models, and a subgesture reasoning algorithm that learns which gesture models can falsely match parts of other longer gestures. The performance of the approach is evaluated on two challenging applications: recognition of hand-signed digits gestured by users wearing short-sleeved shirts, in front of a cluttered background, and retrieval of occurrences of signs of interest in a video database containing continuous, unsegmented signing in American Sign Language (ASL).

20.
The main objective of this study is to explore the utility of a neural network-based approach to hand gesture recognition. The proposed system presents two recognition algorithms for a set of six specific static hand gestures, namely open, close, cut, paste, maximize, and minimize. The hand gesture image is passed through three stages: preprocessing, feature extraction, and classification. In the first method, the hand contour is used as a feature, which handles scaling and translation problems (in some cases). The complex moment algorithm, however, is used to describe the hand gesture and to handle the rotation problem in addition to scaling and translation. The back-propagation learning algorithm is employed in the multilayer neural network classifier. The second method proposed in this article achieves a better recognition rate than the first.
