20 similar documents retrieved; search time: 765 ms.
5.
To address the difficulty and low accuracy of video-based recognition of bent fingertips, a gesture recognition method based on depth, skeleton, and color information is proposed. The method first uses the depth and skeleton information from a Kinect camera to quickly locate the region of the color image containing the hand, and segments the hand region there with a YCrCb skin-color model. It then computes the distance from each hand-contour point to the palm center to form a distance curve, and identifies fingertip points by thresholding the ratio between the curve's peaks and valleys. Finally, twelve common gestures are recognized by combining the bent-fingertip features with the maximum inner-contour area feature. In the validation stage, six subjects tested the method under relatively stable lighting, performing each gesture 120 times; the average recognition rate over the twelve gestures reached 97.92%. The results show that the method localizes the hand quickly and recognizes the twelve common gestures accurately.
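The peak/valley ratio test on the contour-to-palm distance curve can be sketched as follows. This is a minimal illustration, not the paper's implementation: the contour is assumed to be an ordered, circular `(N, 2)` point array, and the threshold `ratio_thresh` is a hypothetical parameter.

```python
import numpy as np

def fingertip_candidates(contour, palm, ratio_thresh=1.6):
    """Fingertip detection from the contour-to-palm distance curve.

    contour: ordered (N, 2) array of hand-contour points (circular).
    palm:    (2,) palm-centre point.
    A local peak of the distance curve is kept as a fingertip when it is
    at least `ratio_thresh` times the nearest valley (peak/valley ratio).
    """
    d = np.linalg.norm(contour - palm, axis=1)  # distance curve
    n = len(d)
    peaks = [i for i in range(n)
             if d[i] > d[(i - 1) % n] and d[i] > d[(i + 1) % n]]
    valleys = [i for i in range(n)
               if d[i] < d[(i - 1) % n] and d[i] < d[(i + 1) % n]]
    if not valleys:
        return peaks  # degenerate curve: every peak counts
    tips = []
    for p in peaks:
        # nearest valley along the circular contour index
        v = min(valleys, key=lambda i: min(abs(i - p), n - abs(i - p)))
        if d[p] / max(d[v], 1e-9) >= ratio_thresh:
            tips.append(p)
    return tips
```

A tall spike in the distance curve (an extended finger) passes the ratio test, while a shallow bump (a bent knuckle) does not.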
6.
Aditya Ramamoorthy, Subhashis Banerjee. Pattern Recognition, 2003, 36(9): 2069-2081
This paper is concerned with the problem of recognizing dynamic hand gestures. We have considered gestures which are sequences of distinct hand poses. In these gestures hand poses can undergo motion and discrete changes, but continuous deformations of the hand shape are not permitted. We have developed a recognition engine which can reliably recognize these gestures despite individual variations. The engine also detects the start and end of gesture sequences automatically. The recognition strategy uses a combination of static shape recognition (performed using contour discriminant analysis), Kalman filter based hand tracking, and an HMM-based temporal characterization scheme. The system is fairly robust to background clutter and uses skin color for static shape recognition and tracking. A real-time implementation on standard hardware has been developed. Experimental results establish the effectiveness of the approach.
7.
As an important mode of human-computer interaction, gesture interaction and recognition have become research hotspots in computer graphics, virtual reality, and human-computer interaction owing to the high degrees of freedom of the hand. Traditional methods that directly extract hand contours or hand-joint positions usually yield features that poorly discriminate between gestures. To handle the high degrees of freedom of different gestures, and the inaccurate feature representation caused by low image resolution, cluttered backgrounds, occlusion, varying finger shapes and sizes, and individual differences, this paper proposes a new gesture feature representation and recognition method that fuses joint-rotation features with fingertip-distance features. First, treating the hand as a linked-segment structure, the 3D positions of 20 hand joints are extracted from the depth image using a hand template. Quaternion joint-rotation features and fingertip-distance features are then computed from these joint positions, forming an intrinsic representation of the gesture. Finally, gestures are classified with one-versus-one support vector machines. Besides proposing the fused feature representation, the paper proves theoretically that this representation uniquely characterizes the joint positions of a gesture, and adopts a one-versus-one SVM multi-class strategy for recognition. Experiments on the ASTAR static-gesture depth-image datasets, covering 8 Chinese number gestures and 21 American alphabet gestures, achieve recognition accuracies of 99.71% and 85.24%, respectively. The results show that the fused joint-rotation and fingertip-distance features represent the geometry of different gestures well and support accurate static gesture recognition.
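The two feature families can be sketched as below: a unit quaternion rotating one bone direction onto the next, plus pairwise fingertip distances. This is an illustrative construction under assumed conventions, not the paper's exact formulation; the antiparallel-vector case is ignored for brevity.

```python
import numpy as np

def rotation_quaternion(u, v):
    """Unit quaternion (w, x, y, z) rotating direction u onto direction v.
    Antiparallel u, v would need special handling, omitted here."""
    u = np.asarray(u, float); u = u / np.linalg.norm(u)
    v = np.asarray(v, float); v = v / np.linalg.norm(v)
    q = np.concatenate([[1.0 + np.dot(u, v)], np.cross(u, v)])
    return q / np.linalg.norm(q)

def fingertip_distances(tips):
    """Pairwise distances between fingertip joints: (5, 3) -> 10 values."""
    tips = np.asarray(tips, float)
    i, j = np.triu_indices(len(tips), k=1)
    return np.linalg.norm(tips[i] - tips[j], axis=1)
```

Concatenating the per-bone quaternions and the fingertip-distance vector would give one fixed-length feature vector per gesture, suitable as SVM input.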
8.
For gesture recognition in complex environments, a method fusing depth and infrared information is proposed. The depth information from a Kinect camera is first used for dynamic, real-time hand segmentation, and the infrared image is then fused in to restore the hand region. This resolves the low recognition rates that occur, when recognizing gestures from their spatial distribution features, whenever the segmented hand region is incomplete or a face interferes. Experiments show that the method is unaffected by ambient lighting, can distinguish gestures with small inter-class differences, and is robust to rotation, scaling, and translation; for well-separated gestures, the recognition rate reaches 100%.
9.
A novel approach is proposed for the recognition of moving hand gestures based on the representation of hand motions as contour-based similarity images (CBSIs). The CBSI is constructed by calculating the similarity between hand contours in different frames. The input CBSI is then matched with CBSIs in the database to recognize the hand gesture. The proposed continuous hand gesture recognition algorithm can simultaneously divide continuous gestures into disjointed gestures and recognize them. No restrictive assumptions are made about the motion of the hand between the disjointed gestures. The proposed algorithm was tested using hand gestures from American Sign Language, and the results showed a recognition rate of 91.3% for disjointed gestures and 90.4% for continuous gestures. The experimental results illustrate the efficiency of the algorithm on noisy videos.
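The similarity-image construction can be sketched as a frame-by-frame similarity matrix. The similarity measure below (inverse of the mean point-to-point distance over equal-length, index-aligned contours) is an assumption for illustration; the paper's actual measure may differ.

```python
import numpy as np

def cbsi(contours):
    """Contour-based similarity image: S[i, j] scores the similarity of
    the hand contours in frames i and j. Contours are assumed resampled
    to the same length and index-aligned."""
    n = len(contours)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d = np.mean(np.linalg.norm(contours[i] - contours[j], axis=1))
            S[i, j] = 1.0 / (1.0 + d)  # identical contours -> 1.0
    return S
```

Blocks of high similarity along the diagonal of S would correspond to frames sharing one hand pose, which is what makes the matrix usable both for segmenting and for matching gestures.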
10.
To overcome the limitations of a single feature in describing hand gestures, a recognition method, Hu-GLCM, combining improved Hu moments with the grey-level co-occurrence matrix (GLCM) is proposed. A skin-color model first segments the hand region from the captured image. Mathematical morphology and polygon approximation then extract a singly connected hand contour, from which the improved Hu-GLCM algorithm extracts geometric-shape and texture features to build a template database. Finally, gesture images are recognized and classified with an extended Canberra distance. Experiments show that the improved algorithm achieves an average recognition rate above 95% on seven gestures and is fast enough for real-time use.
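The template-matching step can be sketched as nearest-template classification under a Canberra distance. The standard Canberra form is used here as a stand-in for the paper's extended variant, and the feature vectors and labels are hypothetical.

```python
import numpy as np

def canberra(x, y, eps=1e-12):
    """Standard Canberra distance between two feature vectors
    (e.g. concatenated Hu moments and GLCM statistics)."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    return float(np.sum(np.abs(x - y) / (np.abs(x) + np.abs(y) + eps)))

def classify(feature, templates):
    """Return the label of the template nearest to `feature`."""
    return min(templates, key=lambda label: canberra(feature, templates[label]))
```

The per-component normalization in the Canberra distance keeps small-magnitude Hu moments from being swamped by large-magnitude texture statistics, which is one plausible reason for choosing it over a plain Euclidean distance.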
11.
Gesture Recognition Based on Fourier Descriptors Against Complex Backgrounds
Hand gestures are among the most widely used means of communication in daily life. Driven by applications in human-computer interfaces and virtual-reality environments, gesture recognition research has attracted growing attention. In existing monocular-vision approaches, however, gesture segmentation requires either a simple background or a cumbersome data glove. This paper combines motion information with a KL-transform-based skin-color model to segment gestures against complex backgrounds, achieving much better segmentation than traditional RGB skin-color models. After preprocessing the segmented hand region, a normalized Fourier descriptor, more accurate than the traditional form, is used for feature extraction, and a conventional three-layer BP network serves as the classifier. Recognition rates reach 95.9% on the gesture training set and 95% on the test set.
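Normalized Fourier descriptors can be sketched with an FFT of the complex-encoded contour. The normalization choices below (drop the DC term, divide by the first harmonic's magnitude, keep magnitudes only) are one common convention, not necessarily the paper's.

```python
import numpy as np

def fourier_descriptors(contour, k=16):
    """Normalized Fourier descriptors of a closed contour.

    The contour (N, 2) is encoded as complex z = x + iy. Dropping the DC
    term removes translation, dividing by the first harmonic's magnitude
    removes scale, and keeping magnitudes only discards rotation and the
    choice of starting point.
    """
    z = contour[:, 0] + 1j * contour[:, 1]
    mags = np.abs(np.fft.fft(z))[1:k + 1]
    return mags / (mags[0] + 1e-12)
```

The resulting k-dimensional vector is a natural fixed-length input for a three-layer BP network.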
13.
A Vision-Based Gesture Tracking and Action Recognition Algorithm
Gesture tracking and action recognition are studied for standardized operating procedures on industrial production lines. The YCbCr color model is first used to identify the hand and obtain a complete hand region. The Euclidean distance transform then computes the distance between adjacent trajectory points and the hand's velocity in each frame. An extended finite-state machine segments the hand motion; the segmented actions are matched against stored action templates, and Hausdorff distance matching judges the correctness of each match, completing action recognition. Experiments show that the algorithm tolerates background interference and slight camera shake, achieves a high recognition rate, and meets the needs of real working environments.
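The Hausdorff matching step can be sketched as the symmetric Hausdorff distance between a segmented trajectory and a stored template, both as 2D point sets. This is a textbook formulation, not the paper's implementation; the point sets below are hypothetical.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2D point sets
    (N, 2) and (M, 2): the largest nearest-neighbour gap in either
    direction."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).max(),   # farthest A-point from B
               D.min(axis=0).max())   # farthest B-point from A
```

A small Hausdorff distance means every point of the observed trajectory lies close to the template and vice versa, so thresholding it gives a natural accept/reject rule for a matched action.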
14.
Vision-Based Multi-Feature Gesture Recognition
Gestures are a natural and intuitive mode of interaction, and vision-based gesture recognition is a key technology for next-generation human-computer interaction. Building on existing gesture recognition techniques, this paper proposes a monocular-vision method that addresses both gesture segmentation and gesture representation. Skin color is used to detect and segment the hand; fingertips are detected from the hand contour and its convexity defects; a gesture is then represented by the number and orientation of the fingertips and recognized together with geometric features such as contour length and area. Traditional fingertip detection must traverse and scan the entire palm contour, which is computationally expensive; detecting fingertips via convexity defects reduces the computation and speeds up fingertip detection. Experiments show that the method is robust, runs in real time, and adapts to environmental changes.
15.
In this paper, we propose a new method for recognizing hand gestures in a continuous video stream using a dynamic Bayesian network (DBN) model. The proposed DBN-based inference is preceded by steps of skin extraction and modelling, and motion tracking. We then develop a gesture model for one- or two-hand gestures; these models are used to define a cyclic gesture network for modeling a continuous gesture stream. We have also developed a DP-based real-time decoding algorithm for continuous gesture recognition. In our experiments with 10 isolated gestures, we obtained a recognition rate upwards of 99.59% with cross validation. In the case of recognizing a continuous stream of gestures, it recorded 84%, with a precision of 80.77%, for the spotted gestures. The proposed DBN-based hand gesture model and the design of the gesture network model are believed to have strong potential for successful application to related problems such as sign language recognition, although that task is somewhat more complicated, requiring analysis of hand shapes.
16.
This paper studies image-based gesture recognition and augmented-reality technology and designs a system capable of static gesture recognition and dynamic tracking. Different gestures are registered in advance; using skin color, each image is binarized with Otsu adaptive thresholding, and the binary image is matched against the known gestures to obtain the recognition result. Experiments show an accuracy of 96.8% and a recognition time of 0.55 s. Dynamic tracking locates and captures the hand position in every frame, with an image-capture rate of 28 frames/s, providing good human-computer interaction for both static gesture recognition and dynamic tracking.
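The Otsu adaptive thresholding used for binarization can be sketched as follows. This is the standard textbook algorithm (maximize between-class variance of the grey-level histogram), not the system's own code.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for a uint8 image: pick the grey level that
    maximizes the between-class variance of the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))    # class-0 first moment up to t
    mu_t = mu[-1]                         # global mean grey level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)      # empty classes score zero
    return int(np.argmax(sigma_b))
```

Pixels above the returned level form one class and the rest the other, which is exactly the binary hand/background image matched against the registered gestures.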
17.
We present a wearable input system which enables interaction through 3D handwriting recognition. Users can write text in the air as if they were using an imaginary blackboard. The handwriting gestures are captured wirelessly by motion sensors (accelerometers and gyroscopes) attached to the back of the hand. We propose a two-stage approach for the spotting and recognition of handwriting gestures. The spotting stage uses a support vector machine to identify the data segments which contain handwriting. The recognition stage uses hidden Markov models (HMMs) to generate a text representation from the motion sensor data: individual characters are modeled by HMMs and concatenated into word models. Our system can continuously recognize arbitrary sentences based on a freely definable vocabulary. A statistical language model is used to enhance recognition performance and restrict the search space. We show that continuous gesture recognition with inertial sensors is feasible for gesture vocabularies that are several orders of magnitude larger than the traditional vocabularies of known systems. In a first experiment, we evaluate the spotting algorithm on a realistic data set including everyday activities. In a second experiment, we report the results of a nine-user study on handwritten sentence recognition. Finally, we evaluate the end-to-end system on a small but realistic data set.