Similar Documents
20 similar documents found (search time: 359 ms)
1.
Dynamic gesture recognition is an important direction in human-computer interaction, with broad demand across many fields. Compared with static gestures, dynamic gestures vary in more complex ways, so fully extracting and describing their features is the key to accurate recognition. To address the problem of insufficient feature description for dynamic gestures, this work uses the high-precision Leap Motion sensor to collect 3D hand coordinates and proposes a feature sequence, covering both finger pose and palm displacement, that can fully describe complex dynamic gestures; it is combined with a long short-term memory (LSTM) network model for recognition. Experimental results show that the proposed method achieves a recognition accuracy of 98.50% on a dataset of 16 dynamic gestures, and comparison experiments against other feature sequences show that the proposed sequence describes dynamic gesture features more fully and accurately.
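The abstract does not spell out the feature sequence beyond "finger pose plus palm displacement". As a rough illustration of the idea (the layout below is an assumption, not the authors' exact design), one frame of such a sequence could be assembled from Leap Motion-style data like this:

```python
def frame_features(fingertips, palm, prev_palm):
    """Build one per-frame feature vector: fingertip positions relative to
    the palm (finger pose) plus the palm displacement since the last frame."""
    feats = []
    for tip in fingertips:                      # 5 fingertips, (x, y, z) each
        feats.extend(t - p for t, p in zip(tip, palm))
    feats.extend(c - q for c, q in zip(palm, prev_palm))  # palm displacement
    return feats

# One synthetic frame: 5 fingertips plus the palm position at two instants.
tips = [(0.1 * i, 0.2, 0.3) for i in range(5)]
vec = frame_features(tips, palm=(0.0, 0.0, 0.0), prev_palm=(-0.01, 0.0, 0.0))
# 5 fingertips x 3 coordinates + 3 displacement components = 18 features
```

A sequence of such vectors, one per frame, would then be fed to the LSTM classifier.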

2.
This work studies natural gesture interaction in virtual scenes and proposes a dynamic fingertip-trajectory recognition method based on Leap Motion. First, the Leap Motion device collects the coordinates produced as a fingertip moves through the scene while the data is preprocessed; the start and end positions are then located within this coordinate series to extract the effective gesture trajectory. After trajectory optimization and a preliminary gesture classification, the similarity between the trajectory and gesture templates is computed using a weighted Euclidean distance to obtain the recognition result. Experiments on 200 collected gesture samples show that the proposed method achieves a high recognition rate. Applying the method in a gesture-interaction system enables interaction with virtual objects through natural gestures, making interaction more engaging and improving the user experience.
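Template matching with a weighted Euclidean distance over a resampled trajectory can be sketched as follows (the template shapes and weights are made-up examples, not the paper's data):

```python
import math

def weighted_distance(traj, template, weights):
    """Weighted Euclidean distance between a resampled fingertip trajectory
    and a gesture template of the same length (one weight per sample point)."""
    return math.sqrt(sum(
        w * sum((a - b) ** 2 for a, b in zip(p, q))
        for w, p, q in zip(weights, traj, template)))

def classify(traj, templates, weights):
    """Return the template label with the smallest weighted distance."""
    return min(templates,
               key=lambda name: weighted_distance(traj, templates[name], weights))

templates = {
    "swipe_right": [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)],
    "swipe_up":    [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0)],
}
w = [1.0, 1.0, 2.0]   # emphasize the end of the stroke (illustrative choice)
label = classify([(0.0, 0.05), (0.45, 0.02), (0.95, 0.01)], templates, w)
```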

3.
With the development of virtual reality (VR) technology and rising expectations for interaction performance and user experience, gesture recognition, one of the key technologies affecting interactive operations in VR, urgently needs higher accuracy [1]. To address the poor performance of current methods on gestures with similar motions, a multi-feature dynamic gesture recognition method is proposed. The method first tracks dynamic gestures with the Leap Motion somatosensory controller to acquire data, then adds the extraction of displacement-vector angles and inflection-point counts to the feature-extraction stage, trains a hidden Markov model (HMM) for each dynamic gesture, and finally recognizes a test gesture by its matching score against the models. Experimental results show that this multi-feature method improves the recognition rate for similar gestures.
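The matching step, scoring a quantized observation sequence against per-gesture HMMs with the forward algorithm and picking the best match, can be sketched as follows (the toy two-state models are illustrative, not the paper's trained parameters):

```python
def forward_prob(obs, pi, A, B):
    """Forward-algorithm likelihood of a discrete observation sequence under
    one HMM (pi: initial distribution, A: transitions, B: emissions)."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(len(pi))) * B[t][o]
                 for t in range(len(pi))]
    return sum(alpha)

def recognize(obs, models):
    """Pick the gesture whose HMM assigns the highest likelihood."""
    return max(models, key=lambda name: forward_prob(obs, *models[name]))

# Two toy 2-state HMMs over a binary observation alphabet.
models = {
    "circle": ([0.9, 0.1], [[0.8, 0.2], [0.2, 0.8]], [[0.9, 0.1], [0.1, 0.9]]),
    "swipe":  ([0.1, 0.9], [[0.8, 0.2], [0.2, 0.8]], [[0.9, 0.1], [0.1, 0.9]]),
}
best = recognize([0, 0, 0, 1], models)
```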

4.
The rapid development of gesture recognition and the continual updating of somatosensory devices have inspired 3D gesture interaction. Based on Leap Motion gesture recognition and the nearest-neighbour algorithm, a 3D gesture-interaction system is built. First, gesture-design theory and interaction-gesture design principles are studied, gesture functions are designed accordingly, and a gesture library of 8 gestures is established. Next, gesture features are extracted: a finger key-point model is built and the angle features of each gesture are obtained. The recognition efficiency of the KNN and SVM algorithms is then compared, with the improved KNN algorithm performing better. Finally, a 3D interaction system is designed, with the gestures grouped into 4 modules of 2 gesture tasks each; 1,600 gesture samples are collected from 20 testers and the mean joint positions across all collected samples are analyzed. The 1,600 samples are imported into a 3D interaction system created in Unity3D, and the 8 custom gestures drive a virtual hand through the interaction-design process, followed by user-experience analysis and recognition-efficiency statistics. The study finds that Leap Motion-based gesture recognition achieves high recognition efficiency and that the 3D gesture-interaction system is innovative.
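Nearest-neighbour classification over angle features can be sketched as follows (the feature values and labels here are invented for illustration):

```python
import math
from collections import Counter

def knn_predict(x, train, k=3):
    """Plain k-nearest-neighbour vote over angle-feature vectors.
    `train` is a list of (feature_vector, label) pairs."""
    neighbours = sorted(train, key=lambda s: math.dist(x, s[0]))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Toy training set: two joint-angle features (degrees) per gesture sample.
train = [([10.0, 15.0], "fist"), ([12.0, 14.0], "fist"),
         ([80.0, 85.0], "open"), ([78.0, 90.0], "open")]
pred = knn_predict([11.0, 16.0], train, k=3)
```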

5.
The rapid development of virtual simulation technology and the continual updating of somatosensory devices have brought new inspiration to sand animation, a novel form of artistic creation. To address the complexity of live sand-painting, gesture recognition is implemented with a Leap Motion device in the Unity3D development environment to drive a virtual hand that produces virtual sand-painting effects. First, hand key points are extracted from the gesture coordinates and direction information captured by Leap Motion; an angle-domain partitioning method is then proposed and a new feature vector introduced, which is concatenated with the extracted information as the basis for gesture classification; finally, self-defined sand-painting gesture semantics drive the virtual hand to create virtual sand paintings. Experiments show that close-range gesture recognition with Leap Motion is more accurate than other methods, with high real-time performance, stable gesture tracking, and a strongly immersive virtual sand-painting process.

6.
With the development of multimedia technology, traditional interactive tools such as the mouse and keyboard cannot satisfy users' requirements. Touchless interaction has received considerable attention in recent years, with the benefit of removing the barrier of physical contact. Leap Motion is an interactive device that collects information about dynamic hand gestures, including the coordinates, acceleration, and direction of the fingers. The aim of this study is to develop a new method for hand gesture recognition using jointly calibrated Leap Motion via deterministic learning. Hand gesture features representing hand motion dynamics, including the spatial position and direction of the fingers, are derived from Leap Motion. The hand motion dynamics underlying the motion patterns of different gestures, representing the Arabic numerals (0-9) and capital English letters (A-Z), are modeled by constant radial basis function (RBF) neural networks, from which a bank of estimators is constructed. By comparing the set of estimators with a test gesture pattern, a set of recognition errors is generated, and the average L1 norm of each error is taken as the recognition measure according to the smallest-error principle. Finally, experiments demonstrate the high recognition performance of the proposed method: using 2-fold, 10-fold, and leave-one-person-out cross-validation, the correct recognition rates are 94.2%, 95.1%, and 90.2% for the Arabic numerals, and 89.2%, 92.9%, and 86.4% for the English letters, respectively.
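The smallest-error recognition measure, averaging the L1 norm of each estimator's error signal and choosing the model with the smallest average, can be sketched as follows (the error values are invented for illustration):

```python
def l1_recognition(errors):
    """Smallest-error principle: average the L1 norm of each estimator's
    error signal over time and return the model with the smallest average."""
    def avg_l1(err_series):
        return sum(sum(abs(e) for e in step) for step in err_series) / len(err_series)
    return min(errors, key=lambda name: avg_l1(errors[name]))

# Per-model error signals: two time steps, two error components each.
errors = {
    "digit_3": [(0.02, 0.01), (0.03, 0.02)],   # small residuals: good match
    "digit_8": [(0.40, 0.35), (0.45, 0.30)],
}
match = l1_recognition(errors)
```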

7.
8.
With the rapid development of virtual reality technology, a natural and friendly way of entering characters is urgently needed, so more and more researchers are turning to dynamic gestures. This paper builds a dynamic gesture recognition system based on hidden Markov models (HMM). The system collects dynamic gesture data through Leap Motion and can recognize 36 letter and digit gestures (digits 0-9 and letters A-Z). Extensive experiments show that the system is highly robust, with a recognition rate of 93.2% for individual gestures.

9.
To move beyond traditional UAV control methods and solve practical problems such as retrieving objects at height, a grasping-capable UAV system combining gesture recognition with a UAV is proposed. The system collects gesture data through Leap Motion, processes the data in Python with the Leap Motion v2 SDK, and transmits it via an NRF24L01 wireless module; on the UAV side, the received data is analyzed by an Arduino, which outputs the corresponding PWM signals to control the UAV's flight state and the grasping of a mechanical claw. Tests confirm that the system can change the UAV's flight state and control the claw's grasping through gestures, demonstrating the reliability and effectiveness of the implementation.

10.
11.
Leap Motion gesture recognition is unstable at the edges of the recognition region and where fingers occlude one another. A hierarchical correction method for Leap Motion gesture interaction is proposed. The method analyzes Leap Motion's recognition errors through real-time threshold comparison and applies a hierarchical correction algorithm to correct the hand position, resolving the recognition instability during hand interaction. Analysis of the experiments shows that 75% of participants were satisfied with the interaction, 80% considered the method more accurate, and the recognition accuracy of the interaction content exceeded 89%, demonstrating that the method improves Leap Motion's recognition accuracy and the user experience.

12.
Traditional gesture interaction relies on dedicated devices such as Leap Motion or Kinect. Here, gestures are extracted through image-processing steps such as colour-channel conversion and binarization; in-plane hand movement is obtained from changes in the gesture's planar coordinates, and changes in the gesture's area resolve its depth. By drawing the gesture contour and applying a custom image-matching algorithm, the matching rate between images is computed, and the highest rate selects the corresponding gesture-motion information. Through the transformation from the camera coordinate system to the 3D scene coordinate system, combined with the geometric transformations of 3D graphics, the transformation matrix is computed to realize spatial movement and rotation of the hand. Dynamic gesture interaction is thus achieved with a single monocular camera, without any dedicated gesture-interaction device.

13.
Wang Hongxia, Wang Kun. 《计算机应用》 (Journal of Computer Applications), 2016, 36(7): 1959-1964
Static gesture recognition based on RGB-D (RGB-Depth) is faster than dynamic gesture recognition, but redundant and repeated gestures reduce its accuracy. To address this, a static gesture recognition method based on a locking mechanism is proposed for recognizing gestures in motion. First, the RGB and depth data streams acquired by a Kinect device are fused into a human skeleton data stream; a locking mechanism is then introduced into the static-gesture method, and gestures are matched against a pre-built library of skeleton-point feature models; finally, a puzzle-style web game, "程序员进阶之路" (A Programmer's Path to Advancement), is designed for application and experiments. Experiments on 6 different moving gestures show that, compared with purely static gesture recognition, the method improves average recognition accuracy by 14.4%, and compared with dynamic gesture recognition, it improves recognition speed by 14%. The results show that the proposed method retains the speed of static recognition, enabling real-time recognition, while effectively eliminating redundant and repeated gestures and improving correctness.

14.
Hand gesture recognition is important for interaction in VR environments. Traditional vision-based approaches encounter occlusion problems, so wearable devices can be an effective supplement. This study presents a hand-grasp recognition method for virtual reality settings that fuses signals from force myography (FMG), a muscular-activity-based hand gesture recognition method, and Leap Motion. We conducted an experiment in which participants grasped virtual objects with VR goggles on their head, an FMG band on their wrist, and a Leap Motion positioned either on the desk or on the goggles (two experimental settings). The FMG signals, the Leap Motion signals, and their fusion were used to train and test a simple but effective linear discriminant analysis classifier, as well as three other mainstream classification algorithms. The results showed that fusing both signals significantly improved classification accuracy compared to using Leap Motion alone, in both experimental settings.

15.
Zhang Qiang, Zhang Yong, Liu Zhiguo, Zhou Wenjun, Liu Jiahui. 《计算机工程》 (Computer Engineering), 2020, 46(3): 237-245, 253
To address the low accuracy and slow speed of gesture-recognition methods based on manual modeling, a real-time static gesture recognition method based on an improved YOLOv3 is proposed. Using the YOLOv3 convolutional neural network, the IR, RGB-registered, RGB, and depth images captured by a Kinect device replace the usual RGB-only dataset, and the recognition results of the four image types are fused to improve accuracy. The k-means clustering algorithm optimizes YOLOv3's initial anchor-box parameters to speed up recognition, and on this basis transfer learning is applied to the base feature extractor to shorten model training time. Experimental results show that the method achieves an average recognition accuracy of 99.8% for static gestures in streaming video at up to 52 FPS, with a model training time of 12 h; compared with deep-learning methods such as Faster R-CNN, SSD, and YOLOv2, it is both more accurate and faster.
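Anchor-box optimization with k-means commonly uses a 1 − IoU distance over box widths and heights; a minimal sketch (with deterministic initialization and made-up box sizes, not the paper's dataset) is:

```python
def iou_wh(box, anchor):
    """IoU of two boxes aligned at the origin, each given as (w, h)."""
    inter = min(box[0], anchor[0]) * min(box[1], anchor[1])
    union = box[0] * box[1] + anchor[0] * anchor[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=20):
    """Cluster (w, h) boxes using the 1 - IoU distance used for YOLO anchors."""
    anchors = list(boxes[:k])           # deterministic init for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:                 # assign each box to its best anchor
            j = max(range(k), key=lambda i: iou_wh(b, anchors[i]))
            clusters[j].append(b)
        anchors = [                     # recompute anchors as cluster means
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else anchors[i]
            for i, c in enumerate(clusters)]
    return sorted(anchors)

# Made-up box sizes forming three well-separated clusters.
boxes = [(10, 12), (50, 60), (90, 30), (11, 13), (52, 58), (88, 32)]
anchors = kmeans_anchors(boxes, k=3)
```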

16.
In complex scenes, depth cameras impose strict environmental requirements, wearable devices feel unnatural, and deep-learning models trained on small datasets suffer from poor recognition ability and robustness. To address these problems, a gesture-recognition method is proposed that combines a semantic-segmentation-based deep-learning model for hand segmentation with transfer-learning-based neural-network recognition. The collected image dataset is first augmented through rotations at different angles, flips, and similar operations; a segmentation model is trained to segment the hand region; a transfer-learned convolutional neural network then extracts gesture feature vectors more effectively; and a Softmax function performs gesture classification. Experiments on 10 gestures performed by 4 people against different backgrounds show that gestures can be correctly recognized in complex background environments.

17.

New interaction paradigms combined with emerging technologies have led to the creation of diverse Natural User Interface (NUI) devices on the market. These devices recognize body gestures, allowing users to interact with applications in a more direct, expressive, and intuitive way. In particular, the Leap Motion Controller (LMC) has received plenty of attention from NUI application developers because it addresses limitations on gestures made with the hands. Although this device can recognize the positions of several parts of the hands, developers are still left with the difficult task of recognizing gestures, and several authors have approached this problem using machine learning techniques. We propose a classifier based on Approximate String Matching (ASM). In short, we encode the trajectories of the hand joints as character sequences using the K-means algorithm and then analyze these sequences with ASM. When running K-means, we select the number of clusters for each part of the hands using the Silhouette Coefficient, and we define other important factors for improving recognition accuracy. For the experiments, we generated a balanced dataset including different types of gestures and then performed a cross-validation scheme. Experimental results showed the robustness of the approach in terms of recognizing different types of gestures, time spent, and allocated memory. Moreover, our approach achieved higher performance rates than well-known algorithms in the current state of the art for gesture recognition.
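The core idea, quantizing joint positions to cluster labels so each trajectory becomes a character string that can then be compared, can be sketched with a plain Levenshtein distance (the centroids below stand in for K-means output; ASM proper would add approximate-matching thresholds on top of this):

```python
def edit_distance(a, b):
    """Levenshtein distance between two symbol sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def encode(traj, centroids):
    """Map each joint sample to the letter of its nearest centroid,
    turning a trajectory into a character string."""
    def nearest(p):
        return min(range(len(centroids)),
                   key=lambda i: sum((x - c) ** 2 for x, c in zip(p, centroids[i])))
    return "".join(chr(ord("a") + nearest(p)) for p in traj)

centroids = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]   # stand-ins for K-means output
template = encode([(0.1, 0.0), (0.9, 0.1), (1.0, 0.9)], centroids)
query    = encode([(0.0, 0.1), (1.1, 0.0), (1.1, 1.1)], centroids)
dist = edit_distance(query, template)   # small distance = likely same gesture
```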


18.
Huang Jun, Jing Hong. 《计算机系统应用》 (Computer Systems & Applications), 2015, 24(10): 259-263
The release of the Leap Motion, a recent somatosensory device, offers users a new kind of experience: by tracking and detecting dynamic gestures, it enables contactless human-computer interaction for somatosensory games, virtual performance, mid-air drawing, and more. This article first introduces the technical characteristics of Leap Motion, compares it with similar devices, and surveys its applications and prospects. It then analyzes Leap Motion's operating principles and technical foundations, proposes a Leap Motion-based gesture-control technique, and finally details its implementation through a concrete Unity 3D example in which gestures control the movement of objects in a virtual scene.

19.
This paper introduces principal motion components (PMC), a new method for one-shot gesture recognition. In the considered scenario a single training video is available for each gesture to be recognized, which limits the application of traditional techniques (e.g., HMMs). In PMC, a 2D map of motion energy is obtained for each pair of consecutive frames in a video. The motion maps associated with a video are processed to obtain a PCA model, which is used for recognition under a reconstruction-error approach. The main benefits of the proposed approach are its simplicity, ease of implementation, competitive performance, and efficiency. We report experimental results in one-shot gesture recognition on the ChaLearn Gesture Dataset, a benchmark comprising more than 50,000 gestures recorded as both RGB and depth video with a Kinect camera. Results obtained with PMC are competitive with alternative methods proposed for the same dataset.
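A minimal sketch of the PMC pipeline, motion-energy maps from consecutive-frame differences, a PCA basis fitted per gesture, and recognition by reconstruction error, under assumed synthetic data (all parameter choices here are illustrative):

```python
import numpy as np

def motion_maps(video):
    """Motion energy per pair of consecutive frames: absolute pixel difference."""
    return np.abs(np.diff(video.astype(float), axis=0))

def pca_model(maps, n_components=2):
    """Fit a PCA basis (mean + top components) over flattened motion maps."""
    X = maps.reshape(len(maps), -1)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(maps, model):
    """Recognition score: how poorly a gesture's model reconstructs the test maps."""
    mean, comps = model
    X = maps.reshape(len(maps), -1) - mean
    recon = (X @ comps.T) @ comps
    return float(np.linalg.norm(X - recon))

rng = np.random.default_rng(0)
video = rng.random((10, 8, 8))     # 10 synthetic frames of 8x8 pixels
maps = motion_maps(video)          # 9 motion-energy maps
model = pca_model(maps)
err = reconstruction_error(maps, model)
```

At recognition time, a test video would be scored against every gesture's model and assigned the label with the smallest reconstruction error.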

20.
In this research, we propose a state-of-the-art 3D finger gesture tracking and recognition method. We use depth sensors for both hands in real-time music playing. In line with the development of 3D depth cameras, we implemented a set of 3D gesture-based instruments, such as a Virtual Cello and a Virtual Piano, which require precise finger tracking in 3D space. For hand tracking, model-based tracking is proposed for the left hand and appearance-based tracking for the right hand. To detect finger gestures for the Virtual Cello, our approach consists of a number of systematic steps, including noise reduction in the depth map and geometric processing. For the Virtual Piano, we introduce a neural network (NN) method to detect special hand gestures; it has a multilayer perceptron (MLP) structure trained with backpropagation. The literature has few examples, which use a touch screen as the medium with fixed coordinates and 2D gestures, of controlling MIDI input. End users no longer need to carry anything on their hands. We use the Senz3D and Leap Motion for a few technical benefits: they operate at a closer distance to the hands, so detailed finger gestures can be precisely identified. In past years, we announced a set of virtual musical instruments and the MINE Virtual Band. Our work has been tested in a lab environment and on a professional theatrical stage. More information and demonstrations of the proposed method can be accessed at: http://video.minelab.tw/DETS/VMIB/.
