Similar Documents
20 similar documents were retrieved (search time: 31 ms).
1.
In this paper, we present an approach for recognizing pointing gestures in the context of human–robot interaction. In order to obtain input features for gesture recognition, we perform visual tracking of head, hands and head orientation. Given the images provided by a calibrated stereo camera, color and disparity information are integrated into a multi-hypothesis tracking framework in order to find the 3D-positions of the respective body parts. Based on the hands’ motion, an HMM-based classifier is trained to detect pointing gestures. We show experimentally that the gesture recognition performance can be improved significantly by using information about head orientation as an additional feature. Our system aims at applications in the field of human–robot interaction, where it is important to do run-on recognition in real-time, to allow for robot egomotion and not to rely on manual initialization.
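For illustration, a minimal sketch of the HMM-based classification step, assuming per-frame features (e.g., 3D hand positions plus head orientation) are already extracted; it uses hmmlearn as a stand-in and is not the authors' implementation:

```python
# Minimal sketch: one Gaussian HMM per gesture class, classification by
# maximum log-likelihood. Feature layout and state count are assumptions.
import numpy as np
from hmmlearn import hmm

def train_gesture_models(sequences_by_class, n_states=4):
    """sequences_by_class: dict mapping class name -> list of (T_i, D) arrays."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)                  # stack all frames of this class
        lengths = [len(s) for s in seqs]     # per-sequence frame counts
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    """Return the class whose HMM assigns the highest log-likelihood to seq."""
    return max(models, key=lambda label: models[label].score(seq))
```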

2.
Service robotics is currently a highly active research area in robotics, with enormous societal potential. Since service robots directly interact with people, finding natural and easy-to-use user interfaces is of fundamental importance. While past work has predominantly focused on issues such as navigation and manipulation, relatively few robotic systems are equipped with flexible user interfaces that permit controlling the robot by natural means. This paper describes a gesture interface for the control of a mobile robot equipped with a manipulator. The interface uses a camera to track a person and recognize gestures involving arm motion. A fast, adaptive tracking algorithm enables the robot to track and follow a person reliably through office environments with changing lighting conditions. Two alternative methods for gesture recognition are compared: a template-based approach and a neural network approach. Both are combined with the Viterbi algorithm for the recognition of gestures defined through arm motion (in addition to static arm poses). Results are reported in the context of an interactive clean-up task, where a person guides the robot to specific locations that need to be cleaned and instructs the robot to pick up trash.
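The Viterbi step can be sketched generically; the decoder below assumes precomputed per-frame log emission scores (from either the template or the neural-network matcher) and is illustrative only:

```python
import numpy as np

def viterbi(log_A, log_B, log_pi):
    """Most likely state path for one observation sequence.

    log_A:  (S, S) log transition matrix
    log_B:  (T, S) per-frame log emission scores, already evaluated
    log_pi: (S,)   log initial state distribution
    """
    T, S = log_B.shape
    delta = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A       # (prev state, cur state)
        back[t] = np.argmax(scores, axis=0)
        delta[t] = scores[back[t], np.arange(S)] + log_B[t]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):                    # backtrace
        path.append(int(back[t, path[-1]]))
    return path[::-1], float(np.max(delta[-1]))
```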

3.
To address the problem that current cherry-tomato picking robots cannot guarantee stem-on picking, this paper proposes a method that analyzes the pose of a cherry tomato with machine vision and then generates a specific picking motion for the robotic arm. By imitating the human hand-picking process, the method ensures that when the arm's end effector reaches the picking position it is aligned with the direction of the fruit stem. The overall system comprises detection of ripe cherry tomatoes, implementation of a distance-measurement algorithm, recognition of fruit orientation, and generation of the arm motion. An algorithm improved upon the idea of contour fitting achieves more accurate and stable orientation recognition for cherry tomatoes, yielding a target pose in which the end effector is aligned with the stem direction and thereby enabling generation of the corresponding picking motion. Repeated experiments show that, compared with conventional contour fitting, the improved orientation-recognition algorithm has a smaller angular error and is more stable across different fruit poses, making it better suited to generating pose-specific picking motions for the arm in a real picking workflow.
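An illustrative baseline for the contour-fitting orientation step, using OpenCV ellipse fitting on a segmented fruit mask; this is the conventional approach the paper improves upon, not the improved algorithm itself:

```python
import cv2
import numpy as np

def fruit_orientation(mask):
    """Estimate the in-plane orientation of a segmented cherry tomato.

    mask: binary uint8 image where fruit pixels are non-zero.
    Returns the fitted-ellipse angle in degrees, or None if no contour is found.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                  # fitEllipse needs at least 5 points
        return None
    (cx, cy), (major, minor), angle = cv2.fitEllipse(largest)
    return angle
```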

4.
This paper presents a hand gesture-based interface to facilitate interaction with individuals with upper-level spinal cord injuries, and offers an alternative way to perform “hands-on” laboratory tasks. The presented system consists of four modules: hand detection, tracking, trajectory recognition, and actuated device control. A 3D particle filter framework based on color and depth information is proposed to provide a more efficient solution to the problem of tracking the face and hands independently. More specifically, an interaction model utilizing spatial and motion information was integrated into the particle filter framework to tackle the “false merge” and “false labeling” problems caused by hand interaction and occlusion. To obtain an optimal parameter set for the interaction model, a neighborhood search algorithm was employed. An accuracy of 98.81% was achieved by applying the optimal parameter set to the tracking module of the system. Once the hands were tracked successfully, the acquired gesture trajectories were compared with motion models. The dynamic time warping method was used for time alignment of the signals, and they were classified by a CONDENSATION algorithm with a recognition accuracy of 97.5%. In a validation experiment, the decoded gestures were passed as commands to a mobile service robot and a robotic arm to perform simulated laboratory tasks. Control policies using gestural control were studied and optimal policies were selected to achieve optimal performance. The computational cost of each system module demonstrated real-time performance.
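A minimal dynamic time warping sketch for the trajectory-alignment step; it assumes trajectories are given as per-frame feature vectors and does not reproduce the CONDENSATION classifier:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two trajectories.

    a: (n, d) array and b: (m, d) array of per-frame feature vectors.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```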

5.
Most gestural interaction studies on gesture elicitation have focused on hand gestures, and few have considered the involvement of other body parts. Moreover, most of the relevant studies used the frequency of the proposed gesture as the main index, and the participants were not familiar with the design space. In this study, we developed a gesture set that includes hand and non-hand gestures by combining the indices of gesture frequency, subjective ratings, and physiological risk ratings. We first collected candidate gestures in Experiment 1 through a user-defined method by requiring participants to perform gestures of their choice for the 15 most commonly used commands, without any restriction on the body parts involved. In Experiment 2, a new group of participants evaluated the representative gestures obtained in Experiment 1. We finally obtained a gesture set that included gestures made with the hands and other body parts. Three user characteristics were exhibited in this set: a preference for one-handed movements, a preference for gestures with social meaning, and a preference for dynamic gestures over static gestures.

6.
Communication between socially assistive robots and humans might be facilitated by intuitively understandable mechanisms. To investigate the effects of some key nonverbal gestures on a human’s own engagement and robot engagement experienced by humans, participants read a series of instructions to a robot that responded with nods, blinks, changes in gaze direction, or a combination of these. Unbeknown to the participants, the robot had no form of speech processing or gesture recognition, but simply measured speech volume levels, responding with gestures whenever a lull in sound was detected. As measured by visual analogue scales, engagement of participants was not differentially affected by the different responses of the robot. However, their perception of the robot’s engagement in the task, its likability and its understanding of the instructions depended on the gesture presented, with nodding being the most effective response. Participants who self-reported greater robotics knowledge reported higher overall engagement and greater success at developing a relationship with the robot. However, self-reported robotics knowledge did not differentially affect the impact of robot gestures. This suggests that greater familiarity with robotics may help to maximise positive experiences for humans involved in human–robot interactions without affecting the impact of the type of signal sent by the robot.
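A sketch of the volume-lull trigger described above, assuming mono audio samples in [-1, 1]; the frame length, threshold, and silence duration are illustrative values, not those used in the study:

```python
import numpy as np

def detect_lulls(samples, rate, frame_ms=50, threshold=0.01, min_silence_ms=600):
    """Return start times (in seconds) of lulls in a mono audio signal.

    A lull is a run of consecutive low-energy frames lasting at least
    min_silence_ms; a gesture (nod, blink, gaze shift) would be triggered
    at each returned time.
    """
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    rms = np.array([np.sqrt(np.mean(samples[i * frame_len:(i + 1) * frame_len] ** 2))
                    for i in range(n_frames)])
    quiet = rms < threshold
    needed = int(min_silence_ms / frame_ms)
    lulls, run = [], 0
    for i, q in enumerate(quiet):
        run = run + 1 if q else 0
        if run == needed:                 # lull just became long enough
            lulls.append((i - needed + 1) * frame_ms / 1000.0)
    return lulls
```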

7.
Vision-based dynamic gesture recognition and its application to interaction with a humanoid robot   Total citations: 5 (self-citations: 0, citations by others: 5)
刘江华, 程君实, 陈佳品. Robot (《机器人》), 2002, 24(3): 197-200
Gesture recognition is an important component of human–robot interaction. Based on the binocular vision system SFBinoeye, this paper implements command-gesture recognition using optical-flow PCA (principal component analysis) and DTW (dynamic time warping) to control the arm motion of the humanoid robot SFHR. Optical flow is computed with a block-correlation algorithm, and PCA yields dimension-reduced continuous projection coefficients, which are combined with the centroid position of the palm region into a hybrid feature vector. A new weighted distance measure is defined for DTW and used to match and recognize gestures. Trained and tested on nine gestures, the method reaches a recognition rate of 92.4% and has been successfully applied to controlling the robot's arm.
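An illustrative version of the optical-flow-plus-PCA feature step; it substitutes OpenCV's Farneback dense flow for the paper's block-correlation method and uses scikit-learn PCA:

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

def flow_features(gray_frames, n_components=10):
    """Project per-frame dense optical flow onto its principal components.

    gray_frames: list of equally sized grayscale frames (uint8); needs at
    least n_components + 1 frames. Returns (len(gray_frames) - 1, n_components).
    """
    flows = []
    for prev, nxt in zip(gray_frames[:-1], gray_frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow.reshape(-1))        # flatten the (H, W, 2) flow field
    flows = np.array(flows)
    return PCA(n_components=n_components).fit_transform(flows)
```

In the paper these projection coefficients are concatenated with the palm centroid position before weighted-DTW matching; here the PCA is fit on the same clip purely for brevity.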

8.
蒋穗峰, 李艳春, 肖南峰. Journal of Computer Applications (《计算机应用》), 2016, 36(12): 3486-3491
Since interaction between operators and industrial robots still relies on rather rigid, mechanical interfaces, a method is designed that uses a Kinect sensor as the gesture acquisition device and controls an industrial robot with human hand gestures. First, a combination of depth thresholding and hand skeleton points accurately extracts the hand image from the data acquired by the Kinect sensor; during extraction the operator does not need to wear any device, and there are no constraints on where the operator stands or on the background environment. Then the gesture images are recognized with a sparse autoencoder network combined with a Softmax classifier. Recognition involves pre-training and fine-tuning: pre-training uses greedy layer-wise training to train each layer of the network in turn, while fine-tuning treats the whole neural network as one model and adjusts all of its parameters; the gesture recognition accuracy reaches 99.846%. Finally, experiments on a self-developed industrial-robot simulation platform achieve good results for both one-handed and two-handed gestures, verifying the feasibility and usability of gesture control for industrial robots.
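A minimal sketch of the depth-threshold hand extraction, assuming a Kinect-style depth map in millimetres and a tracked hand-joint position; the band and crop sizes are illustrative, not the paper's values:

```python
import numpy as np

def extract_hand(depth_mm, hand_px, hand_depth_mm, band_mm=80, box=96):
    """Crop a binary hand mask around a tracked hand joint.

    depth_mm:      (H, W) depth image in millimetres (0 = invalid pixel).
    hand_px:       (x, y) pixel coordinates of the hand joint.
    hand_depth_mm: depth of the hand joint reported by the skeleton tracker.
    Pixels within +/- band_mm of the joint depth are kept.
    """
    x, y = hand_px
    h, w = depth_mm.shape
    x0, x1 = max(0, x - box // 2), min(w, x + box // 2)
    y0, y1 = max(0, y - box // 2), min(h, y + box // 2)
    crop = depth_mm[y0:y1, x0:x1]
    mask = (crop > 0) & (np.abs(crop.astype(int) - hand_depth_mm) < band_mm)
    return mask.astype(np.uint8) * 255
```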

9.
Machine learning is a technique for analyzing data that aids the construction of mathematical models. Because of the growth of the Internet of Things (IoT) and wearable sensor devices, gesture interfaces are becoming a more natural and expedient human-machine interaction method. This type of artificial intelligence, which requires minimal or no direct human intervention in decision-making, is predicated on the ability of intelligent systems to self-train and detect patterns. The rise of touch-free applications and the number of deaf people have increased the significance of hand gesture recognition. Potential applications of hand gesture recognition research span from online gaming to surgical robotics. The location of the hands, the alignment of the fingers, and the hand-to-body posture are the fundamental components of hierarchical emotions in gestures. Linguistic gestures may be difficult to distinguish from nonsensical motions in the field of gesture recognition. In this scenario, it may be difficult to overcome segmentation uncertainty caused by accidental hand motions or trembling. When users perform the same dynamic gesture, hand shapes and speeds vary from user to user, and often even for the same user. To solve the problem of distinguishing meaningful dynamic gestures from scattered, unintentional motion, a machine-learning-based Gesture Recognition Framework (ML-GRF) is suggested for recognizing the beginning and end of a gesture sequence in a continuous stream of data. We recommend a similarity-matching-based gesture classification approach to reduce the overall computing cost associated with identifying actions, and we show how an efficient feature extraction method can reduce the thousands of values describing a single gesture to four-binary-digit gesture codes. The findings from the simulation support the reported accuracy, precision, gesture recognition, sensitivity, and efficiency rates: the ML-GRF achieved an accuracy rate of 98.97%, a precision rate of 97.65%, a gesture recognition rate of 98.04%, a sensitivity rate of 96.99%, and an efficiency rate of 95.12%.
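The abstract does not specify the feature extraction, so the four-bit encoding below is purely illustrative of the idea of compressing a gesture to a short binary code and matching it by similarity:

```python
import numpy as np

def gesture_code(trajectory):
    """Encode a 2D hand trajectory as a 4-bit code (illustrative only).

    Bits: net motion right, net motion up (image coordinates), mostly
    horizontal, and curved rather than straight. The real ML-GRF features
    are not described in the abstract.
    """
    traj = np.asarray(trajectory, dtype=float)
    dx, dy = traj[-1] - traj[0]
    path_len = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    straight = np.linalg.norm(traj[-1] - traj[0]) + 1e-9
    return (int(dx > 0), int(dy < 0), int(abs(dx) > abs(dy)),
            int(path_len > 1.5 * straight))

def match(code, templates):
    """Nearest template gesture by Hamming similarity."""
    return max(templates,
               key=lambda label: sum(a == b for a, b in zip(code, templates[label])))
```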

10.
Music is a fundamental part of most cultures. Controlling music playback has commonly been used to demonstrate new interaction techniques and algorithms. In particular, controlling music playback has been used to demonstrate and evaluate gesture recognition algorithms. Previous work, however, used gestures that have been defined based on intuition, the developers’ preferences, and the respective algorithm’s capabilities. In this paper we propose a refined process for deriving gestures from constant user feedback. Using this process every result and design decision is validated in the subsequent step of the process. Therefore, comprehensive feedback can be collected from each of the conducted user studies. Along the process we develop a set of free-hand gestures for controlling music playback. The situational context is analysed to shape the usage scenario and derive an initial set of necessary functions. In a successive user study the set of functions is validated and proposals for gestures are collected from participants for each function. Two gesture sets containing static and dynamic gestures are derived and analysed in a comparative evaluation. The comparative evaluation shows the suitability of the identified gestures and allows further refinement. Our results indicate that the proposed process, which includes validation of each design decision, improves the final results. By using the process to identify gestures for controlling music playback we not only show that the refined process can successfully be applied, but we also provide a consistent gesture set that can serve as a realistic benchmark for gesture recognition algorithms.

11.
12.
13.
To address the shortcomings of current gesture-command recognition systems for indoor mobile robots, an image acquisition scheme in which the image sensor is separated from the robot is studied, and dynamic gesture commands are used to control the robot. Dynamic gesture-command recognition identifies different motion trajectories of the hand: the trajectory is tracked using a skin-color model and a hand center-point direction-vector method, feature vectors of the trajectory are extracted, and the trajectory is recognized with dynamic time warping (DTW). Experimental results show that the system achieves real-time control of the robot's forward, backward, left-turn, and right-turn motions.
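A sketch of the skin-colour segmentation and hand-centre tracking used to build the trajectory; it uses a fixed YCrCb skin range for illustration, whereas the paper builds its own skin-colour model:

```python
import cv2
import numpy as np

def hand_centroid(frame_bgr):
    """Centroid of the largest skin-coloured blob, or None if nothing is found."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # generic skin range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def trajectory(frames):
    """Per-frame hand centre points; a DTW matcher can then compare trajectories."""
    return [c for c in (hand_centroid(f) for f in frames) if c is not None]
```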

14.
To improve the accuracy and real-time performance of dynamic gesture recognition for human–computer interaction in realistic, complex scenes, a new feature, Temporal Locality Sensitive Histograms of Oriented Gradients (TLSHOG), is proposed to describe the temporal evolution and spatial posture of hand motion, enabling fast and accurate dynamic gesture recognition. An ordinary webcam captures 2D image sequences of the hand as training samples; single-frame image features describe the spatial posture of the hand, and a Temporal Pyramid (TP) describes the spatio-temporal characteristics of the motion trajectory. A multi-class Support Vector Machine (SVM) is trained to classify the various gestures in the test samples accurately. Experimental results show that the method is accurate, runs in real time, and is robust to complex background clutter and changes in illumination.
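An illustrative pipeline in the same spirit: plain per-frame HOG descriptors (not the proposed TLSHOG feature) pooled with a temporal pyramid and fed to an SVM:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def temporal_pyramid(frame_feats, levels=2):
    """Mean-pool per-frame descriptors over 1, 2 and 4 temporal segments.

    Assumes the clip has at least 2**levels frames of identical size.
    """
    frame_feats = np.asarray(frame_feats)
    pooled = []
    for level in range(levels + 1):
        for seg in np.array_split(frame_feats, 2 ** level):
            pooled.append(seg.mean(axis=0))
    return np.concatenate(pooled)

def sequence_descriptor(gray_frames):
    """Per-frame HOG plus temporal pyramid pooling."""
    feats = [hog(f, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2)) for f in gray_frames]
    return temporal_pyramid(feats)

# Usage sketch: descriptors of training clips -> multi-class SVM.
# clf = SVC(kernel="rbf").fit(np.stack(train_descs), train_labels)
# pred = clf.predict(sequence_descriptor(test_frames)[None, :])
```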

15.
Multi-touch gesture recognition based on Petri nets and BPNN   Total citations: 1 (self-citations: 0, citations by others: 1)
To solve the gesture recognition problem in multi-touch technology, a framework for describing and recognizing multi-touch gestures is proposed, together with its description and recognition methods. Multi-touch gestures are divided into atomic gestures and composite gestures. In the description stage, atomic gestures are modeled with a BP (back-propagation) network; user intentions are then mapped to composite gestures formed by linking atomic gestures through logical, temporal, and spatial relations, and logical, temporal, and spatial relation descriptors are introduced into a Petri net to describe the composite gestures. In the recognition stage, the BP network classifier detects atomic gestures, which trigger transitions in the composite-gesture Petri net model, thereby recognizing the composite gestures. Experimental results show that the method is robust to different users' operating habits and effectively solves the multi-touch gesture recognition problem.
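A minimal place/transition Petri net sketch showing how detected atomic gestures can fire transitions of a composite-gesture model; the paper's logical, temporal, and spatial relation descriptors are omitted, and the example gesture names are invented:

```python
class PetriNet:
    """Minimal place/transition net: a transition consumes one token from every
    input place and produces one in every output place when fired."""

    def __init__(self, marking, transitions):
        # marking: dict place -> token count
        # transitions: dict name -> (input places, output places)
        self.marking = dict(marking)
        self.transitions = transitions

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

# Illustrative composite gesture "pinch then rotate": each atomic gesture
# detected by the classifier fires the matching transition, and the composite
# gesture is recognised once the final place "done" holds a token.
net = PetriNet({"start": 1},
               {"pinch":  (["start"], ["pinched"]),
                "rotate": (["pinched"], ["done"])})
net.fire("pinch")
net.fire("rotate")
assert net.marking.get("done", 0) == 1
```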

16.
Existing gesture segmentation methods use the backward spotting scheme that first detects the end point, then traces back to the start point and sends the extracted gesture segment to the hidden Markov model (HMM) for gesture recognition. This introduces an inevitable time delay between gesture segmentation and recognition and is not appropriate for continuous gesture recognition. To solve this problem, we propose a forward spotting scheme that executes gesture segmentation and recognition simultaneously. The start and end points of gestures are determined by zero crossings from negative to positive (or from positive to negative) of a competitive differential observation probability that is defined as the difference in observation probability between the maximal gesture and the non-gesture. We also propose the sliding window and accumulative HMMs. The former is used to alleviate the effect of incomplete feature extraction on the observation probability, and the latter improves the gesture recognition rate greatly by accepting all accumulated gesture segments between the start and end points and deciding the gesture type by a majority vote of all intermediate recognition results. We use a predetermined association mapping to determine the 3D articulation data, which reduces the feature extraction time greatly. We apply the proposed simultaneous gesture segmentation and recognition method to recognize upper-body gestures for controlling the curtains and lights in a smart home environment. Experimental results show that the proposed method has a good recognition rate of 95.42% for continuously changing gestures.
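A sketch of the forward-spotting rule, assuming per-frame log-likelihoods from the gesture HMMs and the non-gesture HMM have already been computed:

```python
import numpy as np

def spot_gestures(gesture_loglik, nongesture_loglik):
    """Forward spotting via zero crossings of the competitive differential
    observation probability d(t) = max_g loglik_g(t) - loglik_nongesture(t).

    gesture_loglik:    (T, G) per-frame log-likelihoods of the G gesture models
    nongesture_loglik: (T,)   per-frame log-likelihood of the non-gesture model
    Returns a list of (start_frame, end_frame) gesture segments.
    """
    d = gesture_loglik.max(axis=1) - nongesture_loglik
    segments, start = [], None
    for t in range(1, len(d)):
        if d[t - 1] < 0 <= d[t]:                          # negative -> positive
            start = t
        elif d[t - 1] >= 0 > d[t] and start is not None:  # positive -> negative
            segments.append((start, t))
            start = None
    return segments
```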

17.
The U.S. Department of Energy has identified robotics as a major technology to be utilized in its program of environmental restoration and waste management, and in particular has targeted robotic handling of hazardous waste to be an essential element in this program. Successful performance of waste-handling operations will require a robot to perform complex tasks involving both accurate positioning of its end effector and compliant contact between the end effector and the environment, and will demand that these tasks be completed in uncertain surroundings. This article focuses on the development of a robot control system capable of meeting the requirements of hazardous-waste-handling applications and presents as a solution an adaptive scheme for controlling the mechanical impedance of kinematically redundant manipulators. The proposed controller is capable of accurate end effector impedance control and effective redundancy utilization, does not require knowledge of the complex robot dynamic model or parameter values for the robot or the environment, and is implemented without calculation of the robot inverse kinematic transformation. Computer simulation results are given for a 4-degree-of-freedom redundant robot under adaptive impedance control. These results indicate that the proposed controller is capable of successfully performing tasks of importance in robotic waste-handling applications.
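For illustration, a one-degree-of-freedom simulation of a target impedance relation of the form M*e_dd + B*e_d + K*e = f_ext; the gains and the constant contact force are arbitrary, and the adaptive, redundancy-resolving parts of the proposed controller are not reproduced:

```python
import numpy as np

def simulate_impedance(M=1.0, B=20.0, K=100.0, f_ext=5.0, dt=1e-3, T=2.0):
    """Integrate the 1-DOF target impedance  M*e_dd + B*e_d + K*e = f_ext,
    where e is the end-effector position error from the desired trajectory.
    Under a constant contact force the error settles at f_ext / K."""
    steps = int(T / dt)
    e, e_d = 0.0, 0.0
    history = np.empty(steps)
    for i in range(steps):
        e_dd = (f_ext - B * e_d - K * e) / M      # target impedance dynamics
        e_d += e_dd * dt                          # explicit Euler integration
        e += e_d * dt
        history[i] = e
    return history

errors = simulate_impedance()
print(f"steady-state error ~ {errors[-1]:.4f} m (expected {5.0 / 100.0:.4f} m)")
```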

18.
19.
This paper proposes a framework for industrial and collaborative robot programming based on the integration of hand gestures and poses. The framework allows operators to control the robot via both End-Effector (EE) and joint movements and to transfer compound shapes accurately to the robot. Seventeen hand gestures, which cover the position and orientation controls of the robotic EE and other auxiliary operations, are designed according to cognitive psychology. Gestures are classified by a deep neural network, which is pre-trained for two-hand pose estimation and fine-tuned on a custom dataset, achieving a test accuracy of 99%. The index finger’s pointing direction and the hand’s orientation are extracted via 3D hand pose estimation to indicate the robotic EE’s moving direction and orientation, respectively. The number of stretched fingers is detected via two-hand pose estimation to represent decimal digits for selecting robot joints and inputting numbers. Finally, we integrate these three interaction modes seamlessly to form a programming framework. We conducted two interaction experiments. The reaction time of the proposed hand gestures in indicating randomly given instructions is significantly less than that of other gesture sets, such as American Sign Language (ASL). The accuracy of our method in compound shape reconstruction is much better than that of hand movement trajectory-based methods, and the operating time is comparable with that of teach pendants.
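A sketch of the finger-counting and pointing-direction steps, assuming a MediaPipe-style 21-landmark hand layout; the paper's own pose network and gesture classifier are not reproduced:

```python
import numpy as np

# Landmark indices assuming a 21-point hand layout (0 = wrist, then four points
# per finger ending at the tip). This ordering is an assumption for illustration.
FINGERS = {"thumb": (4, 2), "index": (8, 6), "middle": (12, 10),
           "ring": (16, 14), "pinky": (20, 18)}

def count_stretched_fingers(landmarks):
    """landmarks: (21, 3) array of 3D hand keypoints, wrist at index 0.

    A finger counts as stretched when its tip is farther from the wrist than
    its proximal joint by a small margin; the count can encode a decimal digit.
    """
    pts = np.asarray(landmarks, dtype=float)
    wrist = pts[0]
    count = 0
    for tip, joint in FINGERS.values():
        if np.linalg.norm(pts[tip] - wrist) > 1.1 * np.linalg.norm(pts[joint] - wrist):
            count += 1
    return count

def pointing_direction(landmarks):
    """Unit vector from the index finger's base (5) to its tip (8)."""
    pts = np.asarray(landmarks, dtype=float)
    v = pts[8] - pts[5]
    return v / (np.linalg.norm(v) + 1e-9)
```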

20.
Gestures are an efficient means of human–computer interaction and device control, and vision-based gesture recognition is a challenging research topic in human–computer interaction, pattern recognition, and related fields. This paper presents and implements a static gesture detection and recognition system that can be used for interacting with a robot. The system locates the hand by waving (shake) detection, segments the hand with a skin-color model obtained from on-site sampling, tracks the gesture with a simplified and improved CAMSHIFT algorithm, and finally extracts simple features for recognition using pattern-recognition methods. Experiments show that the system is fast, stable, and effective.
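A sketch of the tracking step using OpenCV's standard CamShift with a hue back-projection; the paper uses a simplified and improved CAMSHIFT variant, and the hue histogram of the initial window stands in for its on-site-sampled skin model:

```python
import cv2
import numpy as np

def track_hand(frames_bgr, init_window):
    """Track a hand region with OpenCV's standard CamShift.

    frames_bgr:  iterable of BGR frames.
    init_window: (x, y, w, h) of the hand found by the shake-detection step.
    Returns the rotated rectangle of the tracked hand for each later frame.
    """
    frames = iter(frames_bgr)
    first = next(frames)
    x, y, w, h = init_window
    roi_hsv = cv2.cvtColor(first[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi_hsv], [0], None, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window = init_window
    boxes = []
    for frame in frames:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        rot_box, window = cv2.CamShift(backproj, window, term)
        boxes.append(rot_box)
    return boxes
```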

