Similar Documents
20 similar documents found (search time: 31 ms)
1.
To make robot human-machine interaction more natural, a non-contact interaction system based on multiple sensors is proposed; the system teleoperates a robot by detecting changes in the operator's hand gestures and hand position and orientation. An electromyography (EMG) sensor was developed to acquire surface EMG signals from a pair of antagonistic muscles on the arm, from which some of the operator's hand gestures are inferred; a Kinect motion-sensing device and an inertial measurement unit provide the arm's 3D position and attitude angles. The hand gestures and pose are sent over a network to the robot control system to control the robot. By combining the strengths of multiple sensors, the system greatly reduces the restrictions that traditional contact-based interfaces place on the operator's range of motion and achieves natural interaction; experiments demonstrate its effectiveness.

2.
To achieve posture control of a humanoid robot with 16 degrees of freedom, a Kinect sensor collects coordinate data of human body postures. Based on these coordinates, host software built on the SimpleOpenNI library was developed in Processing to construct a model of the human joints. The space-vector method is used to analyze the humanoid's gait planning and center-of-gravity control algorithm and to resolve the rotation angle of each joint; commands are sent to the humanoid over a wireless WiFi module to drive the servos, realizing control of the robot. A test platform based on the Kinect sensor was built. Tests show that the humanoid's upper limbs move without blind spots within their range of motion and that, with center-of-gravity control, the lower limbs can walk in a simple fashion, matching expectations.
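The space-vector step above resolves each joint's rotation angle from three skeletal points. A minimal sketch of that computation (the function name and example points are illustrative, not from the paper):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c,
    e.g. shoulder-elbow-wrist from a Kinect skeleton."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against round-off pushing |cos| slightly past 1.
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Elbow bent at a right angle:
print(joint_angle([0, 1, 0], [0, 0, 0], [1, 0, 0]))  # 90.0
```

The resulting angle would then be mapped onto a servo command for the corresponding joint.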

3.
A gesture-controlled smart curtain system based on the Microsoft Kinect is designed. It uses the Microsoft Kinect v1 sensor with its software development kit (SDK) and the Microsoft Visual Studio integrated development environment as the core development tools, combined with a stepper-motor control system based on an 80C51 microcontroller. The Kinect recognizes and classifies the user's limb movements, and the classification result is sent over a serial link to the microcontroller, which drives the stepper motor, so that body gestures open and close the curtain. The system classifies gestures accurately, is easy to extend for users with special needs, and is convenient to use; it has clear application value in smart homes and as an assistive tool for people with disabilities.

4.
This paper considers an approach to operator-guided real-time motion control of robot arm manipulators that is based on the use of configuration space (C-space). The goal is to improve operator performance in a complex environment with obstacles. In such tasks, traditional teleoperation techniques, which are all based on control in work space (W-space), suffer from human errors tied to deficiencies in human spatial reasoning. The C-space approach transforms the problem into one humans are much better equipped to handle, moving a point in a maze, and results in a significant improvement in performance: shorter paths, less time to complete the task, and virtually no arm-obstacle collisions. Versions of the approach are described for two-dimensional (2-D) and three-dimensional (3-D) tasks, and tools are developed to efficiently interface the human and machine intelligence. The effectiveness of the C-space approach is demonstrated by a series of experiments, showing an improvement in performance of an order of magnitude in the 2-D case and a factor of two to four in the 3-D case, compared with usual work-space control.
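The "maze" the operator steers through is the C-space obstacle map: each cell of a joint-angle grid is marked free or blocked by a W-space collision check. A minimal sketch for a 2-link planar arm and a circular obstacle (link lengths, obstacle geometry, and sampling density are illustrative assumptions):

```python
import numpy as np

L1, L2 = 1.0, 0.8                         # link lengths (illustrative)
obstacle = (np.array([1.2, 0.6]), 0.3)    # circular obstacle: (center, radius)

def link_hits_circle(p0, p1, center, r, samples=20):
    """Sample points along segment p0-p1 and test their distance to the circle."""
    ts = np.linspace(0.0, 1.0, samples)
    pts = p0[None, :] + ts[:, None] * (p1 - p0)[None, :]
    return bool(np.any(np.linalg.norm(pts - center, axis=1) < r))

def in_collision(t1, t2):
    """W-space collision test for joint angles (t1, t2) of the 2-link arm."""
    elbow = np.array([L1 * np.cos(t1), L1 * np.sin(t1)])
    wrist = elbow + np.array([L2 * np.cos(t1 + t2), L2 * np.sin(t1 + t2)])
    c, r = obstacle
    return (link_hits_circle(np.zeros(2), elbow, c, r)
            or link_hits_circle(elbow, wrist, c, r))

# Rasterise C-space: each cell is free (False) or blocked (True) --
# the point-in-a-maze picture the operator works with.
N = 90
grid = np.array([[in_collision(t1, t2)
                  for t2 in np.linspace(-np.pi, np.pi, N)]
                 for t1 in np.linspace(-np.pi, np.pi, N)], dtype=int)
print(grid.sum(), "of", N * N, "C-space cells blocked")
```

Moving the point through the free cells of `grid` and replaying the corresponding joint angles reproduces a collision-free arm motion.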

5.
To meet the needs of amputees whose residual limb is short or offers few EMG measurement sites, yet who want to use a multi-degree-of-freedom prosthetic hand, a control strategy based on the cooperative processing of electroencephalogram (EEG) and surface electromyogram (sEMG) signals is proposed. The method controls a multi-DOF prosthetic hand with only one EMG sensor and one EEG sensor. A single EEG sensor on the forehead captures EEG, from which eye-blink events are extracted and used to encode prosthetic-hand actions. A single EMG sensor on the arm measures sEMG; to cope with inter-subject and electrode-placement variability, an adaptive method estimates the intensity of the hand action. Vibrotactile coding feeds the prosthetic hand's current control command back to the wearer, so that EEG and sEMG cooperatively control the multi-DOF hand. Experiments were conducted to verify the control strategy, and the results show that it is effective.
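The intensity-estimation step amplitude-tracks the sEMG. As a minimal sketch, a moving RMS envelope normalized by its running maximum adapts the 0-1 intensity scale to each wearer and electrode site; the running-max scheme is an assumed stand-in, since the paper's adaptive method is not specified in the abstract:

```python
import numpy as np

def rms_envelope(emg, win=50):
    """Moving-window RMS of a raw sEMG sample stream."""
    emg = np.asarray(emg, float)
    out = np.empty(len(emg))
    for i in range(len(emg)):
        seg = emg[max(0, i - win + 1): i + 1]
        out[i] = np.sqrt(np.mean(seg ** 2))
    return out

def adaptive_intensity(envelope, eps=1e-9):
    """Normalise by the running maximum so the 0-1 intensity scale
    adapts to the individual wearer and electrode placement."""
    running_max = np.maximum.accumulate(envelope)
    return envelope / (running_max + eps)
```

A constant-amplitude contraction would map to an intensity near 1 once the window fills, regardless of the absolute signal level.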

6.
Humanoid robots need human-like motions and appearance in order to be well accepted by humans. Mimicking is a fast and user-friendly way to teach them human-like motions. However, direct assignment of observed human motions to the robot's joints is not possible because of their physical differences. This paper presents a real-time, inverse-kinematics-based human mimicking system that maps human upper-limb motions to the robot's joints safely and smoothly. It considers both main definitions of motion similarity: between end-effector motions and between angular configurations. A Microsoft Kinect sensor is used for natural perception of human motions. Additional constraints are proposed and solved in the projected null space of the Jacobian matrix. They consider not only the workspace and the valid motion ranges of the robot's joints, to avoid self-collisions, but also the similarity between the end-effector motions and the angular configurations, to bring highly human-like motions to the robot. The performance of the proposed human mimicking system is quantitatively and qualitatively assessed and compared with state-of-the-art methods in a human-robot interaction task using the Nao humanoid robot. The results confirm the applicability and ability of the proposed system to properly mimic various human motions.
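Solving secondary constraints in the projected null space of the Jacobian is a standard redundancy-resolution trick: the primary (end-effector) task uses the pseudoinverse, and anything projected through the null-space operator cannot disturb it. A minimal sketch, with illustrative dimensions rather than the Nao's actual kinematics:

```python
import numpy as np

def task_priority_velocities(J, x_dot, q_dot_secondary):
    """Primary task via the pseudoinverse; the secondary joint-velocity
    preference is projected into the null space of J so it cannot
    disturb the end-effector motion."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ x_dot + N @ q_dot_secondary

# 2 task dimensions, 4 joints: a redundant arm.
rng = np.random.default_rng(0)
J = rng.standard_normal((2, 4))
q_dot = task_priority_velocities(J, np.array([0.1, 0.0]),
                                 rng.standard_normal(4))
# The secondary term leaves the task-space velocity untouched:
print(np.allclose(J @ q_dot, [0.1, 0.0]))  # True
```

In the paper's setting, the secondary term would encode joint-limit avoidance and angular-configuration similarity to the human demonstrator.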

7.
A novel strategy is presented to determine the next-best view for a robot arm, equipped with a depth camera in eye-in-hand configuration, which is oriented to autonomous exploration of unknown objects. Instead of maximizing the total size of the expected unknown volume that becomes visible, the next-best view is chosen to observe the border of incomplete objects. Salient regions of space that belong to the objects are detected, without any prior knowledge, by applying a point cloud segmentation algorithm. The system uses a Kinect V2 sensor, which has not been considered in previous works on next-best view planning, and it exploits KinectFusion to maintain a volumetric representation of the environment. A low-level procedure to reduce Kinect V2 invalid points is also presented. The viability of the approach has been demonstrated in a real setup where the robot is fully autonomous. Experiments indicate that the proposed method enables the robot to actively explore the objects faster than a standard next-best view algorithm.

8.
To enable autonomous grasping by a robot arm in unstructured environments, the system uses a Microsoft Kinect to sense the scene in real time. Applying background subtraction and frame differencing to the Kinect depth data yields the target grasp point. A workspace-based RRT algorithm plans a path for the arm's end-effector, and the gradient projection method optimizes the inverse-kinematics trajectory to solve for the joint trajectories. Moving along these joint angles, the arm completes the grasp. A real-time desktop-clearing experimental system was built to verify the effectiveness of the method.
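The grasp-point detection combines two masks: pixels that differ from a static background and pixels that changed between consecutive frames. A minimal sketch on synthetic depth arrays (thresholds and the centroid-as-grasp-point simplification are illustrative assumptions):

```python
import numpy as np

def moving_foreground(background, prev_frame, frame,
                      bg_thresh=25, diff_thresh=15):
    """Mask of pixels that differ from the static background AND changed
    between consecutive frames (background subtraction AND frame differencing)."""
    bg_mask = np.abs(frame.astype(int) - background.astype(int)) > bg_thresh
    fd_mask = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_thresh
    return bg_mask & fd_mask

def centroid(mask):
    """Pixel centroid of the mask -- a simple stand-in for the grasp point."""
    ys, xs = np.nonzero(mask)
    return (float(ys.mean()), float(xs.mean())) if len(xs) else None

bg = np.zeros((10, 10))
prev = np.zeros((10, 10))
frame = np.zeros((10, 10))
frame[2:4, 2:4] = 200          # a new object appears in the depth image
print(centroid(moving_foreground(bg, prev, frame)))  # (2.5, 2.5)
```

The resulting pixel coordinate, combined with its depth value, would be back-projected into the arm's workspace as the RRT goal.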

9.
In this paper, an application of the Kinect V2 sensor in the robotic field is described. The sensor is used as a vision device for detecting the position, shape, and dimensions of an object in the working space of a robot, in order to plan the end-effector path. The algorithms used for recognizing the contour and spatial position of planar shapes are described, and the technique adopted for the recognition of 3D objects is presented. The first results provided by a prototype gluing robot for the bonding of leather patches and shoe soles are presented. The results confirm the possibility of using the Kinect V2 sensor as an alternative to well-consolidated 3D measuring devices, which are definitely more accurate but also much more expensive.

10.
In human–robot interaction, the robot controller must reactively adapt to sudden changes in the environment (due to unpredictable human behaviour). This often requires operating in different modes and managing sudden signal changes from heterogeneous sensor data. In this paper, we present a multimodal sensor-based controller that enables a robot to adapt to changes in the sensor signals (here, changes in the human collaborator's behaviour). Our controller is based on a unified task formalism and, in contrast with classical hybrid vision–force–position control, enables smooth transitions and weighted combinations of the sensor tasks. The approach is validated in a mock-up industrial scenario, where pose, vision (from both a traditional camera and a Kinect), and force tasks must be realized either exclusively or simultaneously for human–robot collaboration.

11.
Depth-image sensors such as the Kinect are widely used in limb-rehabilitation systems, but a single depth sensor loses reliability when limbs occlude the skeleton joints or when sensor data are erroneous or missing. This paper fuses data from two Kinect depth sensors to eliminate occlusion, data errors, and dropouts, improving the stability and reliability of the data in a rehabilitation system. First, two Kinects capture motion data from the patient's healthy arm. Second, the two data streams undergo temporal alignment, coordinate transformation under the Bursa linear model, and data fusion based on set-membership filtering. Third, the fused healthy-arm motion is "mirrored" into motion commands for the affected arm. Finally, these commands drive a wearable mirror-rehabilitation exoskeleton that moves the affected arm through rehabilitation exercises prompted by a 3D animation, giving the patient active, controllable rehabilitation. Joint experiments with Kinect and a VICON system, together with control experiments on a 7-DOF robot arm, verify the effectiveness of the data-fusion method and show that two Kinects can effectively solve the problems above.
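Once the two Kinect streams are time-aligned and expressed in a common frame, each joint position can be fused by weighting the two measurements by their uncertainty. As a simplified stand-in for the paper's set-membership filter, an information-form (inverse-covariance-weighted) fusion illustrates the idea; inflating one sensor's covariance models an occlusion on that side:

```python
import numpy as np

def fuse(z1, P1, z2, P2):
    """Information-form fusion of two joint-position estimates z1, z2
    with covariances P1, P2. The less certain sensor contributes less."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)
    return P @ (I1 @ z1 + I2 @ z2), P

z1 = np.array([0.0, 0.0, 0.0])          # joint seen by Kinect A
z2 = np.array([1.0, 0.0, 0.0])          # same joint seen by Kinect B
z, P = fuse(z1, np.eye(3), z2, np.eye(3))
print(z)                                 # equal trust: midpoint [0.5 0. 0.]
z_occ, _ = fuse(z1, np.eye(3), z2, 1e6 * np.eye(3))
print(z_occ)                             # B occluded: result stays near z1
```

In the rehabilitation pipeline, the fused healthy-arm trajectory is then mirrored to drive the exoskeleton on the affected side.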

12.
A robot-assisted ultrasound scanning system that uses a Kinect sensor for visual servoing is proposed to plan and guide the robot's scan path, enabling robot-assisted ultrasound scanning. The system consists of a Kinect sensor, a robot, and an ultrasound probe. The Kinect acquires RGB and depth images of the probe in real time and computes the probe's current pose; combined with the coordinate-frame registration result, this yields the robot's pose, which, together with the preoperative robot trajectory plan, guides the robot's scan path. A leg-phantom experiment verified the system's feasibility: camera calibration of the Kinect gave the intrinsic and extrinsic parameters of the RGB and depth cameras; localizing the markers on the probe gave the probe's current pose; registration between the Kinect and robot coordinate frames gave their transformation matrix, from which position commands guided the arm to carry the probe to the designated scan positions. While the robot holds the probe and scans, the probe-to-leg distance is computed in real time to ensure the quality of the acquired ultrasound images and the safety of the scan. The results show that, guided by the Kinect vision system, the robot can hold the ultrasound probe and scan autonomously, reducing sonographers' scanning time and workload.

13.
We present a robust target tracking algorithm for a mobile robot. It is assumed that the mobile robot carries a sensor with a fan-shaped field of view and finite sensing range. The goal of the proposed tracking algorithm is to minimize the probability of losing the target. If the distribution of the next position of a moving target is available as a Gaussian distribution from a motion prediction algorithm, the proposed algorithm can guarantee the tracking success probability. In addition, the proposed method minimizes the moving distance of the mobile robot based on the chosen bound on the tracking success probability. While the considered problem is a non-convex optimization problem, we derive a closed-form solution when the heading is fixed and develop a real-time algorithm for solving the considered target tracking problem. We also present a robust target tracking algorithm for aerial robots in 3D. The performance of the proposed method is evaluated extensively in simulation. The proposed algorithm has been successfully applied in field experiments using a Pioneer mobile robot with a Microsoft Kinect sensor to follow a pedestrian.
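The quantity being bounded is the probability that the Gaussian-predicted target position falls inside the fan-shaped field of view. A Monte Carlo sketch of that probability (the fan geometry and numbers are illustrative, not the paper's closed-form solution):

```python
import numpy as np

def tracking_success_prob(mean, cov, robot_pos, heading,
                          fov=np.pi / 3, r_max=4.0, n=100_000):
    """Monte Carlo estimate that a Gaussian-predicted 2D target position
    lies inside a fan-shaped FOV (half-angle fov/2, range r_max)."""
    rng = np.random.default_rng(1)
    pts = rng.multivariate_normal(mean, cov, size=n) - robot_pos
    r = np.linalg.norm(pts, axis=1)
    bearing = np.arctan2(pts[:, 1], pts[:, 0]) - heading
    bearing = (bearing + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi]
    return float(np.mean((r <= r_max) & (np.abs(bearing) <= fov / 2)))

# Target predicted 2 m straight ahead with modest uncertainty:
p = tracking_success_prob(mean=np.array([2.0, 0.0]),
                          cov=0.1 * np.eye(2),
                          robot_pos=np.zeros(2), heading=0.0)
print(round(p, 3))
```

The planner would then choose the robot pose that keeps this probability above the chosen bound while minimizing travel distance.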

14.
To address the latency of gesture-based human-robot interaction on the Kinect platform, a fast human-action recognition method based on time-series similarity is proposed. The Kinect provides 20 skeletal joints for the whole body; the 3D spatial coordinates of key joints are extracted and converted into a feature vector, and this feature-vector model represents whole-body actions well. For recognition, a Fast Dynamic Time Warping (F-DTW) algorithm is proposed that resolves the temporal misalignment of two sequences performed at different speeds; a lower-bound function and an early-termination technique accelerate the algorithm and reduce recognition latency, so the robot can be controlled quickly. Twenty actions were defined for recognition; the average recognition speed improves greatly over the traditional algorithm, verifying the method's effectiveness and meeting the requirements of robot interaction.
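The early-termination idea can be sketched as a row-wise DTW that abandons a candidate as soon as every cell in the current row already exceeds the best distance found so far, since costs only grow along a warping path. A minimal 1-D sketch (the full method operates on joint-coordinate feature vectors and adds a lower-bound prefilter):

```python
import numpy as np

def dtw_early_abandon(a, b, threshold=np.inf):
    """DTW distance between 1-D sequences; abandons early (returns inf)
    when every entry of the current row already exceeds `threshold`,
    e.g. the distance of the best template matched so far."""
    n, m = len(a), len(b)
    prev = np.full(m + 1, np.inf)
    prev[0] = 0.0
    for i in range(1, n + 1):
        cur = np.full(m + 1, np.inf)
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            cur[j] = cost + min(prev[j], cur[j - 1], prev[j - 1])
        if cur.min() > threshold:   # no path can get cheaper: abandon
            return np.inf
        prev = cur
    return prev[m]

print(dtw_early_abandon([1, 2, 3], [1, 2, 2, 3]))       # speed change: 0.0
print(dtw_early_abandon([0] * 5, [10] * 5, threshold=1.0))  # abandoned: inf
```

Matching a query against many action templates, the running best distance serves as `threshold`, so hopeless templates are discarded after a few rows.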

15.
Making a robot's motion trajectory imitate the human arm is an effective way to improve robot safety and human-robot interaction, especially when the robot's grasping path is not unique; human-like behavior makes the human-robot system feel more natural. Previous work typically planned human-like trajectories with devices such as the Kinect and learning algorithms such as artificial neural networks and k-nearest neighbors, but could not obtain optimal trajectories that had never been sampled. This paper models human-like motion trajectories with a preference model based on CP-nets and applies the model to robot control, obtaining optimal human-like trajectories even without sampling. Experimental results show that CP-nets-based human-like trajectory planning is efficient and comfortable and matches human motion characteristics.

16.
17.
In this paper, a novel framework that enables humanoid robots to learn new skills from demonstration is proposed. The framework uses a real-time human motion imitation module as a demonstration interface, providing the desired motion to the learning module in an efficient and user-friendly way. This interface overcomes many problems of currently used interfaces such as direct motion recording, kinesthetic teaching, and immersive teleoperation. The method gives the human demonstrator the ability to control almost all body parts of the humanoid robot in real time (including hand shape and orientation, which are essential for object grasping). The humanoid robot is controlled remotely, without any sophisticated haptic devices, depending only on an inexpensive Kinect sensor and two additional force sensors. To the best of our knowledge, this is the first time a Kinect sensor has been used to estimate hand shape and orientation for object grasping within the field of real-time human motion imitation. The observed motions are then projected onto a latent space using a Gaussian process latent variable model to extract the relevant features. These features are used to train regression models through the variational heteroscedastic Gaussian process regression algorithm, which has proved to be a very accurate and very fast regression algorithm. The proposed framework is validated on activities involving both the upper and lower body, as well as object grasping.

18.
Advanced Robotics, 2013, 27(10): 1057-1072
It is an easy task for the human visual system to gaze continuously at an object moving in three-dimensional (3-D) space. While tracking the object, human vision seems able to comprehend its 3-D shape with binocular vision. We conjecture that, in the human visual system, the function of comprehending the 3-D shape is essential for robust tracking of a moving object. To examine this conjecture, we constructed an experimental binocular vision system for motion tracking. The system is composed of a pair of active pan-tilt cameras and a robot arm: the cameras simulate the two eyes of a human, while the robot arm simulates the motion of the human body below the neck. The two active cameras are controlled so as to fix their gaze at a particular point on an object surface. The shape of the object surface around that point is reconstructed in real time from the two images taken by the cameras, based on the differences in image brightness. If the two cameras successfully gaze at a single point on the object surface, the local object shape can be reconstructed in real time. At the same time, the reconstructed shape is used to keep a fixation point on the object surface for gazing, which enables robust tracking of the object. Thus these two processes, reconstruction of the 3-D shape and maintenance of the fixation point, are mutually connected and form one closed loop. We demonstrate the effectiveness of this framework for visual tracking through several experiments.

19.
单禹皓  陈通  温万惠  刘光远 《计算机科学》2015,42(10):43-44, 75
An improved non-contact respiration measurement method based on the Kinect motion-sensing camera is designed. The method is suited to remote measurement of the respiration signal in a seated posture. In a controlled experiment, the remotely measured respiration rate was close to that obtained by contact measurement (relative error 0.4%), and the remotely measured respiration signal could display different breathing patterns.
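Remote respiration measurement of this kind typically extracts the breathing rate from the periodic chest-depth signal. A minimal sketch on a synthetic signal, assuming a 30 fps depth stream and spectral-peak rate estimation (the paper's exact processing is not specified in the abstract):

```python
import numpy as np

def breathing_rate_bpm(depth_signal, fs):
    """Dominant frequency of a chest-depth time series, in breaths/min,
    searched within a plausible respiration band."""
    x = np.asarray(depth_signal, float)
    x = x - x.mean()                           # remove static torso distance
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)     # 6-42 breaths/min
    return 60.0 * freqs[band][np.argmax(spec[band])]

fs, dur = 30.0, 60.0                           # 30 fps depth stream, 60 s window
t = np.arange(0.0, dur, 1.0 / fs)
sig = 800 + 5 * np.sin(2 * np.pi * 0.25 * t)   # chest at ~800 mm, 0.25 Hz breathing
print(breathing_rate_bpm(sig, fs))             # 15.0
```

Different breathing patterns (slow, fast, held breath) would show up directly as changes in the waveform and its spectral peak.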

20.
This paper presents a robot teaching system based on hand-robot contact-state detection and human motion intent recognition. The system detects the contact state between the hand and the robot and extracts motion-intention information from human surface electromyography (sEMG) signals to control the robot's motion. First, a hand-robot contact-state detection method is proposed based on fusing the virtual robot environment with the physical environment. Using a target detection algorithm, the position of the human hand in the color image of the physical environment is identified and its pixel coordinates are calculated. Meanwhile, synthetic images of the virtual robot environment are combined with those of the physical robot scene to determine whether the human hand is in contact with the robot. In addition, a human motion intention recognition model based on deep learning is designed to recognize human motion intention from sEMG input. Moreover, a robot motion-mode selection module is built to control the robot in single-axis, linear, or repositioning motion by combining the hand-robot contact state with the human motion intention. The experimental results indicate that the proposed system can perform online robot teaching in the three motion modes.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)
