Similar Articles
20 similar articles found
1.
This paper discusses cooperative control of a dual-flexible-arm robot handling a rigid object in three-dimensional space. The proposed control scheme integrates hybrid position/force control and vibration suppression control. To derive the control scheme, the kinematics and dynamics of the robot when it forms a closed kinematic chain are discussed. The kinematics is described using workspace force, velocity, and position vectors, and the hybrid position/force control is extended from that of dual-rigid-arm robots. The dynamics is derived from the constraint conditions and a lumped-mass-spring model of the flexible arms and the object. The vibration suppression control is computed from the deflections of the flexible links and the dynamics. Experiments on cooperative control were performed: the absolute positions/orientations and internal forces/moments were controlled using a robot whose arms each have two flexible links, seven joints, and a force/torque sensor. The results show that the robot handled the rigid object in three-dimensional space while successfully damping link vibration.
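The hybrid position/force split described in this abstract can be illustrated with a diagonal selection matrix that assigns each workspace axis to either the position loop or the force loop. This is a generic sketch under assumed gains and axis assignments, not the paper's actual controller:

```python
import numpy as np

def hybrid_command(x, x_des, f, f_des, sel, kp=1.0, kf=0.5):
    """Workspace velocity command combining position and force loops.

    sel[i] = 1 makes axis i position-controlled; sel[i] = 0 makes it
    force-controlled (an admittance-like force error feedback).
    """
    S = np.diag(sel)                 # selection matrix for position axes
    I = np.eye(S.shape[0])
    v_pos = kp * (np.asarray(x_des) - np.asarray(x))
    v_force = kf * (np.asarray(f_des) - np.asarray(f))
    return S @ v_pos + (I - S) @ v_force

# Example: x and y position-controlled, z force-controlled
v = hybrid_command(x=np.zeros(3), x_des=np.array([0.1, 0.0, 0.0]),
                   f=np.array([0.0, 0.0, 2.0]), f_des=np.array([0.0, 0.0, 5.0]),
                   sel=[1, 1, 0])
```

Here the x and y axes track position while the z axis regulates contact force, the usual arrangement for constrained manipulation.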

2.
In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot that simultaneously observes its hand and the object to be manipulated, using an external camera (i.e., the robot head). Task-oriented grasping algorithms (Proc of IEEE Int Conf on Robotics and Automation, pp 1794–1799, 2007) are used to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach (Int Conf on Advanced Robotics, 2007), based on external control, is used first to guide the robot hand towards the grasp position and then to perform the task while taking external forces into account. The coupling of these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with respect to the manipulated object, independently of camera position. This allows the camera to move freely while the task is being executed and makes the approach amenable to integration in current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.

3.
A Survey of Robot Visual Servoing
This paper systematically reviews the history and current state of robot visual servoing. Robot visual control systems are classified from several perspectives, with emphasis on position-based and image-based visual servoing systems. Applications of artificial neural networks to robot visual servoing are introduced, and the selection of image features in visual servoing is discussed. Frontier issues in robot vision are described, and open problems in current research as well as future development directions are pointed out.
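For reference, the image-based class of systems surveyed here is typically built around the control law v = -λL⁺e, where L is the interaction matrix of the chosen image features. A minimal point-feature sketch, with an illustrative gain and assumed known depths:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical interaction matrix of a point (x, y) in normalized image
    coordinates at depth Z, relating feature velocity to the camera twist."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) from feature errors."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

When the current and desired features coincide, the error and hence the commanded velocity vanish, which is the fixed point the law regulates to.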

4.
The main tasks of a home service robot in an intelligent space are to help people search for, locate, and deliver objects, and visual servoing is an effective means of accomplishing them. A home service robot visual servoing system consisting of a mobile robot, a manipulator, and a camera was built; its kinematic model was established, and the intrinsic and extrinsic parameters of the vision system mounted on the manipulator's end-effector were calibrated. The pose of the target object is obtained by decomposing the homography of a world plane, and a position-based visual servoing control law is designed from the recovered pose. Experimental results show that designing the control law via planar homography decomposition accomplishes household-object visual servoing tasks simply and effectively.
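The position-based control law mentioned above drives a pose error, such as one recovered by decomposing a planar homography, to zero. A minimal sketch of such a PBVS law; the gain and the axis-angle error parameterization are common choices, not necessarily the authors':

```python
import numpy as np

def rotation_log(R):
    """Axis-angle vector theta*u from a rotation matrix (theta < pi)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def pbvs_command(t, R, lam=1.0):
    """Translational and rotational velocities driving the pose error to zero,
    given the translation t and rotation R from current to desired pose."""
    return -lam * np.asarray(t), -lam * rotation_log(R)

# Example: pure 0.2 m offset along the optical axis, no rotation error
v, w = pbvs_command(np.array([0.0, 0.0, 0.2]), np.eye(3))
```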

5.
In this study, we develop flexible joints for a humanoid robot that walks on an oscillating plane and discuss their effectiveness in compensating for disturbances. Conventional robots have a rigid frame and are composed of rigid joints driven by geared motors. Disturbances, which may be caused by external forces from other robots, obstacles, or vibration and oscillation of the surface upon which the robot is walking, are therefore transmitted directly to the robot body, causing the robot to fall. To address this problem, we focus on a flexible mechanism: we develop flexible joints and incorporate them in the waist of a humanoid robot, whose experimental task is to walk on a horizontally oscillating plane until it reaches a desired position. The robot with the proposed flexible joints reached the goal position even though the controller was the same as that used for a conventional robot walking on a static plane. From these results, we conclude that the proposed mechanism is effective for humanoid robots walking on an oscillating plane.

6.
For constrained mobile-robot visual servoing systems, a quasi-min-max model predictive control strategy for visual servoing stabilization is proposed. Based on the visual servoing stabilization error model, a linear parameter-varying prediction model of the mobile-robot visual servoing system is established; the quasi-min-max strategy is then introduced to design the model predictive controller. Compared with traditional visual servoing predictive controllers, the proposed controller only requires solving a convex optimization problem expressed as linear matrix inequalities, which reduces the computation time of the predictive controller while guaranteeing asymptotic stability of the closed-loop visual servoing system. Simulation results verify the effectiveness of the proposed strategy and its superior computational efficiency.

7.
Image-based visual servoing is a flexible and robust technique for controlling a robot and guiding it to a desired position using only two-dimensional visual data. However, it is well known that classical visual servoing based on the Cartesian coordinate system has one crucial problem: the camera moves backward to infinity when the camera motion from the initial to the desired pose is a pure rotation of 180° around the optical axis. This paper proposes a new formulation of visual servoing based on a cylindrical coordinate system whose origin can be shifted. The proposed approach can map a pure rotation around an arbitrary axis to an appropriate camera rotational motion. It is shown that this formulation contains the classical Cartesian formulation as an extreme case in which the origin is located at infinity. Furthermore, we propose a method for deciding the origin-shift parameters by estimating the rotational motion from the differences between the initial and desired image-plane positions of feature points.
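The cylindrical reformulation can be illustrated by converting normalized image coordinates into (ρ, φ) about a possibly shifted origin; the origin-shift below is a hypothetical parameter, not the paper's decision method:

```python
import numpy as np

def to_cylindrical(x, y, origin_shift=(0.0, 0.0)):
    """Map a normalized image point to cylindrical feature coordinates
    (rho, phi) about a shifted origin."""
    xs, ys = x - origin_shift[0], y - origin_shift[1]
    return np.hypot(xs, ys), np.arctan2(ys, xs)

# A pure 180-degree rotation about the optical axis maps (x, y) to (-x, -y):
# rho is unchanged and only phi moves, so the cylindrical error does not
# command the backward camera translation of the Cartesian formulation.
r0, p0 = to_cylindrical(0.1, 0.0)
r1, p1 = to_cylindrical(-0.1, 0.0)
```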

8.
Intelligent robots play an important role in serving major national needs, driving national economic development, and safeguarding national defense, and have been hailed as "the jewel at the top of the manufacturing crown." With the arrival of a new industrial revolution, the world's major industrial nations have begun to accelerate their strategic deployment of robotics. As a key vehicle of intelligent manufacturing, intelligent robots will play an important role in implementing manufacturing-power strategies and making manufacturing more high-end, intelligent, and green. From the perspective of key perception and control technologies, this paper reviews the domestic and international state of the art in common key technologies such as robotic 3D environment perception, point cloud registration, pose estimation, task planning, multi-robot collaboration, compliant control, and visual servoing. Taking robotic intelligent manufacturing systems such as robotic 3D measurement of complex curved surfaces, robotic grinding of complex parts, and force-controlled robotic assembly as examples, it then describes key application technologies and introduces typical cases, including an intelligent unmanned factory for construction machinery and a sterile robotic pharmaceutical production line. Finally, development trends and challenges of intelligent manufacturing robots are discussed.

9.
Research on Manipulation of Moving Targets by Visual Servoing Robots
田梦倩  罗翔  黄惟一 《机器人》2003,25(6):548-553
Robot visual servoing is an important research direction in robotics, and its study is of great significance for developing hand-eye coordinated robots for applications such as industrial production and aerospace. Focusing on the problem of visual servoing robots manipulating moving targets, this paper analyzes the control structures used to build such systems and points out their characteristics. It then elaborates on the research methods and status of three constituent stages: visual image processing, prediction and filtering, and the visual controller. Finally, future research trends are analyzed.

10.
It is known that most of the key problems in visual servo control of robots are related to analyzing the performance of the system under measurement and modeling errors. In this paper, the development and performance evaluation of a novel intelligent visual servo controller for a robot manipulator using neural network Reinforcement Learning are presented. By incorporating machine learning techniques into the vision-based control scheme, the robot is enabled to improve its performance online and to adapt to changing conditions in the environment. Two different temporal difference algorithms (Q-learning and SARSA) coupled with neural networks are developed and tested through different visual control scenarios. A database of representative learning samples is employed to speed up the convergence of the neural network and real-time learning of robot behavior. Moreover, the visual servoing task is divided into two steps to ensure the visibility of the features: in the first step, a centering behavior of the robot is conducted using the neural network Reinforcement Learning controller, while the second step switches control between traditional Image-Based Visual Servoing and the neural network Reinforcement Learning to enable an approaching behavior of the manipulator. The correction in robot motion is achieved by defining areas of interest for the image features independently in both control steps. Various simulations are developed to demonstrate the robustness of the system with respect to calibration error, modeling error, and image noise. In addition, a comparison with traditional Image-Based Visual Servoing is presented. Real-world experiments on a robot manipulator with a low-cost vision system demonstrate the effectiveness of the proposed approach.
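The two temporal-difference updates named above differ only in their bootstrap term: Q-learning bootstraps off-policy on the greedy next action, while SARSA bootstraps on-policy on the action actually taken. A minimal tabular sketch; a plain dict stands in for the paper's neural network, and states, actions, and parameters are illustrative:

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    # Off-policy: bootstrap on the greedy action in the next state.
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    q = Q.get((s, a), 0.0)
    Q[(s, a)] = q + alpha * (r + gamma * best_next - q)

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    # On-policy: bootstrap on the action actually taken next.
    q = Q.get((s, a), 0.0)
    Q[(s, a)] = q + alpha * (r + gamma * Q.get((s_next, a_next), 0.0) - q)

ACTIONS = ["center", "approach"]   # hypothetical discrete actions
Q = {}
q_learning_update(Q, "far", "center", 1.0, "near", ACTIONS)
sarsa_update(Q, "far", "center", 1.0, "near", "center")
```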

11.
Recently, visual servoing has been widely employed in industrial robots and has become an invaluable asset for enhancing the functionality of the robot. However, the issue of image feature command generation in a visual servoing task has received little attention. In a contour following task that adopts Image-Based Visual Servoing (IBVS), it is crucial to perform motion planning on the desired image trajectory. Without proper motion planning, not only may the discrepancy between the target position and the current position on the image plane fail to converge, but the flexibility of exploiting visual servoing for applications such as contour following will also be limited. To cope with this problem, this paper proposes a PH-spline-based motion planning approach for systems that adopt IBVS. In particular, the exterior contour of an object is represented by a PH quintic spline. With proper acceleration/deceleration motion planning, a PH quintic spline interpolator is constructed to generate desired image feature commands so that IBVS can handle contour following of an object without a known geometric model. Furthermore, this paper also develops a depth estimation algorithm for the eye-to-hand camera structure, providing a convenient way to estimate the depth value that is essential in computing the image Jacobian. Experimental results of several contour following tasks verify the effectiveness of the proposed approach.
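The acceleration/deceleration planning step can be sketched as a trapezoidal velocity profile over the spline's path parameter; the PH quintic spline itself is omitted here, and any parametric curve could be sampled at the returned parameter values:

```python
def trapezoidal_profile(total_len, v_max, acc, dt):
    """Yield path-parameter samples s(t) of a trapezoidal velocity profile:
    accelerate at acc, cruise at v_max, decelerate at acc."""
    t_acc = v_max / acc
    d_acc = 0.5 * acc * t_acc ** 2
    if 2 * d_acc > total_len:              # path too short: triangular profile
        t_acc = (total_len / acc) ** 0.5
        v_max = acc * t_acc
        d_acc = 0.5 * acc * t_acc ** 2
    t_cruise = (total_len - 2 * d_acc) / v_max
    t_total = 2 * t_acc + t_cruise
    t, samples = 0.0, []
    while t <= t_total:
        if t < t_acc:                      # acceleration phase
            s = 0.5 * acc * t * t
        elif t < t_acc + t_cruise:         # cruise phase
            s = d_acc + v_max * (t - t_acc)
        else:                              # deceleration phase
            td = t_total - t
            s = total_len - 0.5 * acc * td * td
        samples.append(s)
        t += dt
    return samples

# Illustrative values: 10-unit contour, v_max = 2, acc = 1, sampled at 0.5 s
s_cmd = trapezoidal_profile(10.0, 2.0, 1.0, 0.5)
```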

12.
This paper addresses the problem of integrating the human operator with autonomous robotic visual tracking and servoing modules. A CCD camera is mounted on the end-effector of a robot, and the task is to servo around a static or moving rigid target. In manual control mode, the human operator, with the help of a joystick and a monitor, commands robot motions in order to compensate for tracking errors. In shared control mode, the human operator and the autonomous visual tracking modules command motion along orthogonal sets of degrees of freedom. In autonomous control mode, the autonomous visual tracking modules are in full control of the servoing functions. Finally, in traded control mode, control can be transferred from the autonomous visual modules to the human operator and vice versa. This paper presents an experimental setup where all these different schemes have been tested. Experimental results for all modes of operation are presented and the related issues are discussed. In certain degrees of freedom (DOF), the autonomous modules perform better than the human operator. On the other hand, the human operator can compensate quickly for tracking failures where the autonomous modules fail; their failure is due to difficulties in encoding an efficient contingency plan.

13.
This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that a pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers interact only with their neighbours. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and using the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the mobile robots is assumed to be a directed graph. A rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.
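Finite-time control laws of this kind typically feed back a fractional power of the error, u = -k·sig(e)^α with 0 < α < 1, which yields convergence in finite rather than merely asymptotic time. A scalar toy simulation, not the paper's distributed multi-robot law, illustrates the idea:

```python
import numpy as np

def sig(x, alpha):
    """Signed fractional power: |x|^alpha with the sign of x."""
    return np.sign(x) * np.abs(x) ** alpha

def settle_time(e0, k=1.0, alpha=0.5, dt=1e-3, t_max=10.0):
    """Integrate de/dt = -k * sig(e)^alpha with forward Euler and return
    the time at which |e| first drops below a small threshold."""
    e, t = e0, 0.0
    while abs(e) > 1e-6 and t < t_max:
        e -= k * sig(e, alpha) * dt
        t += dt
    return t

t_hit = settle_time(1.0)
```

For de/dt = -k·sig(e)^α the theoretical settling time is |e0|^(1-α)/(k(1-α)), i.e. 2 s for the values above, which the simulation closely reproduces.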

14.
刘欢  魏立峰  王健 《微计算机信息》2007,23(11):278-279
Robot visual servoing is an important research direction in robotics, and its study is significant for developing hand-eye coordinated robots for applications such as industrial production and aerospace. Addressing the characteristics of robot visual servoing systems, this paper classifies and compares typical calibration algorithms and concludes with a summary of calibration methods.

15.
This paper describes a new method for performing automatic tasks with a robot in an unstructured environment. A task of replacing a blown light bulb in a streetlamp is described to show that the method works properly. To perform this task correctly, the robot is positioned by tracking secure, previously defined paths. The robot, using an eye-in-hand configuration in a visual servoing scheme together with a force sensor, is able to interact with its environment because the path tracking is performed with time-independent behaviour. The desired path is expressed in the image space; however, the proposed method obtains correct tracking not only in the image but also in 3D space. This method solves the problems of previously proposed time-independent tracking systems based on visual servoing, providing specification of the desired tracking velocity, less oscillatory behaviour, and correct tracking in 3D space when high velocities are used. The experiments shown in this paper demonstrate the necessity of time-independent behaviour in tracking and the correct performance of the system.

16.
In this paper, the dynamic load carrying capacity (DLCC) of a cable robot equipped with a closed-loop control system based on feedback linearization is calculated for both rigid and flexible-joint systems. This parameter is the most important characteristic of a cable robot, since the main application of this kind of robot is its high load carrying capacity. First, the dynamic equations required for the control approach are presented, and the control approach is then derived based on the feedback linearization method, which is well suited to nonlinear dynamic systems like robots. This method provides good accuracy and also satisfies Lyapunov stability, since any desired pole placement can be achieved with suitable controller gains. The flexible-joint cable robot is also analyzed, and its stability is ensured by implementing robust control in the designed control system. The DLCC of the robot is calculated considering motor torque and accuracy constraints. Finally, a simulation study is performed for two samples of rigid cable robot: a planar fully constrained case with three cables and 2 degrees of freedom, and a spatial unconstrained case with six cables and 6 degrees of freedom. The simulation studies continue with the same spatial robot but with flexible-joint characteristics. Not only the DLCC of these robots but also the required motor torques and desired angular velocities of the motors are calculated in closed loop for a predefined trajectory. The effectiveness of the designed controller is shown through simulation results and a comparison between the rigid and flexible systems.
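Feedback linearization for a rigid robot is commonly realized as the computed-torque law τ = M(q)(q̈_d + K_d ė + K_p e) + C(q, q̇)q̇ + g(q), which cancels the nonlinear dynamics and leaves linear error dynamics whose poles are set by the gains. The 2-DOF dynamics terms below are placeholders, not the cable robot's actual model:

```python
import numpy as np

def computed_torque(M, C, g, q, dq, q_des, dq_des, qdd_des, Kp, Kd):
    """Computed-torque (feedback-linearization) control law."""
    e, de = q_des - q, dq_des - dq
    return M @ (qdd_des + Kd * de + Kp * e) + C @ dq + g

# Placeholder 2-DOF dynamics: diagonal inertia, no Coriolis, gravity on joint 2
M = np.diag([2.0, 1.0])
C = np.zeros((2, 2))
g = np.array([0.0, 9.81])
tau = computed_torque(M, C, g,
                      q=np.zeros(2), dq=np.zeros(2),
                      q_des=np.array([0.1, 0.0]), dq_des=np.zeros(2),
                      qdd_des=np.zeros(2), Kp=100.0, Kd=20.0)
```

With Kp = 100 and Kd = 20 the closed-loop error dynamics ë + 20ė + 100e = 0 are critically damped, illustrating the pole-placement freedom the abstract refers to.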

17.
2D visual servoing consists in using data provided by a vision sensor to control the motions of a dynamic system. Most visual servoing approaches have relied on geometric features that must be tracked and matched in the image acquired by the camera. Recent works have highlighted the interest of taking into account the photometric information of the entire image. This approach was originally developed for images from perspective cameras. In this paper, we propose to extend the technique to central cameras. This generalization allows this kind of method to be applied to catadioptric cameras and wide-field-of-view cameras. Several experiments have been carried out successfully with a fisheye camera to control a 6-degree-of-freedom robot, and with a catadioptric camera for a mobile robot navigation task.

18.
The robustness of a visual servoing task depends mainly on the efficiency of the visual selections captured from a sensor at each robot position. A task function can be described as the regulation of the values sent via the control law to the camera velocities. In this paper, we propose a new approach that does not depend on matching and tracking results. We replace the classical minimization cost with a new function based on probability distributions and the Bhattacharyya distance. To guarantee more robustness, the information related to the observed images is expressed using a combination of orientation selections. The new visual selections are computed from the disposition of Histogram of Oriented Gradients (HOG) bins; to each bin we assign a random variable representing gradient vectors in a particular direction. The new entries are not used to establish equations of visual motion but are directly inserted into the control loop. A new formulation of the interaction matrix is presented, based on the optical flow constraint and an interpolation function, which leads to more efficient control behaviour and greater positioning accuracy. Experiments demonstrate the robustness of the proposed approach with respect to varying workspace conditions.
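The two ingredients named in this abstract can be sketched directly: a gradient-orientation histogram (a crude stand-in for HOG bins) and the Bhattacharyya distance between two normalized histograms, here in the sqrt(1 - BC) variant used, for example, by OpenCV's compareHist:

```python
import numpy as np

def orientation_histogram(gx, gy, bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientations,
    normalized to sum to one (a simplified stand-in for HOG bins)."""
    ang = np.arctan2(gy, gx) % np.pi            # fold into [0, pi)
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist

def bhattacharyya_distance(p, q):
    """sqrt(1 - BC) where BC is the Bhattacharyya coefficient."""
    bc = np.sum(np.sqrt(p * q))
    return np.sqrt(max(0.0, 1.0 - bc))

# Two toy gradient vectors: one horizontal, one vertical
h = orientation_histogram(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
d_same = bhattacharyya_distance(h, h)           # identical histograms
```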

19.
This work presents an automated solution for tool changing in industrial robots using visual servoing and sliding mode control. The robustness of the proposed method is due to the control law of the visual servoing, which uses the information acquired by a vision system to close a feedback control loop. Furthermore, sliding mode control is simultaneously used at a prioritised level to satisfy the constraints typically present in a robot system: joint range limits, maximum joint speeds, and the allowed workspace. Thus, the global control accurately places the tool in the warehouse while satisfying the robot constraints. The feasibility and effectiveness of the proposed approach are substantiated by simulation results for a complex 3D case study. Moreover, real experiments with a 6R industrial manipulator are also presented to demonstrate the applicability of the method to tool changing.
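A toy first-order sliding-mode term for one of the constraints listed above (a joint range limit) might look as follows; the sliding surface, margin, and gain are illustrative assumptions, not the paper's prioritised controller:

```python
import numpy as np

def smc_limit_avoidance(q, q_max, margin=0.05, k=1.0):
    """Discontinuous sliding-mode correction that activates when the joint
    enters a margin below its upper limit and pushes it back inside."""
    s = q - (q_max - margin)      # sliding surface near the limit
    return -k * np.sign(s) if s > 0 else 0.0

u_inside = smc_limit_avoidance(0.5, 1.0)    # well inside: no correction
u_near = smc_limit_avoidance(0.98, 1.0)     # inside the margin: push back
```

The sign-function switching is what gives sliding mode control its robustness to bounded disturbances, at the cost of chattering that practical implementations mitigate with boundary layers or higher-order variants.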

20.
A new uncalibrated eye-to-hand visual servoing approach based on inverse fuzzy modeling is proposed in this paper. In classical visual servoing, the Jacobian plays a decisive role in the convergence of the controller, as its analytical model depends on the selected image features; this Jacobian must also be inverted online. Fuzzy modeling is applied to obtain an inverse model of the mapping between image feature variations and joint velocities. This approach is independent of the robot's kinematic model and camera calibration and also avoids the need to invert the Jacobian online. An inverse model is identified for the robot workspace using measurement data from a robotic manipulator and is used directly as a controller. The inverse fuzzy control scheme is applied to a robotic manipulator performing visual servoing for random positioning in the workspace. The experimental results show the effectiveness of the proposed control scheme: the fuzzy controller can position the manipulator at any point in the workspace with better accuracy than the classical visual servoing approach.
