Similar Documents
20 similar documents found (search took 15 ms)
1.
We consider a robotic setting and a class of control tasks that rely on partial visual information. These tasks are difficult in the sense that at every given moment, the available information is insufficient for the control task. This implies that the image Jacobian, which relates the image space and the control space, is no longer of full rank. However, the amount of information collected throughout the control process is still large and thus seems sufficient for carrying out the task. Such situations commonly arise when the object is frequently occluded from one of the cameras in a stereo pair or when only one moving camera is available. We propose a generic control rule for such tasks and characterize the conditions required for the success of the task. The analysis is based on the observation that mathematically the behavior of such systems is related to a class of row-action optimization algorithms which are special cases of POCS (Projection On Convex Sets) algorithms. In the second part of the paper we focus on one particular task from this class: position and orientation control with a single rotating camera. We show that this task can be carried out, in principle, for any camera rotation and suggest efficient control and camera moving strategies. We substantiate our claims by simulations and experiments. Interestingly, it seems that the advisable control law is not consistent with simple intuition.
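The row-action/POCS connection mentioned above can be illustrated with a minimal Kaczmarz-style sketch (an illustration of the algorithm class, not the paper's actual control law): each step projects the current estimate onto the hyperplane defined by a single row of a linear system. Any single row is rank-deficient, just as the instantaneous image Jacobian is, yet cycling through the rows converges when they collectively span the space.

```python
import numpy as np

def kaczmarz(A, b, x0, sweeps=200):
    """Row-action (Kaczmarz) iteration: project onto one
    hyperplane <a_i, x> = b_i at a time. Each single row is
    rank-deficient, but cycling through all rows converges
    when A has full column rank."""
    x = x0.astype(float)
    m = A.shape[0]
    for k in range(sweeps * m):
        a = A[k % m]
        x += (b[k % m] - a @ x) / (a @ a) * a
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])
x = kaczmarz(A, b, np.zeros(2))
# x approaches the exact solution [1, 1]
```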

2.
3.
In this paper we present a modular scheme for designing and evaluating different control systems for position-based dynamic look-and-move visual servoing systems. This scheme is applied to a 6-DOF industrial manipulator with a camera mounted on its end effector. The manipulator with its actuators and current feedback loops can be modeled as a Cartesian device commanded through a serial line, so the manipulator can be considered a decoupled system with 6 independent loops. The use of computer vision as a feedback transducer strongly affects the closed-loop dynamics of the overall system, so a visual controller is required to achieve fast response and high control accuracy. Due to the long delay in generating the control signal, the visual controller must be selected carefully. We present a framework that allows the study of some conventional and new techniques for designing this visual controller. In addition, an experimental setup has been built and used to evaluate and compare the performance of the position-based dynamic look-and-move system with different controllers, and criteria for selecting the best strategy for each task are established. Extensive results on different trajectory-tracking control strategies are presented, showing both simulation and real-platform responses.

4.
Taking a two-degree-of-freedom planar robot as the research object, the FCMAC control method is applied to an image-based robot visual servoing system. A mathematical model of the system is established, and simulation experiments are carried out on the Matlab platform. The simulation results show that the control system achieves fast response and high control accuracy for static target positioning and for tracking targets moving along straight lines and curves.
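For context, a minimal sketch of the classical image-based servoing step that such a controller fits into (this is the standard interaction-matrix law with a pseudo-inverse, not the FCMAC controller from the paper; the feature coordinates and depths below are illustrative assumptions):

```python
import numpy as np

# Interaction matrix of a point feature at normalized image
# coordinates (x, y) with depth Z (classical IBVS formulation).
def interaction_matrix(x, y, Z):
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
        [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

# Camera velocity command v = -lambda * L^+ * e from stacked
# feature errors, the core image-based servoing law.
def ibvs_velocity(features, desired, depths, lam=0.5):
    e = (features - desired).ravel()
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -lam * np.linalg.pinv(L) @ e

# three point features with a small error relative to the goal
feats = np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, -0.1]])
des = np.zeros((3, 2))
v = ibvs_velocity(feats, des, [1.0, 1.0, 1.0])
```

One Euler step of the feature dynamics (e-dot = L v) with this command reduces the error norm, which is the exponential-decrease behavior IBVS aims for.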

5.
This paper presents the application of the visual servoing approach to a mobile robot which must execute coordinated motions in a known indoor environment. In this work, we are interested in the execution and control of basic motions such as "go to an object" using the mobile robot Hilare2Bis. We use a diagonal gain matrix to improve the visual servoing behaviour, and the potential field formalism to avoid obstacles. Namely, the robot is controlled according to the position of some features in an image. Such a path will be executed by a nonholonomic mobile robot, which has only two degrees of freedom (two wheels) and three configuration parameters (x, y, θ); a camera is mounted on the robot close to the end effector of an arm, controlled to add at least one new degree of freedom (pan).

6.
A novel approach to visual servoing is presented, which takes advantage of the structure of the Lie algebra of affine transformations. The aim of this project is to use feedback from a visual sensor to guide a robot arm to a target position. The target position is learned using the principle of teaching by showing, in which the supervisor places the robot in the correct target position and the system captures the necessary information to be able to return to that position. The sensor is placed on the end effector of the robot, the camera-in-hand approach, and thus provides direct feedback of the robot motion relative to the target scene via observed transformations of the scene. These scene transformations are obtained by measuring the affine deformations of a target planar contour (under the weak perspective assumption), captured by use of an active contour, or snake. Deformations of the snake are constrained using the Lie groups of affine and projective transformations. Properties of the Lie algebra of affine transformations are exploited to provide a novel method for integrating observed deformations of the target contour. These can be compensated with appropriate robot motion using a non-linear control structure. The local differential representation of contour deformations is extended to allow accurate integration of an extended series of small perturbations. This differs from existing approaches by virtue of the properties of the Lie algebra representation, which implicitly embeds knowledge of the three-dimensional world within a two-dimensional image-based system. These techniques have been implemented using a video camera to control a 5 DoF robot arm. Experiments with this implementation are presented, together with a discussion of the results.
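The idea of integrating a series of small deformations in the Lie algebra can be sketched with matrix logarithms and exponentials (an illustration under a homogeneous-coordinate representation, not the paper's exact scheme; note that summing logarithms is exact only for commuting elements, such as the planar rotations used here):

```python
import numpy as np
from scipy.linalg import expm, logm

# Compose small affine deformations by accumulating their
# logarithms in the Lie algebra, then exponentiating once
# (exact when the individual steps commute).
def integrate_deformations(small_affines):
    acc = np.zeros((3, 3))
    for A in small_affines:
        acc += logm(A)          # map each small step into the algebra
    return expm(acc)            # single exponential back to the group

# planar rotation as a 3x3 homogeneous transform
def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# two small rotations compose into one rotation by the summed angle
T = integrate_deformations([rot(0.01), rot(0.02)])
```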

7.
For microassembly tasks, uncertainty exists at many levels. Single static sensing configurations are therefore unable to provide feedback with the necessary range and resolution for accomplishing many desired tasks. In this paper we present experimental results that investigate the integration of two disparate sensing modalities, force and vision, for sensor-based microassembly. By integrating these sensing modes, we are able to provide feedback in a task-oriented frame of reference over a broad range of motion with extremely high precision. An optical microscope is used to provide visual feedback down to micron resolutions, while an optical beam deflection technique (based on a modified atomic force microscope) is used to provide nanonewton-level force feedback or nanometric-level position feedback. Visually servoed motion at speeds of up to 2 mm/s with a repeatability of 0.17 μm is achieved with vision alone. The optical beam deflection sensor complements the visual feedback by providing positional feedback with a repeatability of a few nanometers. Based on the principles of optical beam deflection, this is equivalent to force measurements on the order of a nanonewton. The value of integrating these two disparate sensing modalities is demonstrated during controlled micropart impact experiments. These results demonstrate micropart approach velocities of 80 μm/s with impact forces of 9 nN and final contact forces of 2 nN. Within our microassembly system this level of performance cannot be achieved using either sensing modality alone. This research will aid the development of complex hybrid MEMS devices in two ways: by enabling the microassembly of more complex MEMS prototypes, and by supporting the development of automatic assembly machines for assembling and packaging future MEMS devices that require increasingly complex assembly strategies.

8.
Classical visual servoing techniques need strong a priori knowledge of the shape and dimensions of the observed objects. In this paper, we show how the 2 1/2 D visual servoing scheme we recently developed can be used with unknown objects characterized by a set of points. Our scheme is based on estimating the camera displacement from two views, given by the current and desired images. Since vision-based robotics tasks generally need to be performed at video rate, we focus only on linear algorithms. Classical linear methods are based on the computation of the essential matrix. In this paper, we propose a different method, based on the estimation of the homography matrix related to a virtual plane attached to the object. We show that our method provides a more stable estimation when the epipolar geometry degenerates. This is particularly important in visual servoing to obtain a stable control law, especially near the convergence of the system. Finally, experimental results confirm the improvement in stability, robustness, and behaviour of our scheme with respect to classical methods.
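A sketch of how a homography can be estimated linearly from point correspondences (a plain, unnormalized DLT on four exact correspondences, for illustration only; the paper's method estimates the homography of a virtual plane between two real views and would normalize coordinates for robustness):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: estimate H with dst ~ H @ src
    from >= 4 planar point correspondences. The null vector of
    the stacked 2n x 9 system, found via SVD, gives H up to scale."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]           # fix the arbitrary scale

src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0), (2, 2), (0, 2)]  # a pure scaling by 2
H = homography_dlt(src, dst)
# H recovers the scaling, i.e. diag(2, 2, 1)
```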

9.
We introduce a novel method for visual homing. Using this method a robot can be sent to desired positions and orientations in 3D space specified by single images taken from those positions. Our method is based on recovering the epipolar geometry relating the current image taken by the robot and the target image. Using the epipolar geometry, most of the parameters which specify the differences in position and orientation of the camera between the two images are recovered. However, since not all of the parameters can be recovered from two images, we have developed specific methods to bypass these missing parameters and resolve the ambiguities that exist. We present two homing algorithms for two standard projection models, weak and full perspective. Our method determines the path of the robot on-line, the starting position of the robot is largely unconstrained, and a 3D model of the environment is not required. The method is almost entirely memoryless, in the sense that at every step the path to the target position is determined independently of the previous path taken by the robot. Because of this property the robot may be able, while moving toward the target, to perform auxiliary tasks or to avoid obstacles, without impairing its ability to eventually reach the target position. We have performed simulations and real experiments which demonstrate the robustness of the method and show that the algorithms always converge to the target pose.

10.
This article addresses the visual servoing of a rigid robotic manipulator equipped with a binocular vision system in eye-to-hand configuration. The control goal is to move the robot end-effector to a visually determined target position precisely without knowing the precise camera model. Many vision-based robotic positioning systems have been successfully implemented and validated by supporting experimental results. Nevertheless, this research aims at providing stability analysis for a class of robotic set-point control systems employing image-based feedback laws. Specifically, by exploring epipolar geometry of the binocular vision system, a binocular visual constraint is found to assist in establishing stability property of the feedback system. Any three-degree-of-freedom positioning task, if satisfying appropriate conditions with the image-based encoding approach, can be encoded in such a way that the encoded error, when driven to zero, implies that the original task has been accomplished with precision. The corresponding image-based control law is proposed to drive the encoded error to zero. The overall closed-loop system is exponentially stable provided that the binocular model imprecision is small.

11.
Vision is introduced into the control of an inverted pendulum, establishing a visual servoing inverted pendulum system. Using the imaging geometry of a pinhole camera, a state-space equation involving the image coordinates is established, and an image-based visual servoing method is adopted to control the position of the pendulum rod in the image plane in real time. Simulations with LQR (Linear Quadratic Regulator) and pole placement are carried out in Matlab and their results compared, verifying the correctness of the derived system equations and the effectiveness of the control algorithms.
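A minimal sketch of the LQR piece (a generic discrete-time Riccati iteration applied to an assumed double-integrator plant, not the pendulum-camera model derived in the paper):

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point Riccati iteration:
    u = -K x minimizes the sum of x'Qx + u'Ru."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# double integrator with dt = 0.1 (illustrative plant)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, np.eye(2), np.array([[1.0]]))

# the closed loop A - B K is stable: spectral radius < 1
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```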

12.
Detection and tracking for robotic visual servoing systems
Robot manipulators require knowledge about their environment in order to perform their desired actions. In several robotic tasks, vision sensors play a critical role by providing the necessary quantity and quality of information regarding the robot's environment. For example, "visual servoing" algorithms may control a robot manipulator in order to track moving objects that are being imaged by a camera. Current visual servoing systems often lack the ability to automatically detect objects that appear within the camera's field of view. In this research, we present a robust "figure/ground" framework for visually detecting objects of interest. An important contribution of this research is a collection of optimization schemes that allow the detection framework to operate within the real-time limits of visual servoing systems. The most significant of these schemes involves the use of "spontaneous" and "continuous" domains. The number and location of continuous domains are allowed to change over time, adjusting to the dynamic conditions of the detection process. We have developed actual servoing systems in order to test the framework's feasibility and to demonstrate its usefulness for visually controlling a robot manipulator.

13.
刘科  王刚  王国栋 《控制工程》2004,11(6):510-513
Addressing the importance of tracking-trajectory planning for obtaining continuous, smooth tracking motion, a two-stage visual tracking trajectory planning method is proposed. In the first stage, the motion trajectory is planned in the image plane, and the discrete planned points obtained there are mapped into the robot joint space. In the second stage, cubic spline functions are used to connect these discrete points in joint space. To meet real-time control requirements, windowing and edge-feature extraction are applied during image processing. A robot visual tracking control system was built for tracking objects moving in a two-dimensional plane (such as objects on a conveyor belt). Experimental results show that the tracking error asymptotically decreases to within the allowed range, demonstrating the effectiveness of the proposed trajectory planning method.
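The second stage can be sketched with an off-the-shelf cubic spline (the waypoint values below are illustrative assumptions; the clamped boundary condition gives zero velocity at both ends for a smooth start and stop):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Connect discrete joint-space waypoints (assumed to come from
# the image-plane planning stage) with a cubic spline.
t = np.array([0.0, 1.0, 2.0, 3.0])             # waypoint times (s)
q = np.array([0.0, 0.4, 0.9, 1.2])             # one joint's waypoints (rad)
spline = CubicSpline(t, q, bc_type='clamped')  # q'(0) = q'(T) = 0

# the spline interpolates every waypoint with continuous
# velocity and acceleration between them
q_mid = spline(1.5)                   # position between waypoints
v_ends = spline(t[[0, -1]], 1)        # endpoint velocities (both zero)
```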

14.
Autonomous acquisition of seam coordinates is a key technology for developing advanced welding robots. This paper describes a position-based visual servo system for robotic seam tracking that autonomously acquires the seam coordinates of a planar butt joint in the robot base frame and plans the optimal camera angle before welding. A six-axis industrial robot with an interface for communicating with the master computer is used in this system. The developed visual sensor device, which allows the charge-coupled device (CCD) cameras to rotate about the torch, is briefly presented. A set of robust image processing algorithms is proposed so that no special light source is required. The feedback errors of the servo system are defined according to the characteristics of the seam image, and a robust tracking controller is developed. Both the image processing program and the tracking control program run on the master computer. Experimental results on straight-line and curved seams show that the accuracy of the seam coordinates acquired with this method is adequate for a high-quality welding process.

15.
Visual servoing is a powerful approach for enlarging the applications of robotic systems by incorporating visual information into the control system. Teleoperation – operating machines remotely – is likewise finding applications in many domains. This paper presents a remote visual servoing system that uses only partial camera calibration and exploits the high bandwidth of Internet2 to stream video information. The underlying control scheme follows the image-based philosophy for direct visual servoing – computing the torque inputs applied to the robot from error signals defined in the image plane – and invokes a velocity field strategy for guidance. The novelty of this paper is a remote visual servoing scheme with the following features: (1) full camera calibration is unnecessary, (2) direct visual servoing does not neglect the robot's nonlinear dynamics, and (3) a novel velocity field control approach is utilized. Experiments carried out between two laboratories demonstrate the effectiveness of the application. Work partially supported by CONACyT grant 45826 and CUDI.

16.
徐璠  王贺升 《自动化学报》2023,49(4):744-753
Underwater biomimetic soft robots have great application value in underwater environment surveying and underwater life observation. To further improve the control of an octopus-arm-inspired soft robot in special underwater environments, an adaptive robust visual servoing method is proposed to achieve high-precision regulation control in disturbed, uncalibrated environments. Based on an underwater dynamic model, a controller guaranteeing dynamic stability is designed. Since offline calibration of flexible materials is tedious and costly, an adaptive estimation algorithm for the material parameters is proposed. For the special underwater working conditions, an adaptive robust visual servoing controller is designed that compensates the refraction effect online and, by adaptively estimating an upper bound on the unknown environmental disturbance, avoids requiring prior environmental information. The regulation performance of the proposed algorithm is verified on a soft robot prototype, providing a theoretical basis for practical applications of biomimetic soft robots.

17.
Research on robot visual servoing systems
Robot visual servoing systems involve content from multiple disciplines. Focusing on the three main aspects of robot visual servoing systems (system architecture, image processing, and control methods), this paper surveys the current state of research and the achievements in this field, and concludes with an analysis of future development trends.

18.
International Journal of Computer Vision - This article studies the following question: “When is it possible to decide, on the basis of images of point-features observed by an imprecisely...

19.
Target Reaching by Using Visual Information and Q-learning Controllers
This paper presents a solution to the problem of manipulation control: target identification and grasping. The proposed controller is designed for a real platform in combination with a monocular vision system. The objective of the controller is to learn an optimal policy to reach and grasp a spherical object of known size, randomly placed in the environment. In order to accomplish this, the task has been treated as a reinforcement learning problem, in which the controller learns the situation-action mapping by trial and error. The optimal policy is found by using the Q-learning algorithm, a model-free reinforcement learning technique that rewards actions moving the arm closer to the target. The vision system uses geometrical computation to simplify the segmentation of the moving target (a spherical object) and determines an estimate of the target parameters. To speed up learning, the knowledge gained in simulation has been ported to the real platform, an industrial robot manipulator PUMA 560. Experimental results demonstrate the effectiveness of the adaptive controller, which does not require an explicit global target position and uses direct perception of the environment.
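The tabular Q-learning update at the heart of such a controller can be sketched on a toy one-dimensional reach task (an assumption for illustration; the paper's state/action space, reward shaping, and platform are different):

```python
import numpy as np

# Toy "reach the target" chain: states 0..4, target at 4,
# actions: 0 = move left, 1 = move right.
rng = np.random.default_rng(0)
n_states, n_actions, goal = 5, 2, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):
    s = int(rng.integers(0, goal))            # random non-goal start
    while s != goal:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == goal else 0.0
        # Q-learning update: bootstrap on the greedy next-state value
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (s2 != goal) - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)   # greedy policy: move right toward the target
```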

20.
The basic idea of vision-based control in robotics is to include the vision system directly in the control servo loop of the robot. When images are binary, this problem corresponds to the control of the evolution of a geometric domain. The present paper proposes mathematical tools derived from shape analysis and optimization to study this problem in a quite general way, i.e., without any regularity assumptions or a priori models on the domains that we deal with. Indeed, despite the lack of a vectorial structure, one can develop a differential calculus in the metric space of all non-empty compact subsets of a given domain of R^n, and adapt ideas and results of classical differential systems to study and control the evolution of geometric domains. For instance, a shape Lyapunov characterization allows one to investigate the asymptotic behavior of these geometric domains using the notion of directional shape derivative. We apply this in R^2 to the visual servoing problem using the optical flow equations, and some experimental simulations illustrate this approach.


Copyright©北京勤云科技发展有限公司    京ICP备09084417号
