Similar Documents
20 similar documents found.
1.
A new visual servo control scheme for a robotic manipulator is presented in this paper, in which a back-propagation (BP) neural network maps image features directly to joint angles without requiring robot kinematics or camera calibration. To speed up convergence and avoid local minima of the neural network, a genetic algorithm is used to find optimal initial weights and thresholds, and the BP algorithm then trains the network on the given data. The proposed method effectively combines the good global search ability of genetic algorithms with the accurate local search of the BP algorithm. A Simulink model of the PUMA560 robot visual servo system based on the improved BP neural network is built with the Robotics Toolbox of Matlab. The simulation results indicate that the proposed method accelerates convergence of the image errors and provides a simple and effective way of controlling the robot.
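
As a rough illustration of the hybrid training strategy described above, the sketch below uses a genetic algorithm to pick initial weights for a small feedforward network and then refines them with gradient descent (BP). The network size, fitness function, and synthetic data are assumptions made for illustration, not the authors' actual setup.

```python
# Sketch: GA-initialized BP training of a small MLP that maps image features
# to joint angles. Network size, data, and GA settings are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 10, 3          # e.g. 4 image features -> 3 joint angles

# Toy training data (a real system would use feature/joint-angle pairs).
X = rng.uniform(-1, 1, (200, n_in))
Y = np.tanh(X @ rng.uniform(-1, 1, (n_in, n_out)))   # synthetic target mapping

def unpack(w):
    """Split a flat weight vector into the two layer matrices (with biases)."""
    i = (n_in + 1) * n_hid
    return w[:i].reshape(n_in + 1, n_hid), w[i:].reshape(n_hid + 1, n_out)

def mse(w):
    W1, W2 = unpack(w)
    H = np.tanh(np.hstack([X, np.ones((len(X), 1))]) @ W1)
    out = np.hstack([H, np.ones((len(H), 1))]) @ W2
    return np.mean((out - Y) ** 2)

dim = (n_in + 1) * n_hid + (n_hid + 1) * n_out

# --- Genetic algorithm: search for good initial weights --------------------
pop = rng.normal(0, 0.5, (40, dim))
for gen in range(30):
    order = np.argsort([mse(w) for w in pop])     # lower error = fitter
    parents = pop[order[:20]]
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        mask = rng.random(dim) < 0.5              # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.05, dim))
    pop = np.vstack([parents, children])
w = pop[np.argmin([mse(w) for w in pop])].copy()

# --- BP (gradient descent) fine-tuning from the GA solution ----------------
lr = 0.05
for epoch in range(500):
    W1, W2 = unpack(w)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    H = np.tanh(Xb @ W1)
    Hb = np.hstack([H, np.ones((len(H), 1))])
    err = Hb @ W2 - Y
    gW2 = Hb.T @ err / len(X)
    gH = (err @ W2[:-1].T) * (1 - H ** 2)         # back-propagate through tanh
    gW1 = Xb.T @ gH / len(X)
    w -= lr * np.concatenate([gW1.ravel(), gW2.ravel()])
print("final MSE:", mse(w))
```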

2.
Homography-Based Visual Servo Control With Imperfect Camera Calibration
In this technical note, a robust adaptive uncalibrated visual servo controller is proposed to asymptotically regulate a robot end-effector to a desired pose. A homography-based visual servo control approach is used to address the six degrees-of-freedom regulation problem. A high-gain robust controller is developed to asymptotically stabilize the rotation error, and an adaptive controller is developed to stabilize the translation error while compensating for the unknown depth information and intrinsic camera calibration parameters. A Lyapunov-based analysis is used to examine the stability of the developed controller.
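
For context, homography-based regulation schemes of this kind typically start from the standard Euclidean homography between the current and desired camera views of a planar target; the notation below is the common textbook form rather than the note's own.

```latex
% Euclidean homography between the current and desired camera frames for a
% planar target with normal n* and distance d* in the desired frame:
\[
  \mathbf{H} \;=\; \mathbf{R} \;+\; \frac{\mathbf{t}}{d^{*}}\,{\mathbf{n}^{*}}^{\top},
  \qquad
  \mathbf{G} \;=\; \mathbf{K}\,\mathbf{H}\,\mathbf{K}^{-1},
\]
% where R and t are the rotation and translation between the two views, K is
% the intrinsic matrix, and G is the projective homography measured from pixel
% correspondences. Decomposing H yields the rotation and scaled translation
% errors that such controllers regulate.
```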

3.
Visual servoing is a powerful approach to enlarging the applications of robotic systems by incorporating visual information into the control loop. Teleoperation, the use of machines in a remote way, is likewise increasing the number of applications in many domains. This paper presents a remote visual servoing system that uses only partial camera calibration and exploits the high bandwidth of Internet2 to stream video information. The underlying control scheme follows the image-based philosophy for direct visual servoing, computing the torque inputs applied to the robot from error signals defined in the image plane, and employs a velocity field strategy for guidance. The novelty of this paper is a remote visual servoing system with the following features: (1) full camera calibration is unnecessary, (2) direct visual servoing does not neglect the robot's nonlinear dynamics, and (3) a novel velocity field control approach is utilized. Experiments carried out between two laboratories demonstrate the effectiveness of the application. Work partially supported by CONACyT grant 45826 and CUDI.
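
As a rough illustration of the velocity-field guidance idea mentioned above (a desired velocity assigned to each position rather than a time-indexed trajectory), the toy sketch below drives a planar point onto and around a circular path; the path and gains are invented for illustration only.

```python
# Toy velocity field for following a circle of radius R: at each position p the
# desired velocity is a tangential component plus a component pulling p back
# onto the circle. Gains and geometry are illustrative assumptions.
import numpy as np

R, k_t, k_n = 1.0, 0.5, 1.0

def velocity_field(p):
    r = np.linalg.norm(p)
    radial = p / r                       # unit vector pointing away from center
    tangent = np.array([-radial[1], radial[0]])
    return k_t * tangent + k_n * (R - r) * radial

# Integrate a point driven by the field (simple Euler steps).
p = np.array([1.5, 0.0])
for _ in range(200):
    p = p + 0.05 * velocity_field(p)
print("distance from circle:", abs(np.linalg.norm(p) - R))
```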

4.
This article deals with the depth observability problem of a robot visual system with a moving camera. In such a system, the unknown depth of a feature point is estimated from the camera velocity input and the image of the feature point as output. Although it is well known that the camera's linear velocity must satisfy certain constraints for successful depth estimation, this article proposes a criterion for measuring the performance of the depth estimation, obtained as a heuristic extension of an estimation result for linear systems. The performance criterion depends on both the image position and the linear velocity of the camera. Simulation and experimental examples demonstrate and verify the proposed criterion. Furthermore, the criterion is used to develop a new visual servo control scheme with good performance in both depth estimation and visual control. This control scheme is also verified by a simulation example. © 2001 John Wiley & Sons, Inc.
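
A small sketch of the kind of depth estimation discussed here, based on the standard interaction matrix of a point feature: the depth-dependent translational term is separated from the depth-free rotational term, and the inverse depth is recovered by least squares from the measured feature velocity and the known camera velocity. The numbers are illustrative; note that the estimate degrades exactly when the translational term A(x, y) v vanishes, which is the linear-velocity constraint the article refers to.

```python
# Depth estimation for a point feature with normalized image coordinates (x, y):
#   sdot = (1/Z) * A(x, y) @ v  +  B(x, y) @ w
# where v and w are the camera's linear and angular velocities. Only the first
# term depends on depth, so 1/Z follows from a least-squares fit.
import numpy as np

def A(x, y):           # depth-dependent (translational) part of the Jacobian
    return np.array([[-1.0, 0.0, x],
                     [0.0, -1.0, y]])

def B(x, y):           # depth-free (rotational) part
    return np.array([[x * y, -(1 + x**2), y],
                     [1 + y**2, -x * y, -x]])

# Simulated "measurement": true depth, a feature point, and a camera twist.
Z_true, x, y = 2.0, 0.1, -0.2
v = np.array([0.05, 0.02, 0.10])          # linear velocity (m/s)
w = np.array([0.01, -0.03, 0.02])         # angular velocity (rad/s)
sdot = (1.0 / Z_true) * A(x, y) @ v + B(x, y) @ w

# Least-squares estimate of the inverse depth.
a = A(x, y) @ v
b = sdot - B(x, y) @ w
inv_Z = float(a @ b) / float(a @ a)
print("estimated depth:", 1.0 / inv_Z)    # ~2.0
```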

5.
To ensure effective visual servo control of mobile robots and improve its accuracy, a mobile robot visual servo control system based on virtual reality technology is designed. The system hardware consists of virtual-environment I/O devices such as a 3D vision sensor and a stereoscopic display, a pose sensor, a visual image processor, and a servo controller. A mathematical model of the mobile robot is built from both the kinematic and dynamic perspectives; a calibrated vision camera generates real-time images of the mobile robot, which are preprocessed through filtering and distortion correction. The visual images are then used to construct a virtual environment for the mobile robot. Within this virtual-reality framework, the robot's path is planned through target localization, route generation, collision detection, and route adjustment, and the control quantities are computed to realize the visual servo control function. Test results show that the designed system has a small position control error, attitude-angle and velocity control errors of only 0.05° and 0.12 m/s, and fewer collisions, demonstrating good visual servo control performance and effectively improving control accuracy.
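
A minimal sketch of the preprocessing steps mentioned above (distortion correction and filtering) using OpenCV; the intrinsic matrix and distortion coefficients below are placeholders standing in for the calibrated camera parameters.

```python
# Sketch of the preprocessing pipeline: undistort with calibrated intrinsics,
# then denoise. Camera parameters below are placeholders, not real calibration.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],              # placeholder intrinsic matrix
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.2, 0.05, 0.0, 0.0, 0.0])    # placeholder distortion coeffs

def preprocess(frame):
    undistorted = cv2.undistort(frame, K, dist)             # distortion correction
    return cv2.GaussianBlur(undistorted, (5, 5), 1.0)       # image filtering

# Example usage with a synthetic frame (a real system would grab camera frames).
frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
print(preprocess(frame).shape)
```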

6.
Model-Free Uncalibrated Visual Servo Control
This paper first reviews the general principles of visual servoing. It then proposes a model-free, uncalibrated visual servo control method that requires neither a robot model nor a camera model; a model-free uncalibrated control law is derived from the principle of variance minimization. A recursive formula for estimating the image Jacobian matrix is also given. Finally, a trajectory-tracking simulation verifies the correctness and effectiveness of the algorithm.
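
The recursive image-Jacobian formula itself is not reproduced in the abstract; a widely used update of this kind in uncalibrated visual servoing is the Broyden-type rank-one correction sketched below, shown as a generic illustration rather than the paper's exact derivation.

```python
# Broyden-style recursive estimate of the image Jacobian J mapping joint
# increments dq to feature increments ds, plus the resulting model-free control
# step. alpha is an update gain; all values are illustrative.
import numpy as np

def broyden_update(J, ds, dq, alpha=0.3):
    """Rank-one correction: pull J @ dq toward the observed ds."""
    dq = dq.reshape(-1, 1)
    ds = ds.reshape(-1, 1)
    return J + alpha * (ds - J @ dq) @ dq.T / float(dq.T @ dq)

def servo_step(J, s, s_star, gain=0.5):
    """One uncalibrated visual-servo step: dq = -gain * pinv(J) @ (s - s_star)."""
    return -gain * np.linalg.pinv(J) @ (s - s_star)

# Tiny usage example with a synthetic "true" Jacobian.
rng = np.random.default_rng(1)
J_true = rng.normal(size=(2, 3))                 # 2 image features, 3 joints
J_hat = J_true + 0.3 * rng.normal(size=(2, 3))   # rough initial estimate
s, s_star = np.array([0.4, -0.3]), np.zeros(2)
for _ in range(50):
    dq = servo_step(J_hat, s, s_star)
    ds = J_true @ dq                             # "measured" feature change
    J_hat = broyden_update(J_hat, ds, dq)
    s = s + ds
print("final feature error:", np.linalg.norm(s - s_star))
```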

7.
This article is the second part of a two-part tutorial on visual servo control. In the tutorial we have considered only velocity controllers, which is suitable for most classical robot arms. However, the dynamics of the robot must of course be taken into account for high-speed tasks, or when dealing with mobile, nonholonomic, or underactuated robots. As for the sensor, geometric features from a classical perspective camera are considered. Features related to image motion, or coming from other vision sensors, require the modeling issues to be revisited in order to select adequate visual features. Finally, fusing visual features with data from other sensors at the level of the control scheme will open up new research topics.

8.
In this paper, a teleoperation system for a robot arm with a position-measurement function and a visual support function is developed. The working robot arm is remotely controlled both by manual operation of the human operator and by autonomous control via visual servoing. The visual servo employs the template-matching technique. Position measurement is realized using a stereo camera based on the angle-pixel characteristic. A visual support function that gives the human operator useful information about the teleoperation is also provided. The usefulness of the proposed teleoperation system is confirmed through experiments using an industrial articulated robot arm.
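
A minimal OpenCV sketch of the template-matching step used in such a visual servo loop; the frame and template are synthetic stand-ins for camera images.

```python
# Locate a template in a camera frame with normalized cross-correlation.
import cv2
import numpy as np

frame = (np.random.rand(240, 320) * 255).astype(np.uint8)
template = frame[100:140, 150:200].copy()        # pretend this is the target patch

result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)   # best-match score and location
print("match score:", max_val, "top-left corner:", max_loc)   # expect (150, 100)
```

In a stereo setup like the one described, the matched pixel locations found in the two views would then be combined to recover a 3D position of the target.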

9.
To address the complexity and poor generality of marking, extracting, and matching geometric image features in traditional visual servoing, this paper proposes an image-moment-based four-degree-of-freedom (4DOF) visual servo method for robots. A nonlinear incremental mapping between image moments and robot pose is first established for the eye-in-hand system, providing the theoretical basis for visual servo control using image moments. Then, without calibrating the camera or the hand-eye relationship, a visual servo control scheme based on image moments is designed using the nonlinear mapping capability of a back-propagation (BP) neural network, and the trained network is used for visual servo tracking control. Experimental results show that the proposed algorithm achieves a position tracking accuracy of 0.5 mm and an orientation tracking accuracy of 0.5°, verifying its effectiveness and good servo performance.
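
For reference, the image moments such a scheme feeds to the network are straightforward to compute; the sketch below extracts the area, centroid, and orientation of a binary blob, a typical 4DOF feature set (the paper's exact moment features may differ).

```python
# Area, centroid, and orientation from the raw and central moments of a binary
# image blob, a typical image-moment feature set for 4DOF visual servoing.
import numpy as np

def moment_features(binary):
    ys, xs = np.nonzero(binary)
    m00 = float(len(xs))                      # area (zeroth moment)
    cx, cy = xs.mean(), ys.mean()             # centroid m10/m00, m01/m00
    mu20 = np.mean((xs - cx) ** 2)            # normalized central moments
    mu02 = np.mean((ys - cy) ** 2)
    mu11 = np.mean((xs - cx) * (ys - cy))
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # principal-axis angle
    return m00, cx, cy, theta

# Example: a rectangular blob.
img = np.zeros((200, 200), dtype=np.uint8)
img[80:120, 50:150] = 1
area, cx, cy, theta = moment_features(img)
print(area, cx, cy, np.degrees(theta))
```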

10.
王昱欣  王贺升  陈卫东 《机器人》2018,40(5):619-625
When a continuum soft robot with a camera mounted at its tip performs a task, both visual servoing of the tip camera-robot system and control of the robot's overall shape are required, owing to obstacle avoidance, safety, and other considerations. To address this, a hybrid hand-eye visual/shape control method for soft robots is proposed. The method does not require the 3D coordinates of spatial feature points; the control objective is achieved given only the desired pixel coordinates of the feature points in the tip camera's image plane and the desired shape of the soft robot. A kinematic model of the soft robot is established, and a hybrid control law is derived from this model by combining depth-independent interaction-matrix adaptive hand-eye visual control with soft-robot shape control; the law is proved using Lyapunov stability theory. Simulation and experimental results show that the feature-point pixel coordinates in the tip camera and the robot shape both converge to the desired values.

11.
In this study, a novel image-based visual servo (IBVS) controller for robot manipulators is investigated using an optimized extreme learning machine (ELM) algorithm and an offline reinforcement learning (RL) algorithm. First, the classical IBVS method and its difficulties in accurately estimating the image interaction matrix and avoiding singularity of its pseudo-inverse are introduced. An IBVS method based on ELM and RL is then proposed to address the singularity of the pseudo-inverse solution and to tune an adaptive servo gain, improving servo efficiency and stability. Specifically, an ELM optimized by particle swarm optimization (PSO) approximates the pseudo-inverse of the image interaction matrix to reduce the influence of camera calibration errors, and an RL algorithm tunes the adaptive visual servo gain in continuous space to improve the convergence speed. Finally, comparative simulation experiments on a 6-DOF robot manipulator verify the effectiveness of the proposed IBVS controller.
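
A brief sketch of the extreme learning machine idea behind the controller above: random hidden-layer weights are fixed and only the output weights are solved in closed form with a pseudo-inverse. The regression target here is a toy function standing in for the interaction-matrix pseudo-inverse mapping, and the PSO optimization of the random weights is omitted.

```python
# Extreme learning machine: fix random hidden weights, solve output weights by
# least squares (pseudo-inverse). The toy target below is just a stand-in.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 60

X = rng.uniform(-1, 1, (500, n_in))
T = np.sin(X[:, :1]) + X[:, 1:2] * X[:, 2:3]          # toy regression target

W = rng.normal(size=(n_in, n_hidden))                 # random input weights
b = rng.normal(size=n_hidden)                         # random hidden biases

def hidden(X):
    return np.tanh(X @ W + b)                         # hidden-layer activations

beta = np.linalg.pinv(hidden(X)) @ T                  # closed-form output weights

X_test = rng.uniform(-1, 1, (100, n_in))
T_test = np.sin(X_test[:, :1]) + X_test[:, 1:2] * X_test[:, 2:3]
pred = hidden(X_test) @ beta
print("test RMSE:", np.sqrt(np.mean((pred - T_test) ** 2)))
```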

12.
To study the fundamental characteristics of robot visual servoing in a unified theoretical framework, this paper establishes a generalized visual servo system model based on the task-function approach. Using this model, the dynamic behavior of position-based visual servoing (PBVS) and image-based visual servoing (IBVS) in both Cartesian space and image space is examined. Simulation results show that, within the same comparison framework, PBVS is also robust to camera calibration errors. Although the two methods are similar in terms of the stability and convergence of the dynamic system, they differ greatly in their dynamic performance in Cartesian space and image space. For PBVS, the Cartesian trajectory follows the shortest path, but the corresponding image trajectory is uncontrolled and the features may leave the field of view; for IBVS, the image-space trajectory follows the shortest path, but because Cartesian space is not controlled directly, large rotational motions can produce Cartesian trajectory deviations such as camera retreat.
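
For reference, the task-function formulation underlying this comparison is usually summarized by the classical velocity control law below (textbook notation; the paper's generalized model may use different symbols).

```latex
% Generic visual servo task function and the classical velocity control law:
\[
  \mathbf{e}(t) = \mathbf{s}\bigl(\mathbf{m}(t),\mathbf{a}\bigr) - \mathbf{s}^{*},
  \qquad
  \mathbf{v}_c = -\lambda\,\widehat{\mathbf{L}_{\mathbf{e}}}^{+}\,\mathbf{e}.
\]
% In IBVS, s collects image-plane features and L_e is the image interaction
% matrix; in PBVS, s is a pose error reconstructed from the image, so the
% Cartesian trajectory is controlled directly while the image trajectory is not.
```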

13.
It is known that most of the key problems in visual servo control of robots relate to analyzing the system's performance in the presence of measurement and modeling errors. In this paper, the development and performance evaluation of a novel intelligent visual servo controller for a robot manipulator using neural-network reinforcement learning is presented. By incorporating machine learning techniques into the vision-based control scheme, the robot can improve its performance online and adapt to changing conditions in the environment. Two temporal-difference algorithms (Q-learning and SARSA) coupled with neural networks are developed and tested in different visual control scenarios. A database of representative learning samples is employed to speed up convergence of the neural network and real-time learning of robot behavior. Moreover, the visual servoing task is divided into two steps to ensure visibility of the features: in the first step, a centering behavior is produced by the neural-network reinforcement learning controller, while the second step switches between traditional image-based visual servoing and the reinforcement learning controller to produce the approaching behavior of the manipulator. The correction in robot motion is achieved by defining areas of interest for the image features independently in both control steps. Simulations demonstrate the robustness of the developed system with respect to calibration error, modeling error, and image noise, and a comparison with traditional image-based visual servoing is presented. Real-world experiments on a robot manipulator with a low-cost vision system demonstrate the effectiveness of the proposed approach.
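
The two temporal-difference algorithms mentioned differ only in their update target; in standard notation (Q-learning first, SARSA second):

```latex
% Q-learning (off-policy) and SARSA (on-policy) temporal-difference updates:
\begin{align*}
  Q(s_t,a_t) &\leftarrow Q(s_t,a_t) + \alpha\Bigl[r_{t+1} + \gamma \max_{a'} Q(s_{t+1},a') - Q(s_t,a_t)\Bigr], \\
  Q(s_t,a_t) &\leftarrow Q(s_t,a_t) + \alpha\Bigl[r_{t+1} + \gamma\, Q(s_{t+1},a_{t+1}) - Q(s_t,a_t)\Bigr],
\end{align*}
% with the Q-function approximated by a neural network in the controller above.
```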

14.
《Advanced Robotics》2013,27(10):993-1021
This paper presents a new approach to modeling and controlling high-speed 6-d.o.f. visual servo loops. The modeling and control strategy take into account the dynamics of the velocity-controlled 6-d.o.f. manipulator as well as a simplified model of the camera and acquisition system in order to significantly increase the bandwidth of the servo loop. Multi-input multi-output generalized predictive control (GPC) is used to optimally control the visual loop with respect to the proposed dynamic model. The predictive feature of the GPC is exploited for optimal trajectory following in Cartesian space. Experimental results on a 6-d.o.f. industrial robot validate the proposed model. The visual sensor used in the experiments is a high-speed camera that acquires 120 non-interlaced images per second, so a sampling rate of 120 Hz is achieved for the visual loop. Furthermore, a precise synchronization method is used to reduce the delays due to image transfer and processing. The experiments show a drastic improvement of the loop performance with respect to more classical control strategies for 6-d.o.f. visual servo loops.
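
The generalized predictive controller mentioned above minimizes, at each sampling instant, the standard GPC cost over a prediction horizon and a control horizon; the single-output form is shown below for reference (the paper uses a multi-input multi-output version).

```latex
% Standard GPC cost over prediction horizon [N1, N2] and control horizon Nu:
\[
  J = \sum_{j=N_1}^{N_2} \bigl[\hat{y}(t+j\mid t) - w(t+j)\bigr]^{2}
      \;+\; \lambda \sum_{j=1}^{N_u} \bigl[\Delta u(t+j-1)\bigr]^{2},
\]
% where w is the reference trajectory, y-hat the model prediction, and
% Delta u the control increments penalized by the weight lambda.
```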

15.
The performance of a visual servo control system depends on the set of image features used in the control loop. Although some local performance measures have been used for evaluating the image features, their usage in the process of feature selection requires on-line computation that is difficult to realize in real-time, especially with a large number of candidate features. In this article, we introduce a global measure for evaluating the performance of image features for visual servo tasks. This measure can be computed off-line and it takes into account several desirable properties of the image features, including minimization of singularities in the image Jacobian, linearity of feature variation, and maximization of feature resolution. For a given kinematic and imaging model of a robot/camera setup, the measure can be used for a variety of visual servo tasks. A numerical approximation scheme is presented along with several computed examples to illustrate the utility of this measure. © 1996 John Wiley & Sons, Inc.

16.
钟宇  张静  张华  肖贤鹏 《计算机工程》2022,48(3):100-106
Intelligent collaborative robots rely on a vision system to perceive a dynamic workspace in an unknown environment and locate targets, enabling the manipulator to autonomously grasp and retrieve target objects. An RGB-D camera captures color and depth images of the scene and provides 3D point clouds of arbitrary targets in the field of view, helping the robot perceive its surroundings. To obtain the transformation between the grasping robot and the RGB-D camera coordinate frames, a robot hand-eye calibration method based on the yolov3 object-detection neural network is proposed. A 3D-printed sphere is held at the end of the manipulator as the calibration target; an improved yolov3 network locates the center of the sphere in real time, the 3D position of the manipulator tip in the camera frame is computed, and singular value decomposition (SVD) is used to obtain the least-squares solution of the robot-camera transformation matrix. Experiments with a 6-DOF UR5 manipulator and an Intel RealSense D415 depth camera show that the calibration requires no auxiliary equipment and that the position error of transformed spatial points is within 2 mm, which satisfies the grasping requirements of typical visual-servo intelligent robots.
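
A compact sketch of the SVD-based least-squares step described above, i.e. the standard Kabsch/Arun solution for the rigid transform between two sets of corresponding 3D points; the synthetic point sets stand in for sphere-center positions expressed in the robot and camera frames.

```python
# Least-squares rigid transform (R, t) between corresponding 3D point sets via
# SVD. P: points in the robot frame, Q: the same points in the camera frame.
import numpy as np

def rigid_transform(P, Q):
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t                               # Q_i ~= R @ P_i + t

# Synthetic check: recover a known transform from noisy correspondences.
rng = np.random.default_rng(0)
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.5])
P = rng.uniform(-0.5, 0.5, (30, 3))
Q = P @ R_true.T + t_true + rng.normal(0, 0.001, (30, 3))
R, t = rigid_transform(P, Q)
print("rotation error:", np.linalg.norm(R - R_true),
      "translation error:", np.linalg.norm(t - t_true))
```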

17.
Visual servo control systems use information from images along with knowledge of the optic parameters (i.e. camera calibration) to position the camera relative to some viewed object. If there are inaccuracies in the camera calibration, then performance degradation and potentially unpredictable response from the visual servo control system may occur. Motivated by the desire to incorporate robustness to the camera calibration, different control methods have been developed. Previous adaptive/robust controllers (especially for six degree‐of‐freedom camera motion) rely heavily on properties of the rotation parameterization to formulate state estimates and a measurable closed‐loop error system. All of these results are based on the singular axis–angle parameterization. Motivated by the desire to express the rotation by a non‐singular parameterization, efforts in this paper address the question: Can state estimates and a measurable closed‐loop error system be crafted in terms of the quaternion parameterization when the camera calibration parameters are unknown? To answer this question, a contribution of this paper is the development of a robust controller and closed‐loop error system based on a new quaternion‐based estimate of the rotation error. A Lyapunov‐based analysis is provided which indicates that the controller yields asymptotic regulation of the rotation and translation error signals given a sufficient approximate of the camera calibration parameters. Simulation results are provided that illustrate the performance of the controller for a range of calibration uncertainty. Copyright © 2009 John Wiley & Sons, Ltd.
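
One common way to define a quaternion-based rotation error in such controllers is shown below for reference; conventions vary, and this is the generic textbook form rather than the paper's particular estimate.

```latex
% Unit-quaternion rotation error between the current orientation q and the
% desired orientation q_d (Hamilton product, q = (q0, q_v)):
\[
  \tilde{q} \;=\; q_{d}^{-1} \otimes q \;=\; \bigl(\tilde{q}_{0},\, \tilde{q}_{v}\bigr),
  \qquad
  \tilde{q}_{v} = 0 \;\Longleftrightarrow\; R = R_{d},
\]
% so regulating the vector part of the error quaternion to zero regulates the
% rotation, while avoiding the singularity of the axis-angle parameterization.
```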

18.
Neural-Network-Based Robot Visual Servo Control
Visual servoing can be applied in control systems for automatic guidance of initial robot positioning, obstacle avoidance, trajectory tracking, and moving-target tracking. A traditional visual servo system involves two run-time processes, workspace localization and inverse dynamics computation, and must compute the visual Jacobian and the robot's inverse Jacobian in real time, which is computationally expensive and makes the system structure complex. This paper analyzes the basic principle of image-based robot visual servoing and uses a BP neural network to determine the joint angles required to reach a specified pose, fusing the visual information directly into the servo process and greatly simplifying the control algorithm while maintaining servo accuracy. Simulation experiments on a model of the Puma560 industrial robot verify the effectiveness of the method.

19.
A new uncalibrated eye-to-hand visual servoing approach based on inverse fuzzy modeling is proposed in this paper. In classical visual servoing, the Jacobian plays a decisive role in the convergence of the controller, as its analytical model depends on the selected image features, and it must also be inverted online. Fuzzy modeling is applied to obtain an inverse model of the mapping between image-feature variations and joint velocities. This approach is independent of the robot's kinematic model and camera calibration, and it also avoids inverting the Jacobian online. An inverse model is identified for the robot workspace using measurement data from a robotic manipulator and is used directly as a controller. The inverse fuzzy control scheme is applied to a robotic manipulator performing visual servoing for random positioning in the workspace. The experimental results show the effectiveness of the proposed control scheme: the fuzzy controller can position the manipulator at any point in the workspace with better accuracy than the classical visual servoing approach.

20.
This paper deals with a motion control system for a space robot with a manipulator. Many motion controllers require the positions of the robot body and the manipulator hand with respect to an inertial coordinate system, and a camera-based visual sensor is frequently used to measure them. However, there are two difficulties in such measurement: first, the camera is mounted on the robot body, so it is difficult to measure the position of the robot body directly; second, the sampling period of a vision system with a general-purpose camera is much longer than that of a typical servo system. In this paper, we develop an adaptive state observer that overcomes both difficulties. To investigate its performance, we design a motion control system that combines the observer with a PD control input and conduct numerical simulations of the closed-loop system. The simulation results demonstrate the effectiveness of the proposed observer.
