Similar Articles
20 similar articles retrieved (search time: 15 ms)
1.
This paper presents a framework for the hand-eye relation in visual servoing that offers greater precision, mobility, and a global view. There are two main camera configurations for visual servoing: eye-in-hand and eye-to-hand. Each has its own merits and drawbacks with respect to precision and viewing range, and these trade off against each other. Exploiting both behaviors, this paper employs a mobile manipulator as a second robot to hold the camera. The camera is in an eye-to-hand configuration with respect to the main robot but behaves essentially as an eye-in-hand configuration for the second robot. With this framework, the drawback of each configuration is compensated by the benefit of the other: the camera becomes mobile, with higher precision and a global view. In addition, since no extra camera is added, the vision algorithm can be kept simple. To achieve real-time visual servoing, the paper also addresses real-time constraints on the vision system and on the data communication between the robot and the vision system. A hexagonal artificial-marker pattern with simplified image processing is developed. A grasp-positioning problem is considered using position-based dynamic look-and-move visual control through object pose estimation. The system performance is validated by experimental results.
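A minimal sketch of a position-based dynamic look-and-move loop of the kind described above, assuming hypothetical `camera.estimate_object_pose` and `robot.move_relative` interfaces; the gain, tolerance, and translational-only error are illustrative simplifications, not the paper's controller.

```python
import numpy as np

def look_and_move(camera, robot, goal_T_obj, gain=0.3, tol=1e-3, max_iters=100):
    """Position-based dynamic look-and-move: re-estimate the object pose each
    cycle and command a fraction of the remaining Cartesian error."""
    for _ in range(max_iters):
        cam_T_obj = camera.estimate_object_pose()     # pose from marker detection (hypothetical API)
        error = goal_T_obj[:3, 3] - cam_T_obj[:3, 3]  # translational error only, for brevity
        if np.linalg.norm(error) < tol:
            return True
        robot.move_relative(gain * error)             # outer visual loop feeding the robot's joint servo
    return False
```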

2.
王志武  张子淼  许凯  张福民  刘斌 《红外与激光工程》2022,51(3):20210241-1-20210241-10
In vision-based pose measurement, nonlinear pose-measurement algorithms have uncertain global convergence: the result depends on the choice of initial value, so robust pose measurement cannot be guaranteed. Linear pose-measurement algorithms, in turn, place high demands on image processing: if the image coordinates of the locating feature points are not extracted accurately enough, the pose-measurement accuracy degrades. Under natural light, high-brightness regions in the captured images affect the extraction accuracy of the locating feature points, reduce the number of valid feature points, and thus degrade pose-measurement accuracy. To address these problems, this paper proposes a linear pose-measurement method based on the optimal polarization angle: a linear polarizer is mounted in front of the camera lens, a model for solving the optimal polarizer angle is built from the Stokes vector, the locating-feature-point images are captured at the optimal polarization angle and their image coordinates are extracted, and a linear pose-solving model is then established to compute the pose of the measured object. Experimental results show that the method effectively reduces high-brightness regions in the image, improves imaging quality, and increases linear pose-measurement accuracy: over a measurement range of −60° to +60°, the angle-measurement error is less than ±0.16°, and over a range of 0-20 mm, the displacement-measurement error is less than ±0.05 mm.
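The optimal linear-polarizer angle in this abstract is derived from a Stokes-vector model; a common way to obtain the Stokes components and the angle of linear polarization from intensities measured at four polarizer orientations is sketched below (a generic formulation, not the authors' exact solver).

```python
import numpy as np

def angle_of_linear_polarization(i0, i45, i90, i135):
    """Estimate Stokes components S0, S1, S2 from intensity images taken through
    a linear polarizer at 0/45/90/135 degrees and return the polarization angle."""
    s0 = i0 + i90                       # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    aolp = 0.5 * np.arctan2(s2, s1)     # angle of linear polarization, radians
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of linear polarization
    return aolp, dolp
```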

3.
Population-Based Uncalibrated Visual Servoing
This paper presents the implementation of a recently proposed method for visual servoing. The method is based on a generalization of secant methods for nonlinear optimization. The difference from existing visual-servoing approaches is that no linear model is imposed to interpolate the goal function. Instead, the linear model is identified by building a secant model from a population of previous iterates, chosen to be as close as possible to the nonlinear function in the least-squares sense. The new system is shown to be less sensitive to noise and to converge faster than conventional quasi-Newton methods. The theoretical results are verified both experimentally and in simulation.
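The population-based secant idea can be illustrated by fitting the image Jacobian in the least-squares sense to a window of previous iterates; the sketch below is a generic reconstruction of that idea, not the authors' exact formulation.

```python
import numpy as np

def estimate_jacobian(q_hist, s_hist):
    """Fit a linear (secant) model J such that J @ dq ~= ds over a population of
    past iterates, in the least-squares sense."""
    dq = np.diff(np.asarray(q_hist), axis=0)       # joint increments, shape (k, n)
    ds = np.diff(np.asarray(s_hist), axis=0)       # image-feature increments, shape (k, m)
    X, *_ = np.linalg.lstsq(dq, ds, rcond=None)    # solves dq @ X ~= ds
    return X.T                                     # image Jacobian, shape (m, n)

def servo_step(J, s, s_goal, gain=0.5):
    """One uncalibrated visual-servoing step using the estimated Jacobian."""
    e = s_goal - s
    return gain * np.linalg.pinv(J) @ e            # joint increment toward the goal features
```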

4.
《Mechatronics》2000,10(1-2):1-18
A visual servoing algorithm is proposed for a robot with a camera in its hand to track a moving object in terms of image features and their variations, where fuzzy logic and fuzzy-neural networks are used to learn a feature-Jacobian-based kinematic control law. Specifically, novel image features are proposed by employing a perspective-projection viewing model to estimate the relative pitch and yaw angles. Such perspective-projection-based features do not interact with the relative distance between the object and the camera. The desired feature trajectories for learning the visually guided line-of-sight robot motion are obtained by measuring features with the in-hand camera not over the entire workspace but along a single linear path, along which the robot moves under a commercially provided linear-motion function; the camera control actions needed to follow these desired feature trajectories are then approximated by fuzzy-neural networks. To show the validity of the proposed algorithm, experimental results are presented using a four-axis SCARA robot with a black-and-white CCD camera.

5.
Traditionally, a visual servo control problem is formulated in the teach-by-showing framework, with the objective of regulating a camera based on a reference (or desired) image obtained by positioning the same camera a priori at the desired task-space location. A new strategy is needed for applications where the camera cannot be positioned a priori at the desired position/orientation. In this paper, a visual servo control approach called "teach by zooming" is formulated, where the objective is to position/orient a camera based on a reference image obtained by another camera. For example, a fixed camera providing a wide-area view of the scene can zoom in on an object and record a desired image for another camera. A nonlinear Lyapunov-based controller is designed to regulate the image features acquired by an on-board camera to the corresponding image-feature coordinates in the desired image acquired by the fixed camera, in the presence of uncertain camera calibration parameters. The proposed control formulation becomes identical to the well-known teach-by-showing controller when the camera-in-hand can be placed a priori at the desired position/orientation, thus enabling control in a wide range of applications. Experimental results for regulation control of a 7-degree-of-freedom robotic manipulator demonstrate the performance of the proposed visual servo controller.

6.
A control scheme for a robotic manipulator system that uses visual information to position and orient the end-effector is described. The control system directly integrates visual data into the servoing process, without splitting the process into determination of the position and orientation of the workpiece followed by inverse kinematic calculation. The key feature of the control scheme is the use of neural networks to determine the change in joint angles required to achieve the desired position and orientation. The proposed system is able to control the robot so that it approaches the desired position and orientation from arbitrary initial ones. Simulations for a robotic manipulator with six degrees of freedom are described, and the validity and effectiveness of the proposed control scheme are verified by the computer simulations.

7.
Surgeries of the skull base require accuracy to safely navigate the critical anatomy. This is particularly true for endoscopic endonasal skull base surgery (ESBS), where surgeons work within millimeters of neurovascular structures at the skull base. Today's navigation systems provide approximately 2 mm accuracy; accuracy is limited by the indirect relationship between the navigation system, the image, and the patient. We propose a method to directly track the position of the endoscope using video data acquired from the endoscope camera. Our method first tracks image feature points in the video and reconstructs them into 3D points, then registers the reconstructed point cloud to a surface segmented from preoperative computed tomography (CT) data. After the initial registration, the system tracks image features and maintains the 2D-3D correspondences between image features and 3D locations; these data are then used to update the current camera pose. We present a method for validating the system, which achieves submillimeter (0.70 mm mean) target registration error (TRE).
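Once the initial registration provides 2D-3D correspondences, the per-frame camera-pose update described above is essentially a PnP problem; a minimal OpenCV sketch (assuming arrays `pts3d`, `pts2d` and intrinsics `K` are available) could look like this, not necessarily the authors' exact solver.

```python
import cv2
import numpy as np

def update_camera_pose(pts3d, pts2d, K, dist=None):
    """Estimate the endoscope camera pose from tracked 2D features and their
    registered 3D locations (e.g., points on the CT-derived surface)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d, dtype=np.float64),
        np.asarray(pts2d, dtype=np.float64),
        K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec               # pose of the CT frame expressed in the camera frame
```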

8.
Real-time and reliable head-pose tracking is the basis of human-computer interaction and face-analysis applications. To address the accuracy and real-time limitations of current tracking methods, this paper proposes a head-pose tracking method based on stereo visual SLAM. A sparse head map is constructed from ORB feature extraction and stereo matching, and the 3D-2D correspondences between 3D map points and 2D feature points are obtained by projection matching. Finally, the camera pose solved by bundle adjustment is converted to the head pose, realizing head-pose tracking. Experimental results show that the method obtains highly precise head poses: the mean errors of the three Euler angles are all less than 1°. The proposed method can therefore track and estimate the head pose precisely and in real time against a plain background.
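Converting the camera pose returned by bundle adjustment into head-pose Euler angles can be sketched as below; the ZYX (yaw-pitch-roll) convention and the assumption that the head frame coincides with the SLAM map frame are mine, since the abstract does not state them.

```python
import numpy as np

def rotation_to_euler_zyx(R):
    """Decompose a 3x3 rotation matrix into yaw-pitch-roll (ZYX) angles, degrees."""
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-np.clip(R[2, 0], -1.0, 1.0))
    roll = np.arctan2(R[2, 1], R[2, 2])
    return np.degrees([yaw, pitch, roll])

def head_pose_from_camera(T_cw):
    """Given the world-to-camera transform (4x4) from bundle adjustment, return
    the head (camera) orientation and position in the world frame."""
    R, t = T_cw[:3, :3], T_cw[:3, 3]
    R_wc = R.T            # camera-to-world rotation
    t_wc = -R.T @ t       # camera centre in world coordinates
    return rotation_to_euler_zyx(R_wc), t_wc
```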

9.
To obtain accurate laser target-shooting experimental data, a diagnostic mounting platform must carry physical diagnostic equipment and align it precisely with the shot target. To address the long alignment time and large error (RMS) of traditional methods, an accurate automatic alignment method based on visual servoing is proposed, tailored to the characteristics of the mounting platform and the diagnostic equipment. First, three pointing vectors are constructed to estimate the target deviation in the stereo vision system; under weak-perspective conditions the estimate is close to the true value. Then, a three-degree-of-freedom attitude-adjustment model is established to improve attitude-adjustment accuracy. Finally, a visual servoing controller is designed from the pointing vectors and the adjustment model; only a single offline calibration is needed for fast alignment. With these improvements, accurate automatic alignment of the physical diagnostic equipment is achieved. Experimental results show that the alignment accuracy (RMS) of the diagnostic equipment is 11 μm in the x pointing direction and 12 μm in the y pointing direction. In a laser target-shooting validation with a mounted framing camera, X-ray focal-spot images of the physics experiment were obtained, showing that the alignment method meets engineering requirements.

10.
Motivated by visual pose measurement of non-cooperative spacecraft, this paper addresses the problem that at close and very close range, due to the limited imaging space and camera field of view, the visual features used for pose measurement cannot be imaged completely by a single camera, so localization cannot be completed. A pose-measurement method for non-cooperative spacecraft using multiple cameras with non-overlapping fields of view is proposed. First, several cameras are arranged with non-overlapping fields of view and the relative poses between them are calibrated. Then, the cameras image different features on the target; the low-level information from the different cameras is both redundant and complementary, providing sufficient visual and geometric features for pose measurement. Finally, the feature information from all cameras is fused using the calibrated inter-camera poses, and the target pose is computed in closed form. Experimental results verify the effectiveness of the method and its advantage for close-range pose measurement of large targets.
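The fusion step above relies on expressing every camera's observations in one common reference frame via the calibrated extrinsics; a minimal sketch of lifting a pixel observation from camera j into a 3D ray in the reference frame is shown below (the closed-form pose solver that consumes these rays is omitted, and the interface names are illustrative).

```python
import numpy as np

def lift_to_reference(pixel, K_j, R_ref_j, t_ref_j):
    """Turn a pixel seen by camera j into a ray (origin, direction) expressed in
    the reference camera frame, using the calibrated extrinsics (R_ref_j, t_ref_j)."""
    u, v = pixel
    d_cam = np.linalg.inv(K_j) @ np.array([u, v, 1.0])   # ray direction in camera j's frame
    direction = R_ref_j @ (d_cam / np.linalg.norm(d_cam))
    origin = t_ref_j                                     # camera-j centre in the reference frame
    return origin, direction
```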

11.
This paper proposes a sensor-based design methodology for designing a Delta robot with guaranteed accuracy performance for a dedicated sensor-based controller. The methodology takes the accuracy performance of the controller into account in the design process in order to find the optimal geometric parameters of the robot. Three types of controllers are envisaged for the Delta robot, leading to three different optimal designs: leg-direction-based visual servoing, line-based visual servoing, and image-moment visual servoing. Based on these three controllers, positioning-error models that account for the observation error of the camera are developed, and the controller singularities are analyzed. Design optimization problems are then formulated to find the optimal geometric parameters of the Delta robot and the relevant camera parameters for each type of controller. Prototypes of Delta robots have been manufactured from the resulting optimum design parameters in order to test the performance of each robot-controller pair.

12.
This paper presents the design of a visual controller, formulated as an adaptive sliding-mode controller, for a quadrotor executing a target-tracking task subject to disturbances. An image projection of the target from a virtual-camera approach and an image-based visual servoing technique are used to obtain a singularity-free set of image features for controlling the position and yaw of the rotorcraft, while an adaptive sliding-mode strategy improves robustness against bounded external perturbations and uncertainties and adds adaptivity to the visual servoing scheme. Furthermore, an analysis based on Lyapunov theory provides sufficient conditions that guarantee the stability of the closed-loop system. A comparison of the proposed adaptive visual servoing against two recent visual servoing strategies shows its superiority in simulation. Finally, experimental tests of a Parrot AR.Drone 2.0 tracking static and moving targets further demonstrate its advantages and performance.
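The adaptive sliding-mode idea (a switching gain that grows with the sliding variable to absorb bounded disturbances) can be illustrated with a scalar per-axis sketch; the sliding-surface slope, adaptation rate, boundary-layer width, and time step below are illustrative values, not the paper's design.

```python
import numpy as np

def adaptive_smc_step(e, e_dot, k_hat, lam=1.0, gamma=0.5, eps=0.05, dt=0.01):
    """One step of a scalar adaptive sliding-mode law (illustrative only)."""
    s = e_dot + lam * e                  # sliding variable built from the feature error
    k_hat = k_hat + gamma * abs(s) * dt  # adapt the switching gain online
    u = -k_hat * np.tanh(s / eps)        # smoothed sign() to limit chattering
    return u, k_hat
```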

13.
张世辉  张钰程 《电子学报》2016,44(2):445-452
Determining the camera's next best view from the currently observed information is a challenging problem in computer vision. This paper proposes a method that uses occlusion information in a single depth image to solve for the next best view. The method first uses the occlusion information in the depth image acquired at the current viewpoint to perform a quadrilateral tessellation of the surface bounding the occluded region, thereby building a model of that bounding surface. It then constructs a next-best-view model by jointly considering the visible quadrilateral information and the observation loss in the next observation. Finally, gradient descent is used to solve the model and obtain the next best view. Compared with existing methods, the proposed approach neither requires the camera to be fixed to a particular surface nor requires prior knowledge of the visual target. Experimental results verify the feasibility and effectiveness of the method.

14.
To address the small sample sizes and limited feature diversity of person data collected in visual surveillance, a method is proposed that uses a bidirectional generative adversarial network with strong visual-perception constraints to generate images of a person in a desired pose. A single image of the person and a 2D skeleton of the desired pose are used as the input to the bidirectional GAN, which generates an image of the target person in the desired pose. The generated desired-pose image is then mapped back to the original-pose image, and learning proceeds in an unsupervised manner from only a small number of images, yielding high-quality images of the person in the desired pose. The method was evaluated on the public DeepFashion dataset; the results show that the structural similarity (SSIM) of images generated by the proposed method is 0.28 higher than that of previous methods, effectively improving the quality of unsupervised single-person multi-pose image generation.

15.
This paper presents a vision-related technique for real-time visual servoing. The method combines the global search capability of a genetic algorithm (GA) with a GA-based local search technique. In the GA process, the fitness function is computed from the configuration of an object model referred to as the surface-strips model. To evaluate the effectiveness of the proposed recognition method, experiments were performed in which a fish was tracked by a hand-eye camera and caught with a net attached to the manipulator's hand.

16.
Position-based visual servoing is a widely adopted tool in robotics and automation. While the extended Kalman filter (EKF) has been proposed as an effective technique for this, it requires accurate noise covariance matrices to deliver the desired performance. Although numerous techniques for updating or estimating the covariance matrices have been developed in the literature, many of them suffer from computational limits or from difficulties in imposing structural constraints such as positive semi-definiteness (PSD). In this paper, a relatively new framework, the autocovariance least-squares (ALS) method, is applied to estimate noise covariances using real-world visual servoing data. To generate the innovations data required for the ALS method, we use standard position-based visual servoing methods such as the EKF, as well as an advanced optimization-based framework, moving horizon estimation (MHE). A major advantage of the proposed method is that PSD and other structural constraints on the noise covariances can be enforced conveniently in the optimization problem, which can be solved efficiently using existing software packages. Our results show that using the ALS-estimated covariances in the EKF, instead of hand-tuned covariances, gives more than 20% mean error reduction in visual servoing, while using MHE to generate the ALS innovations provides a further 21% accuracy improvement.
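The data the ALS method fits are the sample autocovariances of the filter innovations; a minimal sketch of computing those lagged covariances (the raw data of the ALS least-squares problem) is shown below, leaving the structured covariance solve to an external constrained optimizer so that PSD constraints can be enforced as the abstract describes.

```python
import numpy as np

def innovation_autocovariances(innovations, num_lags):
    """Sample autocovariance matrices C_j = E[v_k v_{k+j}^T] of an innovation
    sequence (N x p array); these form the data vector of the ALS fit."""
    v = np.asarray(innovations)
    N, p = v.shape
    covs = []
    for j in range(num_lags):
        C_j = sum(np.outer(v[k], v[k + j]) for k in range(N - j)) / (N - j)
        covs.append(C_j)
    return covs   # stack / vectorize these before solving the constrained least-squares problem
```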

17.
董伯麟  柴旭 《压电与声光》2020,42(5):724-728
For mobile robots relying on vision sensors, fast motion or rotation causes image blur and feature loss, so feature matching fails and the accuracy and precision of localization and mapping degrade. To address this, a scheme fusing a depth (RGB-D) camera with an inertial measurement unit (IMU) is proposed. The ORB-SLAM2 algorithm is used for pose estimation, while the IMU information serves as a constraint to compensate for missing camera data. The measurements of the two sensors are fused and nonlinearly optimized in a loosely coupled manner based on an extended Kalman filter. Data-collection experiments show that the method effectively improves the robot's localization accuracy and the quality of the resulting map.
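A loosely coupled EKF that propagates with the IMU and corrects with the ORB-SLAM2 pose measurement can be sketched as below; the state layout, the motion model `f`/`F_jac`, and the noise matrices are placeholders for illustration (in practice the rotation part of the pose needs a manifold-aware update rather than plain subtraction).

```python
import numpy as np

def ekf_predict(x, P, u, dt, f, F_jac, Q):
    """Propagate the state with the IMU motion model f and its Jacobian F."""
    x_pred = f(x, u, dt)
    F = F_jac(x, u, dt)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update_with_vision_pose(x_pred, P_pred, z_pose, H, R):
    """Correct the prediction with the camera pose from ORB-SLAM2 (measurement z)."""
    y = z_pose - H @ x_pred                 # innovation (linear measurement assumed for brevity)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x_pred + K @ y
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P
```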

18.
胡章芳  张杰  程亮 《半导体光电》2020,41(4):548-554, 559
To retain the speed of direct methods together with the high accuracy and loop-closing capability of feature-based methods, an RGB-D simultaneous localization and mapping (SLAM) algorithm that fuses direct and feature-based methods is proposed. The algorithm consists of three parallel threads: tracking, local mapping, and loop closing. The tracking thread tracks non-keyframes: the initial camera pose and pixel correspondences are estimated by minimizing the photometric image error, and the pose is further refined by minimizing the reprojection error of local map points, achieving fast and accurate tracking and localization. The local mapping thread extracts and matches ORB features on keyframes and performs local bundle adjustment (BA) to optimize the poses of local keyframes and the positions of local map points, improving the local consistency of the SLAM system. The loop-closing thread performs loop detection and optimization on keyframes to ensure global consistency. In addition, a complete and accurate dense 3D environment map is built from the RGB-D images and camera poses using an Octomap-based mapping framework. Experiments on the TUM dataset show that the proposed method achieves accuracy comparable to feature-based methods while requiring less time.
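The direct-tracking step minimizes a photometric error between a reference keyframe and the current frame; a sketch of evaluating that residual for pixels with known depth, given intrinsics `K` and a candidate pose `(R, t)`, is shown below (nearest-neighbor lookup instead of bilinear interpolation, and no robust weighting, for brevity).

```python
import numpy as np

def photometric_residuals(I_ref, I_cur, pts, depths, K, R, t):
    """Residuals r_i = I_ref(p_i) - I_cur(project(R * backproject(p_i, d_i) + t))."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    res = []
    for (u, v), d in zip(pts, depths):
        X = d * np.array([(u - cx) / fx, (v - cy) / fy, 1.0])   # back-project with depth
        Xc = R @ X + t                                          # warp into the current frame
        if Xc[2] <= 0:
            continue
        u2 = fx * Xc[0] / Xc[2] + cx
        v2 = fy * Xc[1] / Xc[2] + cy
        if 0 <= int(v2) < I_cur.shape[0] and 0 <= int(u2) < I_cur.shape[1]:
            res.append(float(I_ref[v, u]) - float(I_cur[int(v2), int(u2)]))
    return np.array(res)
```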

19.
张贵阳  霍炬  杨明  周婞  魏亮  薛牧遥 《红外与激光工程》2021,50(4):20200280-1-20200280-11
To calibrate the multiple camera parameters required for spacecraft pose measurement, a camera-calibration method based on weighted differential evolution particle swarm optimization with a dual update strategy is proposed. An adaptive decision factor controls the proportion of weighted differential evolution (WDE) and particle swarm optimization (PSO) updates invoked in each iteration, so that each individual is updated by PSO or by WDE according to a probability rule, and an information-exchange mechanism uses the individuals produced by the WDE operation to guide the evolution of the individuals in the PSO operation. The proposed WDEPSO algorithm preserves the diversity and effectiveness of the evolving population and is coupled with the nonlinear camera-calibration model, so that the intrinsic and extrinsic parameters are optimized jointly in a combined nonlinear, globally continuous manner, overcoming the local convergence caused when saturated background light in the target scene invalidates some of the limited feature points. Experiments show that the proposed method converges to a smaller objective-function value and achieves higher calibration accuracy: measurement of a reference bar with the calibrated parameters is accurate to better than 0.40 mm, the reconstructed attitude error under large-amplitude angular motion of the target is less than 0.30°, and repeated measurements are stable.
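The dual-update idea (choosing per individual between a PSO velocity update and a weighted-DE mutation, steered by an adaptive factor) can be sketched generically as below; the reprojection-error objective `cost`, the way the adaptive factor is produced, and all constants are illustrative stand-ins, not the paper's WDEPSO.

```python
import numpy as np

def hybrid_step(pop, vel, pbest, gbest, cost, adapt_factor,
                w=0.7, c1=1.5, c2=1.5, F=0.6, CR=0.9):
    """One generation: each individual is updated by PSO or by differential
    evolution, chosen at random with probability given by the adaptive factor."""
    rng = np.random.default_rng()
    n, dim = pop.shape
    for i in range(n):
        if rng.random() < adapt_factor:                   # PSO branch
            r1, r2 = rng.random(dim), rng.random(dim)
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pop[i])
                      + c2 * r2 * (gbest - pop[i]))
            trial = pop[i] + vel[i]
        else:                                             # weighted-DE branch (DE/rand/1 style)
            a, b, c = pop[rng.choice(n, 3, replace=False)]
            mutant = a + F * (b - c)
            mask = rng.random(dim) < CR
            trial = np.where(mask, mutant, pop[i])
        if cost(trial) < cost(pop[i]):                    # greedy selection
            pop[i] = trial
            if cost(trial) < cost(pbest[i]):
                pbest[i] = trial
    gbest = min(pbest, key=cost)
    return pop, vel, pbest, gbest
```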

20.
Camera Calibration in Small Field-of-View Environments
郭涛  达飞鹏  方旭 《中国激光》2012,39(8):808001-174
Camera calibration establishes the mapping between 3D world coordinates and 2D image coordinates. Traditional calibration targets consist of dozens of regularly arranged standard circles or grids, and calibration proceeds by extracting the circle-center or grid-corner coordinates. In measurement systems for very small objects, the camera's field of view is small and not enough point coordinates can be extracted from a traditional target. To address this, a hybrid calibration method based on conics and straight lines is proposed. Instead of relying on point-to-point correspondences, the method uses the correspondence between conic equations and line equations in the two coordinate systems, so the camera can still extract enough information for calibration even within a very small field of view. Simulations and experiments show that, compared with point-based calibration, the hybrid method achieves higher accuracy and better robustness. Moreover, the calibration template is a standard semicircle, which is simple to manufacture and convenient to use in small-field-of-view environments.
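A basic building block of such a hybrid method is recovering the conic equation from image points; a generic least-squares conic fit (not the authors' full calibration pipeline) can be sketched as follows.

```python
import numpy as np

def fit_conic(pts):
    """Least-squares fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to image points, via the (approximate) null space of the design matrix."""
    pts = np.asarray(pts, dtype=np.float64)
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]   # conic coefficients (a, b, c, d, e, f), defined up to scale
```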
