Similar Literature
20 similar documents found.
1.
In image-based visual servoing, changes in the image are interpreted directly as camera motion rather than as Cartesian velocity commands for the manipulator end-effector, which produces roundabout end-effector trajectories and the camera-retreat phenomenon. To address this problem, a visual servoing scheme is proposed that decouples rotation from translation and executes the rotation first. The scheme has a small computational load and a short system response time; it removes the interference between image rotation and translation, overcomes the camera retreat produced by traditional image-based visual servoing (IBVS), and achieves time- and path-optimal control. The cause of the retreat phenomenon is also explained using the traditional IBVS control law and the camera imaging model. Two-dimensional motion simulations demonstrate the effectiveness of the proposed scheme.
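As background for the retreat phenomenon discussed above, the classic point-feature IBVS control law can be sketched in a few lines. This is a minimal illustration of the standard formulation (not the decoupled rotation-first scheme the paper proposes), with illustrative function names:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Standard IBVS interaction matrix for one normalized image point (x, y)
    # at depth Z; columns correspond to (vx, vy, vz, wx, wy, wz).
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,       -(1 + x * x),  y],
        [0.0,      -1.0 / Z, y / Z, 1 + y * y,   -x * y,       -x],
    ])

def ibvs_velocity(points, desired, Z, lam=0.5):
    # Classic control law v = -lambda * pinv(L) * e, stacking one
    # interaction matrix per feature point (single shared depth for brevity).
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in points])
    e = (np.asarray(points, dtype=float) - np.asarray(desired, dtype=float)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

Because the rotational and translational columns of this matrix are coupled, a pure image-space rotation commands translational velocity too, which is exactly the retreat behavior the paper analyzes.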

2.
Advances in Robot Visual Servoing   (Cited by: 36; self-citations: 1; others: 35)
王麟琨  徐德  谭民 《机器人》2004,26(3):277-282
This paper introduces the architecture and main research topics of robot visual servo systems, compares the principal visual servoing methods in current use, and, for the main problems facing robot visual servoing, reviews a number of recently proposed solutions in detail.

3.
Research on Robot Visual Servo Systems   (Cited by: 31; self-citations: 0; others: 31)
Robot visual servo systems involve many disciplines. Focusing on their three main aspects — system architecture, image processing, and control methods — this paper reviews the state of research and the achievements in this field, and concludes with an analysis of future development trends.

4.
To address the problem that the performance of the average landmark vector (ALV) algorithm for mobile robots is strongly affected by natural landmarks, an optimized algorithm is proposed. With natural landmarks obtained through image feature detection and matching methods such as SIFT (scale-invariant feature transform) and SURF (speeded-up robust features), the optimized algorithm first decomposes the original ALV computation to obtain homing sub-vectors; it then adjusts the contribution of each sub-vector using statistical theory and removes mismatched landmarks; finally, it re-aggregates the weighted homing sub-vectors into a homing vector pointing to the goal position. Experiments show that the optimized ALV algorithm effectively improves the overall accuracy of the natural landmarks and preserves landmark correspondence, thereby improving the accuracy of the ALV algorithm and allowing the robot to reach the goal position autonomously along a more desirable trajectory.
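The underlying ALV homing idea can be sketched compactly: average the unit bearing vectors to the landmarks at the current and goal positions, and steer along their difference. This is a simplified 2-D illustration with hypothetical function names; the paper's contribution (statistical weighting and mismatch rejection) is only hinted at by the optional `weights` argument:

```python
import numpy as np

def alv(landmarks, position, weights=None):
    # Average of (optionally weighted) unit vectors from `position`
    # toward each landmark.
    vecs = np.asarray(landmarks, dtype=float) - np.asarray(position, dtype=float)
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    w = np.ones(len(vecs)) if weights is None else np.asarray(weights, dtype=float)
    return (vecs * w[:, None]).sum(axis=0) / w.sum()

def homing_vector(landmarks, current, goal, weights=None):
    # The ALV difference points (approximately) from `current` toward `goal`.
    return alv(landmarks, current, weights) - alv(landmarks, goal, weights)
```

With symmetric landmarks around the goal, the difference of ALVs points back toward the goal, which is the property the homing controller exploits.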

5.
Large-Range Planar Visual Servo Control Based on Image Differences   (Cited by: 2; self-citations: 1; others: 1)
To solve the control problem under large-range deviations, the desired image is rotated at given angular intervals to generate a series of desired sub-images offline. Comparing the degree of difference between the image acquired in real time and each desired sub-image yields the target's rotation about its centroid. Combined with the translation parameters given by the image-centroid method, this allows the camera to be brought quickly to the desired pose from a large initial deviation. Near the desired pose, direct image feedback is added, achieving large-range planar visual servo control based on image differences.

6.
徐璠  王贺升 《自动化学报》2023,49(4):744-753
Underwater biomimetic soft robots have great application value in underwater environment surveying, underwater biological observation, and related tasks. To further improve the control performance of an octopus-arm-inspired soft robot in special underwater environments, an adaptive robust visual servo control method is proposed that achieves high-precision regulation in disturbed, uncalibrated environments. Based on an underwater dynamics model, a controller guaranteeing dynamic stability is designed. Because offline calibration of soft materials is tedious and costly, an adaptive estimation algorithm for the material parameters is proposed. For the special underwater working conditions, an adaptive robust visual servo controller is designed that compensates for refraction effects online and, by adaptively bounding the unknown environmental disturbances, avoids the need for prior environmental information. The regulation performance of the proposed algorithm is verified on a soft robot prototype, providing a theoretical basis for practical applications of biomimetic soft robots.

7.
M.T. Hussein 《Advanced Robotics》2013,27(24):1575-1585
In this review, recent developments in the field of flexible robot arm control using visual servoing are surveyed. In comparison with rigid robots, the end-effector position of a flexible link cannot be obtained precisely enough from kinematic information and joint variables alone. To solve this, visual servoing with a vision sensor (camera) is proposed to control flexible manipulators under improved quality requirements. The paper is organized as follows: visual servoing architectures are first reviewed for rigid robots, and the advantages, disadvantages, and comparisons between different visual servoing approaches are presented. The use of visual servoing to control flexible robots is addressed next. Open problems, such as state-variable estimation, the combination of different sensor properties, and application-oriented issues related to flexible robots, are discussed in detail.

8.
To address the complexity and poor generality of marking, extracting, and matching geometric image features in traditional visual servoing, this paper proposes a four-degree-of-freedom (4-DOF) robot visual servoing method based on image moments. First, the nonlinear incremental mapping between image moments and robot pose in an eye-in-hand system is established, providing the theoretical basis for visual servo control with image moments. Then, without calibrating the camera or the hand-eye relationship, a visual servo control scheme based on image moments is designed using the nonlinear mapping capability of a back-propagation (BP) neural network. Finally, the trained network is used for visual servo tracking control. Experimental results show that the method achieves 0.5 mm position and 0.5° orientation tracking accuracy, verifying its effectiveness and good servo performance.
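For readers unfamiliar with image moments, the low-order moments such methods typically feed to the controller (area, centroid, principal-axis orientation) can be computed directly from pixel intensities. A NumPy-only sketch with illustrative names, not the paper's exact feature set:

```python
import numpy as np

def image_moments(img):
    # Raw moment m00 (area/mass), centroid (xc, yc), and principal-axis
    # orientation theta from the second-order central moments.
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xc, yc = (xs * img).sum() / m00, (ys * img).sum() / m00
    mu20 = ((xs - xc) ** 2 * img).sum()
    mu02 = ((ys - yc) ** 2 * img).sum()
    mu11 = ((xs - xc) * (ys - yc) * img).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return xc, yc, m00, theta
```

In a 4-DOF setting, (xc, yc) relate to translation parallel to the image plane, m00 to depth, and theta to rotation about the optical axis, which is why these four features pair naturally with four degrees of freedom.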

9.
A general scheme to represent the relation between dynamic images and camera and/or object motions is proposed for applications to visual control of robots. We consider the case where a moving camera observes moving objects in a static scene. The camera obtains images of the objects moving within the scene. The possible combinations of the camera and object poses and the obtained images are then not arbitrary but mutually constrained. Here we represent this constraint as a lower-dimensional hypersurface in the product space of all combinations of their motion control parameters and image data. Visual control is interpreted as finding a path on this surface leading to the poses where a given goal image will be obtained. In this paper, we propose a visual control method that utilizes tangential properties of this surface. First, we represent images as a composition of a small number of eigenimages by using the K-L (Karhunen-Loève) expansion. Then, we reconstruct the eigenspace (the eigenimage space) to achieve efficient and straightforward control; this reconstruction results in the constraint surface being mostly flat within the eigenspace. By this method, visual control of robots in a complex configuration is achieved without image processing to extract and correspond image features in dynamic images. The method also needs no camera or hand-eye calibration. Experimental results of visual servoing with the proposed method show the feasibility and applicability of our newly proposed approach to simultaneous control of camera self-motion and object motions.

10.
Research on Image Processing for Visual Servoing of a Humanoid Space Manipulator   (Cited by: 1; self-citations: 0; others: 1)
胡硕  王芳  秦利  刘福才 《控制工程》2015,22(1):25-31
For a four-degree-of-freedom serial-parallel hybrid humanoid space manipulator, a robot hardware and software system based on a motion control card is constructed. Using a position-based control structure and the manipulator's inverse kinematics, a visual servo control system composed of image feedback and end-effector motion is designed. A static target recognition algorithm based on SURF features is proposed: the algorithm first extracts SURF features from the target image, matches feature points between the template and target images using Euclidean distance, and then computes the centroid of the matched feature points to obtain the target's position, finally achieving stable servo positioning. Experimental results demonstrate the reliability of the system and the accuracy and stability of the 4-DOF serial-parallel hybrid manipulator in actual motion.

11.
Design of a Real-Time Motion-Blur Detector for Humanoid Robot Visual Navigation   (Cited by: 1; self-citations: 0; others: 1)
To address the problem that motion blur limits the robustness of humanoid robot visual navigation systems, a real-time anomaly detection method based on motion-blur features is proposed. The negative impact of motion blur on visual navigation is first analyzed quantitatively, and the motion-blur behavior of images captured on a humanoid robot is studied. On this basis, a no-reference measure of image motion blur is computed; unsupervised anomaly detection is then applied, clustering the motion-blur features along the time series to recall blur anomalies from the data stream in real time, thereby strengthening the robustness of the robot's visual navigation against motion blur. Simulations and experiments on a humanoid robot show that, on a public benchmark dataset and a NAO humanoid dataset, the method is both fast (0.1 s per detection) and effective (98.5% recall, 90.7% precision). The detection framework also generalizes well to ground mobile robots and integrates easily with visual navigation systems.

12.
Depth-Motion Microscopic Visual Servoing for Microassembly Robots Based on Defocused Image Features   (Cited by: 2; self-citations: 0; others: 2)
吕遐东  黄心汉  王敏 《机器人》2007,29(4):357-362
To describe the depth motion of a micromanipulator, defocused image features of the manipulator are computed with a gray-level variance focus measure. In theory, the defocus feature curve is unimodal, with its peak corresponding to the depth of the microscope's optical focal plane. Since the extracted defocus features contain substantial random noise, a nonlinear tracking differentiator is used to suppress the noise and obtain a chatter-free, smooth approximation of the defocus feature and its derivative. Based on the defocus derivative signal, a coarse-fine two-stage self-optimizing visual controller is designed to position the micromanipulator accurately at the assembly-plane depth. Microassembly depth-motion experiments verify the effectiveness of the method, with a manipulator depth servo error of 75 μm.
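The gray-level variance focus measure mentioned above is simple to state: a sharply focused image has higher intensity variance than a defocused one, so the controller can climb the variance curve to the focal plane. A minimal sketch with illustrative names (the paper additionally smooths this signal with a nonlinear tracking differentiator):

```python
import numpy as np

def gray_variance(img):
    # Gray-level variance focus measure: larger when the image is in focus.
    return float(np.var(img.astype(np.float64)))

def box_blur(img):
    # Crude defocus simulation: 2x2 box average (demonstration only).
    g = img.astype(np.float64)
    return (g[:-1, :-1] + g[1:, :-1] + g[:-1, 1:] + g[1:, 1:]) / 4.0
```

Blurring a high-contrast pattern collapses its intensity spread, so the measure drops, which is the unimodal behavior the two-stage search exploits.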

13.
For the case where the camera is uncalibrated and the feature-point coordinates are unknown, this paper proposes a novel image-based adaptive visual servoing method for unmanned helicopters. The controller is designed via backstepping but, unlike existing backstepping-based visual servoing, it maps the image error into actuator space using a depth-independent matrix, thereby avoiding estimation of the feature points' depths. This design linearizes the unknown camera parameters and feature-point coordinates, so an adaptive algorithm can conveniently estimate these unknowns online; two potential functions are introduced to guarantee convergence of the image error and to prevent the parameter estimates from converging to zero. Stability of the controller, based on the nonlinear dynamics, is proved with the Lyapunov method, and simulations are provided for validation.

14.
Robot visual servo systems are an important research direction in robotics, and their study is of great significance for developing hand-eye-coordinated robots for applications in industrial production and equipment manufacturing. This paper describes a car-window assembly system based on a specular-reflection visual sensor, focusing on the system's equipment configuration and design scheme. Finally, problems in current research and directions for future work are analyzed.

15.
We present an approach for controlling robotic interactions with objects, using synthetic images generated by morphing shapes. In particular, we address the problem of positioning an eye-in-hand robotic system with respect to objects in the workspace for grasping and manipulation. In our formulation, the grasp position (and consequently the approach trajectory of the manipulator) varies with each object. The proposed solution consists of two parts. First, based on a model-based object recognition framework, images of the objects taken at the desired grasp pose are stored in a database. The recognition and identification of the grasp position for an unknown input object (selected from the family of recognizable objects) occurs by morphing its contour to the templates in the database and using the virtual energy spent during the morph as a dissimilarity measure. In the second step, the images synthesized during the morph are used to guide the eye-in-hand system and execute the grasp. The proposed method requires minimal calibration of the system. Furthermore, it conjoins techniques from shape recognition, computer graphics, and vision-based robot control in a unified engineering framework. Potential applications range from recognition and positioning with respect to partially occluded or deformable objects to planning robotic grasping based on human demonstration.

16.
We examined human navigational principles for intercepting a projected object and tested their application in the design of navigational algorithms for mobile robots. These perceptual principles utilize a viewer-based geometry that allows the robot to approach the target without time-consuming calculations to determine the world coordinates of either itself or the target. Human research supports the use of an Optical Acceleration Cancellation (OAC) strategy to achieve interception: the fielder selects a running path that nulls out the acceleration of the retinal image of an approaching ball, maintaining an image that rises at a constant rate throughout the task. We compare two robotic control algorithms for implementing the OAC strategy in cases in which the target remains in the sagittal plane, headed directly toward the robot (which only moves forward or backward). In the "passive" algorithm, the robot keeps the orientation of the camera constant, and the image of the ball rises at a constant rate. In the "active" algorithm, the robot maintains a camera fixation centered on the image of the ball and keeps the tangent of the camera angle rising at a constant rate. Performance was superior with the active algorithm in both computer simulations and trials with actual mobile robots. The performance advantage is principally due to the higher gain and effectively wider viewing angle when the camera remains centered on the ball image. The findings confirm the viability and robustness of human perceptual principles in the design of mobile robot algorithms for tasks like interception.
Thomas Sugar works in the areas of mobile robot navigation and wearable robotics assisting the gait of stroke survivors. In mobile robot navigation, he is interested in combining human perceptual principles with mobile robotics. He majored in business and mechanical engineering for his Bachelor's degrees and in mechanical engineering for his Doctoral degree, all from the University of Pennsylvania. In industry, he worked as a project engineer for W. L. Gore and Associates. He has been a faculty member in the Department of Mechanical and Aerospace Engineering and the Department of Engineering at Arizona State University. His research is currently funded by three grants from the National Science Foundation and the National Institutes of Health, and focuses on perception and action, and on wearable robots using tunable springs. Michael McBeath works in an area combining psychology and engineering. He majored in both fields for his Bachelor's degree from Brown University and again for his Doctoral degree from Stanford University. Parallel to his academic career, he worked as a research scientist at the NASA Ames Research Center and at Interval Corporation, a technology think tank funded by Microsoft co-founder Paul Allen. He has been a faculty member in the Department of Psychology at Kent State University and at Arizona State University, where he is Program Director for the Cognition and Behavior area and serves on the Executive Committee of the interdisciplinary Arts, Media, and Engineering program. His research is currently funded by three grants from the National Science Foundation, and focuses on perception and action, particularly in sports. He is best known for his research on navigational strategies used by baseball players, animals, and robots.
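The "active" OAC algorithm above keeps the tangent of the camera elevation angle rising at a constant rate. A toy controller that nulls the second difference of tan(α) might look like this; the sign convention and gain are illustrative assumptions, not the authors' implementation:

```python
def oac_speed_correction(tan_alpha, dt, gain=1.0):
    # Active OAC sketch: estimate the acceleration of tan(alpha) from the
    # last three samples and command a speed change that cancels it.
    # Under the usual convention, upward image acceleration means the ball
    # will land behind the fielder, so the robot backs up (negative output).
    t0, t1, t2 = tan_alpha[-3:]
    accel = (t2 - 2.0 * t1 + t0) / (dt * dt)
    return -gain * accel
```

A linearly rising tan(α) history produces zero correction, which corresponds to the fielder already being on an intercepting path.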

17.
Distance Measurement in Visual Navigation of Monocular Autonomous Robots   (Cited by: 3; self-citations: 0; others: 3)
吴刚  唐振民 《机器人》2010,32(6):828-832
To address the shortcomings of traditional distance-measurement methods in autonomous robot vision, a monocular camera calibration method built on B-dual-space geometry is proposed and applied to distance measurement for a monocular autonomous robot. Compared with the OpenCV calibration method, the new method computes the intrinsic parameters of the CCD camera more accurately and reduces the overall error of monocular distance measurement. In addition, by introducing an automatic moving-target extraction algorithm, automatic vehicle detection and real-time distance measurement are fused into a complete system. Actual measurements and comparative experiments verify the effectiveness of the proposed method.

18.
Vision-Based Mobile Robot Navigation Based on Path Recognition   (Cited by: 15; self-citations: 0; others: 15)
Path following is a navigation mode widely used by autonomous mobile robots, and among its sensing options vision offers advantages unmatched by other sensors, making it the main direction for intelligent mobile robot navigation. To improve the real-time performance and accuracy of mobile robot visual navigation, a vision-based navigation system built on path recognition is proposed. Its basic idea is to first extract a set of target support points from the raw scene image using variable-resolution sampling binarization and morphological denoising, then detect the path in the scene with an improved Hough transform, and finally let the path-tracking module compute the navigation commands separately for the straight-ahead and turning cases. Experimental results show that the system achieves good real-time performance and accuracy.
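The Hough line-detection step can be illustrated with a minimal (ρ, θ) accumulator. This is a NumPy-only sketch of the standard transform with illustrative names, not the improved variant the paper proposes:

```python
import numpy as np

def hough_lines(binary, n_theta=180, n_rho=200):
    # Vote each nonzero pixel into an accumulator over (rho, theta),
    # where rho = x*cos(theta) + y*sin(theta). Peaks correspond to lines.
    h, w = binary.shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    return acc, thetas, diag
```

The peak of the accumulator gives the dominant path line; the navigation module can then derive the heading correction from its (ρ, θ) parameters.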

19.
Visual navigation is one of the key technologies enabling autonomous robot mobility. To capture the latest international research trends, this paper comprehensively reviews progress in visual navigation for biomimetic robots, analyzing the state of the art and the open problems in three key areas: visual SLAM (Simultaneous Localization and Mapping), loop-closure detection, and visual homing. A new visual SLAM algorithmic framework is proposed, the key theoretical problems still to be solved are identified, and the difficulties and future trends of visual navigation technology are summarized.

20.
Visual Navigation of Mobile Robots Based on Generalized Predictive Control   (Cited by: 1; self-citations: 1; others: 0)
This paper studies visual navigation of mobile robots in indoor environments. Scene images are acquired by a monocular sensor; the path is extracted using color information and its parameters are fitted by least squares, which simplifies image processing and improves the real-time performance of the algorithm. The robot's pose is corrected by eliminating the distance and angle deviations relative to the reference path, achieving path tracking. To remove the delay caused by machine-vision recognition and transmission and achieve real-time control, an improved multivariable generalized predictive control method predicts the change in the control signal at the next time step to compensate for the system lag. Simulation and experimental results demonstrate the reliability of the control algorithm.
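The least-squares path-parameter fit described above can be sketched as follows. The function names and the reference point are hypothetical; the path pixels extracted by color are fit to a line, and the distance and angle deviations are read off the fit:

```python
import math
import numpy as np

def fit_path_line(points):
    # Least-squares fit of y = k*x + b to the path pixels (x, y).
    xs, ys = np.asarray(points, dtype=float).T
    A = np.column_stack([xs, np.ones_like(xs)])
    (k, b), *_ = np.linalg.lstsq(A, ys, rcond=None)
    return k, b

def pose_deviations(k, b, x_ref=0.0, y_ref=0.0):
    # Distance deviation of the fitted line from a reference point, and the
    # heading deviation as the line's angle to the reference direction.
    return (k * x_ref + b) - y_ref, math.atan(k)
```

Driving both deviations to zero is what the generalized predictive controller does while compensating for the vision-processing delay.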
