Similar Documents
20 similar documents found (search time: 171 ms)
1.
陆军  穆海军  朱齐丹  杨明 《计算机应用》2007,27(7):1677-1679
A method for autonomous robot localization based on panoramic vision is studied. Using optical imaging principles, a panoramic vision sensor is designed to capture an omnidirectional image of the scene around the robot. By denoising the panoramic image, segmenting it with color thresholds, and computing region centers, known landmarks in the robot's surroundings are recognized, and the robot's coordinates are then computed by triangulation, laying a solid foundation for navigation, collision avoidance, and other tasks. Experimental results show that the method is feasible for autonomous robot localization.
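The triangulation step described above can be sketched as standard bearing-based intersection from two recognized landmarks. This is a minimal illustration, not the authors' implementation; the landmark coordinates, bearing convention (absolute bearings from the robot to each landmark), and function name are all assumptions:

```python
import math

def triangulate(l1, l2, b1, b2):
    """Estimate the robot position from two known landmarks l1, l2 (x, y)
    and the absolute bearings b1, b2 (radians) at which the robot sees
    them.  Each bearing defines a ray from the landmark back toward the
    robot; the position is the intersection of the two rays."""
    # Robot lies on x = l1 + t1 * (-cos b1, -sin b1), similarly for l2.
    d1 = (-math.cos(b1), -math.sin(b1))
    d2 = (-math.cos(b2), -math.sin(b2))
    # Solve l1 + t1*d1 = l2 + t2*d2 for t1 by Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = l2[0] - l1[0], l2[1] - l1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (l1[0] + t1 * d1[0], l1[1] + t1 * d1[1])
```

With three or more landmarks, the same intersection can be computed pairwise and averaged, or solved by least squares; nearly collinear landmark/robot configurations make `det` small and the estimate unstable.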

2.
To give each pixel of the panoramic image captured by an Omni-Directional Vision Sensor (ODVS) depth information about its imaged object point, this paper designs a Panoramic Color Structured Light Generator (PCSLG) with a Single Emission Point (SEP) to complement an ODVS with Single View Point (SVP) imaging. The ODVS and PCSLG are then mounted vertically on the same axis, yielding an Active Stereo Omni-Directional Vision Sensor (ASODVS). Finally, from the color of the light carried by each pixel of the panoramic image, a color-recognition algorithm and the geometric relationship between the ODVS and PCSLG are used to infer the emission angle of the PCSLG and estimate the depth of the imaged object point. Experimental results show that the designed ASODVS can match feature points across the panoramic stereo image and measure the depth of spatial points in real time, realizing an observer-centered 3D active stereo vision system.

3.
To avoid visual-servoing failures caused by a robot's inability to select a target, a visual-servo target selection method based on feature-point density peaks is proposed. All target objects in the field of view are identified with oriented FAST and rotated BRIEF (ORB) feature detection and matching, mismatched points are rejected with the PROSAC algorithm, the feature points belonging to different objects in the image are separated by density-peak clustering, and a position-priority decision rule then selects the best visual-servoing target. Experimental results show that with this method the robot can choose the best servoing target from among several identical objects, effectively solving the target-selection problem in mobile-robot visual servoing.
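The clustering stage can be illustrated with a compact rendition of the density-peak idea (in the style of Rodriguez and Laio): each point gets a local density and a distance to the nearest denser point, and points where both are large become cluster centers. The real system clusters ORB keypoint coordinates; the cutoff distance, cluster count, and all names below are hypothetical:

```python
import math

def density_peak_clusters(pts, d_c, n_clusters):
    """Cluster 2-D feature-point coordinates by density peaks:
    rho = number of neighbours within d_c, delta = distance to the
    nearest point of higher density; the n_clusters points with the
    largest rho*delta become centres, and every other point joins
    the cluster of its nearest higher-density neighbour."""
    n = len(pts)
    dist = [[math.hypot(pts[i][0] - pts[j][0], pts[i][1] - pts[j][1])
             for j in range(n)] for i in range(n)]
    rho = [sum(1 for j in range(n) if j != i and dist[i][j] < d_c)
           for i in range(n)]
    order = sorted(range(n), key=lambda i: -rho[i])
    delta, parent = [0.0] * n, [-1] * n
    for rank, i in enumerate(order):
        if rank == 0:
            delta[i] = max(dist[i])          # globally densest point
        else:
            parent[i] = min(order[:rank], key=lambda j: dist[i][j])
            delta[i] = dist[i][parent[i]]
    centres = sorted(range(n), key=lambda i: -rho[i] * delta[i])[:n_clusters]
    label = [-1] * n
    for c, i in enumerate(centres):
        label[i] = c
    for i in order:                          # assign in density order
        if label[i] == -1:
            label[i] = label[parent[i]]
    return label
```

Because assignment follows decreasing density, each point's parent is already labeled when the point is visited, so no iterative refinement is needed.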

4.
The simultaneous localization and mapping (SLAM) problem for a robot with panoramic vision is studied. Because ordinary cameras have a narrow field of view and track and localize landmarks poorly over time, a panoramic-vision SLAM method based on an improved extended Kalman filter (EKF) is proposed: panoramic vision captures the robot's surroundings, environmental features are extracted from this information to locate landmarks, and the EKF then updates the robot pose and the map concurrently. Simulations and experiments on a physical robot verify the accuracy and effectiveness of the algorithm, and show that panoramic vision localizes more precisely than ordinary vision.
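The full EKF-SLAM state is a joint pose-and-map vector with matrix covariances; the scalar cycle below is a deliberately simplified 1-D illustration of the same predict/update loop, with hypothetical names (robot at position `x`, odometry motion `u`, measured distance `z` to a landmark at known position `m`):

```python
def kalman_1d(x, p, u, q, z, m, r):
    """One predict/update cycle of a 1-D Kalman filter, the scalar
    skeleton of the EKF pose update: x is the position estimate, p its
    variance, u the odometry motion (noise variance q), and z a measured
    distance to a landmark at known position m (noise variance r)."""
    # Predict: apply odometry and inflate the uncertainty.
    x, p = x + u, p + q
    # Update: innovation between measured and predicted distance m - x.
    # The measurement Jacobian is -1, so the correction sign flips.
    innov = z - (m - x)
    k = p / (p + r)                  # Kalman gain magnitude
    return x - k * innov, (1 - k) * p
```

A larger-than-predicted distance to the landmark means the robot is behind its estimate, so the correction pulls `x` down; in the real filter the same innovation simultaneously corrects every landmark in the map through the cross-covariances.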

5.
Research on a soccer-robot localization method based on panoramic and forward vision
To meet the demand for fast, accurate target position information in robot soccer matches, a robot vision system combining a panoramic camera and a forward-facing camera is designed. For each individual vision sensor, localization uses image-coordinate transformation with arctangent computation, piecewise proportional mapping, and the pinhole camera model, and the blob area serves as the criterion for choosing the localization mode, yielding high-precision position estimates. Experimental results confirm the soundness of the vision-system design and the effectiveness of the localization method.

6.
A conventional robot vision system, consisting of an image sensor, a lens, and a processor, typically suffers from a narrow imaging field of view and poor radiation tolerance. For nuclear environments, where radiation doses are high, surroundings are complex, and robots must move quickly and flexibly, this paper proposes a panoramic robot vision system built from a conventional camera, a panoramic reflective mirror, and a convex lens. The system images the full 360° of the robot's surroundings, and radiation-tolerant components were selected for each part of the system.

7.
Monocular distance measurement for a humanoid robot based on a servo-driven pan/tilt camera platform
A pan-and-tilt (P&T) servo camera platform is a vision system whose camera rotates both horizontally and vertically, enlarging the visible range. By combining monocular 3-D reconstruction with the platform's rotation angles, the distance between a target and the robot can be computed at different rotation angles. In the experiment, a circular region was laid out, images captured at different rotation angles were analyzed, and the region's boundary was redrawn using the image-plane-to-distance conversion, demonstrating that the algorithm gives the robot a panoramic field of view. In a robot-competition environment, the algorithm can determine both the target's position and the robot's own position.
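For a target on the ground plane, the image-plane-to-distance conversion reduces to one angle: the pixel offset from the principal point is converted to an angle and added to the platform tilt. This is a minimal sketch under that flat-ground assumption; the parameter names (camera height, tilt below horizontal, pixel row, principal-point row, focal length in pixels) are illustrative:

```python
import math

def ground_distance(cam_height, tilt_deg, v_px, v0_px, focal_px):
    """Distance along the floor to a ground-plane target seen by a
    camera on a pan/tilt unit.  tilt_deg is the platform tilt measured
    downward from horizontal; (v_px - v0_px) is the target's row offset
    from the principal point, converted to an angle via the focal
    length, under a simple pinhole model."""
    angle = math.radians(tilt_deg) + math.atan2(v_px - v0_px, focal_px)
    return cam_height / math.tan(angle)
```

Sweeping the pan angle while applying this conversion traces out the boundary of a region around the robot, which is how a rotatable monocular head approximates a panoramic range map.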

8.
Research objective: Efficient, accurate localization is a prerequisite for intelligent mobile-robot navigation. Traditional visual localization systems, such as visual odometry (VO) and simultaneous localization and mapping (SLAM) algorithms, suffer from two shortcomings: drift caused by accumulated localization error, and erroneous motion estimates caused by illumination changes and moving objects. Innovations: By adding a panoramic camera to a conventional stereo VO system, an enhanced VO is proposed that efficiently exploits the panoramic camera's 360° field of view: (1) a compressed panoramic landmark library of intersection scenes is built online; (2) when the robot revisits a landmark from any direction, the localization result is globally corrected; (3) the heading-angle estimate is corrected when stereo VO cannot provide reliable localization; (4) to use the information-rich panoramic images efficiently, the concept of compressed sensing is introduced and an adaptive compressed feature is proposed. Methods: First, compressed SURF features are added on top of compressed intensity features to improve descriptive power; by analyzing feature distinctiveness, the compressed feature adapts to the characteristics of each image, yielding the adaptive compressed feature (ACF, Fig. 2), which is fast to compute (Table 3), highly descriptive (Figs. 6 and 7, Table 1), and markedly improves the efficiency of panoramic-image use. Next, ACF is used to describe the panoramic landmark images and an orientation-independent landmark-matching algorithm is proposed; when the current panoramic image matches a landmark image, the current localization result receives a global pose correction (Fig. 4), suppressing path drift in large-scale environments (Figs. 10 and 11). Finally, a robust heading-angle estimation method based on image-patch matching is presented; when poor feature tracking makes the stereo VO motion estimate unstable, the local motion estimate is corrected, improving accuracy (Fig. 9). Conclusions: The proposed enhanced visual odometry system delivers reliable localization in near real time and greatly suppresses the drift and motion-estimation errors of traditional VO in large, challenging environments. Experimental results show that the algorithm substantially improves the accuracy and robustness of traditional VO.

9.
To obtain stereo panoramic images, two Omni-Directional Vision Sensors (ODVS) with identical imaging parameters, each built around a hyperboloid catadioptric mirror and offering a fixed single viewpoint, a 360° horizontal field of view, and a large vertical field of view, are combined face-to-back into a new binocular stereo omni-directional vision sensor. During assembly, the single viewpoints of the upper and lower ODVS are fixed on the same axis and the two image planes are set perpendicular to that axis. The combined sensor simplifies the otherwise laborious steps of calibrating the imaging units, registering epipolar lines, and matching feature points. Experimental results show that the designed binocular stereo omni-directional vision sensor effectively resolves the epipolar-constraint problem, matches feature points across the stereo panorama quickly, and reduces the complexity of measuring object-point depth.
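With the two single viewpoints on one vertical axis, a scene point appears at the same azimuth in both panoramas, so matching reduces to a one-dimensional search along that azimuth and depth follows from two elevation angles. A sketch of that two-angle triangulation, with assumed names and a known vertical baseline (not the paper's calibration pipeline):

```python
import math

def odvs_depth(baseline, elev_upper, elev_lower):
    """Triangulate a scene point seen by a vertically aligned ODVS pair
    whose single viewpoints are `baseline` apart on one axis.  elev_*
    are the point's elevation angles (radians above horizontal) in the
    upper and lower view; because the viewpoints are co-axial, both
    observations share the same azimuth.  Returns (horizontal range,
    height of the point above the lower viewpoint)."""
    rng = baseline / (math.tan(elev_lower) - math.tan(elev_upper))
    return rng, rng * math.tan(elev_lower)
```

As the two elevation angles approach each other (a distant point), the denominator shrinks and the depth estimate degrades, which is the usual baseline-versus-range trade-off of any stereo rig.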

10.
For the localization of a spherical robot, a stereo-vision-based method is proposed. A binocular camera captures an image sequence of the environment; Shi-Tomasi feature points are extracted, scale-invariant feature transform (SIFT) descriptors are computed, and stereo matching is performed using Euclidean distance. Feature points are tracked with the KLT algorithm, the robot's pose change between consecutive frames is solved analytically, and feature-point screening, the RANSAC algorithm, and Kalman filtering are employed to improve the accuracy and robustness of the motion estimate. Experimental results verify the feasibility of the proposed method.
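The RANSAC step that screens out bad feature tracks can be illustrated on the simplest motion model, a 2-D translation between matched points: hypothesize a shift from one random match, count inliers, keep the best hypothesis, then refine on its inliers. The real system estimates a full pose; the threshold, iteration count, and names here are illustrative:

```python
import random

def ransac_translation(matches, thresh=0.2, iters=100, seed=0):
    """Robustly estimate a 2-D translation from noisy point matches
    [((x, y), (x2, y2)), ...]: hypothesise the shift implied by one
    random match, keep the hypothesis with the most inliers, and
    refine it as the inlier mean."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        (ax, ay), (bx, by) = rng.choice(matches)
        dx, dy = bx - ax, by - ay
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) < thresh
                   and abs(m[1][1] - m[0][1] - dy) < thresh]
        if len(inliers) > len(best):
            best = inliers
    dx = sum(b[0] - a[0] for a, b in best) / len(best)
    dy = sum(b[1] - a[1] for a, b in best) / len(best)
    return dx, dy
```

A mismatched feature pair implies a wildly different shift, so it gathers almost no supporting inliers and never wins the vote; that is exactly why RANSAC stabilizes frame-to-frame motion estimation.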

11.
The use of omni-directional cameras has become increasingly popular in vision systems for video surveillance and autonomous robot navigation. However, to date most of the research relating to omni-directional cameras has focussed on the design of the camera or the way in which to project the omni-directional image to a panoramic view rather than the processing of such images after capture. Typically images obtained from omni-directional cameras are transformed to sparse panoramic images that are interpolated to obtain a complete panoramic view prior to low level image processing. This interpolation presents a significant computational overhead with respect to real-time vision. We present an efficient design procedure for space variant feature extraction operators that can be applied to a sparse panoramic image and directly processes this sparse image. This paper highlights the reduction of the computational overheads of directly processing images arising from omni-directional cameras through efficient coding and storage, whilst retaining accuracy sufficient for application to real-time robot vision.

12.
A vision-based navigation system is presented for determining a mobile robot's position and orientation using panoramic imagery. Omni-directional sensors are useful in obtaining a 360° field of view, permitting various objects in the vicinity of a robot to be imaged simultaneously. Recognizing landmarks in a panoramic image from an a priori model of distinct features in an environment allows a robot's location information to be updated. A system is shown for tracking vertex and line features for omni-directional cameras constructed with catadioptric (containing both mirrors and lenses) optics. With the aid of the panoramic Hough transform, line features can be tracked without restricting the mirror geometry so that it satisfies the single viewpoint criteria. This allows rectangular scene features to be used as landmarks. Two paradigms for localization are explored, with experiments conducted with synthetic and real images. A working implementation on a mobile robot is also shown.

13.
Many generic position-estimation algorithms are vulnerable to ambiguity introduced by nonunique landmarks. Also, the available high-dimensional image data is not fully used when these techniques are extended to vision-based localization. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is reasonably immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating the list of possible position estimates through triangulation. Reconstruction and comparison then rank the possible estimates. The LTRC algorithm has been implemented in an interpreted language on a robot equipped with a panoramic vision system. Empirical data shows remarkable improvement in accuracy when compared with the established random sample consensus method. LTRC is also robust against inaccurate map data.

14.
Outdoor autonomous navigation using SURF features
In this article, we propose a speeded-up robust features (SURF)-based approach for outdoor autonomous navigation. In this approach, we capture environmental images using an omni-directional camera and extract features of these images using SURF. We treat these features as landmarks to estimate a robot’s self-location and direction of motion. SURF features are invariant under scale changes and rotation, and are robust under image noise, changes in light conditions, and changes of viewpoint. Therefore, SURF features are appropriate for the self-location estimation and navigation of a robot. The mobile robot navigation method consists of two modes, the teaching mode and the navigation mode. In the teaching mode, we teach a navigation course. In the navigation mode, the mobile robot navigates along the teaching course autonomously. In our experiment, the outdoor teaching course was about 150 m long, the average speed was 2.9 km/h, and the maximum trajectory error was 3.3 m. The processing time of SURF was several times shorter than that of scale-invariant feature transform (SIFT). Therefore, the navigation speed of the mobile robot was similar to the walking speed of a person.
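The two-mode structure boils down to a simple skeleton: the teaching mode records descriptors along the route, and the navigation mode localizes by nearest-descriptor lookup. The sketch below assumes descriptors are plain numeric vectors compared by Euclidean distance (real SURF descriptors are 64-dimensional); the tag and function names are hypothetical:

```python
def nearest_landmark(query, taught):
    """Teach-and-repeat skeleton: `taught` maps a route position tag to
    the feature descriptor recorded there during the teaching mode; the
    navigation mode localises by returning the tag whose stored
    descriptor is closest (squared Euclidean distance) to the current
    `query` descriptor."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(taught, key=lambda tag: d2(taught[tag], query))
```

In practice each node stores many descriptors and the match is accepted only if the best distance beats the second-best by a clear ratio, which filters out ambiguous landmarks.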

15.
High-voltage transmission lines span long distances and follow complex routes. Building on a careful analysis of the distinctive linear characteristics of power-line images, this paper proposes a method that stitches transmission-line panoramas with the SURF algorithm and extracts line features using phase congruency. First, SURF registers the individual power-line images and the RANSAC algorithm rejects mismatched feature-point pairs; the images are then stitched into a transmission-line panorama, on which phase congruency performs feature detection; finally, the complete, labeled individual power line is extracted. Experiments on transmission-line images captured in the field extracted complete, accurate individual power lines, showing that the method can improve the accuracy of transmission-line sag computation.

16.
The paper describes a visual method for the navigation of autonomous floor-cleaning robots. The method constructs a topological map with metrical information where place nodes are characterized by panoramic images and by particle clouds representing position estimates. Current image and position estimate of the robot are interrelated to landmark images and position estimates stored in the map nodes through a holistic visual homing method which provides bearing and orientation estimates. Based on these estimates, a position estimate of the robot is updated by a particle filter. The robot’s position estimates are used to guide the robot along parallel, meandering lanes and are also assigned to newly created map nodes which later serve as landmarks. Computer simulations and robot experiments confirm that the robot position estimate obtained by this method is sufficiently accurate to keep the robot on parallel lanes, even in the presence of large random and systematic odometry errors. This ensures an efficient cleaning behavior with almost complete coverage of a rectangular area and only small repeated coverage. Furthermore, the topological-metrical map can be used to completely cover rooms or apartments by multiple meander parts.

17.
The navigation problem of controlling the accurate positioning of a mobile robot (MR) with a computer vision system under conditions of uncertainty of information about its position is considered. Visual servo control is used, in which the error signal is calculated as the difference in the coordinates of natural visual landmarks detected by the computer vision system on the current and reference (received in a target position) images. To compute the error signal, a probabilistic relaxation method of correct matching of the landmarks extracted in the image is proposed. The efficiency of the proposed method has been confirmed by numerical experiments for processing real images.

18.
The paper presents a robust control law for the homing of an autonomous robot. The proposed work aims to solve this problem under practical conditions such as random errors in commanded velocities and unknown distance-sensor characteristics. The proposed steering control aligns the robot’s orientation with the homing vector using an arbitrary real-valued distance function, providing the capability to work in changing environmental conditions. Finite-time convergence to the equilibrium under the proposed control law is achieved in the presence of bounded random velocity errors, regardless of the initial position and orientation. Because only sign information is used as feedback, the proposed control law is applicable with any distance function. A matching parameter between panoramic images obtained at the home and current positions is a function of the distance between those positions; however, the explicit relation between distance and the image-matching parameter is unknown. This work demonstrates the application of the proposed method to visual homing based on an image distance function, with the benefit of minimal image processing. Various simulation and experimental results for visual homing are presented to support the theory, and the advantage of the proposed visual homing is also explored in changing environmental conditions.

19.
For outdoor robot navigation under variable illumination and in the presence of electromagnetic interference, a self-localization system based on panoramic near-infrared vision and coded landmarks is proposed. Under near-infrared illumination, panoramic vision recognizes landmarks laid out in a barcode-like coding format, and an extended Kalman filter (EKF) fuses the vision data with odometry data to localize the robot. Experiments show that the method eliminates the effect of illumination changes on the robot's localization results during large-scale outdoor navigation.

20.
Biologically inspired homing methods, such as the Average Landmark Vector, are an interesting solution for local navigation due to their simplicity. However, they usually require modifying the environment by placing artificial landmarks in order to work reliably. In this paper we combine the Average Landmark Vector with invariant feature points automatically detected in panoramic images to overcome this limitation. The proposed approach was evaluated first in simulation and, as promising results were found, also on two data sets of panoramas from real-world environments.
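The Average Landmark Vector idea can be sketched in a few lines: each position is summarized by the mean of the unit vectors pointing toward the landmarks, and (assuming a shared compass frame and roughly equidistant landmarks) the difference between the current ALV and the home ALV points approximately toward home. The bearing inputs and function names below are illustrative:

```python
import math

def homing_vector(cur_bearings, home_bearings):
    """Average Landmark Vector homing under a shared compass frame:
    summarise each position by the mean of the unit vectors toward the
    landmarks (given as bearing angles in radians); the difference of
    the two ALVs approximates the direction from the current position
    toward home."""
    def alv(bearings):
        n = len(bearings)
        return (sum(math.cos(b) for b in bearings) / n,
                sum(math.sin(b) for b in bearings) / n)
    cx, cy = alv(cur_bearings)
    hx, hy = alv(home_bearings)
    return cx - hx, cy - hy
```

The paper's contribution is to feed this kind of scheme with automatically detected invariant feature points instead of artificial landmarks, so the bearings come from matched image features rather than engineered beacons.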
