Found 20 similar documents; search took 171 ms
1.
2.
《计算机学报》(Chinese Journal of Computers), 2014, (6)
To endow each pixel of the panoramic image captured by an Omni-Directional Vision Sensor (ODVS) with the depth of its imaged object point, this paper designs a Panoramic Color Structured Light Generator (PCSLG) with a Single Emission Point (SEP) to complement an ODVS with Single View Point (SVP) imaging. The ODVS and PCSLG are then mounted vertically on the same axis, forming an Active Stereo Omni-Directional Vision Sensor (ASODVS). Finally, from the color of the light carried by each pixel in the panoramic image, a color-recognition algorithm combined with the geometric relationship between the ODVS and PCSLG infers the PCSLG emission angle and estimates the depth of the imaged object point. Experimental results show that the designed ASODVS can match feature points in panoramic stereo images and measure the depth of spatial object points in real time, realizing an observer-centered 3D active stereo vision perception.
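The depth recovery in this entry reduces to a planar triangulation between the camera's single viewpoint (SVP) and the light generator's single emission point (SEP), which share one vertical axis. A minimal sketch of that geometry (the function name and angle conventions are ours, not the paper's):

```python
import math

def asodvs_depth(alpha, beta, baseline):
    """Triangulate an object point seen by the ODVS (single viewpoint, SVP)
    and illuminated by the PCSLG (single emission point, SEP) mounted a known
    baseline below the SVP on the same vertical axis.

    alpha: elevation angle of the viewing ray at the SVP (radians)
    beta:  elevation angle of the colour-coded light plane at the SEP (radians)
    baseline: vertical SVP-SEP distance (same units as the result)

    Returns the slant range from the SVP to the object point.
    """
    ta, tb = math.tan(alpha), math.tan(beta)
    if tb <= ta:
        raise ValueError("rays do not intersect in front of the sensor")
    r = baseline / (tb - ta)      # horizontal radius of the object point
    return r / math.cos(alpha)    # slant range from the SVP

# Example: viewing ray level (0 rad), light plane at 45 degrees, 0.2 m
# baseline -> the point lies 0.2 m out along the horizontal viewing ray.
d = asodvs_depth(0.0, math.pi / 4, 0.2)
```

The color identified at a pixel indexes the emission angle `beta`; the pixel position itself fixes `alpha`, so one image yields per-pixel depth.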
3.
To avoid visual-servoing failure caused by the robot's inability to select a target, a target-selection method based on feature-point density peaks is proposed. The ORB (Oriented FAST and Rotated BRIEF) feature detection and matching algorithm identifies all target objects in the field of view; the PROSAC algorithm removes mismatched points; a density-peak clustering algorithm separates the feature points belonging to different objects in the image; and a position-priority decision method selects the best visual-servoing target. Experimental results show that with this method the robot can choose the best servoing target from among several identical objects, effectively solving the target-selection problem in mobile-robot visual servoing.
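The step that separates the feature points of different objects can be illustrated with the two quantities density-peak clustering computes: the local density `rho` and the distance `delta` to the nearest denser point (points scoring high on both are cluster centres). A stdlib-only sketch on toy 2D points (cutoff value and data are illustrative):

```python
import math

def density_peaks(points, dc):
    """Minimal density-peak clustering quantities on 2D points.
    rho[i]   = number of other points within the cutoff distance dc
    delta[i] = distance to the nearest point of strictly higher density
               (for the densest points: distance to the farthest point)
    Points with large rho AND large delta are cluster centres."""
    n = len(points)
    dist = [[math.dist(points[i], points[j]) for j in range(n)]
            for i in range(n)]
    rho = [sum(1 for j in range(n) if j != i and dist[i][j] < dc)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(dist[i]))
    return rho, delta

# Two small groups of "feature points" far apart: the middle point of each
# group is densest, and its delta spans the gap to the other group.
pts = [(0, 0), (0.1, 0), (0.3, 0), (5, 0), (5.1, 0), (5.3, 0)]
rho, delta = density_peaks(pts, dc=0.25)
```

On real images the points would be matched ORB keypoint coordinates, and each peak would seed one object's cluster.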
4.
5.
6.
A conventional robot vision system consists of an image sensor, a lens, and a processor, and often suffers from a small field of view and poor radiation tolerance. For nuclear environments, which involve high radiation doses, complex surroundings, and the need for fast, agile robot motion, this paper proposes a robot vision system based on panoramic vision. Built from a conventional camera, a panoramic reflector, and a convex lens, it can image the full 360° environment around the robot; radiation-hardened components were selected for every part of the system.
7.
Monocular vision distance measurement for a humanoid robot based on a servo pan-tilt head. Total citations: 1 (self: 0, others: 1)
A pan-and-tilt (P&T) servo vision head is a vision system whose camera can rotate both horizontally and vertically; this rotatability enlarges the visible range. By combining the principle of monocular 3D reconstruction with the head's rotation angles, the distance between a target and the robot can be computed at different rotation angles. In the experiment, a circular region was laid out; images captured at different rotation angles were analyzed and, using the mapping from the image plane to distance, the boundary of the circular region was redrawn, demonstrating that the algorithm gives the robot a panoramic view. In a robot-competition environment, the algorithm can determine both the target position and the robot's own position.
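Combining the head's tilt angle with a known camera height gives the ground-plane distance by simple trigonometry. A hedged sketch under a flat-floor assumption (function and parameter names are ours):

```python
import math

def ground_distance(cam_height, tilt_down, pan):
    """Estimate the floor position of a target from a pan-tilt camera.
    Assumes the optical ray through the target hits a flat ground plane.
    cam_height: height of the camera's rotation centre above the floor
    tilt_down:  downward tilt of the ray below the horizon (radians, > 0)
    pan:        horizontal rotation of the head (radians)
    Returns (x, y, range) in the robot's ground frame."""
    if tilt_down <= 0:
        raise ValueError("ray must point below the horizon")
    rng = cam_height / math.tan(tilt_down)   # horizontal ground range
    return rng * math.cos(pan), rng * math.sin(pan), rng

# Camera 0.5 m up, looking 45 degrees down, panned 90 degrees left:
x, y, rng = ground_distance(0.5, math.pi / 4, math.pi / 2)
```

Sweeping `pan` over a full revolution and collecting `(x, y)` points is how a rotating head can trace out a region boundary, as in the circular-region experiment.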
8.
Objective: Efficient and accurate localization is a prerequisite for intelligent navigation of mobile robots. Traditional visual localization systems, such as visual odometry (VO) and simultaneous localization and mapping (SLAM), suffer from two shortcomings: drift caused by accumulated localization error, and erroneous motion estimates caused by illumination changes and moving objects. Innovations: By adding a panoramic camera to a conventional stereo VO system, an enhanced VO is proposed that efficiently exploits the panoramic camera's 360° field of view. (1) A compressed panoramic landmark library of junction scenes is built online. (2) When the robot revisits a landmark from any direction, the localization result is globally corrected. (3) When stereo VO cannot provide reliable localization, the heading-angle estimate is corrected. (4) To make efficient use of the information-rich panoramic images, the concept of compressive sensing is introduced and an adaptive compressed feature is proposed. Methods: First, compressed SURF features are added on top of compressed intensity features to improve descriptive power; by analyzing feature discriminability, the compressed feature adapts to the characteristics of each image, yielding the adaptive compressed feature (ACF, Fig. 2), which is fast to compute (Table 3) and highly descriptive (Figs. 6 and 7, Table 1), effectively improving the utilization of panoramic image information. Then, ACF is used to describe panoramic landmark images and an orientation-invariant landmark matching algorithm is proposed; when the current panoramic image matches a landmark image, the current localization result is globally pose-corrected (Fig. 4), suppressing path drift in large-scale environments (Figs. 10 and 11). Finally, a robust heading-angle estimation method based on image-patch matching is introduced: when the stereo VO motion estimate becomes unstable due to poor feature tracking, the local motion estimate is corrected, improving motion-estimation accuracy (Fig. 9). Conclusions: The proposed enhanced visual odometry system provides reliable localization in near real time and greatly suppresses the drift and motion-estimation errors of traditional VO in large-scale, challenging environments. Experimental results show that the proposed algorithm substantially improves the accuracy and robustness of traditional VO.
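The adaptive compressed feature builds on the compressive-sensing idea that a fixed random projection approximately preserves distances between long descriptors. A toy illustration with a random ±1 projection (dimensions and data are arbitrary; this is not the paper's ACF, only the underlying principle):

```python
import math
import random

def compress(vec, proj):
    """Compressive-sensing style reduction: multiply a long descriptor by a
    fixed random +-1 projection matrix (one row per output dimension)."""
    return [sum(p * v for p, v in zip(row, vec)) / math.sqrt(len(row))
            for row in proj]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

random.seed(0)
D, d = 256, 32                      # descriptor length, compressed length
proj = [[random.choice((-1.0, 1.0)) for _ in range(D)] for _ in range(d)]

a = [random.random() for _ in range(D)]
b = [ai + random.gauss(0, 0.01) for ai in a]   # near-duplicate of a
c = [random.random() for _ in range(D)]        # unrelated descriptor

ca, cb, cc = compress(a, proj), compress(b, proj), compress(c, proj)
```

After compression the near-duplicate stays much closer to `a` than the unrelated descriptor, which is what makes matching in the compressed domain viable at a fraction of the cost.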
9.
To obtain stereo panoramic images, two omni-directional vision sensors (ODVS) with identical imaging parameters, each built from a hyperboloidal catadioptric mirror and offering a fixed single viewpoint, a 360° horizontal field, and a large vertical field of view, are combined face-to-back to form a new binocular stereo omni-directional vision sensor. In the assembly, the single viewpoints of the upper and lower ODVS are fixed on the same axis and both imaging planes are perpendicular to that axis. The resulting binocular sensor simplifies the otherwise tedious steps of calibrating the imaging units, registering epipolar lines, and matching feature points. Experimental results show that the designed sensor effectively resolves the epipolar-constraint problem, enables fast feature-point matching in panoramic stereo images, and reduces the complexity of object-point depth measurement.
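With the two single viewpoints on one axis, a scene point appears at the same azimuth in both panoramas, so the epipolar search collapses to an azimuth comparison, and the two elevation angles then triangulate depth. A sketch under those assumptions (function and angle names are ours, not the paper's):

```python
import math

def match_and_depth(azim_up, elev_up, azim_lo, elev_lo, baseline, tol=1e-3):
    """Axially aligned binocular ODVS: a scene point projects to the SAME
    azimuth in both panoramas, so the epipolar check is an azimuth
    comparison. Given a match, the two elevation angles (measured upward
    from horizontal at each viewpoint) triangulate the point's slant range
    from the upper viewpoint, which sits 'baseline' above the lower one."""
    diff = abs((azim_up - azim_lo + math.pi) % (2 * math.pi) - math.pi)
    if diff > tol:
        return None                       # violates the epipolar constraint
    t_up, t_lo = math.tan(elev_up), math.tan(elev_lo)
    if t_lo <= t_up:
        return None                       # rays diverge, no intersection
    r = baseline / (t_lo - t_up)          # horizontal radius of the point
    return r / math.cos(elev_up)          # slant range from upper viewpoint

# Point level with the upper viewpoint, 0.3 m sensor spacing, seen 45
# degrees up from the lower one -> 0.3 m out horizontally.
d = match_and_depth(1.0, 0.0, 1.0, math.pi / 4, 0.3)
```

The azimuth test is exactly the "epipolar lines become radial lines" simplification the face-to-back arrangement buys.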
10.
11.
Sonya Coleman Bryan Scotney Dermot Kerr 《Journal of Mathematical Imaging and Vision》2008,32(3):349-361
The use of omni-directional cameras has become increasingly popular in vision systems for video surveillance and autonomous robot navigation. However, to date most of the research relating to omni-directional cameras has focussed on the design of the camera or the way in which to project the omni-directional image to a panoramic view, rather than on the processing of such images after capture. Typically, images obtained from omni-directional cameras are transformed to sparse panoramic images that are interpolated to obtain a complete panoramic view prior to low-level image processing. This interpolation presents a significant computational overhead with respect to real-time vision.

We present an efficient design procedure for space-variant feature extraction operators that can be applied to a sparse panoramic image and process this sparse image directly. This paper highlights the reduction of the computational overheads of directly processing images arising from omni-directional cameras through efficient coding and storage, whilst retaining accuracy sufficient for application to real-time robot vision.
12.
A vision-based navigation system is presented for determining a mobile robot's position and orientation using panoramic imagery. Omni-directional sensors are useful in obtaining a 360° field of view, permitting various objects in the vicinity of a robot to be imaged simultaneously. Recognizing landmarks in a panoramic image from an a priori model of distinct features in an environment allows a robot's location information to be updated. A system is shown for tracking vertex and line features for omni-directional cameras constructed with catadioptric (containing both mirrors and lenses) optics. With the aid of the panoramic Hough transform, line features can be tracked without restricting the mirror geometry to satisfy the single-viewpoint criterion. This allows rectangular scene features to be used as landmarks. Two paradigms for localization are explored, with experiments conducted on synthetic and real images. A working implementation on a mobile robot is also shown.
13.
Vision-based localization algorithm based on landmark matching, triangulation, reconstruction, and comparison. Total citations: 3 (self: 0, others: 3)
Many generic position-estimation algorithms are vulnerable to the ambiguity introduced by non-unique landmarks. Also, the available high-dimensional image data is not fully used when these techniques are extended to vision-based localization. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is reasonably immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating the list of possible position estimates through triangulation. Reconstruction and comparison then rank the possible estimates. The LTRC algorithm has been implemented in an interpreted language on a robot equipped with a panoramic vision system. Empirical data shows remarkable improvement in accuracy when compared with the established random sample consensus method. LTRC is also robust against inaccurate map data.
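The triangulation stage can be illustrated in 2D: given absolute bearings to two landmarks at known positions, the robot's position is the intersection of the two bearing rays. A minimal sketch (the LTRC matching, reconstruction, and comparison stages that resolve ambiguous matches are not modeled here):

```python
import math

def triangulate(l1, b1, l2, b2):
    """Recover a robot's 2D position from absolute bearings b1, b2 (radians)
    to two landmarks at known positions l1, l2. Each landmark satisfies
        l_i = p + t_i * (cos b_i, sin b_i)   for some range t_i,
    a 2x2 linear system in t_1, t_2 solved by Cramer's rule."""
    d1x, d1y = math.cos(b1), math.sin(b1)
    d2x, d2y = math.cos(b2), math.sin(b2)
    det = -d1x * d2y + d2x * d1y
    if abs(det) < 1e-12:
        raise ValueError("collinear bearings give no position fix")
    rx, ry = l1[0] - l2[0], l1[1] - l2[1]
    t1 = (-rx * d2y + d2x * ry) / det
    return l1[0] - t1 * d1x, l1[1] - t1 * d1y

# Robot at the origin: landmark (1, 0) lies at bearing 0 and landmark
# (0, 1) at bearing pi/2.
px, py = triangulate((1, 0), 0.0, (0, 1), math.pi / 2)
```

A panoramic camera supplies exactly such full-circle bearings, which is why triangulation pairs naturally with it; with more than two landmarks each pair yields a candidate estimate to be ranked.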
14.
Outdoor autonomous navigation using SURF features. Total citations: 1 (self: 0, others: 1)
In this article, we propose a speeded-up robust features (SURF)-based approach for outdoor autonomous navigation. In this approach, we capture environmental images using an omni-directional camera and extract features of these images using SURF. We treat these features as landmarks to estimate a robot's self-location and direction of motion. SURF features are invariant under scale changes and rotation, and are robust under image noise, changes in lighting conditions, and changes of viewpoint. Therefore, SURF features are appropriate for the self-location estimation and navigation of a robot. The mobile robot navigation method consists of two modes, the teaching mode and the navigation mode. In the teaching mode, we teach a navigation course. In the navigation mode, the mobile robot navigates along the taught course autonomously. In our experiment, the outdoor teaching course was about 150 m long, the average speed was 2.9 km/h, and the maximum trajectory error was 3.3 m. The processing time of SURF was several times shorter than that of the scale-invariant feature transform (SIFT); therefore, the navigation speed of the mobile robot was similar to the walking speed of a person.
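Associating the features seen in navigation mode with those stored in teaching mode is commonly done with a nearest-neighbour ratio test. A stdlib sketch with toy descriptors standing in for SURF vectors (the 0.7 threshold is a conventional choice, not taken from the article):

```python
import math

def ratio_match(query, landmarks, ratio=0.7):
    """Teach-and-repeat association sketch: match each query descriptor to
    the taught landmark set, accepting a match only when the nearest
    landmark is clearly closer than the second nearest (Lowe-style ratio
    test, a common companion to SURF/SIFT matching)."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((math.dist(q, l), li)
                       for li, l in enumerate(landmarks))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

# Two taught landmarks with well-separated toy descriptors: the first
# query clearly matches landmark 0; the equidistant second query is
# rejected as ambiguous.
taught = [(0.0, 0.0, 1.0), (1.0, 1.0, 0.0)]
queries = [(0.05, 0.0, 1.0), (0.5, 0.5, 0.5)]
m = ratio_match(queries, taught)
```

Rejecting ambiguous matches matters most on outdoor courses, where repeated structures (foliage, fences) produce many near-identical descriptors.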
15.
16.
Ralf Möller Martin Krzykawski Lorenz Gerstmayr-Hillen Michael Horst David Fleer Janina de Jong 《Robotics and Autonomous Systems》2013,61(12):1415-1439
The paper describes a visual method for the navigation of autonomous floor-cleaning robots. The method constructs a topological map with metrical information, where place nodes are characterized by panoramic images and by particle clouds representing position estimates. The robot's current image and position estimate are related to the landmark images and position estimates stored in the map nodes through a holistic visual homing method that provides bearing and orientation estimates. Based on these estimates, the robot's position estimate is updated by a particle filter. The position estimates are used to guide the robot along parallel, meandering lanes and are also assigned to newly created map nodes, which later serve as landmarks. Computer simulations and robot experiments confirm that the robot position estimate obtained by this method is sufficiently accurate to keep the robot on parallel lanes, even in the presence of large random and systematic odometry errors. This ensures efficient cleaning behavior with almost complete coverage of a rectangular area and only small repeated coverage. Furthermore, the topological-metrical map can be used to completely cover rooms or apartments with multiple meander parts.
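The particle-filter correction that fuses a homing-derived position measurement into the pose belief can be sketched as reweight-then-resample. A minimal illustration (Gaussian likelihood; all parameters and the uniform prior are illustrative, not the paper's implementation):

```python
import math
import random

def pf_update(particles, meas, sigma):
    """One particle-filter correction step: reweight 2D particles by a
    Gaussian likelihood centred on the measured position (e.g. a position
    implied by visual homing against a map node), then resample with
    replacement in proportion to the weights."""
    w = [math.exp(-math.dist(p, meas) ** 2 / (2 * sigma ** 2))
         for p in particles]
    total = sum(w)
    if total == 0:
        return particles          # measurement incompatible with belief
    return random.choices(particles, weights=[x / total for x in w],
                          k=len(particles))

random.seed(1)
# Diffuse belief over a 2 m x 2 m area, then one homing measurement:
cloud = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
cloud = pf_update(cloud, meas=(0.5, 0.5), sigma=0.1)
mean = (sum(p[0] for p in cloud) / len(cloud),
        sum(p[1] for p in cloud) / len(cloud))
```

After the update the cloud collapses around the measurement; a motion-model prediction step (not shown) would re-spread it between corrections.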
17.
A. A. Loukianov 《Journal of Computer and Systems Sciences International》2006,45(1):153-161
The navigation problem of accurately positioning a mobile robot (MR) with a computer vision system under uncertainty about its position is considered. Visual servo control is used, in which the error signal is computed as the difference between the coordinates of natural visual landmarks detected by the computer vision system in the current image and in a reference image (captured at the target position). To compute the error signal, a probabilistic relaxation method for correctly matching the landmarks extracted from the images is proposed. The efficiency of the proposed method has been confirmed by numerical experiments on real images.
18.
Okkee Sim Jaesung Oh Kang Kyu Lee Jun-Ho Oh 《Journal of Intelligent and Robotic Systems》2018,89(3-4):403-419
The paper presents a robust control law for homing of an autonomous robot. The proposed work aims to solve this problem under practical conditions such as random errors in commanded velocities and unknown distance-sensor characteristics. The proposed steering control aligns the robot's orientation with the homing vector using an arbitrary real-valued distance function, providing the capability to work under changing environmental conditions. Finite-time convergence to the equilibrium is achieved in the presence of bounded random velocity errors, regardless of the initial position and orientation. Because only the sign of the feedback is used, the control law is applicable with any distance function. A matching parameter between panoramic images obtained at the home and current positions is a function of the distance between those positions; however, the explicit relation between distance and the image-matching parameter is unknown. This work demonstrates the application of the proposed method to visual homing based on an image distance function, with the benefit of minimal image processing. Various simulation and experimental results for visual homing are presented to support the theory, and the advantage of the proposed approach is also explored under changing environmental conditions.
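A steering rule that consumes only the sign of successive distance readings, one simple way to realize sign-only feedback with an arbitrary distance surrogate such as an image-matching score, can be simulated in a few lines. This is our own toy controller, not the paper's control law; step size, turn quantum, and the spiral-like convergence are illustrative:

```python
import math

def sign_homing(start, heading, step=0.05, turn=0.4, iters=600):
    """Drive forward in fixed steps; whenever the distance reading grows,
    steer by a fixed angle. Any monotonic surrogate of distance works,
    since only the SIGN of its change is used. Home is at the origin;
    returns the final distance from home."""
    x, y = start
    prev = math.hypot(x, y)
    for _ in range(iters):
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        cur = math.hypot(x, y)
        if cur > prev:            # got farther from home: correct heading
            heading += turn
        prev = cur
        if cur < step:            # within one step of home
            break
    return cur

final = sign_homing(start=(2.0, 0.0), heading=math.pi / 2)
```

Starting 2 m from home and initially heading perpendicular to the homing vector, the robot spirals in; the sign-only feedback never needs the unknown mapping from image score to metric distance.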
19.
20.
Arnau Ramisa Alex Goldhoorn David Aldavert Ricardo Toledo Ramon Lopez de Mantaras 《Journal of Intelligent and Robotic Systems》2011,64(3-4):625-649
Biologically inspired homing methods, such as the Average Landmark Vector, are an interesting solution for local navigation due to their simplicity. However, they usually require modifying the environment by placing artificial landmarks in order to work reliably. In this paper we combine the Average Landmark Vector with invariant feature points automatically detected in panoramic images to overcome this limitation. The proposed approach was evaluated first in simulation and, given the promising results, also on two data sets of panoramas from real-world environments.
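The Average Landmark Vector itself is simple enough to sketch directly: average the unit bearing vectors to the landmarks, and steer along the difference between the current ALV and the ALV stored at home. A minimal illustration assuming compass-aligned bearings (the landmark layout is ours; in the paper the "landmarks" are automatically detected feature points):

```python
import math

def alv(position, landmarks):
    """Average Landmark Vector: mean of the unit vectors pointing from
    'position' to each landmark. A compass-aligned panoramic view supplies
    exactly these bearing directions."""
    vx = vy = 0.0
    for lx, ly in landmarks:
        d = math.hypot(lx - position[0], ly - position[1])
        vx += (lx - position[0]) / d
        vy += (ly - position[1]) / d
    n = len(landmarks)
    return vx / n, vy / n

def homing_direction(current, home, landmarks):
    """ALV homing rule: the difference between the ALV seen now and the ALV
    stored at home approximates the direction from current toward home."""
    ac = alv(current, landmarks)
    ah = alv(home, landmarks)
    return ac[0] - ah[0], ac[1] - ah[1]

# Landmarks surrounding the workspace; home at the origin, robot
# displaced to (1, 0): the homing vector should point in -x.
lm = [(3, 0), (0, 3), (-3, 0), (0, -3)]
hx, hy = homing_direction((1.0, 0.0), (0.0, 0.0), lm)
```

Only the home ALV (two numbers) needs to be stored, which is why the method is attractive for resource-constrained local navigation.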