Similar Documents
20 similar documents were retrieved.
1.
Extrinsic calibration of heterogeneous cameras by line images (Cited: 1; self-citations: 0; other citations: 1)
Extrinsic calibration refers to determining the relative pose of cameras. Most approaches for cameras with non-overlapping fields of view (FOV) are based on mirror reflection, object tracking, or the rigidity constraint of stereo systems, whereas cameras with overlapping FOV can be calibrated using structure-from-motion solutions. We propose an extrinsic calibration method within a structure-from-motion framework for cameras with overlapping FOV and its extension to cameras with partially non-overlapping FOV. Recently, omnidirectional vision has become a popular topic in computer vision, as an omnidirectional camera can cover a large FOV in a single image. Combining the good resolution of perspective cameras with the wide observation angle of omnidirectional cameras has become an attractive trend in multi-camera systems. For this reason, we present an approach that is applicable to heterogeneous types of vision sensors. Moreover, the method uses images of lines, as these features possess several advantageous characteristics over point features, especially in urban environments. The calibration consists of a linear estimation of the orientation and position of the cameras, optionally followed by bundle adjustment to refine the extrinsic parameters.
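The abstract above describes a linear extrinsic estimate followed by an optional bundle adjustment. The sketch below illustrates that generic two-stage pipeline with point features and standard OpenCV calls, not the authors' line-based formulation; the function names, input format, and refinement parameterization are assumptions of this sketch.

```python
# Illustrative only: linear two-view extrinsic estimation followed by a
# bundle-adjustment-style refinement of the extrinsics (structure is
# re-triangulated inside the residual). Point features are used here,
# not the line features described in the abstract.
import numpy as np
import cv2
from scipy.optimize import least_squares

def relative_pose(pts1, pts2, K):
    """Linear estimate of relative rotation/translation from matched Nx2 point arrays."""
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t

def refine(R, t, pts1, pts2, K):
    """Refine the extrinsics by minimizing reprojection error in the second view."""
    def residual(x):
        rvec, tvec = x[:3], x[3:]
        Rm, _ = cv2.Rodrigues(rvec)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([Rm, tvec.reshape(3, 1)])
        X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # 4xN homogeneous points
        X = (X[:3] / X[3]).T
        proj, _ = cv2.projectPoints(X, rvec, tvec, K, None)
        return (proj.reshape(-1, 2) - pts2).ravel()
    rvec0, _ = cv2.Rodrigues(R)
    x0 = np.hstack([rvec0.ravel(), t.ravel()])
    sol = least_squares(residual, x0)
    Rr, _ = cv2.Rodrigues(sol.x[:3])
    return Rr, sol.x[3:]
```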

2.
Central catadioptric cameras are widely used in virtual reality and robot navigation, and camera calibration is a prerequisite for these applications. In this paper, we propose an easy calibration method for central catadioptric cameras using a 2D calibration pattern. Firstly, the bounding ellipse of the catadioptric image and the field of view (FOV) are used to obtain an initial estimate of the intrinsic parameters. Then, the explicit relationship between the central catadioptric model and the pinhole model is used to initialize the extrinsic parameters. Finally, the intrinsic and extrinsic parameters are refined by nonlinear optimization. The proposed method does not require fitting of any partially visible conic, and the projected images of the 2D calibration pattern can easily cover the whole image, so the method is easy and robust. Experiments with simulated data as well as real images show the satisfactory performance of the proposed calibration method.

3.
一种反射折射摄像机的简易标定方法 (An easy calibration method for catadioptric cameras) (Cited: 3; self-citations: 0; other citations: 3)
Central catadioptric cameras are widely used in virtual reality and robot navigation, and camera calibration is a prerequisite for these applications. In this paper, we propose an easy calibration method for central catadioptric cameras using a 2D calibration pattern. Firstly, the bounding ellipse of the catadioptric image and the field of view (FOV) are used to obtain an initial estimate of the intrinsic parameters. Then, the explicit relationship between the central catadioptric model and the pinhole model is used to initialize the extrinsic parameters. Finally, the intrinsic and extrinsic parameters are refined by nonlinear optimization. The proposed method does not require fitting of any partially visible conic, and the projected images of the 2D calibration pattern can easily cover the whole image, so the method is easy and robust. Experiments with simulated data as well as real images show the satisfactory performance of the proposed calibration method.
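As a rough illustration of the initialization step described above (bounding ellipse plus FOV), the sketch below fits the mirror boundary ellipse with OpenCV and derives a crude intrinsic guess. The equiangular focal-length formula is an assumption of this sketch, not the relationship used in the paper.

```python
# Illustrative only: initialize the principal point from the bounding ellipse
# of the mirror region and the focal length from the known field of view.
import numpy as np
import cv2

def init_intrinsics(mask, half_fov_deg):
    """mask: 8-bit binary image of the mirror region; half_fov_deg: half field of view in degrees."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float32)
    (cx, cy), (w, h), _ = cv2.fitEllipse(pts)       # bounding ellipse of the catadioptric image
    r = 0.25 * (w + h)                               # mean image radius in pixels
    f0 = r / np.deg2rad(half_fov_deg)                # equiangular guess: r ~ f * theta (assumption)
    return np.array([[f0, 0.0, cx], [0.0, f0, cy], [0.0, 0.0, 1.0]])
```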

4.
In camera calibration, the focal length is the most important parameter to be estimated, while the other parameters can be obtained from prior information about the scene or the system configuration. In this paper, we present a polynomial constraint on the effective focal length under the condition that all the other parameters are known. The polynomial degree is 4 for paracatadioptric cameras and 16 for other catadioptric cameras. However, if the skew is 0, or the ratio between the skew and the effective focal length is known, the constraint becomes linear for paracatadioptric cameras and a polynomial of degree 4 in the square of the effective focal length for other catadioptric cameras. Based on this constraint, we propose a simple method for estimating the effective focal length of central catadioptric cameras. Unlike many line-based approaches in the literature, the proposed method requires no conic fitting of line images, which is error-prone and strongly affects calibration accuracy. It is easy to implement, and a single view of one space line suffices, with no other scene information needed. Experiments on simulated and real data show that the method is robust and effective.
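To make the "polynomial constraint on the effective focal length" concrete, a minimal sketch of how such a constraint could be solved numerically is given below; the coefficients are placeholders, not the constraint derived in the paper.

```python
# Illustrative only: solve a hypothetical degree-4 polynomial constraint on the
# squared effective focal length and keep the admissible (real, positive) roots.
import numpy as np

coeffs = [1.0, -3.2e5, 2.1e10, -4.0e14, 1.5e18]     # placeholder c4..c0 of p(f^2)
roots = np.roots(coeffs)
f2 = [r.real for r in roots if abs(r.imag) < 1e-8 and r.real > 0]
focal_candidates = np.sqrt(f2)                        # candidate effective focal lengths
```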

5.
段福庆, 吕科, 周明全. Acta Automatica Sinica (自动化学报), 2011, 37(11): 1296-1305
The central catadioptric image of a space line is a conic arc, and most methods that calibrate a central catadioptric camera from lines require fitting a conic to each line image; the accuracy of this conic fitting strongly affects the calibration accuracy. However, the image of a space line occupies only a small arc of the whole conic, which makes the fitting very unreliable. This paper derives a nonlinear constraint on the camera intrinsic parameters from the catadioptric projections of three collinear points in space. When the mirror is a paraboloid and the principal point is known, the constraint becomes linear. If the other parameters are known, it becomes a polynomial constraint on the effective focal length. Based on this, three calibration algorithms for different conditions are proposed. The algorithms require neither conic fitting of line images nor any scene information, and achieve high calibration accuracy. Experiments verify the effectiveness of the algorithms.

6.
Central catadioptric cameras are imaging devices that use mirrors to enhance the field of view while preserving a single effective viewpoint. Lines and spheres in space are all projected to conics in the central catadioptric image plane; such conics are called line images and sphere images, respectively. We discovered that there exists an imaginary conic in the central catadioptric image plane, defined as the modified image of the absolute conic (MIAC), and that by utilizing the MIAC, novel and identical projective geometric properties of line images and sphere images can be exploited: each line image and each sphere image has double contact with the MIAC. This is analogous to the result for pinhole cameras that the image of the absolute conic (IAC) has double contact with sphere images. Note that the IAC also exists in the central catadioptric image plane, but it does not have the double-contact property with line images or sphere images; this is the main reason to propose the MIAC. From these geometric properties of the MIAC, two linear calibration methods for central catadioptric cameras, one using sphere images and one using line images, are proposed within the same framework. Note that there are many linear approaches to central catadioptric camera calibration using line images, so using the property that line images are tangent to the MIAC only provides an alternative geometric construction for calibration. For sphere images, however, only nonlinear calibration methods exist in the literature; therefore, proposing linear methods for sphere images may be the main contribution of this paper. Our new algorithms have been tested in extensive experiments with respect to noise sensitivity.

7.
Yang, Zhao; Zhao, Yang; Hu, Xiao; Yin, Yi; Zhou, Lihua; Tao, Dapeng. Multimedia Tools and Applications, 2019, 78(9): 11983-12006

The surround-view camera system is an emerging driver-assistance technology that helps drivers park by providing a top-down view of the vehicle's surroundings. Such a system usually consists of four wide-angle or fish-eye cameras mounted around the vehicle, and a bird's-eye view is synthesized from their images. Surround-view synthesis involves two fundamental problems: geometric alignment and image synthesis. Geometric alignment performs fish-eye calibration and computes the perspective transformation between the bird's-eye view and the images from the surrounding cameras. Image synthesis addresses seamless stitching of adjacent views and color balancing. In this paper, we propose a flexible central-around coordinate mapping (CACM) model for vehicle surround-view synthesis. The CACM model calculates the perspective transformation between a top-view central camera coordinate frame and the around-camera coordinate frames using a marker-point-based method. With the transformation matrices, we can generate the pixel mapping between the bird's-eye view and the images of the surrounding cameras. After geometric alignment, an image fusion method based on distance weighting is adopted for seamless stitching, and an effective overlapping-region brightness optimization method is proposed for color balancing. Both the seamless stitching and the color balancing can be handled simply by using two types of weighting coefficients within the CACM framework. Experimental results show that the proposed approach provides a high-performance surround-view camera system.
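A minimal sketch of the two ingredients described above, a marker-based perspective transformation to the top view and distance-weighted blending in the overlap region, is shown below using standard OpenCV calls; it is not the CACM model itself, and the interfaces and array layouts are assumptions of this sketch.

```python
# Illustrative only: homography to the ground plane from marker correspondences,
# and distance-weighted fusion of two overlapping warped views.
import numpy as np
import cv2

def birdseye_homography(marker_px, marker_ground):
    """marker_px: Nx2 marker points in a (fish-eye-corrected) camera image;
    marker_ground: the same N points in the top-view ground-plane frame."""
    H, _ = cv2.findHomography(marker_px, marker_ground, cv2.RANSAC)
    return H

def blend_overlap(view_a, view_b, mask_a, mask_b):
    """Distance-weighted fusion; masks are 8-bit single-channel maps of valid pixels."""
    da = cv2.distanceTransform(mask_a, cv2.DIST_L2, 5)
    db = cv2.distanceTransform(mask_b, cv2.DIST_L2, 5)
    w = (da / np.maximum(da + db, 1e-6))[..., None]   # weight grows with distance to view a's border
    fused = w * view_a.astype(np.float32) + (1 - w) * view_b.astype(np.float32)
    return fused.astype(np.uint8)
```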


8.
Catadioptric camera calibration using geometric invariants (Cited: 5; self-citations: 0; other citations: 5)
Central catadioptric cameras are imaging devices that use mirrors to enhance the field of view while preserving a single effective viewpoint. In this paper, we propose a novel method for the calibration of central catadioptric cameras using geometric invariants. Lines and spheres in space are all projected to conics in the catadioptric image plane. We prove that the projection of a line provides three invariants, whereas the projection of a sphere provides only two. From these invariants, constraint equations on the intrinsic parameters of the catadioptric camera are derived. Accordingly, there are two variants of the method: the first uses projections of lines and the second uses projections of spheres. In general, the projections of two lines or of three spheres are sufficient for catadioptric camera calibration. An important conclusion of this paper is that the method based on projections of spheres is more robust and more accurate than the one based on projections of lines. The performance of our method is demonstrated by the results of both simulations and experiments with real images.

9.
In this paper, we propose a new algorithm for the dynamic calibration of multiple cameras. Based on the mapping between a horizontal plane in 3-D space and the 2-D image plane of a panned and tilted camera, we utilize the displacement of feature points and the epipolar-plane constraint among multiple cameras to infer the changes of the pan and tilt angles of each camera. The algorithm does not require complicated feature-point correspondences and can be applied to surveillance systems with wide-range coverage. It also allows moving objects to be present in the captured scenes while dynamic calibration is performed. A sensitivity analysis of the algorithm with respect to measurement errors and fluctuations in previous estimates is also given. The efficiency and feasibility of this approach have been demonstrated in experiments on real scenes.

10.
To calibrate cameras accurately, lens distortion models have to be included in the calibration procedure. Usually, the lens distortion models used in camera calibration depend on radial functions of the image pixel coordinates. Such models are well known, simple, and can be estimated using image information alone. However, they do not take into account an important physical aspect of the lens distortion phenomenon: the amount of lens distortion induced at an image point depends on the depth of the scene point with respect to the camera projection plane. In this paper we propose a new, accurate, depth-dependent lens distortion model. To validate the approach, we apply the new lens distortion model to camera calibration in planar-view scenarios (that is, 3D scenarios where the objects of interest lie on a plane). We present promising experimental results on planar pattern images and on sports event scenarios. Although we emphasize the feasibility of the method for planar-view scenarios, the proposed model is valid in general and can be used in any scenario where the point depth can be estimated.
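A minimal sketch of the idea of depth-dependent radial distortion is given below; the depth-modulation function g(z) is a placeholder assumption, not the model proposed in the paper.

```python
# Illustrative only: standard two-coefficient radial distortion modulated by a
# hypothetical depth-dependent factor g(z) = a + b / z.
import numpy as np

def distort(xy_norm, z, k1, k2, a, b):
    """xy_norm: Nx2 normalized image coordinates; z: per-point depth (N,), same units as b."""
    r2 = np.sum(xy_norm**2, axis=1, keepdims=True)
    g = a + b / np.maximum(np.asarray(z, float).reshape(-1, 1), 1e-9)  # hypothetical depth modulation
    factor = 1.0 + g * (k1 * r2 + k2 * r2**2)
    return xy_norm * factor
```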

11.
We propose a circular camera system that employs an image-based rendering technique to capture the light-ray data needed for reconstructing three-dimensional (3-D) images, by reconstructing parallax rays from multiple images captured from multiple viewpoints around a real object, so that a 3-D image of the object can be observed from multiple surrounding viewing points on a 3-D display. An interpolation algorithm that effectively reduces the number of component cameras in the system is also proposed. The interpolation and the experimental results obtained on our previously proposed 3-D display system based on the reconstruction of parallax rays are described. When the radius of the proposed circular camera array was 1100 mm, the central angle of the camera array was 40°, and the radius of the real 3-D object was between 60 and 100 mm, the proposed camera system, consisting of 14 cameras, could obtain sufficient 3-D light-ray data to reconstruct 3-D images on the 3-D display.
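For a sense of scale only, and assuming the 14 cameras are spaced evenly along the 40° arc with cameras at both endpoints (an assumption not stated in the abstract), the implied spacing on the 1100 mm circle is:

```latex
% Assumed: 14 evenly spaced cameras spanning a 40-degree arc (endpoints included)
% on a circle of radius 1100 mm.
\Delta\theta = \frac{40^{\circ}}{14-1} \approx 3.08^{\circ}, \qquad
d \approx 2 \times 1100\,\mathrm{mm} \times \sin\!\left(\frac{3.08^{\circ}}{2}\right) \approx 59\,\mathrm{mm}.
```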

12.
封泽希, 张辉, 谢永明, 朱敏. Journal of Computer Applications (计算机应用), 2011, 31(4): 1043-1046
Current 3D reconstruction methods in computer vision are limited in application because they require setting up and calibrating a ring of cameras around the scene or require structured light, and their algorithms are unstable. To address this, a four-camera array reconstruction algorithm that combines a camera array with image registration is proposed; it requires neither structured light nor on-site camera calibration. Experiments on complex indoor simulated images containing illumination and shadows show that the method performs dense point-cloud reconstruction stably and effectively, and overcomes the application limitations and instability of existing reconstruction methods.

13.
We propose a method for arbitrary view synthesis from an uncalibrated multiple-camera system, targeting large spaces such as soccer stadiums. In Projective Grid Space (PGS), a three-dimensional space defined by the epipolar geometry between two basis cameras of the camera system, we reconstruct three-dimensional shape models from silhouette images. Using the three-dimensional shape models reconstructed in the PGS, we obtain a dense map of point correspondences between the reference images. These correspondences allow synthesis of images from arbitrary viewpoints between the reference images. We also propose a method for merging the synthesized images with a virtual background scene in the PGS. We apply the proposed methods to image sequences taken by a multiple-camera system installed in a large concert hall. The synthesized virtual-camera image sequences have sufficient quality to demonstrate the effectiveness of the proposed method.

14.
2D visual servoing uses data provided by a vision sensor to control the motion of a dynamic system. Most visual servoing approaches rely on geometric features that have to be tracked and matched in the image acquired by the camera. Recent work has highlighted the benefit of taking into account the photometric information of the entire image. This approach had previously been developed for images from perspective cameras. In this paper, we propose to extend the technique to central cameras. This generalization allows the method to be applied to catadioptric cameras and wide-field-of-view cameras. Several experiments were carried out successfully with a fisheye camera controlling a 6-degree-of-freedom robot and with a catadioptric camera for a mobile-robot navigation task.
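For background, photometric visual servoing is usually built on the classical visual servoing control law, with the feature vector taken to be the image intensities; the form below is the standard textbook law, not necessarily the exact formulation of this paper.

```latex
% Classical visual-servoing control law. In the photometric case the feature
% vector s is the vector of image intensities I and s* = I* is the desired
% image; L_s^+ is the Moore-Penrose pseudo-inverse of the interaction matrix
% and lambda is a positive gain.
\mathbf{v} = -\lambda\, \mathbf{L}_s^{+}\,(\mathbf{s} - \mathbf{s}^{*}),
\qquad \mathbf{s} = \mathbf{I}, \quad \mathbf{s}^{*} = \mathbf{I}^{*}.
```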

15.
Existing algorithms for camera calibration and metric reconstruction are not appropriate for image sets containing geometrically transformed images, for which camera constraints such as square or zero-skew pixels cannot be applied. In this paper, we propose a framework that uses scene constraints in the form of camera constraints. Our approach is based on image warping using images of parallelograms. We show that the image warped using parallelograms constrains the camera both intrinsically and extrinsically. Image warping converts the calibration problem for transformed images into a calibration problem with highly constrained cameras. In addition, it is possible to determine affine projection matrices from the images without explicit projective reconstruction. We introduce camera motion constraints for the warped image and a new parameterization of the infinite homography using the warping matrix. Combining the calibration with the affine reconstruction yields a fully metric reconstruction of scenes from geometrically transformed images. The feasibility of the proposed algorithm is tested with synthetic and real data. Finally, examples of metric reconstructions from geometrically transformed images obtained from the Internet are shown.

16.
To address the problem that traditional high dynamic range (HDR) image synthesis methods cannot cope with dynamic illumination, a camera-array-based method for registering multiple differently exposed images and synthesizing an HDR image is proposed. First, differently exposed images are acquired with a camera array and registered using light-field synthetic-aperture theory together with the array's calibration parameters; a second registration is then performed on median bitmaps of the registered images. Finally, using the fitted radiometric response curve of each camera, the re-registered images with different exposures are merged into a single HDR image. Experiments show that the method can effectively synthesize HDR images under dynamic illumination and achieves good results.
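For reference, the response-curve fitting and exposure-merging steps mentioned above correspond, in the single-viewpoint case, to the standard Debevec pipeline available in OpenCV; the sketch below assumes already registered frames and does not implement the light-field synthetic-aperture registration described in the abstract.

```python
# Illustrative only: estimate a radiometric response curve and merge a set of
# aligned, differently exposed frames into an HDR radiance map.
import numpy as np
import cv2

def merge_hdr(images, exposure_times):
    """images: list of aligned 8-bit BGR frames; exposure_times: exposure in seconds per frame."""
    times = np.array(exposure_times, dtype=np.float32)
    response = cv2.createCalibrateDebevec().process(images, times)    # per-channel response curve
    hdr = cv2.createMergeDebevec().process(images, times, response)   # float32 radiance map
    ldr = cv2.createTonemap(gamma=2.2).process(hdr)                   # tone map for display
    return hdr, np.clip(ldr * 255, 0, 255).astype(np.uint8)
```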

17.
Hybrid systems of central catadioptric and perspective cameras are desirable in practice, because such a system can capture a large field of view as well as high-resolution images. However, calibrating the system is challenging due to the heavy distortion of catadioptric cameras. In addition, previous calibration methods are only suitable for systems consisting of perspective cameras and catadioptric cameras with parabolic mirrors only, and they require priors on the intrinsic parameters of the perspective cameras. In this work, we provide a new approach to handle these problems. We show that if the hybrid camera system consists of at least two central catadioptric cameras and one perspective camera, both the intrinsic and extrinsic parameters of the system can be calibrated linearly without priors on the intrinsic parameters of the perspective cameras, and a more generic class of central catadioptric cameras is supported. An approximate polynomial model is derived and used for rectification of the catadioptric images. Firstly, using the epipolar geometry between the perspective and the rectified catadioptric images, the distortion parameters of the polynomial model are estimated linearly. Then a new method is proposed to estimate the intrinsic parameters of a central catadioptric camera from the parameters of the polynomial model, so that the catadioptric cameras can be calibrated. Finally, a linear self-calibration method for the hybrid system is given using the calibrated catadioptric cameras. The main advantage of our method is that it not only calibrates both the intrinsic and extrinsic parameters of the hybrid camera system, but also reduces the traditionally nonlinear self-calibration of perspective cameras to a linear process. Experiments show that the proposed method is robust and reliable.

18.
Capturing exposure sequences to compute high dynamic range (HDR) images causes motion blur when the camera moves. This also applies to light-field cameras: frames rendered from multiple blurred HDR light-field perspectives are blurred as well. While the recording time of an exposure sequence cannot be reduced for a single-sensor camera, we demonstrate how this can be achieved for a camera array. We thus decrease capture time and reduce motion blur for HDR light-field video recording. Applying a spatio-temporal exposure pattern while capturing frames with a camera array reduces the overall recording time and enables the estimation of camera movement within one light-field video frame. By estimating depth maps and local point spread functions (PSFs) from multiple perspectives with the same exposure, regional motion deblurring can be supported. Missing exposures at the various perspectives are then interpolated.

19.
Helmholtz Stereopsis is a powerful technique for the reconstruction of scenes with arbitrary reflectance properties. However, previous formulations have been limited to static objects, because reciprocal image pairs (i.e., two images with the camera and light-source positions interchanged) must be captured sequentially. In this paper, we propose Colour Helmholtz Stereopsis, a novel framework for Helmholtz Stereopsis based on wavelength multiplexing. To address the new set of challenges introduced by multispectral data acquisition, the proposed Colour Helmholtz Stereopsis pipeline uniquely combines a tailored photometric calibration for multiple camera/light-source pairs, a novel procedure for spatio-temporal surface chromaticity calibration, and a state-of-the-art Bayesian formulation necessary for accurate reconstruction from a minimal number of reciprocal pairs. In this framework, reflectance is spatially unconstrained, both in its chromaticity and in the directional component that depends on the illumination incidence and viewing angles. The proposed approach for the first time enables modelling of dynamic scenes with arbitrary, unknown, and spatially varying reflectance using a practical acquisition set-up consisting of a small number of cameras and light sources. Experimental results demonstrate the accuracy and flexibility of the technique on a variety of static and dynamic scenes with arbitrary unknown BRDF and with chromaticity ranging from uniform to arbitrary and spatially varying.
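As background, classical (monochrome) Helmholtz Stereopsis exploits, for each reciprocal pair, a reciprocity constraint of the form below; the notation is the one commonly used in the literature and may differ from the paper's colour formulation.

```latex
% Helmholtz reciprocity constraint for one reciprocal pair: i_l is the intensity
% of surface point p in the image taken by the camera at o_l (light at o_r), and
% i_r the intensity in the reciprocal image; v_l, v_r are unit vectors from p
% towards o_l, o_r, and n is the surface normal at p.
\left( i_l \,\frac{\hat{\mathbf{v}}_l}{\lVert \mathbf{o}_l - \mathbf{p} \rVert^{2}}
     - i_r \,\frac{\hat{\mathbf{v}}_r}{\lVert \mathbf{o}_r - \mathbf{p} \rVert^{2}} \right)
\cdot \hat{\mathbf{n}} = 0.
```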

20.
An infrared and visible-light image fusion system was developed on the VS2013 platform using the OpenCV image processing library. The method overcomes the drawback that feature points are not distinct in infrared images: a special camera calibration technique is used to calibrate the infrared and visible-light cameras, after which the infrared and visible images are registered and fused. Experiments show that the system achieves good fusion quality while maintaining real-time performance.
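A minimal sketch of the registration-and-fusion step, assuming the calibration has been reduced to a precomputed homography H mapping the infrared image into the visible image, is shown below using OpenCV; the blending rule is an assumption, not the system's actual fusion method.

```python
# Illustrative only: warp the IR frame into the visible frame with a known
# homography, then fuse with a fixed-weight blend.
import numpy as np
import cv2

def fuse_ir_visible(ir_gray, visible_bgr, H, alpha=0.5):
    """ir_gray: 8-bit IR frame; visible_bgr: 8-bit colour frame; H: 3x3 homography (IR -> visible)."""
    h, w = visible_bgr.shape[:2]
    ir_warped = cv2.warpPerspective(ir_gray, H, (w, h))      # register IR to the visible frame
    ir_bgr = cv2.cvtColor(ir_warped, cv2.COLOR_GRAY2BGR)
    return cv2.addWeighted(visible_bgr, 1 - alpha, ir_bgr, alpha, 0)
```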
