Similar Literature
20 similar documents found (search time: 46 ms)
1.
Camera calibration with one-dimensional objects
Camera calibration has been studied extensively in computer vision and photogrammetry, and the techniques proposed in the literature include those using 3D apparatus (two or three planes orthogonal to each other, or a plane undergoing a pure translation, etc.), 2D objects (planar patterns undergoing unknown motions), and 0D features (self-calibration using unknown scene points). Yet, this paper proposes a new calibration technique using 1D objects (points aligned on a line), thus filling the missing dimension in calibration. In particular, we show that camera calibration is not possible with free-moving 1D objects, but can be solved if one point is fixed. A closed-form solution is developed if six or more observations of such a 1D object are made. For higher accuracy, a nonlinear technique based on the maximum likelihood criterion is then used to refine the estimate. Singularities have also been studied. Besides the theoretical aspect, the proposed technique is also important in practice, especially when calibrating multiple cameras mounted apart from each other, where the calibration objects are required to be visible simultaneously.
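A minimal sketch of the core constraint behind such 1D calibration, in notation assumed here rather than taken from the abstract: let the fixed end point A and the moving end point B of the stick project to image points a and b with unknown depths z_A and z_B, and let K be the intrinsic matrix. The known stick length L gives

\[
\left(z_B K^{-1}\mathbf{b} - z_A K^{-1}\mathbf{a}\right)^{\top}\left(z_B K^{-1}\mathbf{b} - z_A K^{-1}\mathbf{a}\right) = L^2 .
\]

Using the collinearity of a third marked point on the stick to express z_B in terms of z_A, each observation contributes one equation that is linear in the entries of \(\omega = K^{-\top}K^{-1}\) (scaled by the fixed, common depth z_A), which is roughly why six or more observations suffice for a closed-form solution.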

2.
This paper addresses the problem of calibrating camera parameters using variational methods. One problem addressed is the severe lens distortion in low-cost cameras. For many computer vision algorithms aiming at reconstructing reliable representations of 3D scenes, the camera distortion effects will lead to inaccurate 3D reconstructions and geometrical measurements if not accounted for. A second problem is color calibration, caused by variations in camera responses that result in different color measurements and affect the algorithms that depend on these measurements. We also address the extrinsic camera calibration that estimates the relative poses and orientations of multiple cameras in the system, and the intrinsic camera calibration that estimates the focal lengths and skew parameters of the cameras. To address these calibration problems, we present multiview stereo techniques based on variational methods that utilize partial and ordinary differential equations. Our approach can also be considered as a coordinated refinement of the camera calibration parameters. To reduce the computational complexity of such algorithms, we utilize prior knowledge of the calibration object, making a piecewise smooth surface assumption, and evolve the pose, orientation, and scale parameters of such a 3D model object without requiring 2D feature extraction from the camera views. We derive the evolution equations for the distortion coefficients, the color calibration parameters, and the extrinsic and intrinsic parameters of the cameras, and present experimental results.

3.
One possible method for accurate, fast, low-cost and automated robot calibration is to employ a single camera rigidly mounted on the robot end-effector together with a single camera calibration board. The end-effector pose is measured by calibrating the camera at every robot measurement configuration. This paper contends that, with several modifications, Tsai's radial alignment constraint (RAC) camera calibration method can be made a fast and sufficiently accurate pose measurement technique. The paper focuses on speed, accuracy and cost enhancement of RAC-based camera calibration. A fast RAC-based algorithm is proposed, which cuts the computation time of Tsai's original algorithm by about a 5:1 ratio while keeping its accuracy within the tolerances required for a successful robot calibration. A low-cost method for estimating the ratio of the scale factors of the camera/vision system is also proposed. This method does not require a precision vertical micrometer stage to provide non-coplanar calibration point data for camera calibration. Finally, the phenomenon of perspective projection distortion of circular camera calibration points is fully analyzed and error compensation methods are proposed.
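For context, a brief statement of the radial alignment constraint this method builds on, in assumed notation not quoted from the abstract: for a point with camera-frame coordinates (x, y, z) and distorted image coordinates (X_d, Y_d) measured from the image center, radial lens distortion displaces the image point only along the ray from the center, so the two vectors stay parallel:

\[
(X_d,\, Y_d) \parallel (x,\, y) \quad\Longrightarrow\quad X_d\, y - Y_d\, x = 0 .
\]

Substituting x and y from the rigid-body transform of the world coordinates turns this into one homogeneous linear equation per calibration point in the rotation and in-plane translation unknowns, which is what keeps the first stage of an RAC-based algorithm a linear least-squares problem.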

4.
A camera model and its calibration are required in many applications for coordinate conversions between the two-dimensional image and the real three-dimensional world. The self-calibration approach is usually chosen for camera calibration in uncontrolled environments because the scene geometry may be unknown. However, when no reliable feature correspondences can be established, or when the camera is static in relation to the majority of the scene, self-calibration fails to work. Object-based calibration methods, on the other hand, are more reliable than self-calibration because of the presence of an object with known geometry. However, most object-based calibration methods are unable to work in uncontrolled environments because they require geometric knowledge of the calibration objects. Though in the past few years the simplest geometry required for a calibration object has been reduced to a 1D object with at least three points, it is still not easy to find such an object in an uncontrolled environment, not to mention the additional metric/motion requirements of the existing methods. Meanwhile, it is very easy to find a 1D object with two end points in most scenes. Thus, it is worthwhile to investigate an object-based method built on such a simple object, so that it is still possible to calibrate a camera when both self-calibration and existing object-based calibration fail to work. We propose a new camera calibration method which requires only an object with two end points, the simplest geometry that can be extracted from many real-life objects. Through observations of such a 1D object at different positions/orientations on a plane that is fixed in relation to the camera, both intrinsic (focal length) and extrinsic (rotation angles and translations) camera parameters can be calibrated with the proposed method. The proposed method has been tested on simulated data and on real data from both controlled and uncontrolled environments, including situations where no explicit 1D calibration objects are available, e.g. a human walking sequence. Very accurate camera calibration results have been achieved.

5.
Pose refinement is an essential task for computer vision systems that require the calibration and verification of model and camera parameters. Typical domains include the real-time tracking of objects and verification in model-based recognition systems. A technique is presented for recovering model and camera parameters of 3D objects from a single two-dimensional image. This basic problem is further complicated by the incorporation of simple bounds on the model and camera parameters and of linear constraints restricting some subset of the object parameters to a specific relationship. It is demonstrated in this paper that this constrained pose refinement formulation is no more difficult than the original problem, using numerical analysis techniques including active set methods and Lagrange multiplier analysis. A number of bounded and linearly constrained parametric models are tested, and convergence to proper values occurs from a wide range of initial error, utilizing minimal matching information (relative to the number of parameters and components). The ability to recover model parameters in a constrained search space will thus simplify the associated object recognition problems.
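The listing carries no code; as a rough, self-contained sketch of bounded nonlinear pose refinement in the same spirit (a generic bounded least-squares solver with made-up model points and bounds, not the authors' active-set formulation):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, pts3d, K):
    """Project 3D model points under pose params = (rx, ry, rz, tx, ty, tz)."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    cam = pts3d @ R.T + params[3:]        # model points in the camera frame
    uv = cam @ K.T                        # apply the intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]         # perspective division

def residuals(params, pts3d, pts2d, K):
    return (project(params, pts3d, K) - pts2d).ravel()

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
pts3d = np.random.default_rng(0).random((20, 3))            # hypothetical model points
true_pose = np.array([0.10, -0.20, 0.05, 0.00, 0.10, 2.0])  # hypothetical ground truth
pts2d = project(true_pose, pts3d, K)                         # synthetic image observations

# Simple bounds on the pose parameters stand in for the constrained search space.
lower = [-0.5, -0.5, -0.5, -1.0, -1.0, 1.0]
upper = [ 0.5,  0.5,  0.5,  1.0,  1.0, 3.0]
x0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.5])                # deliberately wrong start
fit = least_squares(residuals, x0, bounds=(lower, upper), args=(pts3d, pts2d, K))
print(fit.x)                                                 # recovered pose
```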

6.
Stereovision is an effective technique that uses CCD video cameras to determine the 3D position of a target object from two or more simultaneous views of the scene. Camera calibration is a central issue in finding the position of objects in a stereovision system. It is usually carried out by calibrating each camera independently, and then applying a geometric transformation of the external parameters to find the geometry of the stereo setting. After calibration, the distance of various target objects in the scene can be calculated with the CCD video cameras, and recovering the 3D structure from 2D images becomes simpler. However, the process of camera calibration is complicated. Based on the ideal pinhole model of a camera, we describe formulas to calculate the intrinsic parameters that specify the correct camera characteristics, and the extrinsic parameters that describe the spatial relationship between the camera and the world coordinate system. A simple camera calibration method for our CCD video cameras and the corresponding experimental results are also given. This work was presented in part at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16–18, 2002.
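To make the pinhole model concrete, here is a minimal, self-contained sketch (the numeric values are illustrative assumptions, not taken from the paper): a world point is mapped into the camera frame by the extrinsic parameters (R, t) and then onto the image plane by the intrinsic matrix K.

```python
import numpy as np

# Intrinsic parameters: focal lengths (fx, fy) and principal point (cx, cy).
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])

# Extrinsic parameters: rotation R and translation t (world -> camera).
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.1, -0.05, 2.0])

def project(world_point):
    """Pinhole projection: pixel ~ K (R X + t), followed by division by depth."""
    cam = R @ world_point + t
    uv = K @ cam
    return uv[:2] / uv[2]

print(project(np.array([0.2, 0.1, 1.0])))
```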

7.
One of the problems that slows the development of off-line programming is the low static and dynamic positioning accuracy of robots. Robot calibration improves the positioning accuracy and can also be used as a diagnostic tool in robot production and maintenance. This work presents techniques for modeling and performing robot calibration processes with off-line programming using a 3D vision-based measurement system. The measurement system is portable, accurate and low cost, consisting of a single CCD camera mounted on the robot tool flange to measure the robot end-effector pose relative to a world coordinate system. Radial lens distortion is included in the photogrammetric model. Scale factors and image centers are obtained with innovative techniques, making use of a multiview approach. Results show that the achieved average accuracy using a common off-the-shelf CCD camera varies from 0.2 to 0.4 mm, at distances from 600 to 1000 mm from the target, respectively, with different camera orientations. Experimentation is performed on two industrial robots to test their position accuracy improvement using the calibration system proposed: an ABB IRB-2400 and a PUMA-500. The robots were calibrated at different regions and volumes within their workspace achieving accuracy from three to six times better when comparing errors before and after calibration, if measured locally. The proposed off-line robot calibration system is fast, accurate and easy to set up.

8.
Camera calibration and image distortion correction are key research topics in photogrammetry, visual inspection, and computer vision, with wide applications in surveying and mapping, industrial control, navigation, and military fields. This work studies the camera model and camera calibration, improves the DLT calibration method, and fully accounts for lens distortion in the camera model. Exploiting the property that distortion is small for points near the image center, a calibration method is proposed that separates the intrinsic and extrinsic camera parameters from the distortion-correction parameters. The concrete experimental steps for 3D reconstruction from two images of the same object are illustrated with an example, and the factors affecting accuracy are analyzed.
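For readers unfamiliar with the DLT step mentioned above, a minimal sketch of the basic direct linear transform (illustrative only, not the paper's improved variant): each 3D-2D correspondence contributes two rows to a homogeneous linear system, and the 3×4 projection matrix is read off the SVD.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 projection matrix from >= 6 point correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # Homogeneous least squares: the solution is the right singular vector
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)
```

Decomposing the recovered matrix separates the intrinsic and extrinsic parameters; the distortion-correction parameters are handled separately, in the spirit of the separation described in the abstract.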

9.
Geometric camera calibration using circular control points
Modern CCD cameras are usually capable of a spatial accuracy greater than 1/50 of the pixel size. However, such accuracy is not easily attained due to various error sources that can affect the image formation process. Current calibration methods typically assume that the observations are unbiased, the only error is the zero-mean independent and identically distributed random noise in the observed image coordinates, and the camera model completely explains the mapping between the 3D coordinates and the image coordinates. In general, these conditions are not met, causing the calibration results to be less accurate than expected. In the paper, a calibration procedure for precise 3D computer vision applications is described. It introduces bias correction for circular control points and a nonrecursive method for reversing the distortion model. The accuracy analysis is presented and the error sources that can reduce the theoretical accuracy are discussed. The tests with synthetic images indicate improvements in the calibration results in limited error conditions. In real images, the suppression of external error sources becomes a prerequisite for successful calibration.

10.
Generating 3D models of objects from video sequences is an important problem in many multimedia applications ranging from teleconferencing to virtual reality. In this paper, we present a method of estimating the 3D face model from a monocular image sequence, using a few standard results from the affine camera geometry literature in computer vision, and spline fitting based on a modified nonparametric regression technique. We use bicubic spline functions to model the depth map, given a set of observed depth maps computed from frame pairs in a video sequence. The minimal number of splines is chosen on the basis of Schwartz's criterion. We extend the spline fitting algorithm to hierarchical splines. Note that camera calibration parameters and prior knowledge of the object shape are not required by the algorithm. The system has been successfully demonstrated to extract the 3D face structure of humans as well as of other objects, starting from their image sequences.
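As a rough, self-contained illustration of fitting a smooth bicubic spline surface to scattered depth observations (synthetic data and SciPy's smoothing spline are used here as stand-ins; this is not the authors' hierarchical scheme):

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Hypothetical noisy depth samples at scattered image locations.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500)
y = rng.uniform(0.0, 1.0, 500)
depth = 1.0 + 0.3 * np.sin(2 * np.pi * x) * np.cos(np.pi * y) \
        + 0.01 * rng.normal(size=500)

# Bicubic (kx = ky = 3) smoothing spline fitted to the scattered samples.
spline = SmoothBivariateSpline(x, y, depth, kx=3, ky=3)
print(spline(0.5, 0.5))   # evaluate the fitted depth surface at one point
```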

11.
A stable and accurate camera calibration method
A more stable and accurate camera calibration method is proposed on the basis of Tsai's two-step method. Since Tsai's two-step method considers only the radial distortion of the camera lens, tangential distortion is additionally taken into account to improve the calibration accuracy. In solving for the camera parameters, the first step, as in Tsai's method, obtains the extrinsic parameters by solving linear equations with least squares; a linear system in the distortion parameters K1, K2, K3 and K4 is then solved by least squares, finally yielding all intrinsic and extrinsic camera parameters. The method is verified by experiments.
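For reference, a common form of the combined radial and tangential (decentering) distortion model with four coefficients, written here with the usual convention of k_1, k_2 for radial and p_1, p_2 for tangential terms (mapping these onto the abstract's K1-K4 is an assumption): for normalized image coordinates (x, y) with r^2 = x^2 + y^2,

\[
\begin{aligned}
x_d &= x\,(1 + k_1 r^2 + k_2 r^4) + 2 p_1 x y + p_2\,(r^2 + 2x^2),\\
y_d &= y\,(1 + k_1 r^2 + k_2 r^4) + p_1\,(r^2 + 2y^2) + 2 p_2 x y .
\end{aligned}
\]

With the extrinsic parameters fixed by the first, linear step, these equations are linear in the four distortion coefficients, which is what allows the second step to solve for them with linear least squares.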

12.
Light field cameras are becoming popular in computer vision and graphics, with many research and commercial applications already having been proposed. Various types of cameras have been developed, with the camera array being one way of acquiring a 4D light field image using multiple cameras. Camera calibration is essential, since each application requires the correct projection and ray geometry of the light field. The calibrated parameters are used to rectify a light field image from the images captured by the multiple cameras. Various camera calibration approaches have been proposed for a single camera, multiple cameras, and a moving camera. However, although these approaches can be applied to calibrating camera arrays, they are not effective in terms of accuracy and computational cost. Moreover, less attention has been paid to camera calibration of a light field camera. In this paper, we propose a calibration method for a camera array and a rectification method for generating a light field image from the captured images. We propose a two-step algorithm consisting of closed-form initialization and nonlinear refinement, which extends Zhang's well-known method to the camera array. More importantly, we introduce a rigid camera constraint whereby the cameras are rigidly aligned in the camera array, and utilize this constraint in our calibration. Using this constraint, we obtained much faster and more accurate calibration results in the experiments.

13.
Camera calibration for large-field-of-view binocular stereo vision
For large-field-of-view visual measurement applications, a freely rotatable cross-shaped target was designed and built on the basis of an analysis of the camera imaging model, enabling accurate calibration of a large-field-of-view binocular vision system. The cross target is placed uniformly at multiple positions within the measurement volume while the two cameras synchronously capture several images of the target. Initial values of the camera parameters are obtained from the essential matrix, and the optimal solution is obtained by self-calibrating bundle adjustment. The method does not require the feature points to be coplanar; only the physical distances between the feature points need to be known, which reduces the difficulty of manufacturing the target. Measurements were carried out with TN3DOMS.S: testing a standard reference bar within a 1500 mm × 1500 mm measurement range, the mean-square value of the error was 0.06 mm.
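To illustrate the essential-matrix initialization step, here is a self-contained sketch with synthetic data and OpenCV calls (all numeric values are assumptions; the paper itself refines this initial estimate with self-calibrating bundle adjustment rather than stopping here):

```python
import cv2
import numpy as np

# Synthetic stand-in data: target points seen by two cameras with a known
# relative pose (values are illustrative, not from the paper).
K = np.array([[1200.0, 0.0, 640.0], [0.0, 1200.0, 512.0], [0.0, 0.0, 1.0]])
R_true, _ = cv2.Rodrigues(np.array([[0.0], [0.2], [0.0]]))   # slight yaw between views
t_true = np.array([[1.0], [0.0], [0.0]])                     # baseline along x

rng = np.random.default_rng(1)
pts3d = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], size=(60, 3))

def project(R, t, X):
    """Project 3D points (rows of X) with pixel ~ K (R X + t)."""
    cam = X @ R.T + t.ravel()
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

pts_left = project(np.eye(3), np.zeros((3, 1)), pts3d)
pts_right = project(R_true, t_true, pts3d)

# Essential matrix from the matched points, then decomposition into the
# relative rotation R and unit-norm translation t that initialize the
# bundle adjustment refinement.
E, mask = cv2.findEssentialMat(pts_left, pts_right, K, method=cv2.RANSAC,
                               threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts_left, pts_right, K, mask=mask)
print(np.round(R, 3), t.ravel())
```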

14.
Techniques are described for calibrating certain intrinsic camera parameters for machine vision. The parameters to be calibrated are the horizontal scale factor and the image center. The scale factor calibration uses a one-dimensional fast Fourier transform and is accurate and efficient. It also permits the use of only one coplanar set of calibration points for general camera calibration. Three groups of techniques for center calibration are presented: Group I requires using a laser and a four-degree-of-freedom adjustment of its orientation, but is simplest in concept and is accurate and reproducible; Group II is simple to perform, but is less accurate than the other two; and the most general, Group III, is accurate and efficient, but requires a good calibration plate and accurate image feature extraction of calibration points. Group III is recommended most highly for machine vision applications. Results of experiments are presented and compared with theoretical predictions. Accuracy and reproducibility of the calibrated parameters are reported, as well as the improvement in actual 3-D measurement due to center calibration.

15.
Modelling camera lens distortion is crucial to obtaining the best-performing camera model. Up to now, different techniques have been proposed that try to minimize the calibration error using different lens distortion models or computing them in different ways. Some compute the lens distortion parameters in the camera calibration process together with the intrinsic and extrinsic ones. Others isolate the lens distortion calibration without using any template, basing the calibration on the deformation, in the image, of features of objects in the scene such as straight lines or circles. Lens distortion techniques that do not use any calibration template can be unstable if a complete camera lens distortion model is computed; they are called non-metric calibration or self-calibration methods. Traditionally, a camera has always been best calibrated with metric calibration rather than self-calibration. This paper proposes a metric calibration technique which computes the camera lens distortion in isolation from the camera calibration process under stable conditions, independently of the chosen lens distortion model or the number of parameters. To make it easier to solve, this metric technique uses the same calibration template that will afterwards be used for the calibration process. Therefore, the best performance of the camera lens distortion calibration is achieved, and it transfers directly to the camera calibration process.

16.
Camera calibration from surfaces of revolution
This paper addresses the problem of calibrating a pinhole camera from images of a surface of revolution. Camera calibration is the process of determining the intrinsic or internal parameters (i.e., aspect ratio, focal length, and principal point) of a camera, and it is important for both motion estimation and metric reconstruction of 3D models. In this paper, a novel and simple calibration technique is introduced, which is based on exploiting the symmetry of images of surfaces of revolution. Traditional techniques for camera calibration involve taking images of some precisely machined calibration pattern (such as a calibration grid). The use of surfaces of revolution, which are commonly found in daily life (e.g., bowls and vases), makes the process easier as a result of the reduced cost and increased accessibility of the calibration objects. In this paper, it is shown that two images of a surface of revolution will provide enough information for determining the aspect ratio, focal length, and principal point of a camera with fixed intrinsic parameters. The algorithms presented in this paper have been implemented and tested with both synthetic and real data. Experimental results show that the camera calibration method presented is both practical and accurate.

17.
Active camera calibration using pan, tilt and roll
Three-dimensional vision applications, such as robot vision, require modeling of the relationship between two-dimensional images and the three-dimensional world. Camera calibration is a process which accurately models this relationship. The calibration procedure determines the geometric parameters of the camera, such as the focal length and the center of the image. Most of the existing calibration techniques use predefined patterns and a static camera. Recently, a novel calibration technique for computing the focal length and image center, which uses an active camera, has been developed. This technique does not require any predefined patterns or point-to-point correspondence between images; it needs only a set of scenes with some stable edges. It was observed that the algorithms developed for the image center are sensitive to noise and hence unreliable in real situations. This report extends these techniques to develop a simpler, yet more robust, method for computing the image center.

18.
Generic camera calibration is a non-parametric calibration technique that is applicable to any type of vision sensor. However, the standard generic calibration method was developed such that both central and non-central cameras can be calibrated within the same framework. Consequently, existing parametric calibration techniques cannot be applied for the common case of cameras with a single centre of projection (e.g. pinhole, fisheye, hyperboloidal catadioptric). This paper proposes improvements to the standard generic calibration method for central cameras that reduce its complexity, and improve its accuracy and robustness. Improvements are achieved by taking advantage of the geometric constraints resulting from a single centre of projection in order to enable the application of established pinhole calibration techniques. Input data for the algorithm is acquired using active grids, the performance of which is characterised. A novel linear estimation stage is proposed that enables a well established pinhole calibration technique to be used to estimate the camera centre and initial grid poses. The proposed solution is shown to be more accurate than the linear estimation stage of the standard method. A linear alternative to the existing polynomial method for estimating the pose of additional grids used in the calibration is demonstrated and evaluated. Distortion correction experiments are conducted with real data for both an omnidirectional camera and a fisheye camera using the standard and proposed methods. Motion reconstruction experiments are also undertaken for the omnidirectional camera. Results show the accuracy and robustness of the proposed method to be improved over those of the standard method.

19.
Camera calibration based on OpenCV
Taking camera calibration for augmented reality systems as the research subject, this paper analyzes the camera model in the open-source computer vision library OpenCV, paying particular attention to the effects of radial and tangential lens distortion and how they are solved for, and presents an OpenCV-based camera calibration algorithm. The algorithm makes full use of the OpenCV function library, improves calibration accuracy and computational efficiency, has good cross-platform portability, and can meet the needs of augmented reality and other computer vision systems.
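A minimal sketch of the kind of OpenCV-based calibration pipeline the abstract describes (the image folder and board size are assumptions, and this is not the paper's code): detect chessboard corners in several views, then estimate the intrinsic matrix and the radial and tangential distortion coefficients.

```python
import glob
import cv2
import numpy as np

board = (9, 6)                                   # inner corners of the pattern (assumed)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)   # planar target, unit squares

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.jpg"):     # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        obj_points.append(objp)
        img_points.append(corners)

# Returns the reprojection error, intrinsic matrix, distortion coefficients
# (k1, k2, p1, p2, k3), and per-view extrinsic parameters.
err, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                 gray.shape[::-1], None, None)
print(err, K, dist)
```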

20.
Over the past decade or so, computer vision has attracted increasing interest from researchers; in particular, omnidirectional cameras have been widely applied in many fields because of their large field of view, including video surveillance, robot navigation, video conferencing, scene reconstruction, and virtual reality. Camera calibration is an indispensable step in recovering three-dimensional information from two-dimensional images, and the quality of the calibration directly determines the quality of 3D reconstruction and of other computer vision applications, so the study of camera calibration methods has important theoretical significance and practical value. Here, catadioptric camera calibration methods from 2000 to 2012 are divided into five categories according to the type of calibration primitive used: line-based calibration, calibration based on 2D calibration blocks, calibration based on 3D points, sphere-based calibration, and self-calibration, and their respective advantages and disadvantages are briefly analyzed.
