Similar Documents
20 similar documents found (search time: 31 ms)
1.
We present a method for active self-calibration of multi-camera systems consisting of pan-tilt zoom cameras. The main focus of this work is on extrinsic self-calibration using active camera control. Our novel probabilistic approach avoids multi-image point correspondences as far as possible. This allows an implicit treatment of ambiguities. The relative poses are optimized by actively rotating and zooming each camera pair in a way that significantly simplifies the problem of extracting correct point correspondences. In a final step we calibrate the entire system using a minimal number of relative poses. The selection of relative poses is based on their uncertainty. We exploit active camera control to estimate consistent translation scales for triplets of cameras. This allows us to estimate missing relative poses in the camera triplets. In addition to this active extrinsic self-calibration we present an extended method for the rotational intrinsic self-calibration of a camera that exploits the rotation knowledge provided by the camera's pan-tilt unit to robustly estimate the intrinsic camera parameters for different zoom steps as well as the rotation between pan-tilt unit and camera. Quantitative experiments on real data demonstrate the robustness and high accuracy of our approach. We achieve a median reprojection error of 0.95 pixels.

2.
A Self-Calibration Method for the Robot Hand-Eye Relationship   (Cited by: 2; self-citations: 0; citations by others: 2)
A method for calibrating the robot hand-eye relationship based on a single scene point is designed. The robot end-effector is precisely controlled to perform at least five translational motions and at least two rotational motions while the camera images a single scene point. The camera's motion is reflected through the disparity and depth of the scene point, a system of constraint equations relating the relative pose of the end-effector and camera coordinate frames is established, and the camera intrinsic parameters and the hand-eye relationship are solved linearly. The calibration requires extracting only one scene point, with no matching and no orthogonal motions, so the robot motion control is convenient and the algorithm is simple to implement. Experiments with simulated data and real image data show that the method is feasible and effective.

3.
This study investigates the problem of estimating camera calibration parameters from image motion fields induced by a rigidly moving camera with unknown parameters, where the image formation is modeled with a linear pinhole-camera model. The equations obtained show the flow to be separated into a component due to the translation and the calibration parameters and a component due to the rotation and the calibration parameters. A set of parameters encoding the latter component is linearly related to the flow, and from these parameters the calibration can be determined.

However, as for discrete motion, in general it is not possible to decouple image measurements obtained from only two frames into translational and rotational components. Geometrically, the ambiguity takes the form of a part of the rotational component being parallel to the translational component, and thus the scene can be reconstructed only up to a projective transformation. In general, for full calibration at least four successive image frames are necessary, with the 3D rotation changing between the measurements.

The geometric analysis gives rise to a direct self-calibration method that avoids computation of optical flow or point correspondences and uses only normal flow measurements. New constraints on the smoothness of the surfaces in view are formulated to relate structure and motion directly to image derivatives, and on the basis of these constraints the transformation of the viewing geometry between consecutive images is estimated. The calibration parameters are then estimated from the rotational components of several flow fields. As the proposed technique neither requires a special setup nor needs exact correspondence, it is potentially useful for the calibration of active vision systems which have to acquire knowledge about their intrinsic parameters while they perform other tasks, or as a tool for analyzing image sequences in large video databases.
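The translational/rotational separation described in this abstract has a standard closed form for a calibrated pinhole camera (the classical instantaneous-motion flow equations; the paper's uncalibrated variant folds the unknown calibration into both terms). With focal length f, depth Z, translation (U, V, W) and angular velocity (α, β, γ), the flow (u, v) at image point (x, y) is:

```latex
\begin{align*}
u(x,y) &= \underbrace{\frac{xW - fU}{Z(x,y)}}_{\text{translational}}
  \;+\; \underbrace{\frac{\alpha\,xy}{f} - \beta\left(f + \frac{x^2}{f}\right) + \gamma y}_{\text{rotational}},\\[4pt]
v(x,y) &= \frac{yW - fV}{Z(x,y)}
  \;+\; \alpha\left(f + \frac{y^2}{f}\right) - \frac{\beta\,xy}{f} - \gamma x.
\end{align*}
```

Note that the rotational component depends only on image position and the calibration, not on depth, which is what makes it linearly recoverable from the flow.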

4.
To calibrate a structured-light-vision-guided welding robot system, and to overcome the complexity of existing calibration methods and their demanding target-fabrication requirements, a self-calibration method based on active vision is proposed. The method images three feature points in the scene; by precisely controlling the welding robot through five translational motions, the camera intrinsic parameters and the rotational part of the hand-eye matrix are calibrated. Through two further motions involving rotation, combined with the parametric equation of the laser stripe on the feature-point plane, the translational part of the hand-eye matrix and the equation of the structured-light plane in the camera coordinate frame are calibrated, with a correction applied for different torch lengths. Tests on a structured-light-vision-guided welding robot system built around a Denso robot give stable results, with positioning accuracy reaching ±0.93 mm. The calibration method is simple and its features are easy to select, which is of practical significance for deploying welding robot systems on the industrial shop floor.

5.
This paper describes a new self-calibration method for a single camera undergoing general motions. It has the following main contributions. First, we establish new constraints which relate the intrinsic parameters of the camera to the rotational part of the motions. This derivation is purely algebraic. We propose an algorithm which simultaneously solves for camera calibration and the rotational part of the motions. Second, we provide a comparison between the developed method and a Kruppa equation-based method. Extensive experiments on both synthetic and real image data show the reliability and superior performance of the proposed method. A practical advantage of the method is its favorable convergence behavior compared with that of the Kruppa equations method.

6.
A novel and effective self-calibration approach for robot vision is presented, which simultaneously estimates both the camera intrinsic parameters and the hand-eye transformation. The proposed calibration procedure is based on two arbitrary feature points of the environment, and requires three pure translational motions and two rotational motions of the robot end-effector. New linear solution equations are derived, from which the calibration parameters are solved accurately and efficiently. The proposed algorithm has been verified on simulated data with different levels of noise and disturbance. Because it needs fewer feature points and robot motions, the proposed method greatly improves the efficiency and practicality of the calibration procedure.
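The rotational part of hand-eye schemes built on pure translations can be sketched compactly: if the end-effector orientation is held fixed during each pure translation, the direction observed in the camera frame is the robot direction rotated by the inverse hand-eye rotation, so the rotation follows from an orthogonal Procrustes fit. The following NumPy sketch works under that assumption with synthetic noise-free data; `hand_eye_rotation` and all numbers are illustrative, not the paper's algorithm.

```python
import numpy as np

def hand_eye_rotation(t_robot, t_cam):
    """Orthogonal-Procrustes (Kabsch) fit of the hand-eye rotation.

    With the end-effector orientation held fixed, a pure robot translation
    direction t_e appears in the camera frame as R^T t_e, so pairing the
    unit directions and solving t_robot ~ R t_cam recovers R.
    """
    A = np.asarray(t_cam, float).T            # 3 x N camera-frame directions
    B = np.asarray(t_robot, float).T          # 3 x N robot-frame directions
    U, _, Vt = np.linalg.svd(B @ A.T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflection
    return U @ D @ Vt

# synthetic ground-truth hand-eye rotation (Rodrigues formula)
axis = np.array([0.3, -0.6, 0.74]); axis /= np.linalg.norm(axis)
S = np.array([[0, -axis[2], axis[1]],
              [axis[2], 0, -axis[0]],
              [-axis[1], axis[0], 0]])
R_true = np.eye(3) + np.sin(0.8) * S + (1 - np.cos(0.8)) * S @ S

t_robot = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 0]])
t_robot /= np.linalg.norm(t_robot, axis=1, keepdims=True)
t_cam = (R_true.T @ t_robot.T).T              # directions seen by the camera
print(np.allclose(hand_eye_rotation(t_robot, t_cam), R_true))
```

Three non-coplanar translation directions already determine the rotation; a fourth is included only to exercise the least-squares fit.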

7.
In computer vision, occlusions are almost always seen as undesirable singularities that pose difficult challenges to image motion analysis problems, such as optic flow computation, motion segmentation, disparity estimation, or egomotion estimation. However, it is well known that occlusions are extremely powerful cues for depth or motion perception, and could be used to improve those methods.

In this paper, we propose to recover camera motion information based solely on occlusions, by exploiting two especially useful properties: occlusions are independent of the camera rotation, and they reveal direct information about the camera translation.

We assume a monocular observer, undergoing general rotational and translational motion in a static environment. We present a formal model for occlusion points and develop a method suitable for occlusion detection. Through the classification and analysis of the detected occlusion points, we show how to retrieve information about the camera translation (FOE). Experiments with real images are presented and discussed in the paper.


8.
Recovery of ego-motion using region alignment   (Cited by: 2; self-citations: 0; citations by others: 2)
A method for computing the 3D camera motion (the ego-motion) in a static scene is described, where initially a detected 2D motion between two frames is used to align corresponding image regions. We prove that such a 2D registration removes all effects of camera rotation, even for those image regions that remain misaligned. The resulting residual parallax displacement field between the two region-aligned images is an epipolar field centered at the FOE (Focus-of-Expansion). The 3D camera translation is recovered from the epipolar field. The 3D camera rotation is recovered from the computed 3D translation and the detected 2D motion. The decomposition of image motion into a 2D parametric motion and residual epipolar parallax displacements avoids many of the inherent ambiguities and instabilities associated with decomposing the image motion into its rotational and translational components, and hence makes the computation of ego-motion or 3D structure estimation more robust.
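The epipolar residual field described above can be turned into a least-squares FOE estimate: every parallax vector must be collinear with the line joining its image point to the FOE, which is linear in the FOE coordinates. A minimal NumPy sketch on synthetic noise-free data (illustrative, not the authors' implementation; `estimate_foe` and the numbers are invented):

```python
import numpy as np

def estimate_foe(points, parallax):
    """Least-squares FOE from a residual parallax field.

    Each parallax vector d at point p must satisfy (p - e) x d = 0,
    which rearranges to the linear equation d_y*e_x - d_x*e_y = d_y*p_x - d_x*p_y.
    """
    p = np.asarray(points, float)
    d = np.asarray(parallax, float)
    A = np.column_stack([d[:, 1], -d[:, 0]])
    b = d[:, 1] * p[:, 0] - d[:, 0] * p[:, 1]
    e, *_ = np.linalg.lstsq(A, b, rcond=None)
    return e

# synthetic epipolar field centered at FOE (40, -10); magnitudes vary
# with (unknown) inverse depth, directions all point away from the FOE
foe = np.array([40.0, -10.0])
rng = np.random.default_rng(0)
pts = rng.uniform(-100, 100, size=(50, 2))
par = 0.01 * (pts - foe) * rng.uniform(0.5, 2.0, size=(50, 1))
print(np.round(estimate_foe(pts, par), 6))
```

With noise-free data the fit recovers the FOE exactly; with noisy parallax the same system is solved in the least-squares sense.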

9.
We address the self-calibration of a smooth generic central camera from only two dense rotational flows produced by rotations of the camera about two unknown linearly independent axes passing through the camera centre. We give a closed-form theoretical solution to this problem, and we prove that it can be solved exactly up to a linear orthogonal transformation ambiguity. Using the theoretical results, we propose an algorithm for the self-calibration of a generic central camera from two rotational flows.

10.
Self-Calibration of Rotating and Zooming Cameras   (Cited by: 4; self-citations: 0; citations by others: 4)
In this paper we describe the theory and practice of self-calibration of cameras which are fixed in location and may freely rotate while changing their internal parameters by zooming. The basis of our approach is to make use of the so-called infinite homography constraint, which relates the unknown calibration matrices to the computed inter-image homographies. In order for the calibration to be possible, some constraints must be placed on the internal parameters of the camera.

We present various self-calibration methods. First, an iterative non-linear method is described which is very versatile in terms of the constraints that may be imposed on the camera calibration: each of the camera parameters may be assumed to be known, constant throughout the sequence but unknown, or free to vary. Second, we describe a fast linear method which works under the minimal assumption of zero camera skew, or the more restrictive conditions of square pixels (zero skew and known aspect ratio) or a known principal point. We show experimental results on both synthetic and real image sequences (where ground-truth data was available) to assess the accuracy and the stability of the algorithms and to compare the results of applying different constraints on the camera parameters. We also derive an optimal Maximum Likelihood estimator for the calibration and motion parameters. Prior knowledge about the distribution of the estimated parameters (such as the location of the principal point) may also be incorporated via Maximum a Posteriori estimation.

We then identify some near-ambiguities that arise under rotational motions, showing that coupled changes of certain parameters are barely observable, making them indistinguishable. Finally, we study the negative effect of radial distortion on the self-calibration process and point out some possible solutions to it. An erratum to this article is available.
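The infinite homography constraint the paper builds on can be demonstrated in a few lines: for a purely rotating camera with constant intrinsics K, the inter-image homography is H = K R K⁻¹, so the image of the absolute conic w = (K Kᵀ)⁻¹ satisfies Hᵀ w H = w, which is linear in the entries of w. The NumPy sketch below (a simplified illustration of the constant-parameter case, not the paper's full method with varying zoom) recovers K from three synthetic rotations:

```python
import numpy as np

def rot(axis, angle):
    """Rodrigues rotation matrix about a unit axis."""
    a = np.asarray(axis, float); a /= np.linalg.norm(a)
    S = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * S + (1 - np.cos(angle)) * S @ S

def calibrate_from_homographies(Hs):
    """Recover constant K from infinite homographies H = K R K^{-1}.

    Each det-normalized H gives the linear constraint H^T w H = w on the
    6 entries of the symmetric matrix w = (K K^T)^{-1}; the nullspace of
    the stacked system yields w, and a Cholesky factorization yields K.
    """
    idx = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
    rows = []
    for H in Hs:
        H = H / np.cbrt(np.linalg.det(H))        # enforce det(H) = 1
        for r in range(3):
            for c in range(r, 3):
                row = []
                for (i, j) in idx:
                    E = np.zeros((3, 3)); E[i, j] = E[j, i] = 1.0
                    M = H.T @ E @ H - E
                    row.append(M[r, c])
                rows.append(row)
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    w = np.zeros((3, 3))
    for k, (i, j) in enumerate(idx):
        w[i, j] = w[j, i] = Vt[-1][k]
    if w[2, 2] < 0:                               # fix overall sign
        w = -w
    L = np.linalg.cholesky(w)                     # L = (scaled) K^{-T}
    K = np.linalg.inv(L).T
    return K / K[2, 2]

# hypothetical intrinsics and three rotations about distinct axes
K_true = np.array([[800.0, 0.5, 320.0], [0, 790.0, 240.0], [0, 0, 1.0]])
Hs = [K_true @ rot(a, 0.3) @ np.linalg.inv(K_true)
      for a in ([1, 0, 0], [0, 1, 0], [1, 1, 1])]
print(np.round(calibrate_from_homographies(Hs), 3))
```

In practice the homographies come from image matches and are noisy, and the paper's constrained variants (zero skew, known principal point) reduce the number of rotations needed; this sketch assumes exact homographies.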

11.
We consider the self-calibration (affine and metric reconstruction) problem from images acquired with a camera with unchanging internal parameters undergoing planar motion. The general self-calibration methods (modulus constraint, Kruppa equations) are known to fail with this camera motion. In this paper we give two novel linear constraints on the coordinates of the plane at infinity in a projective reconstruction for any camera motion. In the planar case, we show that the two constraints are equivalent and easy to compute, giving us a linear version of the quartic modulus constraint. Using this fact, we present a new linear method to solve the self-calibration problem with planar motion of the camera from three or more images.

This work was partly supported by project BFM2003-02914 from the Ministerio de Ciencia y Tecnología (Spain).

Ferran Espuny received the MSc in Mathematics in 2002 from the Universitat de Barcelona, Spain. He is currently a PhD student and associate professor in the Departament d'àlgebra i Geometria at Universitat de Barcelona, Spain. His research, supervised by Dr. José Ignacio Burgos Gil, is focussed on self-calibration and critical motions for both pinhole and generic camera models.

12.
Self-Calibration of Camera Intrinsic Parameters: Theory and Algorithms   (Cited by: 3; self-citations: 0; citations by others: 3)
This paper discusses how to calibrate a camera's intrinsic parameters from its rotational motion. When the camera rotates about its coordinate axes, an algebraic formula for computing the intrinsic parameters is given; the formula is very practical when the 2D projective transformation is close to its theoretical value P. When the camera rotates about an unknown axis, a general-solution formula for the intrinsic parameters is derived from the corresponding 2D projective transformation using matrix eigenvector theory. With rotations about two distinct unknown axes, the camera intrinsic parameters can be determined uniquely. These results provide a theoretical foundation for camera self-calibration algorithms, and practical algorithms are also given. Results on simulated and real images show that the proposed algorithms are of practical value.
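The eigenvector property underlying the general-solution formula can be checked numerically: H = K R K⁻¹ is similar to R, so its eigenvalues are {1, e^{±iθ}} and its eigenvector for eigenvalue 1 is K times the rotation axis; rotations about two distinct axes therefore constrain K. A small NumPy check with hypothetical numbers (an illustration of the matrix fact, not the paper's algorithm):

```python
import numpy as np

# H = K R K^{-1} is similar to R, so it shares R's eigenstructure.
K = np.array([[900.0, 0.0, 300.0], [0.0, 880.0, 250.0], [0.0, 0.0, 1.0]])
t = 0.4                                          # rotation angle (rad)
axis = np.array([1.0, 2.0, 0.5]); axis /= np.linalg.norm(axis)
S = np.array([[0, -axis[2], axis[1]],
              [axis[2], 0, -axis[0]],
              [-axis[1], axis[0], 0]])
R = np.eye(3) + np.sin(t) * S + (1 - np.cos(t)) * S @ S   # Rodrigues
H = K @ R @ np.linalg.inv(K)

# eigenvalues of H are {1, exp(it), exp(-it)}; the eigenvector for
# eigenvalue 1 is K @ axis (since R @ axis = axis implies H (K axis) = K axis)
w, V = np.linalg.eig(H)
k = int(np.argmin(np.abs(w - 1)))
v = np.real(V[:, k]); v /= v[2]
ka = K @ axis; ka /= ka[2]
print(np.allclose(v, ka))  # True
```

A second rotation about a different axis yields a second such eigenvector relation, which is what pins down K uniquely in the paper's setting.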

13.
《自动化学报》(Acta Automatica Sinica), 1999, 25(6):1
This paper discusses how to calibrate a camera's intrinsic parameters from its rotational motion. When the camera rotates about its coordinate axes, an algebraic formula for computing the intrinsic parameters is given; the formula is very practical when the 2D projective transformation is close to its theoretical value P. When the camera rotates about an unknown axis, a general-solution formula for the intrinsic parameters is derived from the corresponding 2D projective transformation using matrix eigenvector theory. With rotations about two distinct unknown axes, the camera intrinsic parameters can be determined uniquely. These results provide a theoretical foundation for camera self-calibration algorithms, and practical algorithms are also given. Results on simulated and real images show that the proposed algorithms are of practical value.

14.
Error analysis of pure rotation-based self-calibration   (Cited by: 2; self-citations: 0; citations by others: 2)
Self-calibration using pure rotation is a well-known technique and has been shown to be a reliable means of recovering intrinsic camera parameters. However, in practice it is virtually impossible to ensure that the camera motion for this type of self-calibration is a pure rotation. In this paper, we present an error analysis of the recovered intrinsic camera parameters in the presence of translation. We derive closed-form error expressions for a single pair of images with nondegenerate motion; for multiple rotations, for which there are no closed-form solutions, the analysis is done through repeated experiments. Among other results, we show that translation-independent solutions do exist under certain practical conditions. Our analysis can be used to help choose the least error-prone approach (if multiple approaches exist) for a given set of conditions.

15.
We show that the frictional forces arising from simultaneous small-amplitude periodic translation and rotation of a rigid plate cause parts on the plate to converge to or diverge from a line coinciding with the rotation axis. The relative phase between the translation and rotation determines whether the parts are attracted to or repelled from the rotation axis. Assuming that both the translational and rotational accelerations of the plate are “bang-bang” and have identical frequencies, we derive the resultant velocity fields for point parts on the plate. For many choices of phase the speed of the part is approximately proportional to its distance from the rotation axis. The strength of the velocity field can be controlled by modulating the amplitude of the translational acceleration, or by modulating the relative phase between the translational and rotational acceleration profiles. We also determine the phases that maximize part speed towards and away from the rotation axis. These optimal phases not only maximize part speed but also generate velocity fields that are nearly independent of the coefficient of friction.

16.
Self-calibration of an affine camera from multiple views   (Cited by: 6; self-citations: 2; citations by others: 4)
A key limitation of all existing algorithms for shape and motion from image sequences under orthographic, weak-perspective and para-perspective projection is that they require the calibration parameters of the camera. We present in this paper a new approach that allows the shape and motion to be computed from image sequences without knowing the calibration parameters. This approach is derived for the affine camera model, introduced by Mundy and Zisserman (1992), which is a more general class of projections including the orthographic, weak-perspective and para-perspective projection models. The concept of self-calibration, introduced by Maybank and Faugeras (1992) for the perspective camera and by Hartley (1994) for the rotating camera, is then applied to the affine camera.

This paper introduces the 3 intrinsic parameters that the affine camera can have at most. The intrinsic parameters of the affine camera are closely related to the usual intrinsic parameters of the pin-hole perspective camera, but are different in the general case. Based on the invariance of the intrinsic parameters, methods of self-calibration of the affine camera are proposed. It is shown that with at least four views, an affine camera may be self-calibrated up to a scaling factor, leading to Euclidean (similarity) shape reconstruction up to a global scaling factor. Another consequence of the introduction of intrinsic and extrinsic parameters of the affine camera is that all existing algorithms using calibrated affine cameras can be assembled into the same framework, and some of them can be easily extended to a batch solution.

Experimental results are presented and compared with other methods using calibrated affine cameras.
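The calibrated baseline that such methods extend is the rank-3 factorization of the centered measurement matrix (Tomasi-Kanade style); the affine self-calibration then upgrades the resulting affine frame to a similarity one, a step omitted here. A minimal NumPy sketch of the factorization on synthetic orthographic data (illustrative only; all data is invented):

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.normal(size=(3, 20))                # 20 random 3D points

# build a 2F x P measurement matrix from 6 orthographic views
W_rows = []
for f in range(6):
    A = np.linalg.qr(rng.normal(size=(3, 3)))[0][:2]  # 2x3 orthonormal rows
    t = rng.normal(size=(2, 1))                       # per-view 2D offset
    W_rows.append(A @ P + t)
W = np.vstack(W_rows)                       # 12 x 20

# centering each row removes the translations; the result has rank 3
Wc = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Wc)
M = U[:, :3] * s[:3]                        # affine motion (up to a 3x3)
S = Vt[:3]                                  # affine shape (same ambiguity)
print(np.allclose(M @ S, Wc))               # True
```

The factorization determines motion M and shape S only up to an invertible 3×3 ambiguity; the self-calibration constraints in the paper are what reduce that ambiguity to a similarity.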

17.
Estimating the focus of attention of a person depends strongly on his or her gaze direction. Here, we propose a new method for estimating visual focus of attention using head rotation, as well as fuzzy fusion of head-rotation and eye-gaze estimates, in a fully automatic manner, without the need for any special hardware or a priori knowledge regarding the user, the environment, or the setup. Instead, we propose a system aimed at functioning under unconstrained conditions, using only simple hardware such as an ordinary web camera. Our system is intended for a human-computer interaction environment in which a person faces a monitor with a camera mounted on top. To this end, we propose in this paper two novel techniques, based on local and appearance information, for estimating head rotation, and we adaptively fuse them in a common framework. The system is able to recognize head rotational movement under translational movement of the user in any direction, without any knowledge or a priori estimate of the user's distance from the camera or of the camera intrinsic parameters.

18.
A New Linear Camera Self-Calibration Method   (Cited by: 21; self-citations: 2; citations by others: 19)
李华  吴福朝  胡占义 《计算机学报》2000,23(11):1121-1129
A new linear camera self-calibration method based on an active vision system is proposed. "Based on an active vision system" means that the camera is mounted on a platform whose motion can be precisely controlled. The main feature of the method is that all five camera intrinsic parameters can be solved linearly; to the authors' knowledge, existing methods in the literature can linearly solve for only four intrinsic parameters, and when the camera follows the full projective model, i.e. when a skew factor is present, none of the existing linear methods remain applicable. The basic idea is to control the camera through five groups of planar orthogonal motions and to calibrate it linearly from the epipoles in the images. In addition, exploiting the special form of the fundamental matrix under pure translation, a two-point algorithm for computing the fundamental matrix is proposed; compared with the eight-point algorithm, it greatly improves the accuracy and robustness of the recovered epipoles. The paper also gives a fairly detailed analysis of near-degenerate configurations (i.e. when two or more of the five motion planes are parallel) and proposes a strategy for handling them, enhancing the robustness of the algorithm. Experiments with simulated and real images show that the self-calibration method is accurate and robust.

19.
A New Calibration Method for the Robot Hand-Eye Relationship   (Cited by: 8; self-citations: 2; citations by others: 8)
杨广林  孔令富  王洁 《机器人》2006,28(4):400-405
A new calibration method for a robot hand-eye system is presented, based on controlling the motion of a manipulator carrying a camera. Unlike previous algorithms, when computing the translation vector of the hand-eye relationship, a virtual rotation is applied to the camera coordinate frame, converting the rotation problem into a translation problem. The method requires the manipulator platform to perform two translational motions and one rotational motion, and needs only two feature points in the scene, making it convenient and practical. A method for computing the depth of a scene point based on active vision is also given.

20.
Using Specific Displacements to Analyze Motion without Calibration   (Cited by: 2; self-citations: 2; citations by others: 0)
In the context of uncalibrated image sequences and self-calibration, this paper analyzes the use of specific displacements (such as fixed-axis rotations or pure translations) and of specific sets of camera parameters. These make it possible to induce affine or metric constraints, which can lead to self-calibration and 3D reconstruction. A unified formalism covering such models already developed in the literature, together with some novel models, is presented. A hierarchy of special situations is described, in order to tailor the most appropriate camera model either to the actual robotic device supporting the camera or to the reduced set of data available. This visual motion perception module leads to the estimation of a minimal 3D parameterization of the retinal displacement for a monocular visual system without calibration, and thence to self-calibration and 3D dynamic analysis. The implementation of these equations is analyzed and evaluated experimentally.
