Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
This paper presents a method for computing the extrinsic parameters of a camera from measurements of spatial perspective invariants. A spatial perspective invariant is a shape description that remains unchanged under geometric transformations such as projection or a change of viewpoint. Because it yields a feature description of a scene object that is independent of external conditions, it is widely applicable in computer vision. Camera calibration determines the transformation between the 2D image information captured by a camera and the corresponding 3D scene, and comprises intrinsic and extrinsic parameters. The intrinsic parameters characterize the camera's internal and optical properties: the image center (Cx, Cy), the image scale factor Sx, the effective focal length f, and the lens distortion coefficient K. The extrinsic parameters describe the camera's position and orientation in world coordinates: a translation vector T and a 3×3 rotation matrix R, usually written together as a 3×4 extended matrix [R T]. Based on measured perspective-invariant data, the paper derives a method for calibrating the extrinsic parameters; experimental results show that the method is highly robust.
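As a concrete illustration of the quantities named in this abstract (image center, scale factor, focal length, and the 3×4 extended matrix [R T]), here is a minimal sketch of pinhole projection in numpy; all numeric values are made up for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical intrinsic parameters: effective focal length f, scale
# factor Sx, and image center (Cx, Cy) -- illustrative values only.
f, Sx = 800.0, 1.0
Cx, Cy = 320.0, 240.0
K = np.array([[f * Sx, 0.0, Cx],
              [0.0,    f,   Cy],
              [0.0,    0.0, 1.0]])

# Extrinsic parameters: rotation R (3x3) and translation T (3x1)
# from world coordinates to camera coordinates.
R = np.eye(3)
T = np.array([[0.0], [0.0], [5.0]])

# Extended matrix [R T] (3x4) and full projection P = K [R T].
RT = np.hstack([R, T])
P = K @ RT

# Project a homogeneous world point to pixel coordinates.
Xw = np.array([1.0, 2.0, 0.0, 1.0])
x = P @ Xw
u, v = x[0] / x[2], x[1] / x[2]
print(u, v)  # -> 480.0 560.0
```

Calibration is the inverse problem: given enough correspondences between world points Xw and pixels (u, v), recover K and [R T].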

2.
In this paper, we show how an active binocular head, the IIS head, can be easily calibrated with very high accuracy. Our calibration method can also be applied to many other binocular heads. In addition to the proposal and demonstration of a four-stage calibration process, this paper makes three major contributions. First, we propose a motorized-focus lens (MFL) camera model which assumes constant nominal extrinsic parameters; the advantage of constant extrinsic parameters is a simple head/eye relation. Second, we propose a calibration method for the MFL camera model which separates estimation of the image center and effective focal length from estimation of the camera orientation and position. This separation has proved to be crucial; otherwise, the camera parameter estimates would be very noise-sensitive. Third, we show that, once the parameters of the MFL camera model are calibrated, a nonlinear recursive least-squares estimator can be used to refine all 35 kinematic parameters. Real experiments have shown that the proposed method achieves an accuracy of one pixel prediction error and 0.2 pixel epipolar error, even when all the joints, including the left and right focus motors, are moved simultaneously. This accuracy is good enough for many 3D vision applications, such as navigation, object tracking, and reconstruction.
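The abstract mentions refining the kinematic parameters with a recursive least-squares estimator. As a minimal illustration of the recursive least-squares idea, here is the standard linear RLS update on synthetic data; the dimensions and data are made up and do not reproduce the paper's nonlinear 35-parameter model:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    # One recursive least-squares step for the linear model y = phi . theta,
    # with covariance-like matrix P and forgetting factor lam.
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)          # gain vector
    theta = theta + k * (y - phi @ theta)  # correct estimate with residual
    P = (P - np.outer(k, Pphi)) / lam      # update covariance
    return theta, P

# Recover a hypothetical 3-parameter vector from streaming noisy data.
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
P = 1e6 * np.eye(3)                        # large P: uninformative prior
for _ in range(500):
    phi = rng.normal(size=3)
    y = phi @ theta_true + rng.normal(0.0, 0.01)
    theta, P = rls_update(theta, P, phi, y)
print(theta)  # converges toward [1.0, -2.0, 0.5]
```

A nonlinear estimator like the paper's would apply the same predict/correct structure to a linearization of the kinematic model at each step.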

3.
Design and Implementation of a Vision-Based Augmented Reality System   Cited by: 7 (self: 0, others: 7)
Addressing motion tracking and registration, a problem that few have yet tackled in augmented reality, this paper proposes an algorithm that estimates motion parameters from the optical flow of four colored marker points and determines the relative pose between a moving object and the camera by combining rigid-body motion properties with a perspective projection model. The algorithm is applied in an augmented reality system built around an optical see-through head-mounted display. The system is simple in structure, lightweight, practical, and easy to implement. In the general case, only four planar markers are needed for 3D tracking and registration of a moving object; the working range is large, so the approach can even be applied in outdoor augmented reality systems; and the numerical solution is a linear process with small error, meeting the high-accuracy 3D registration requirements of augmented reality systems.

4.
For objects lacking texture features, this paper proposes an adaptive, real-time, edge-based 3D tracking method. Given the object's 3D model, the camera's extrinsic parameters can be solved effectively and accurately by detecting and tracking object edges with the help of historical motion information. The work builds on and extends existing edge-based real-time tracking algorithms in three ways: 1) adaptive thresholds and a motion-trend estimate for the current frame based on historical information improve the stability of edge matching under fast motion; 2) an edge-matching strategy based on random sample consensus (RANSAC) effectively rejects mismatched edges, improving tracking stability for complex models; and 3) a contour-edge extraction algorithm extends edge tracking from CAD models to general mesh models. Experimental results demonstrate that the method is robust and efficient, meeting the needs of applications such as augmented reality and virtual assembly.
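Point 2 above relies on RANSAC to reject mismatched edges. Here is a generic RANSAC skeleton, shown fitting a 2D line as a stand-in for the paper's edge-matching step; the data, threshold, and iteration count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_line(p, q):
    # Line through two points as (a, b, c) with a*x + b*y + c = 0,
    # normalized so that (a, b) is a unit normal.
    a, b = p[1] - q[1], q[0] - p[0]
    c = -(a * p[0] + b * p[1])
    n = np.hypot(a, b)
    return np.array([a, b, c]) / n

def ransac_line(points, iters=200, tol=0.05):
    # Repeatedly fit a model to a minimal sample and keep the model
    # with the largest consensus set (the RANSAC principle).
    best_line, best_inliers = None, np.zeros(len(points), bool)
    for _ in range(iters):
        i, j = rng.choice(len(points), 2, replace=False)
        line = fit_line(points[i], points[j])
        d = np.abs(points @ line[:2] + line[2])   # point-line distances
        inliers = d < tol
        if inliers.sum() > best_inliers.sum():
            best_line, best_inliers = line, inliers
    return best_line, best_inliers

# 80 points on the line y = x plus 20 gross outliers ("mismatched edges").
t = np.linspace(0, 1, 80)
good = np.stack([t, t], axis=1)
bad = rng.uniform(-1, 2, size=(20, 2)) + np.array([0.0, 3.0])
pts = np.vstack([good, bad])
line, inliers = ransac_line(pts)
print(inliers.sum())  # most of the 80 good points, outliers rejected
```

In the paper's setting, the minimal sample would be a set of edge correspondences and the model a camera pose, but the consensus loop is the same.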

5.
A camera model and its calibration are required in many applications for coordinate conversion between the two-dimensional image and the real three-dimensional world. Self-calibration is usually chosen for camera calibration in uncontrolled environments because the scene geometry may be unknown. However, when no reliable feature correspondences can be established, or when the camera is static in relation to the majority of the scene, self-calibration fails. Object-based calibration methods, on the other hand, are more reliable than self-calibration because they use an object of known geometry. However, most object-based methods cannot work in uncontrolled environments because they require geometric knowledge of the calibration object. Though in the past few years the simplest geometry required of a calibration object has been reduced to a 1D object with at least three points, it is still not easy to find such an object in an uncontrolled environment, not to mention the additional metric/motion requirements of existing methods. Meanwhile, it is very easy to find a 1D object with two end points in most scenes. It is therefore worthwhile to investigate an object-based method built on such a simple object, so that a camera can still be calibrated when both self-calibration and existing object-based calibration fail. We propose a new camera calibration method which requires only an object with two end points, the simplest geometry that can be extracted from many real-life objects. Through observations of such a 1D object at different positions/orientations on a plane fixed in relation to the camera, both intrinsic (focal length) and extrinsic (rotation angles and translations) camera parameters can be calibrated using the proposed method.
The proposed method has been tested on simulated data and real data from both controlled and uncontrolled environments, including situations where no explicit 1D calibration object is available, e.g. a human walking sequence. Very accurate camera calibration results have been achieved using the proposed method.

6.
Objective: Pan-tilt-zoom (PTZ) cameras play an important role in highway surveillance systems thanks to their wide coverage and flexibility, but because their focal length and angles change irregularly with monitoring needs, it is difficult to recover accurate physical measurements of the real world from their images; automatic off-site calibration of PTZ cameras is therefore of significant value to highway surveillance. Method: This paper proposes an automatic PTZ-camera calibration method constrained by vanishing points and a lane-line model, establishing an accurate mapping between image information and real-world physical quantities. First, the longitudinal vanishing point is estimated accurately by cascaded Hough-transform voting over vehicle motion trajectories; next, with the physical measurements of the lane-line model as constraints, an enumeration strategy yields an accurate estimate of the lateral vanishing point; finally, given the camera height, the calibration parameters of the highway PTZ camera are computed. Results: In experiments across different scenes, the average errors at different distances were 4.63%, 4.74%, 4.81%, and 4.65%, all below 5%. Conclusion: Tests on multiple highway surveillance scenes show that the proposed automatic calibration method meets the physical-measurement accuracy requirements of the application and compares favorably with reference methods; the resulting intrinsic and extrinsic parameters can be used to compute vehicle speed, spatial position, and other quantities.
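A sketch of the classical constraint behind calibration from two vanishing points of orthogonal directions, which underlies methods like the one above: assuming square pixels, zero skew, and a known principal point, the back-projected rays of the two vanishing points must be perpendicular. The values below are synthetic, not from the paper:

```python
import numpy as np

def focal_from_vps(v1, v2, pp):
    # Orthogonality of the two back-projected directions gives
    # (v1 - pp) . (v2 - pp) + f^2 = 0  (square pixels, zero skew assumed).
    d = float(np.dot(np.asarray(v1, float) - pp, np.asarray(v2, float) - pp))
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return float(np.sqrt(-d))

pp = np.array([320.0, 240.0])   # principal point assumed at the image center
f_true = 1000.0
# Vanishing points a camera with focal length f_true produces for the
# orthogonal 3D directions (1, 0, 1) and (1, 0, -1):
v_long = pp + np.array([f_true, 0.0])   # e.g. along the roadway
v_lat  = pp - np.array([f_true, 0.0])   # e.g. across the lanes
f_est = focal_from_vps(v_long, v_lat, pp)
print(f_est)  # -> 1000.0
```

With the focal length known, the rotation follows from the vanishing-point directions, and the known camera height fixes the metric scale of the translation.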

7.
Graphical Models, 2008, 70(4): 57-75
This paper studies inside-looking-out camera pose estimation for the virtual studio. The pose estimation process, which estimates a camera's extrinsic parameters, is based on closed-form geometrical approaches that exploit simple corner detection of the 3D cubic-like landmarks used in virtual studios. We first examine the parameters that affect the pose estimation process for the virtual studio, including all characteristic landmark parameters, such as landmark lengths, landmark corner angles, and their installation position errors, as well as camera parameters such as lens focal length and CCD resolution. Through computer simulation we analyze the effect of all these parameters on the camera extrinsic parameters, i.e., the camera rotation and position matrices. We found that, because of noise in these parameters, the camera translation vector is affected more than the other extrinsic parameters. We therefore present a novel iterative geometrical noise-cancellation method for the closed-form camera pose estimation process, based on collinearity theory, which reduces the estimation error of the camera translation vector, the dominant contributor to extrinsic parameter estimation error. To validate our method, we test it in a complete virtual-studio simulation. Our simulation results are of the same order as those of commercial systems such as the BBC system and the InterSense IS-1200 VisTracker.

8.
Stereovision is an effective technique for using CCD video cameras to determine the 3D position of a target object from two or more simultaneous views of the scene. Camera calibration is a central issue in finding the position of objects in a stereovision system. This is usually carried out by calibrating each camera independently and then applying a geometric transformation of the external parameters to find the geometry of the stereo setting. After calibration, the distance of various target objects in the scene can be calculated with the CCD video cameras, and recovering 3D structure from 2D images becomes simpler. However, the process of camera calibration is complicated. Based on the ideal pinhole model of a camera, we describe formulas to calculate the intrinsic parameters that specify the camera characteristics and the extrinsic parameters that describe the spatial relationship between the camera and the world coordinate system. A simple camera calibration method for our CCD video cameras and the corresponding experimental results are also given. This work was presented in part at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16-18, 2002.
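A sketch of how a calibrated stereo setup like the one described recovers 3D position, using standard linear (DLT) triangulation; the camera matrices and point here are illustrative, not from the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: each view contributes two rows of the
    # homogeneous system A X = 0; the solution is the null vector of A.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical calibrated cameras with a 0.2 m baseline along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

def proj(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.1, -0.05, 2.0])
x1, x2 = proj(P1, X_true), proj(P2, X_true)
X_est = triangulate(P1, P2, x1, x2)
print(X_est)  # recovers [0.1, -0.05, 2.0]
```

With noisy image points the same linear system is solved in a least-squares sense, which is exactly why accurate calibration of P1 and P2 matters.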

9.
A Vision-Based Motion Tracking Algorithm for Augmented Reality   Cited by: 6 (self: 0, others: 6)
An augmented reality system has not only the characteristics of virtual reality but also the new property of blending the virtual and the real. To combine virtual and real objects seamlessly, the relative position and orientation between the camera and the real object must be tracked dynamically in real time so that an observation model can be built and virtual objects rapidly superimposed on real ones through dynamic 3D display techniques. Most current augmented reality systems, however, register only static objects; registration and tracking of moving objects has rarely been addressed. The proposed algorithm estimates the motion parameters of a moving object in the real environment from the optical flow of marker points, determines the relative position and orientation between the camera and the moving object from the principles of perspective projection and the motion properties of rigid bodies, and thereby achieves moving-target tracking and registration for augmented reality. The algorithm is simple in structure, runs in real time, and is easy to implement, extending the range of applications of augmented reality systems.

10.
Linear or 1D cameras are used in several areas such as industrial inspection and satellite imagery. Since 1D cameras consist of a linear sensor, a motion (usually perpendicular to the sensor orientation) is performed in order to acquire a full image. In this paper, we present a novel linear method to estimate the intrinsic and extrinsic parameters of a 1D camera using a planar object. As opposed to traditional calibration schemes based on 3D-2D correspondences of landmarks, our method uses homographies induced by the images of a planar object. The proposed algorithm is linear and simple, and it produces good results, as shown by our experiments.

11.
Objective: The extrinsic parameters of an RGB-D camera transform point clouds from the camera coordinate system to the world coordinate system, with applications in 3D scene reconstruction, 3D measurement, robotics, object detection, and other fields. Conventional methods calibrate the extrinsics of the RGB-D color camera using a calibration object (such as a checkerboard) but do not exploit the depth information, which makes the procedure hard to simplify; making full use of the depth data greatly streamlines extrinsic calibration. Color-image-based methods also calibrate the color sensor, whereas most RGB-D applications rely on the depth sensor; a depth-based method can calibrate the depth sensor's pose directly. Method: The depth map is first converted to a 3D point cloud in camera coordinates; planes are detected automatically in the point cloud using MLESAC; candidate planes are enumerated and filtered using the constraints between the ground plane and the world coordinate system until the ground plane is found; the spatial relation between the ground plane and the camera frame then yields the extrinsic parameters, i.e., the transformation matrix from points in camera coordinates to points in world coordinates. Results: Taking checkerboard-based extrinsic calibration as the reference and processing RGB-D streams captured with a PrimeSense camera, the average roll error was -1.14°, the average pitch error 4.57°, and the average camera-height error 3.96 cm. Conclusion: By detecting the ground plane automatically, the method estimates the extrinsic parameters accurately and with a high degree of automation; the algorithm is also highly parallel and, after parallel optimization, runs in real time, making it applicable to automatic robot pose estimation.
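A sketch of the final step described in the method above: once the ground plane is found in camera coordinates, roll, pitch, and camera height follow directly from its equation. The axis and angle conventions below (x right, y down, z forward; pitch about x, roll about z) are assumptions for illustration, not necessarily the paper's:

```python
import numpy as np

def extrinsics_from_ground_plane(plane):
    # Roll, pitch, and height of the camera from the ground plane
    # n . X + d = 0 given in the camera frame (x right, y down, z forward),
    # where n is the upward unit normal. Conventions assumed here:
    # pitch about the x axis (camera tilted down), roll about the z axis.
    n, d = np.asarray(plane[:3], float), float(plane[3])
    n = n / np.linalg.norm(n)
    pitch = np.arcsin(n[2])
    roll = np.arctan2(n[0], -n[1])
    height = abs(d)  # distance from the camera center to the plane
    return roll, pitch, height

# Synthetic check: camera 1.5 m above the ground, pitched down 10 deg,
# rolled 2 deg; build the plane this pose induces, then recover the pose.
roll_t, pitch_t, h_t = np.deg2rad(2.0), np.deg2rad(10.0), 1.5
n = np.array([np.sin(roll_t) * np.cos(pitch_t),
              -np.cos(roll_t) * np.cos(pitch_t),
              np.sin(pitch_t)])
plane = np.append(n, h_t)  # camera center lies h_t above the plane along n
roll, pitch, height = extrinsics_from_ground_plane(plane)
print(np.rad2deg(roll), np.rad2deg(pitch), height)  # ~2 deg, ~10 deg, 1.5 m
```

In the paper, the plane itself comes from robust fitting over the depth point cloud; this sketch starts where that fitting ends.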

12.
This paper addresses the problem of calibrating camera parameters using variational methods. One problem addressed is the severe lens distortion in low-cost cameras. For many computer vision algorithms that aim to reconstruct reliable representations of 3D scenes, camera distortion, if not accounted for, leads to inaccurate 3D reconstructions and geometrical measurements. A second problem is color calibration: variations in camera responses result in different color measurements and affect the algorithms that depend on these measurements. We also address extrinsic camera calibration, which estimates the relative poses and orientations of the multiple cameras in the system, and intrinsic camera calibration, which estimates the focal lengths and skew parameters of the cameras. To address these calibration problems, we present multiview stereo techniques based on variational methods that utilize partial and ordinary differential equations. Our approach can also be considered a coordinated refinement of the camera calibration parameters. To reduce the computational complexity of such algorithms, we utilize prior knowledge of the calibration object, making a piecewise-smooth surface assumption, and evolve the pose, orientation, and scale parameters of such a 3D model object without requiring 2D feature extraction from the camera views. We derive the evolution equations for the distortion coefficients, the color calibration parameters, and the extrinsic and intrinsic parameters of the cameras, and present experimental results.
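A sketch of the even-order radial distortion model commonly used for the severe lens distortion mentioned above, together with a simple fixed-point undistortion; the coefficients are illustrative, and the paper's variational formulation is not reproduced here:

```python
import numpy as np

def distort(x, y, k1, k2=0.0):
    # Even-order radial model, x_d = x * (1 + k1 r^2 + k2 r^4),
    # applied in normalized (pre-intrinsics) image coordinates.
    r2 = x * x + y * y
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * s, y * s

def undistort(xd, yd, k1, k2=0.0, iters=10):
    # Fixed-point iteration: re-evaluate the distortion factor at the
    # current estimate and divide it out; converges for mild distortion.
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        s = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / s, yd / s
    return x, y

xd, yd = distort(0.3, -0.2, k1=-0.25)  # barrel distortion (k1 < 0)
x, y = undistort(xd, yd, k1=-0.25)
print(x, y)  # recovers ~(0.3, -0.2)
```

Calibration estimates k1 (and k2) from images; once known, applying the inverse mapping as above straightens the distorted geometry before 3D reconstruction.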

13.
This paper presents a novel approach for image-based visual servoing (IBVS) of a robotic system that handles constraints in the case where the camera intrinsic and extrinsic parameters are uncalibrated and the position parameters of the features in 3-D space are unknown. Based on the model predictive control method, the robotic system's input and output constraints, such as visibility constraints and actuator limitations, can be taken into account explicitly. Whereas most constrained IBVS controllers use the traditional image Jacobian matrix, the proposed IBVS scheme is developed using the depth-independent interaction matrix. The unknown parameters appear linearly in the prediction model and can be estimated effectively by the identification algorithm. In addition, the model predictive control determines the optimal control input and updates the estimated parameters together with the prediction model. The proposed approach can simultaneously handle system constraints, unknown camera parameters, and depth parameters, and both visual positioning and tracking tasks achieve the desired performance. Simulation results based on a 2-DOF planar robot manipulator, for both the eye-in-hand and eye-to-hand camera configurations, demonstrate the effectiveness of the proposed method.

14.
Conventional techniques for measuring the 3D point-cloud data of dental restorations struggle to meet accuracy requirements. This paper therefore proposes a 3D coordinate measurement method based on line structured light. During camera calibration, the light-plane equation is computed by the least-squares method, and translational and rotational scans acquire the 3D data of the object surface, avoiding the need to solve for the camera's intrinsic and extrinsic parameters. Experimental results show that the reconstructed 3D model meets the requirements of high-accuracy close-range 3D measurement.
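A sketch of the least-squares light-plane fit mentioned in the abstract, assuming stripe points already expressed in 3D and the plane written as z = a x + b y + c (an assumed parameterization that requires the plane not to be parallel to the z axis):

```python
import numpy as np

def fit_light_plane(points):
    # Least-squares plane a*x + b*y + c = z through stripe points.
    pts = np.asarray(points, float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # (a, b, c)

# Noisy synthetic stripe points on the plane z = 0.5 x - 0.2 y + 3.
rng = np.random.default_rng(1)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 3.0 + rng.normal(0.0, 1e-3, 200)
a, b, c = fit_light_plane(np.column_stack([xy, z]))
print(a, b, c)  # recovers ~0.5, -0.2, 3.0
```

Once the light plane is known, intersecting each camera ray through a detected stripe pixel with this plane yields the surface point's 3D coordinates.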

15.
The mathematical model of a structured-light 3D measurement system is improved by incorporating the intrinsic and extrinsic parameters of the projector, and a new projector calibration method is adopted on this basis. A red/blue checkerboard replaces the black/white checkerboard to improve the contrast seen by the monochrome camera in dark regions. Treating the projector as an inverted camera unifies the calibration procedures for camera and projector. The camera and projector were each calibrated in a 3ds Max simulation environment, with a relative calibration error below 0.32%. Reconstruction experiments with the calibrated structured-light measurement system give a measurement error below 0.136 mm.

16.
In this paper, we introduce a method to estimate the object’s pose from multiple cameras. We focus on direct estimation of the 3D object pose from 2D image sequences. Scale-Invariant Feature Transform (SIFT) is used to extract corresponding feature points from adjacent images in the video sequence. We first demonstrate that centralized pose estimation from the collection of corresponding feature points in the 2D images from all cameras can be obtained as a solution to a generalized Sylvester’s equation. We subsequently derive a distributed solution to pose estimation from multiple cameras and show that it is equivalent to the solution of the centralized pose estimation based on Sylvester’s equation. Specifically, we rely on collaboration among the multiple cameras to provide an iterative refinement of the independent solution to pose estimation obtained for each camera based on Sylvester’s equation. The proposed approach to pose estimation from multiple cameras relies on all of the information available from all cameras to obtain an estimate at each camera even when the image features are not visible to some of the cameras. The resulting pose estimation technique is therefore robust to occlusion and sensor errors from specific camera views. Moreover, the proposed approach does not require matching feature points among images from different camera views nor does it demand reconstruction of 3D points. Furthermore, the computational complexity of the proposed solution grows linearly with the number of cameras. Finally, computer simulation experiments demonstrate the accuracy and speed of our approach to pose estimation from multiple cameras.
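The centralized solution above is posed as a generalized Sylvester's equation. As a toy illustration of solving a standard Sylvester system with SciPy, here is the equation A X + X B = Q; the matrices are arbitrary illustrative data, not quantities derived from image features:

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Build a Sylvester system A X + X B = Q with a known solution X_true,
# then recover it. A unique solution exists when A and -B share no
# eigenvalues, which holds almost surely for random matrices.
rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))
X_true = rng.normal(size=(3, 3))
Q = A @ X_true + X_true @ B

X = solve_sylvester(A, B, Q)   # Bartels-Stewart solver from SciPy
residual = np.linalg.norm(A @ X + X @ B - Q)
print(residual)  # near machine precision
```

The paper's distributed scheme amounts to each camera solving its own such system and iteratively exchanging refinements with its neighbors until the estimates agree with the centralized solution.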

17.
Visual Servoing Optimization with Adaptive Depth Estimation   Cited by: 1 (self: 0, others: 1)
In eye-in-hand robotic visual servoing, there is still no good solution for determining the velocity of the end-effector camera or for effectively estimating object depth. This paper adopts a general-model approach and designs the camera velocity by solving an optimal control problem. At the same time, using depth estimates at the object's initial and desired positions, an adaptive estimation algorithm is proposed that estimates the object's depth and gives its variation trend, achieving image-based positioning control. The method drives the robot from any initial position in the workspace to the desired position, achieves global asymptotic stability of the system, and requires neither a geometric model of the object nor accurate depth values. A concluding simulation example demonstrates the effectiveness of the method.

18.
A New System Model for Phase-Based 3D Profilometry and Its Calibration Method   Cited by: 4 (self: 1, others: 4)
Conventional phase-based 3D profilometry systems impose strict requirements on the relative placement of the camera and the projection device, which is hard to align precisely; system calibration is cumbersome to carry out and not very accurate. To address this, the phase-to-height mapping of the conventional phase-based measurement system is generalized to 3D space, and a new phase-based system model is established. Based on the new model, a fast calibration method is proposed. The new system imposes no strict parallelism, perpendicularity, or intersection requirements on the camera and projector, and during calibration the position of the calibration plane is likewise unconstrained. Experiments show that the system structure is easy to realize and the calibration method is simple and effective, improving both the practicality and the speed of system calibration: calibration completes within 2 minutes, and measurement accuracy is considerably better than with conventional methods.

19.
Three-dimensional (3-D) models of outdoor scenes are widely used for object recognition, navigation, mixed reality, and so on. Because such models are often made manually at high cost, automatic 3-D reconstruction has been widely investigated. In related work, a dense 3-D model is generated by using a stereo method. However, such approaches cannot use several hundred images together for dense depth estimation, because it is difficult to calibrate a large number of cameras accurately. In this paper, we propose a dense 3-D reconstruction method that first estimates the extrinsic camera parameters of a hand-held video camera and then reconstructs a dense 3-D model of the scene. In the first step, the extrinsic camera parameters are estimated by automatically tracking a small number of predefined markers of known 3-D position together with natural features. Then, several hundred dense depth maps obtained by multi-baseline stereo are combined in a voxel space. In this way, we can accurately acquire a dense 3-D model of an outdoor scene from several hundred input images captured by a hand-held video camera.

20.
Real, 1998, 4(5): 349-359
We have previously demonstrated that the performance of tracking algorithms can be improved by integrating information from multiple cues in a model-driven Bayesian reasoning framework. Here we extend our work to active vision tracking with variable camera geometry. Many existing active tracking algorithms avoid the problem of variable camera geometry by tracking view-independent features, such as corners and lines. However, the performance of algorithms based on such single features deteriorates greatly in the presence of specularities and dense clutter. We show, by integrating multiple cues and updating the camera geometry on-line, that it is possible to track a complicated object moving arbitrarily in three-dimensional (3D) space. We use a four degree-of-freedom (4-DoF) binocular camera rig to track three focus features of an industrial object whose complete model is known. The camera geometry is updated using the rig control commands and the kinematic model of the stereo head. The extrinsic parameters are further refined by interpolation from a previously sampled calibration of the head workspace. The 2D target position estimates are obtained by a combination of blob detection, edge searching, and gray-level matching, aided by projecting the model's geometrical structure using current estimates of the camera geometry. The information is represented as a probability density distribution and propagated in a Bayes net. The Bayesian reasoning performed in the 2D images is coupled through the rigid model geometry constraint in 3D space. An αβ filter is used to smooth the tracking pursuit and to predict the position of the object in the next iteration of data acquisition. The solution of the inverse kinematic problem at the predicted position is used to control the position of the stereo head. Finally, experiments show that a target undergoing arbitrary 3D motion can be successfully tracked in the presence of specularities and dense clutter.
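A minimal sketch of the αβ filter used above for smoothing and prediction, run on a synthetic 1D constant-velocity track; the gains, sample rate, and noise level are made up for illustration:

```python
import numpy as np

def alpha_beta_track(zs, dt, alpha=0.5, beta=0.1):
    # Classic alpha-beta filter: predict with a constant-velocity model,
    # then correct position (alpha) and velocity (beta) from the residual.
    x, v = zs[0], 0.0
    out = []
    for z in zs[1:]:
        x_pred = x + v * dt          # predict
        r = z - x_pred               # measurement residual
        x = x_pred + alpha * r       # correct position
        v = v + (beta / dt) * r      # correct velocity
        out.append((x, v))
    return out

# Track a 1D target moving at 2 units/s, sampled at 30 Hz with noise.
rng = np.random.default_rng(3)
dt = 1.0 / 30.0
t = np.arange(0.0, 3.0, dt)
zs = 2.0 * t + rng.normal(0.0, 0.01, len(t))
est = alpha_beta_track(zs, dt)
x_last, v_last = est[-1]
print(x_last, v_last)  # position tracks 2*t, velocity settles near 2
```

The prediction step `x + v * dt` is exactly what drives the head controller in the paper: the predicted target position is fed to the inverse kinematics before the next measurement arrives.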


Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23
