Similar Documents
20 similar documents found; search time: 109 ms
1.
This paper studies how structural information in the scene can be used to achieve stratified reconstruction of spatial objects from only two images taken with fixed camera intrinsics. The study shows that computing the homography of the plane at infinity yields an affine reconstruction, and further computing the image of the absolute conic yields a Euclidean reconstruction. A concrete experiment is given, and the results show that the algorithm is feasible.

2.
A Dense Image Matching Algorithm Based on Corner Detection
A robust automatic stereo matching algorithm is proposed. Pixels are first screened with the Sobel operator; for each edge point, the SUSAN (smallest univalue segment assimilating nucleus) method decides whether it is a corner. Similarities in gradient magnitude, gradient direction, and grey level are then computed between corners in the two images to be matched; corners with no counterpart are discarded, corner correspondences are established, and the fundamental matrix is computed. Iterating on the fundamental matrix removes mismatched points and yields a more accurate estimate. Finally, under the epipolar constraint, dynamic programming finds correspondences between all pixels on corresponding epipolar lines of the left and right images, establishing a dense pixel-level matching between the two images. Experimental results are satisfactory.
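The edge-screening step of this pipeline can be illustrated with a direct 3x3 Sobel convolution; a naive numpy sketch (a plain loop for clarity, not the paper's implementation):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude with 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()  # horizontal gradient
            gy[i, j] = (patch * ky).sum()  # vertical gradient
    return np.hypot(gx, gy)
```

Pixels whose magnitude exceeds a threshold would then be passed to the SUSAN corner test.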

3.
Metric Reconstruction from a Set of Corresponding Vanishing Lines
祝海江, 吴福朝. 《软件学报》 (Journal of Software), 2004, 15(5): 666-675
A method for metric reconstruction from a set of corresponding vanishing lines across three images is proposed. First, the homography of the plane at infinity is computed from the corresponding vanishing lines and the modulus constraint; then, using the property that this homography leaves the image of the absolute conic fixed, the camera intrinsics are solved linearly, and finally a metric reconstruction is obtained. Both simulated and real-image experiments verify the feasibility and correctness of the method.

4.
Camera Calibration Based on a Single Planar Template
A camera calibration method is proposed that only requires the camera to observe a planar template from several directions; the camera and the template may move freely, and the motion parameters need not be known. For each viewpoint, grid corners are extracted from the image; the corner correspondences between the template and the image determine a homography, one per image, from which the camera can be calibrated. The algorithm first computes a linear solution and then refines it by nonlinear optimization under the maximum-likelihood criterion; the effect of lens distortion is also modeled. Experimental results show that the algorithm is simple and easy to use.

5.
The geometric constraints of corner structures in real scenes are first discussed, and a new camera self-calibration algorithm based on these constraints is proposed. Initial values of the focal length, translation vector, and rotation matrix are computed from the corner structure in a single perspective-projection image; the intrinsic and extrinsic parameters are then optimized using the geometric structure in two images. Because only one image is used to obtain the initial parameter values, critical motion sequences that may arise during calibration are avoided, along with the calibration degeneracy they cause, which improves robustness. Refining the initial values with the structural constraints from two images further improves accuracy. Experimental results show that the algorithm is robust.

6.
A New Method for Detecting Moving Objects in Image Sequences of Dynamic Scenes
When detecting moving objects in image sequences of dynamic scenes, a difficult problem that must be solved is eliminating the global inter-frame motion caused by camera movement so that the static background and the moving objects can be separated. Targeting dynamic-scene sequences with complex backgrounds, a new method is given for discriminating the background and detecting moving objects, based on recovering the 3D positions of reference points in the scene. First, a layered motion model of image sequences and a motion segmentation method based on it are introduced. Then the 3D positions of reference points in each motion layer are computed from the estimated projection matrices; the variation of these recovered positions across frames distinguishes the layers belonging to the static background from those belonging to moving objects, segmenting the two. Finally, a detailed algorithm for detecting moving objects in dynamic-scene sequences is given. Experimental results show that the new algorithm handles sequences with multiple sets of global inter-frame motion parameters and markedly improves the effectiveness and robustness of moving-object tracking.

7.
The problem of recovering camera matrices and the 3D geometric shape of spatial objects from multiple images is studied. The algorithm of Cross et al., which reconstructs via the homography induced by the plane at infinity, is improved, and a new algorithm is proposed that requires only one point and one line to be visible in all views, removing the original requirement of four coplanar reference points.

8.
Estimating distortion parameters is an important step in camera calibration. Most existing methods first calibrate the camera parameters assuming no distortion and then estimate the distortion parameters, which complicates the estimation process. This paper instead estimates the distortion directly from a homography, giving a distortion-estimation algorithm based on a single image. First, the homography from the space plane to the image plane is estimated from image feature points near the principal point; this homography is then used to estimate the image distortion error and the distortion parameters; finally, a nonlinear optimization jointly refines the homography and the distortion parameters. Experiments on simulated data and real images verify the effectiveness of the algorithm; since only one image is needed to estimate the distortion parameters, the flexibility of camera calibration is effectively improved.
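The space-plane-to-image-plane homography used above is conventionally estimated with the direct linear transform (DLT); a minimal numpy sketch with illustrative point data (not from the paper):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (up to scale) with dst ~ H * src from >= 4 point pairs (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two rows of the system A h = 0
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # h is the right singular vector with the smallest singular value
    h = np.linalg.svd(A)[2][-1]
    H = h.reshape(3, 3)
    return H / H[2, 2]
```

With noisy points one would normalize coordinates first and refine nonlinearly, as the abstract's final optimization step does for both homography and distortion.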

9.
For the problem of segmenting multiple moving objects in intelligent surveillance systems, a multi-object segmentation algorithm based on region seeds is proposed. Background subtraction first yields a foreground image containing the moving objects; quadtree decomposition then produces a sparse matrix corresponding to the foreground image, and the distribution of values in this matrix gives seed points for the regions containing moving objects. Starting from these seeds, active contour models extract the object contours in parallel, completing the segmentation of the multiple moving objects. Experimental results show that the algorithm effectively separates multiple moving objects from the foreground, with segmentation results close to human visual judgment, and the parallel contour extraction gives the algorithm good real-time performance.

10.
To improve the accuracy and realism of capsule-endoscope observation, a method for 3D reconstruction of the gastrointestinal tract from capsule-endoscope image sequences is proposed. The SIFT algorithm is first used to extract as many corresponding feature points as possible between consecutive frames, and the 2D coordinates of each feature point on the imaging plane are computed. The eight-point algorithm then recovers the rotation matrix and translation vector of the capsule's motion, from which the relative and world 3D coordinates of each feature point are computed. Delaunay triangulation then meshes the 3D points, completing the 3D reconstruction of the scene. Experiments show a depth error below 1 mm when the camera-to-point distance is within 100 mm, and a relative error within 3% at distances up to 250 mm, indicating that the proposed algorithm is feasible.
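The eight-point algorithm named here estimates the fundamental matrix linearly from point correspondences; a normalized eight-point sketch in numpy (Hartley normalization is assumed; the paper's exact variant is not given in the abstract):

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized 8-point estimate of the fundamental matrix from Nx2 arrays."""
    def normalize(p):
        c = p.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(p - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
        return np.column_stack([p, np.ones(len(p))]) @ T.T, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # each correspondence gives one row of the linear system A f = 0
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)                 # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                           # undo normalization
    return F / np.linalg.norm(F)
```

With known intrinsics, decomposing the corresponding essential matrix yields the rotation and translation the abstract refers to.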

11.
In this paper, we present a pipeline for camera pose and trajectory estimation, and image stabilization and rectification for dense as well as wide baseline omnidirectional images. The proposed pipeline transforms a set of images taken by a single hand-held camera to a set of stabilized and rectified images augmented by the computed camera 3D trajectory and a reconstruction of feature points facilitating visual object recognition. The paper generalizes previous works on camera trajectory estimation done on perspective images to omnidirectional images and introduces a new technique for omnidirectional image rectification that is suited for recognizing people and cars in images. The performance of the pipeline is demonstrated on real image sequences acquired in urban as well as natural environments.

12.
In computerized numerical control (CNC) machine tools, it is often a time-consuming and error-prone process to verify that the actual machining setup agrees in Euclidean position with its designed three-dimensional (3D) digital model. The model mainly contains the workpiece and jigs. A mismatch between them will cause simulation to fail to precisely detect collisions. The paper presents an on-machine 3D vision system to quickly verify the similarity between the actual setup and its digital model by real and virtual image processing. In this paper, the system is proposed first. Afterwards, a simple on-machine camera calibration process is presented. This calibration process determines all the camera's parameters with respect to the machine tool's coordinate frame. An accurate camera mathematical model (or virtual camera) is derived according to the actual imaging projection. Both camera-captured real images and system-generated virtual images are compensated to make them theoretically and practically identical; the necessary mathematical equations are derived. Using the virtual image as a reference and superimposing the real image onto it, the operator can intuitively verify the Euclidean position accordance between the actual setup and its 3D digital model.

13.
Methods for Computing 3D Coordinates in Binocular Vision Measurement
In binocular vision measurement, two laser-stripe images of an object captured at the same instant are processed by feature extraction and stereo matching to obtain pairs of corresponding pixels; using the object-space-to-image-plane matrices established by camera calibration, least squares is used to compute the 3D coordinates of the object points. However, because least squares ignores the geometric meaning of the resulting overdetermined system of equations, the computed point coordinates are not accurate. A method that respects the geometry represented by the system is proposed: the object point is approximated by the midpoint of the common perpendicular of the two skew rays.
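The closing idea, approximating the object point by the midpoint of the common perpendicular of the two back-projected rays, reduces to a small closed-form computation; a numpy sketch assuming each ray is given in point-plus-direction form:

```python
import numpy as np

def midpoint_of_skew_rays(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of rays p1 + t*d1 and p2 + s*d2."""
    n = np.cross(d1, d2)               # direction of the common perpendicular
    r = p2 - p1
    # closest-point parameters from the standard skew-line formulas
    t = np.linalg.det(np.array([r, d2, n])) / n.dot(n)
    s = np.linalg.det(np.array([r, d1, n])) / n.dot(n)
    c1 = p1 + t * d1                   # closest point on ray 1
    c2 = p2 + s * d2                   # closest point on ray 2
    return (c1 + c2) / 2.0
```

In the stereo setting, p1, p2 are the camera centers and d1, d2 the back-projected viewing directions of a matched pixel pair.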

14.
Reconstructing colored targets in 3D with a Time-of-Flight (TOF) camera requires calibrating the geometric parameters of the joint CCD-TOF camera system. Building on existing calibration algorithms for color images and TOF depth images, a calibration method based on a planar checkerboard template is proposed. Color images and amplitude images of a checkerboard pattern fixed on a planar calibration template are captured at various angles; Harris corner extraction is improved; and from the conjugate relation between the checkerboard corners and their virtual image points, a camera calibration model is established and solved with the Levenberg-Marquardt algorithm in calibration experiments. The intrinsic parameters of the TOF and CCD cameras are obtained, the relative pose of the two camera coordinate frames is estimated from the pose relation between the image planes, and a final joint optimization yields the rotation matrix and translation vector between the cameras. Experimental results show that the proposed algorithm streamlines the solution process, improves calibration efficiency, and achieves high accuracy.

15.
吴庆双, 付仲良, 孟庆祥. 《计算机应用》 (Journal of Computer Applications), 2011, 31(11): 3010-3014
A new camera self-calibration method combining photogrammetry and computer vision theory is proposed. First, from matched point pairs across an image sequence, the camera fundamental matrix F is computed with the eight-point algorithm from computer vision; the matrix C is obtained from F via the Kruppa equations, and Cholesky decomposition of C yields the camera intrinsic matrix K. The computed intrinsics then serve as initial values: relative orientation and absolute orientation are performed using photogrammetric theory, and least-squares forward intersection gives the 3D space coordinates of the matched point pairs. Finally, from the 3D coordinates and image coordinates of the matches, 3D direct linear transformation and bundle adjustment solve for the camera's intrinsic and extrinsic parameters and distortion coefficients. The method does not depend on specific geometric constraints of the scene: as long as matched points exist between the images of the sequence, self-calibration can proceed, so it is widely applicable. Experimental results on simulated data and real images show that the method is computationally simple and achieves high calibration accuracy, making it a worthwhile approach to camera self-calibration.
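The Cholesky step that recovers K can be sketched directly, assuming C = K Kᵀ as in the abstract: numpy's Cholesky factor is lower triangular while K is upper triangular, so one common trick is to conjugate by the exchange matrix first (the intrinsic values below are illustrative, not from the paper):

```python
import numpy as np

def intrinsics_from_kkt(C):
    """Recover upper-triangular K with positive diagonal from C = K @ K.T."""
    J = np.fliplr(np.eye(3))           # exchange (reversal) matrix
    # J C J = (J K J)(J K J)^T, and J K J is lower triangular,
    # so the Cholesky factor of J C J is exactly J K J
    L = np.linalg.cholesky(J @ C @ J)
    K = J @ L @ J                      # undo the flip: upper triangular again
    return K / K[2, 2]                 # fix the projective scale
```

The uniqueness of the Cholesky factorization for a symmetric positive-definite matrix guarantees the recovered K is the one that generated C.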

16.
The progression in the field of stereoscopic imaging has resulted in impressive 3D videos. This technology is now used for commercial and entertainment purposes and sometimes even for medical applications. Currently, it is impossible to produce quality anaglyph video using a single camera under different motion and atmospheric conditions while preserving the corresponding depth, local colour, and structural information. The proposed study departs from previous research by introducing a single-camera method for anaglyph reconstruction that concentrates on human visual perception, whereas previous methods used dual cameras, depth sensors, or multiple views, which not only demand long acquisition times but also suffer from photometric distortion due to variation in angular alignment. The method also yields clear individual images without occlusion between views. We use an approach based on human vision to determine the corresponding depth information: the source frames are shifted slightly in opposite directions in proportion to the distance between the pupils. We integrate the colour components of the shifted frames to generate contrasting colours for each of the marginally shifted frames, and the colour-component images are then reconstructed as a cyclopean image. We show the results of our method by applying it to rapidly varying video sequences and compare its performance to other existing methods.
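The shift-and-merge idea described above can be illustrated in a few lines; a toy numpy sketch with a single frame and a fixed disparity, which is a deliberate simplification of the authors' perception-based shifting:

```python
import numpy as np

def simple_anaglyph(frame, disparity):
    """Shift one frame in opposite directions and merge colour channels:
    red from the 'left' shifted copy, green/blue from the 'right' one."""
    left = np.roll(frame, disparity, axis=1)    # left-eye view
    right = np.roll(frame, -disparity, axis=1)  # right-eye view
    out = right.copy()
    out[..., 0] = left[..., 0]                  # red channel from the left view
    return out
```

A real pipeline would derive the per-pixel shift from estimated depth rather than use one global disparity.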

17.
In this paper we address the problem of recovering 3D non-rigid structure from a sequence of images taken with a stereo pair. We have extended existing non-rigid factorization algorithms to the stereo camera case and present an algorithm to decompose the measurement matrix into the motion of the left and right cameras and the 3D shape, represented as a linear combination of basis-shapes. The added constraints in the stereo camera case are that both cameras are viewing the same structure and that the relative orientation between both cameras is fixed. Our focus in this paper is on the recovery of flexible 3D shape rather than on the correspondence problem. We propose a method to compute reliable 3D models of deformable structure from stereo images. Our experiments with real data show that improved reconstructions can be achieved using this method. The algorithm includes a non-linear optimization step that minimizes image reprojection error and imposes the correct structure on the motion matrix by choosing an appropriate parameterization. We show that 3D shape and motion estimates can be successfully disambiguated after bundle adjustment and demonstrate this on synthetic and real image sequences. While this optimization step is proposed for the stereo camera case, it can be readily applied to the case of non-rigid structure recovery using a monocular video sequence. Electronic supplementary material is available for this article and accessible to authorised users.

18.
19.
This work proposes a method of camera self-calibration with varying intrinsic parameters from a sequence of images of an unknown 3D object. The projection of two points of the 3D scene in the image planes is used with fundamental matrices to determine the projection matrices. The present approach is based on the formulation of a nonlinear cost function from a relationship between two points of the scene and their projections in the image planes. Minimizing this function enables us to estimate the intrinsic parameters of different cameras. The strength of the present approach lies in relaxing the three usual constraints of a self-calibration system (a pair of images, 3D scene, any camera): the use of a single pair of images yields fewer equations, which reduces the execution time of the program; the use of a 3D scene removes planarity constraints; and the use of any camera eliminates the requirement of constant camera parameters. Experimental results on synthetic and real data are presented to demonstrate the performance of the present approach in terms of accuracy, simplicity, stability, and convergence.

20.
In this paper, we introduce a method to estimate the object’s pose from multiple cameras. We focus on direct estimation of the 3D object pose from 2D image sequences. Scale-Invariant Feature Transform (SIFT) is used to extract corresponding feature points from adjacent images in the video sequence. We first demonstrate that centralized pose estimation from the collection of corresponding feature points in the 2D images from all cameras can be obtained as a solution to a generalized Sylvester’s equation. We subsequently derive a distributed solution to pose estimation from multiple cameras and show that it is equivalent to the solution of the centralized pose estimation based on Sylvester’s equation. Specifically, we rely on collaboration among the multiple cameras to provide an iterative refinement of the independent solution to pose estimation obtained for each camera based on Sylvester’s equation. The proposed approach to pose estimation from multiple cameras relies on all of the information available from all cameras to obtain an estimate at each camera even when the image features are not visible to some of the cameras. The resulting pose estimation technique is therefore robust to occlusion and sensor errors from specific camera views. Moreover, the proposed approach does not require matching feature points among images from different camera views nor does it demand reconstruction of 3D points. Furthermore, the computational complexity of the proposed solution grows linearly with the number of cameras. Finally, computer simulation experiments demonstrate the accuracy and speed of our approach to pose estimation from multiple cameras.
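The generalized Sylvester equation at the heart of the centralized estimator is not spelled out in the abstract, but a generic Sylvester equation AX + XB = C can always be solved by Kronecker vectorization; a numpy sketch with illustrative matrices:

```python
import numpy as np

def solve_sylvester(A, B, C):
    """Solve A X + X B = C for X via vectorization:
    (I (x) A + B^T (x) I) vec(X) = vec(C), with column-major vec."""
    n, m = C.shape
    M = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(M, C.flatten(order="F"))
    return x.reshape(n, m, order="F")
```

A unique solution exists when A and -B share no eigenvalue; for large systems one would use a Schur-based solver instead, since the Kronecker system is O(n^2 m^2) in size.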
