Similar Documents
18 similar documents found (search time: 156 ms)
1.
Determining the Homography of the Plane at Infinity from Projections of Parallel Planes   Cited by: 1 (self: 0, other: 1)
In 3-D computer vision, the homography of the plane at infinity plays an extremely important role and simplifies the solution of many vision problems. This paper discusses how to determine the homography of the plane at infinity between two viewpoints from the projections of parallel planes, and constructively proves, by algebraic methods, the following results: (1) if the scene contains one family of parallel planes, the homography of the plane at infinity between the two viewpoints can be determined by solving a quartic equation in one unknown; (2) if the scene contains two families of parallel planes, this homography can be determined linearly. Geometric interpretations and concrete algorithms are given for both results. The results are of theoretical and practical value in 3-D computer vision, particularly in camera self-calibration.
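The algebra behind result (2) can be illustrated numerically. The sketch below is a minimal illustration, not the paper's algorithm; the camera matrix, rotation, translation, and plane parameters are invented. It uses the standard plane-induced homography H = K(R − t nᵀ/d)K⁻¹ to show that the homographies of two parallel planes differ only by a rank-1 matrix, and that the infinite homography K R K⁻¹ is the limit as the plane recedes to infinity:

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    # Homography induced by the plane n·X = d between two views:
    # H = K (R - t n^T / d) K^{-1}
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
a = 0.2  # small rotation about the y-axis, plus a translation
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
t = np.array([0.5, 0.1, 0.05])
n = np.array([0.0, 0.0, 1.0])   # common normal of the two parallel planes

H1 = plane_homography(K, R, t, n, d=4.0)
H2 = plane_homography(K, R, t, n, d=7.0)
H_inf = K @ R @ np.linalg.inv(K)   # the infinite homography

# Homographies of parallel planes differ only by a rank-1 term ...
print(np.linalg.matrix_rank(H1 - H2))   # 1
# ... and the plane homography tends to H_inf as d -> infinity.
print(np.allclose(plane_homography(K, R, t, n, d=1e12), H_inf))   # True
```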

2.
This paper proposes a method for multi-viewpoint imaging of a scene. The method first builds a polygon template for each polygon in the scene; a template consists of a contour path and a set of texture strips, where a texture strip is the line segment in which a plane parallel to the image plane intersects the polygon. Since the perspective projection of a texture strip varies linearly with the viewpoint, a polygon can be rendered strip by strip from its template rather than pixel by pixel as in conventional scan conversion, which greatly increases rendering speed. Moreover, viewpoint-independent illumination and texture can be precomputed and stored in the template, so that image-based rendering techniques can be used to generate high-quality images at display time. In this method the viewpoint can be placed anywhere in 3-D space, and multi-resolution rendering is performed automatically according to the viewpoint position during scene walkthroughs.

3.
Research on a Plane-Based Algorithm for Reconstructing Building Surface Models   Cited by: 1 (self: 0, other: 1)
Exploiting the regularity of building models, this paper proposes a plane-based reconstruction algorithm that recovers the surface model of a building from a single perspective image. The algorithm consists of several sub-steps: camera calibration, extraction of the base plane, and computation of plane positions and orientations. Camera calibration, a crucial step, estimates the focal length from the single image using the geometric structures of the building scene as constraints. The position and normal of the base plane are then computed. Finally, the relations between the scene planes are specified interactively, and the position and normal of each plane are solved recursively, yielding the surface model of the building scene from the image.
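One standard single-view relation of the kind this abstract relies on (an illustrative sketch only, not the authors' exact procedure; K, the camera tilt, and the world plane are invented here): the vanishing line l of a plane determines the plane's normal in camera coordinates as n ∝ Kᵀl:

```python
import numpy as np

K = np.array([[900.0, 0, 320], [0, 900.0, 240], [0, 0, 1]])
a = 0.4  # tilt the camera about the x-axis
R = np.array([[1, 0, 0],
              [0, np.cos(a), -np.sin(a)],
              [0, np.sin(a),  np.cos(a)]])

# Vanishing points of two independent directions lying in the world plane z = 0.
v1 = K @ R @ np.array([1.0, 0.0, 0.0])
v2 = K @ R @ np.array([0.0, 1.0, 0.0])
l = np.cross(v1, v2)               # vanishing line of the plane

n_est = K.T @ l                    # plane normal in camera coordinates (up to scale)
n_est /= np.linalg.norm(n_est)
n_true = R @ np.array([0.0, 0.0, 1.0])
print(abs(n_est @ n_true))         # ≈ 1.0: estimated normal matches the true one
```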

4.
A Euclidean Reconstruction Method Based on Four Pairs of Corresponding Image Points   Cited by: 1 (self: 0, other: 1)
Camera self-calibration algorithms are usually nonlinear. To obtain a linear method, this paper proposes an approach that, within a RANSAC framework, linearly calibrates the camera from four pairs of corresponding image points and performs a robust Euclidean reconstruction of the scene. When the camera undergoes two groups of translational motions, with different camera orientations between the two groups, the Euclidean geometry of the scene can be reconstructed linearly from four pairs of corresponding points. Experiments on both simulated and real images demonstrate the feasibility of the method.

5.
A Quasi-Euclidean 3-D Reconstruction Method   Cited by: 1 (self: 1, other: 1)
This paper studies 3-D scene reconstruction without precise knowledge of the camera's intrinsic parameters. Making full use of the planar structure of architectural scenes, it assumes that the scene is composed of basic planar primitives and automatically generates the scene's topology. Although the result is, strictly speaking, still a projective reconstruction, it is very close to a Euclidean one and is therefore called a quasi-Euclidean reconstruction; it can also provide a good initial value for other optimization algorithms. Experimental results on real images confirm the algorithm, and the reconstructed models achieve good visual quality.

6.
3-D Model Reconstruction from Two Uncalibrated Views Based on Scene Geometry Constraints   Cited by: 7 (self: 1, other: 7)
This paper presents a method for reconstructing a 3-D scene model from two uncalibrated images. The method makes full use of the parallelism and orthogonality constraints that abound in man-made scenes: three mutually orthogonal families of parallel lines in each view yield three vanishing points, from which each view is calibrated. From two uncalibrated images, only a projective reconstruction can be obtained from the fundamental matrix; once each image is calibrated, the fundamental matrix can be upgraded to the essential matrix. The 3-D reconstruction then proceeds in two steps: first the camera position and motion are recovered, then the 3-D coordinates of the points are computed by triangulation. In experiments on a scene composed of several planes, images of the reconstructed 3-D model rendered from new viewpoints agree with the observed scene, and the reconstructed angle between two planes is close to its true value, showing that the algorithm is effective.
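The calibration step can be sketched as follows (a synthetic example with an invented focal length and rotation, not the authors' code): with zero skew, unit aspect ratio, and a known principal point p, any two of the three orthogonal vanishing points give the focal length via the orthogonality constraint (v₁ − p)·(v₃ − p) + f² = 0:

```python
import numpy as np

f_true, u, v = 850.0, 320.0, 240.0
K = np.array([[f_true, 0, u], [0, f_true, v], [0, 0, 1]])
a = 0.3  # rotation about the y-axis so two vanishing points are finite
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])

def vanishing_point(d):
    # Vanishing point of world direction d: v ~ K R d
    p = K @ R @ d
    return p[:2] / p[2]

v1 = vanishing_point(np.array([1.0, 0.0, 0.0]))  # two of the three mutually
v3 = vanishing_point(np.array([0.0, 0.0, 1.0]))  # orthogonal scene directions

# Orthogonality of the directions gives (v1 - p)·(v3 - p) + f^2 = 0,
# so the focal length follows directly:
p = np.array([u, v])
f = np.sqrt(-np.dot(v1 - p, v3 - p))
print(f)   # ≈ 850
```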

7.
Obtaining a 3-D scene model from 2-D images is an important research topic in computer vision and virtual reality. With simple user interaction, this paper uses the homography of a planar scene to automatically match corners between two widely separated images; these matches serve as initial points for estimating the fundamental matrix with the robust RANSAC algorithm. An affine reconstruction is computed from this result, and a Euclidean reconstruction is then obtained directly from the given constraints under a simplified camera model. Experimental results on real data demonstrate the effectiveness of the algorithm.
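The planar homography used to guide the matching can be estimated from four or more correspondences by the standard DLT (a sketch with synthetic data; the paper's actual pipeline adds user interaction and RANSAC on top of this):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H from src/dst arrays of shape (N, 2), N >= 4, via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector spans the null space of A.
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic ground-truth homography and exact correspondences.
H_true = np.array([[1.1, 0.02, 5.0], [-0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40]], float)
h = np.column_stack([src, np.ones(len(src))]) @ H_true.T
dst = h[:, :2] / h[:, 2:]

H = homography_dlt(src, dst)
print(np.allclose(H, H_true, atol=1e-6))   # True
```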

8.
盛斌  吴恩华 《软件学报》2008,19(7):1806-1816
This paper first derives and summarizes how the pixel depth field transforms under 3-D image transformations, and proposes a pixel visibility determination method based on the depth field and the epipolar constraint. Building on these results, it proposes a depth-image-based modeling and rendering (IBMR) technique called virtual plane mapping, which can render the scene from an arbitrary viewpoint in image space. At rendering time, several virtual planes are first set up in the scene according to the viewing direction, and the pixels of the source depth images are transferred onto the virtual planes; through an intermediate transformation of the pixels on each virtual plane, the virtual planes are converted into planar textures, and the virtual planes are then stitched together so that the view is synthesized by planar texture mapping. The new method can also quickly build, on the inner side of the depth images, a panorama for the current viewpoint, enabling real-time viewpoint roaming. It offers a large space of viewpoint motion and low storage requirements, can exploit the texture-mapping capability of graphics hardware, and reproduces 3-D surface detail and parallax effects, overcoming the limitations of previous similar algorithms.
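The basic depth-pixel transfer underlying any such depth-image-based rendering method can be sketched as follows (a generic 3-D warping step with invented camera parameters, not the virtual-plane mapping itself): back-project a reference pixel using its depth, then re-project into the new view:

```python
import numpy as np

K = np.array([[500.0, 0, 160], [0, 500.0, 120], [0, 0, 1]])
Kinv = np.linalg.inv(K)

def warp_pixel(x, y, depth, R, t):
    """Re-project a reference-view pixel with known depth into a new view."""
    X = depth * (Kinv @ np.array([x, y, 1.0]))   # back-project into 3-D
    p = K @ (R @ X + t)                          # project into the new view
    return p[:2] / p[2], p[2]                    # new pixel and its new depth

a = 0.1  # invented relative pose between the two views
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
t = np.array([0.2, 0.0, -0.1])

(u, v), z_new = warp_pixel(200.0, 130.0, depth=3.0, R=R, t=t)
print(u, v, z_new)
```

Warping the result back with the inverse pose (Rᵀ, −Rᵀt) recovers the original pixel and depth, which is a convenient sanity check.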

9.
Feature Matching Based on Affine-Invariant Corners   Cited by: 2 (self: 0, other: 2)
Matching different images of the same scene is a fundamental problem in computer vision, and feature matching is a key step in applications such as 3-D reconstruction, object recognition and classification, image registration, and camera self-calibration. To address 3-D scene reconstruction, this paper improves on the shortcomings of current feature matching and proposes a feature matching algorithm based on affine-invariant corners. The method uses corners as the features for matching and removes the effects of image scaling, distortion, rotation, and translation through affine-invariant processing. Experiments show that the algorithm has good matching performance and can match features across image pairs with large differences.

10.
Plane-Based Warping   Cited by: 5 (self: 0, other: 5)
张严辞  吴恩华 《软件学报》2002,13(7):1242-1249
This paper presents a plane-based inverse warping algorithm that generates images at arbitrary new viewpoints from several reference images. First, 3-D planes are reconstructed from the depth information of the reference images; correspondences between these reconstructed planes are then found, and their sampling densities over the same part of each spatial plane are compared to select the best-sampled one. When generating an image at a new viewpoint, visibility is first determined for the best-sampled reconstructed planes, which are then projected into the new view; from this, the depth of each point in the target image is obtained, and each target pixel is inverse-warped into the corresponding reference image to fetch its color. Meanwhile, for reference-image pixels that cannot be reconstructed into planes, a forward warping method is used to …

11.
This paper proposes a new method for self-calibrating a set of stationary non-rotating zooming cameras. This is a realistic configuration, usually encountered in surveillance systems, in which each zooming camera is physically attached to a static structure (wall, ceiling, robot, or tripod). In particular, a linear yet effective method is presented for recovering the affine structure of the observed scene from two or more such stationary zooming cameras. The proposed method relies solely on point correspondences across images, and no knowledge about the scene is required. Our method exploits the mostly translational displacement of the so-called principal plane of each zooming camera to estimate the location of the plane at infinity. The principal plane of a camera, at any given setting of its zoom, is encoded in its corresponding perspective projection matrix, from which it can easily be extracted. As a displacement of the principal plane of a camera under the effect of zooming allows the identification of a pair of parallel planes, each zooming camera can be used to locate a line on the plane at infinity. Hence, two or more such zooming cameras in general position allow an estimate of the plane at infinity to be obtained, making it possible, under the assumption of zero skew and/or known aspect ratio, to linearly compute the cameras' parameters. Finally, the parameters of the cameras and the coordinates of the plane at infinity are refined through a nonlinear least-squares optimization procedure. The results of our extensive experiments using both simulated and real data are also reported in this paper.
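The key observation, that the principal plane depends only on the camera's pose and not on its intrinsics, so a zoom-induced shift of the optical centre yields a pair of parallel planes, is easy to verify numerically (all values invented for illustration):

```python
import numpy as np

def projection_matrix(f, R, C):
    # P = K R [I | -C] for a camera with focal length f at centre C.
    K = np.array([[f, 0, 320], [0, f, 240], [0, 0, 1.0]])
    return K @ R @ np.hstack([np.eye(3), -C.reshape(3, 1)])

def principal_plane(P):
    """The principal plane is the third row of P: the plane through the
    camera centre parallel to the image plane."""
    return P[2]

a = 0.3
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])

# Zooming changes f and may shift the optical centre slightly along the
# optical axis; the rotation stays fixed for a stationary camera.
axis = R.T @ np.array([0.0, 0.0, 1.0])
p1 = principal_plane(projection_matrix(800.0, R, C=np.zeros(3)))
p2 = principal_plane(projection_matrix(1600.0, R, C=0.02 * axis))

# The two principal planes share the same normal (first three entries),
# i.e. they are parallel, and their intersection is a line on the plane
# at infinity — the constraint the method exploits.
print(np.allclose(np.cross(p1[:3], p2[:3]), 0))   # True
```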

12.
Single View Metrology   Cited by: 19 (self: 0, other: 19)
We describe how 3D affine measurements may be computed from a single perspective view of a scene given only minimal geometric information determined from the image. This minimal information is typically the vanishing line of a reference plane and a vanishing point for a direction not parallel to the plane. It is shown that affine scene structure may then be determined from the image, without knowledge of the camera's internal calibration (e.g. focal length) or of the explicit relation between camera and world (pose). In particular, we show how to (i) compute the distance between planes parallel to the reference plane (up to a common scale factor); (ii) compute area and length ratios on any plane parallel to the reference plane; (iii) determine the camera's location. Simple geometric derivations are given for these results. We also develop an algebraic representation which unifies the three types of measurement and, amongst other advantages, permits a first-order error propagation analysis to be performed, associating an uncertainty with each measurement. We demonstrate the technique for a variety of applications, including height measurements in forensic images and 3D graphical modelling from single images.
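Measurement (i) can be sketched with a synthetic calibrated camera (all numbers invented). The quantity |b × t| / (|l·b| |v × t|), with b the base point on the reference plane, t the point above it, l the plane's vanishing line, and v the vertical vanishing point, is proportional to the height of t above the plane; one reference height fixes the unknown scale:

```python
import numpy as np

# A synthetic view of the ground plane z = 0 (world units: metres).
C = np.array([0.0, -10.0, 5.0])                  # camera centre
fwd = -C / np.linalg.norm(C)                     # look at the origin
right = np.cross(fwd, np.array([0.0, 0.0, 1.0]))
right /= np.linalg.norm(right)
down = np.cross(fwd, right)
R = np.vstack([right, down, fwd])                # world -> camera rotation
K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
P = K @ R @ np.hstack([np.eye(3), -C.reshape(3, 1)])

def proj(X):
    p = P @ np.append(X, 1.0)
    return p / p[2]

# Vanishing point of the vertical direction and vanishing line of the plane.
v = K @ R @ np.array([0.0, 0.0, 1.0])
l = np.cross(K @ R @ np.array([1.0, 0.0, 0.0]), K @ R @ np.array([0.0, 1.0, 0.0]))

def height_ratio(b, t):
    # |b x t| / (|l·b| |v x t|) is proportional to the height of t above
    # the reference plane (Criminisi, Reid & Zisserman).
    return np.linalg.norm(np.cross(b, t)) / (abs(l @ b) * np.linalg.norm(np.cross(v, t)))

# Calibrate the overall scale with a reference object of known height 2.0 ...
r_ref = height_ratio(proj([1.0, 2.0, 0.0]), proj([1.0, 2.0, 2.0]))
# ... then measure an unknown height (true value 1.2).
r = height_ratio(proj([-2.0, 1.0, 0.0]), proj([-2.0, 1.0, 1.2]))
print(2.0 * r / r_ref)   # ≈ 1.2
```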

13.
We present an improved algorithm for two-image camera self-calibration and Euclidean structure recovery, where the effective focal lengths of both cameras are assumed to be the only unknown intrinsic parameters. By using the absolute quadric, it is shown that the effective focal lengths can be computed linearly from two perspective images without imposing scene or motion constraints. Moreover, a quadratic equation derived from the absolute quadric is proposed for solving the parameters of the plane at infinity from two images, which upgrades a projective reconstruction to a Euclidean reconstruction.

14.
A method is described to recover the three-dimensional affine structure of a scene consisting of at least five points identified in two perspective views with a relative object-camera translation in between. When compared to the results for arbitrary stereo views, a more detailed reconstruction is possible using less information. The method presented only assumes that the two images are obtained by identical cameras, but no knowledge about the intrinsic parameters of the camera(s) or about the performed translation is assumed. By the same method, affine 3D reconstruction from a single view can be achieved for parallel structures. In that case, four points suffice for affine reconstruction.
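The core fact the method exploits can be demonstrated in a few lines (synthetic camera and points, invented here): under pure translation between identical cameras, each point's image displacement is inversely proportional to its depth, so relative (affine) depths follow directly from the disparities:

```python
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
t = np.array([0.3, 0.05, 0.0])   # translation parallel to the image plane

def project(X, shift=np.zeros(3)):
    # Identity rotation: the second camera sits at t, so X_cam2 = X - t.
    p = K @ (X + shift)
    return p[:2] / p[2]

pts = np.array([[0.1, 0.2, 2.0], [-0.4, 0.1, 3.5], [0.3, -0.2, 5.0]])
disp = np.array([np.linalg.norm(project(X, -t) - project(X)) for X in pts])

# Disparity magnitude scales like 1/Z, so disparity ratios give depth ratios.
rel = disp[0] / disp          # equals Z_i / Z_0
print(rel * pts[0, 2])        # ≈ [2.0, 3.5, 5.0]
```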

15.
We present an algorithm to model a 3D workspace and to understand a test scene for mobile-robot navigation or human-computer interaction. This is done by a line-based modeling and recognition algorithm. Line-based recognition using 3D lines has been tried by many researchers; however, its reliability still needs improvement owing to the ambiguity of 3D line features extracted from the original images. To improve the outcome, we first find real planes from the given 3D lines and then run the recognition process. The methods we use are principal component analysis (PCA), plane sweep, occlusion query, and iterative closest point (ICP). During the implementation we also use 3D map information for localization. We apply this algorithm to real test-scene images and find that the result can identify doors and walls in indoor environments with better efficiency.
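The PCA plane-fitting step can be sketched as follows (synthetic noisy data; the normal of the best-fit plane is the direction of smallest variance of the point cloud):

```python
import numpy as np

def fit_plane_pca(points):
    """Fit a plane to 3-D points: the normal is the singular direction of
    the centred data with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    return centroid, Vt[-1]

rng = np.random.default_rng(0)
# Noisy samples from the plane z = 0.2 x - 0.5 y + 3.
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.2 * xy[:, 0] - 0.5 * xy[:, 1] + 3 + 0.001 * rng.standard_normal(200)
pts = np.column_stack([xy, z])

centroid, n = fit_plane_pca(pts)
n_true = np.array([0.2, -0.5, -1.0])
n_true /= np.linalg.norm(n_true)
print(abs(n @ n_true))   # ≈ 1.0
```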

16.
The article describes a reconstruction pipeline that generates piecewise-planar models of man-made environments using two calibrated views. The 3D space is sampled by a set of virtual cut planes that intersect the baseline of the stereo rig and implicitly define possible pixel correspondences across views. The likelihood of these correspondences being true matches is measured using signal symmetry analysis [1], which makes it possible to obtain profile contours of the 3D scene that become lines whenever the virtual cut planes intersect planar surfaces. The detection and estimation of these line cuts are formulated as a global optimization problem over the symmetry matching cost, and pairs of reconstructed lines are used to generate plane hypotheses that serve as input to PEARL clustering [2]. The PEARL algorithm alternates between a discrete optimization step, which merges planar surface hypotheses and discards detections with poor support, and a continuous optimization step, which refines the plane poses taking into account surface slant. The pipeline outputs an accurate semi-dense piecewise-planar reconstruction of the 3D scene. In addition, the input images can be segmented into piecewise-planar regions using a standard labeling formulation for assigning pixels to plane detections. Extensive experiments with both indoor and outdoor stereo pairs show significant improvements over state-of-the-art methods with respect to accuracy and robustness.
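Generating a plane hypothesis from a pair of reconstructed lines can be sketched as follows (an illustrative helper, not the paper's implementation): two coplanar, non-parallel 3-D lines determine the plane whose normal is the cross product of their directions:

```python
import numpy as np

def plane_from_two_lines(p1, d1, p2, d2):
    """Plane hypothesis from two coplanar, non-parallel 3-D lines, each
    given as a point and a direction; returns (unit normal, offset)."""
    n = np.cross(d1, d2)
    n /= np.linalg.norm(n)
    return n, n @ p1          # plane: n·X = offset

# Two lines lying in the plane z = 1.
n, dist = plane_from_two_lines(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]),
                               np.array([0.0, 2.0, 1.0]), np.array([0.0, 1.0, 0.0]))
print(n, dist)   # normal ±(0, 0, 1), offset ±1
```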

17.
Sonar is the most common underwater imaging modality, and high-resolution, high-data-rate 2-D video sonar systems have emerged in recent years. As with visually guided terrestrial robot navigation and target-based positioning, estimating 3-D motion by tracking features in recorded 2-D sonar images is a highly desirable capability for submersible platforms. Additionally, theoretical results dealing with robustness and multiplicity of solutions constitute important fundamental findings, given the nature of sonar data: high noise levels, narrow field-of-view coverage, scarcity of robust features, and incorrect matches. This paper explores the inherent ambiguities of interpreting 3-D motion and scene structure from 2-D forward-scan sonar image sequences. Analyzing the sonar image motion transformation model, which depends on the affine components of the projective transformation (or homography) between two views of a plane, we show that two interpretations are commonly inferred. The true and spurious planes form mirror images relative to the zero-elevation plane of the sonar reference frame. Even under pure rotation or pure translation, a spurious motion exists comprising both translational and rotational components. In some cases the two solutions share certain motion components, where the imaged surface becomes parallel to a plane defined by two of the sonar coordinate axes. A unique solution exists only under the very special condition in which the sonar motion aligns the imaged plane with the zero-elevation plane. We also derive the relationship between the two interpretations, thus allowing closed-form computation of both solutions.
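The source of the ambiguity is easy to see in code (a minimal model of a forward-scan sonar that measures only range and azimuth; the point coordinates are invented): a point and its mirror image about the zero-elevation plane produce identical measurements:

```python
import numpy as np

def sonar_image(X):
    """2-D forward-scan sonar measurement of a 3-D point: range and
    azimuth; the elevation angle of the point is lost."""
    x, y, z = X
    return np.array([np.linalg.norm(X), np.arctan2(y, x)])

X = np.array([2.0, 1.5, 0.8])
X_mirror = X * np.array([1, 1, -1])   # reflect about the zero-elevation plane

# Both points map to the identical sonar pixel — the two-fold
# scene/motion ambiguity analysed in the paper.
print(np.allclose(sonar_image(X), sonar_image(X_mirror)))   # True
```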
