Similar Literature
20 similar documents found (search time: 46 ms)
1.
Traditional rotational motion parameter estimation relies on two-frame image alignment; this paper proposes a multi-frame motion parameter estimation method based on multi-frame subspace constraints. It is proved that when the camera's intrinsic parameters are fixed, the set of multi-frame motion parameters can be embedded in a low-dimensional linear subspace. Singular value decomposition is used to reduce the rank of the linear subspace, and least-squares techniques solve for the motion parameters of all frames. The method does not require recovering any 3D information; because multi-frame estimation imposes more constraints than two-frame estimation, it achieves more accurate image alignment. The method can also estimate parameters from small images.
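The SVD step described in this abstract, embedding per-frame motion parameters in a low-dimensional subspace and refitting by least squares, can be sketched numerically. This is a minimal illustration with synthetic data; the matrix sizes and the rank of 3 are hypothetical choices, not values from the paper:

```python
import numpy as np

def project_to_subspace(M, rank):
    """Project stacked per-frame motion-parameter vectors (rows of M)
    onto their best rank-`rank` linear subspace via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[rank:] = 0.0                      # drop the small singular values
    return (U * s) @ Vt

# Hypothetical example: 10 frames, 8 motion parameters per frame,
# generated so the parameter vectors truly lie in a 3-D subspace.
rng = np.random.default_rng(0)
basis = rng.standard_normal((3, 8))
coeffs = rng.standard_normal((10, 3))
M_clean = coeffs @ basis
M_noisy = M_clean + 1e-3 * rng.standard_normal(M_clean.shape)

M_denoised = project_to_subspace(M_noisy, rank=3)
```

Because the projection suppresses noise components outside the subspace, the denoised parameter matrix is closer to the true one than the noisy measurements are.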

2.
When a rigid scene is imaged by a moving camera, the set of all displacements of all points across multiple frames often resides in a low-dimensional linear subspace. Linear subspace constraints have been used successfully in the past for recovering 3D structure and 3D motion information from multiple frames (e.g., by using the factorization method of Tomasi and Kanade (1992, International Journal of Computer Vision, 9:137–154)). These methods assume that the 2D correspondences have been precomputed. However, correspondence estimation is a fundamental problem in motion analysis. In this paper we show how the multi-frame subspace constraints can be used for constraining the 2D correspondence estimation process itself. We show that the multi-frame subspace constraints are valid not only for affine cameras, but also for a variety of imaging models, scene models, and motion models. The multi-frame subspace constraints are first translated from constraints on correspondences to constraints directly on image measurements (e.g., image brightness quantities). These brightness-based subspace constraints are then used for estimating the correspondences, by requiring that all corresponding points across all video frames reside in the appropriate low-dimensional linear subspace. The multi-frame subspace constraints are geometrically meaningful, and are not violated at depth discontinuities, nor when the camera motion changes abruptly. These constraints can therefore replace heuristic constraints commonly used in optical-flow estimation, such as spatial or temporal smoothness.
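The core claim that the displacements of all points across all frames span a low-dimensional subspace is easy to verify numerically for an affine camera. A synthetic sketch (all dimensions are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: P scene points viewed by F affine cameras.
P, F = 50, 12
X = rng.standard_normal((3, P))                      # 3-D points
A = [rng.standard_normal((2, 3)) for _ in range(F)]  # affine camera matrices
t = [rng.standard_normal((2, 1)) for _ in range(F)]  # image-plane translations

proj = [A[f] @ X + t[f] for f in range(F)]           # 2 x P image coordinates

# Displacements of every point from frame 0 to frame f, stacked into a
# 2F x P matrix: rows (2f, 2f+1) hold the x/y displacements in frame f.
D = np.vstack([proj[f] - proj[0] for f in range(F)])

# D = B X + c 1^T, so despite its 2F x P entries its rank is at most 4.
rank = np.linalg.matrix_rank(D, tol=1e-9)
```

The rank bound (3 from the point structure plus 1 from the translations) is what makes the subspace usable as a constraint on correspondence estimation.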

3.
The image motion of a planar surface between two camera views is captured by a homography (a 2D projective transformation). The homography depends on the intrinsic and extrinsic camera parameters, as well as on the 3D plane parameters. While camera parameters vary across different views, the plane geometry remains the same. Based on this fact, we derive linear subspace constraints on the relative homographies of multiple (⩾ 2) planes across multiple views. The paper has three main contributions: 1) We show that the collection of all relative homographies (homologies) of a pair of planes across multiple views spans a 4-dimensional linear subspace. 2) We show how this constraint can be extended to the case of multiple planes across multiple views. 3) We show that, for some restricted cases of camera motion, linear subspace constraints apply also to the set of homographies of a single plane across multiple views. All the results derived are true for uncalibrated cameras. The possible utility of these multiview constraints for improving homography estimation and for detecting nonrigid motions is also discussed.

4.
Dynamic analysis of video sequences often relies on the segmentation of the sequence into regions of consistent motions. Approaching this problem requires a definition of which motions are regarded as consistent. Common approaches to motion segmentation usually group together points or image regions that have the same motion between successive frames (where the same motion can be 2D, 3D, or non-rigid). In this paper we define a new type of motion consistency, which is based on temporal consistency of behaviors across multiple frames in the video sequence. Our definition of consistent “temporal behavior” is expressed in terms of multi-frame linear subspace constraints. This definition applies to 2D, 3D, and some non-rigid motions without requiring prior model selection. We further show that our definition of motion consistency extends to data with directional uncertainty, thus leading to a dense segmentation of the entire image. Such segmentation is obtained by applying the new motion consistency constraints directly to covariance-weighted image brightness measurements. This is done without requiring prior correspondence estimation or feature tracking.

5.
We approach mosaicing as a camera tracking problem within a known parameterized surface. From a video of a camera moving within a surface, we compute a mosaic representing the texture of that surface, flattened onto a planar image. Our approach works by defining a warp between images as a function of surface geometry and camera pose. Globally optimizing this warp to maximize alignment across all frames determines the camera trajectory, and the corresponding flattened mosaic image. In contrast to previous mosaicing methods which assume planar or distant scenes, or controlled camera motion, our approach enables mosaicing in cases where the camera moves unpredictably through proximal surfaces, such as in medical endoscopy applications.

6.
A Multi-Frame Structure-from-Motion Algorithm under Perspective Projection (total citations: 2; self-citations: 2; other: 0)
We present a fast, robust algorithm for multi-frame structure from motion from point features which works for general motion and large perspective effects. The algorithm is for point features but easily extends to a direct method based on image intensities. Experiments on synthetic and real sequences show that the algorithm gives results nearly as accurate as the maximum likelihood estimate in a couple of seconds on an IRIS 10000. The results are significantly better than those of an optimal two-image estimate. When the camera projection is close to scaled orthographic, the accuracy is comparable to that of the Tomasi/Kanade algorithm, and the algorithms are comparably fast. The algorithm incorporates a quantitative theoretical analysis of the bas-relief ambiguity and exemplifies how such an analysis can be exploited to improve reconstruction. Also, we demonstrate a structure-from-motion algorithm for partially calibrated cameras, with unknown focal length varying from image to image. Unlike the projective approach, this algorithm fully exploits the partial knowledge of the calibration. It is given by a simple modification of our algorithm for calibrated sequences and is insensitive to errors in calibrating the camera center. Theoretically, we show that unknown focal-length variations strengthen the effects of the bas-relief ambiguity. This paper includes extensive experimental studies of two-frame reconstruction and the Tomasi/Kanade approach in comparison to our algorithm. We find that two-frame algorithms are surprisingly robust and accurate, despite some problems with local minima. We demonstrate experimentally that a nearly optimal two-frame reconstruction can be computed quickly, by a minimization in the motion parameters alone. Lastly, we show that a well-known problem with the Tomasi/Kanade algorithm is often not a significant one.

7.
Monitoring of large sites requires coordination between multiple cameras, which in turn requires methods for relating events between distributed cameras. This paper tackles the problem of automatic external calibration of multiple cameras in an extended scene, that is, full recovery of their 3D relative positions and orientations. Because the cameras are placed far apart, brightness or proximity constraints cannot be used to match static features, so we instead apply planar geometric constraints to moving objects tracked throughout the scene. By robustly matching and fitting tracked objects to a planar model, we align the scene's ground plane across multiple views and decompose the planar alignment matrix to recover the 3D relative camera and ground plane positions. We demonstrate this technique both in a controlled lab setting, where we test the effects of errors in the intrinsic camera parameters, and in an uncontrolled outdoor setting. In the latter, we do not assume synchronized cameras and we show that enforcing geometric constraints enables us to align the tracking data in time. In spite of noise in the intrinsic camera parameters and in the image data, the system successfully transforms multiple views of the scene's ground plane to an overhead view and recovers the relative 3D camera and ground plane positions.
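Aligning the ground plane across views rests on estimating a homography from matched positions. A minimal sketch of the standard DLT estimator (not the paper's full robust pipeline; the test homography and point set are made up):

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (N x 2 arrays, N >= 4)
    with the standard DLT: build the 2N x 9 system A h = 0 and take the
    right singular vector of the smallest singular value."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                   # fix the scale ambiguity

# Hypothetical check: project points through a known homography, recover it.
H_true = np.array([[1.1, 0.02, 5.0],
                   [-0.03, 0.95, -2.0],
                   [1e-4, 2e-4, 1.0]])
rng = np.random.default_rng(2)
src = rng.uniform(0, 100, size=(12, 2))
p = (H_true @ np.hstack([src, np.ones((12, 1))]).T).T
dst = p[:, :2] / p[:, 2:3]

H_est = fit_homography(src, dst)
```

In practice the estimator is wrapped in a robust loop (e.g. RANSAC) so that mismatched tracks do not corrupt the alignment.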

8.
In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
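The Karush-Kuhn-Tucker system for an equality-constrained linear least-squares problem can be sketched directly. A toy problem; the constraint here is a generic illustration, not one of the paper's articulation constraints:

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Solve min ||A x - b||^2 subject to C x = d via the KKT system:
        [ 2 A^T A   C^T ] [x]   [ 2 A^T b ]
        [   C        0  ] [l] = [    d    ]
    where l is the vector of Lagrange multipliers."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[2 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]

# Hypothetical toy problem: fit x in R^3 while forcing x1 + x2 + x3 = 1.
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)
C = np.ones((1, 3))
d = np.array([1.0])

x = constrained_lstsq(A, b, C, d)
```

At the solution the residual gradient 2 Aᵀ(Ax − b) lies in the row space of C, which is the usual stationarity check for this system.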

9.
This study investigates the problem of estimating camera calibration parameters from image motion fields induced by a rigidly moving camera with unknown parameters, where the image formation is modeled with a linear pinhole-camera model. The derived equations show that the flow separates into a component due to the translation and the calibration parameters and a component due to the rotation and the calibration parameters. A set of parameters encoding the latter component is linearly related to the flow, and from these parameters the calibration can be determined. However, as for discrete motion, in general it is not possible to decouple image measurements obtained from only two frames into translational and rotational components. Geometrically, the ambiguity takes the form of a part of the rotational component being parallel to the translational component, and thus the scene can be reconstructed only up to a projective transformation. In general, for full calibration at least four successive image frames are necessary, with the 3D rotation changing between the measurements. The geometric analysis gives rise to a direct self-calibration method that avoids computation of optical flow or point correspondences and uses only normal flow measurements. New constraints on the smoothness of the surfaces in view are formulated to relate structure and motion directly to image derivatives, and on the basis of these constraints the transformation of the viewing geometry between consecutive images is estimated. The calibration parameters are then estimated from the rotational components of several flow fields. Since the proposed technique neither requires a special setup nor needs exact correspondences, it is potentially useful for the calibration of active vision systems which have to acquire knowledge about their intrinsic parameters while they perform other tasks, or as a tool for analyzing image sequences in large video databases.
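Normal flow, the component of image motion along the brightness gradient that the method uses instead of full optical flow, follows directly from brightness constancy. A minimal sketch with synthetic gradients (not the paper's estimator):

```python
import numpy as np

def normal_flow(Ix, Iy, It, eps=1e-8):
    """Per-pixel normal flow from brightness constancy
    Ix*u + Iy*v + It = 0: the flow component along the gradient is
    -It/|grad I|, i.e. the vector -It * grad(I) / |grad I|^2."""
    mag = np.sqrt(Ix**2 + Iy**2) + eps    # eps guards flat regions
    un = -It * Ix / mag**2
    vn = -It * Iy / mag**2
    return un, vn

# Hypothetical check: a linear ramp I(x, y, t) = x + 2y - t*(u + 2v)
# translating with true flow (u, v) = (1, 0.5), so Ix = 1, Iy = 2,
# It = -(u + 2v) = -2 everywhere.
Ix = np.ones((4, 4))
Iy = 2 * np.ones((4, 4))
It = -2.0 * np.ones((4, 4))

un, vn = normal_flow(Ix, Iy, It)
```

The projection of the true flow (1, 0.5) onto the gradient direction (1, 2)/√5 gives the normal flow vector (0.4, 0.8), which is what the function returns here.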

10.
Conic-Based Calibration of Line-Scan Cameras (total citations: 1; self-citations: 0; other: 1)
For line-scan cameras rotating about a fixed axis, a conic-based camera calibration method is proposed. The method only requires the camera to capture images from two or more different orientations, with multiple frames captured at each orientation as the camera rotates. The calibration target is a flat plate containing three or more conics. The camera and target can move freely, and the motion parameters need not be known. Through a coordinate transformation, the multiple line-scan frames captured at each orientation are assembled into an area-scan image. Conics extracted from the area-scan images serve as calibration primitives, which simplifies the primitive correspondence problem. Simulation results show that the method achieves high accuracy and good robustness.
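Extracting conics as calibration primitives starts with an algebraic conic fit. A standard least-squares sketch (generic technique, not the paper's full calibration; the circle test data is made up):

```python
import numpy as np

def fit_conic(pts):
    """Algebraic least-squares conic fit: find w = [a,b,c,d,e,f] minimizing
    ||D w|| subject to ||w|| = 1, where each row of D is
    [x^2, xy, y^2, x, y, 1] and the conic is a x^2 + b xy + ... + f = 0."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]          # right singular vector of the smallest singular value

# Hypothetical check: points on the circle x^2 + y^2 = 4.
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
pts = np.column_stack([2 * np.cos(theta), 2 * np.sin(theta)])
w = fit_conic(pts)
w = w / w[0]               # normalize so the x^2 coefficient is 1
```

For noiseless points on a circle the recovered coefficients are exactly those of x² + y² − 4 = 0; with image noise this algebraic fit is usually followed by a geometric refinement.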

11.
We present an algorithm that estimates dense planar-parallax motion from multiple uncalibrated views of a 3D scene. This generalizes the "plane+parallax" recovery methods to more than two frames. The parallax motion of pixels across multiple frames (relative to a planar surface) is related to the 3D scene structure and the camera epipoles. The parallax field, the epipoles, and the 3D scene structure are estimated directly from image brightness variations across multiple frames, without precomputing correspondences.

12.
13.
Hua Kaisheng, Wang Lin. Computer Engineering, 2012, 38(15): 264-267
For the camera calibration problem in automatic drawing of road traffic accident scene diagrams, a planar-template camera calibration method is proposed. Initial values are obtained from the central image region, where distortion is small; some of the parameters are then solved with a subspace trust-region method based on the interior-reflective Newton method. A distortion model is introduced, and the remaining parameters are obtained from straight-line feature constraints. Experimental results show that the method simplifies the calibration process, reduces the amount of computation, and improves computational speed.
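The distortion model in such calibration work is typically the polynomial radial model. A sketch of applying and inverting a two-coefficient version (the coefficients and points are illustrative; the paper's exact model and solver are not reproduced here):

```python
import numpy as np

def distort(xy, k1, k2):
    """Apply a two-coefficient radial distortion model to normalized
    image coordinates: x_d = x * (1 + k1*r^2 + k2*r^4)."""
    r2 = np.sum(xy**2, axis=-1, keepdims=True)
    return xy * (1 + k1 * r2 + k2 * r2**2)

def undistort(xy_d, k1, k2, iters=20):
    """Invert the model by fixed-point iteration: repeatedly divide the
    distorted point by the distortion factor evaluated at the estimate."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy**2, axis=-1, keepdims=True)
        xy = xy_d / (1 + k1 * r2 + k2 * r2**2)
    return xy

# Hypothetical round trip with mild barrel distortion.
pts = np.array([[0.1, 0.2], [-0.3, 0.05]])
pts_d = distort(pts, k1=-0.2, k2=0.05)
pts_u = undistort(pts_d, k1=-0.2, k2=0.05)
```

The fixed-point inversion converges quickly because the distortion factor is close to 1 near the image center, which is also why initial values are best taken from the central region.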

14.
Exploiting the fact that indoor environments contain many planar features, a real-time indoor RGB-D simultaneous localization and mapping (SLAM) system is proposed that uses planar features to optimize camera poses and the map. The front end jointly estimates camera pose with the iterative closest point (ICP) algorithm and a direct method; the back end extracts planar features from camera keyframes, establishes several plane-based constraints, optimizes keyframe poses and plane parameters, and incrementally builds a planar structural model of the environment. Experiments on several public data sequences show that, in plane-rich environments, the planar-feature constraints reduce the accumulated pose estimation error, and the system builds a planar scene model while consuming little storage. Experiments in real environments verify the system's feasibility and applicability to indoor augmented reality.
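Extracting a planar feature from keyframe depth points usually reduces to a total-least-squares plane fit. A minimal sketch with synthetic noiseless points (not the system's actual extraction step):

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane fit: the plane passes through the centroid,
    and its normal is the right singular vector of the smallest singular
    value of the centered points. Returns (n, d) with n . x + d = 0."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]
    return normal, -normal @ centroid

# Hypothetical check: points on the plane z = 2x - y + 3,
# i.e. 2x - y - z + 3 = 0.
rng = np.random.default_rng(4)
xy = rng.uniform(-1, 1, size=(30, 2))
z = 2 * xy[:, 0] - xy[:, 1] + 3
pts = np.column_stack([xy, z])

n, d = fit_plane(pts)
n_ref = np.array([2.0, -1.0, -1.0])
n_ref = n_ref / np.linalg.norm(n_ref)    # reference unit normal
```

In a SLAM back end, each such (n, d) pair becomes a landmark whose parameters are jointly optimized with the keyframe poses.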

15.
We present a method for detecting motion regions in video sequences observed by a moving camera in the presence of a strong parallax due to static 3D structures. The proposed method classifies each image pixel into planar background, parallax, or motion regions by sequentially applying 2D planar homographies, the epipolar constraint, and a novel geometric constraint called the "structure consistency constraint." The structure consistency constraint, being the main contribution of this paper, is derived from the relative camera poses in three consecutive frames and is implemented within the "Plane + Parallax" framework. Unlike previous planar-parallax constraints proposed in the literature, the structure consistency constraint does not require the reference plane to be constant across multiple views. It directly measures the inconsistency between the projective structures from the same point under camera motion and reference plane change. The structure consistency constraint is capable of detecting moving objects followed by a moving camera in the same direction, a so-called degenerate configuration where the epipolar constraint fails. We demonstrate the effectiveness and robustness of our method with experimental results of real-world video sequences.
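The epipolar constraint used in the classification step states that corresponding points of a static scene satisfy x2ᵀ E x1 = 0 (for calibrated coordinates), while independently moving points generally violate it. A synthetic sketch with made-up geometry:

```python
import numpy as np

def skew(t):
    """Cross-product matrix: skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical two-view geometry: rotation about z plus a translation,
# with camera 2 coordinates X2 = R X1 + t and essential matrix E = [t]x R.
a = 0.1
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a), np.cos(a), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.2, 0.1])
E = skew(t) @ R

# A static 3-D point projected into both views (normalized coordinates).
X1 = np.array([0.5, -0.3, 4.0])
X2 = R @ X1 + t
x1 = X1 / X1[2]
x2 = X2 / X2[2]
residual_static = x2 @ E @ x1          # ~0 for a rigid scene point

# An independently moving point violates the constraint.
X2_moving = X2 + np.array([0.5, 0.0, 0.0])
x2m = X2_moving / X2_moving[2]
residual_moving = x2m @ E @ x1
```

The degenerate case the paper targets is a point moving along the epipolar line itself (e.g. camera and object translating in the same direction), where this residual stays near zero and the structure consistency constraint is needed instead.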

16.
Two relevant issues in vision-based navigation are the field-of-view constraints of conventional cameras and the model and structure dependency of standard approaches. A good solution to these problems is the use of the homography model with omnidirectional vision. However, a plane of the scene will cover only a small part of the omnidirectional image, missing relevant information across the wide field of view that is the main advantage of omnidirectional sensors. This paper presents a new approach for computing multiple homographies from virtual planes using omnidirectional images, and its application in an omnidirectional vision-based homing control scheme. The multiple homographies are robustly computed, from a set of point matches across two omnidirectional views, using a method that relies on virtual planes independently of the structure of the scene. The method takes advantage of the planar motion constraint of the platform and computes virtual vertical planes from the scene. The family of homographies is also constrained to be embedded in a three-dimensional linear subspace to improve numerical consistency. Simulations and real experiments are provided to evaluate our approach.

17.
Shi Yingbo, Wu Chengke. Computer Engineering, 2008, 34(10): 218-220
A fast integer-pixel multi-reference-frame motion estimation algorithm for H.264 is proposed. By effectively predicting the search starting point in each reference frame and extending the diamond search algorithm to the multi-reference-frame case, it greatly reduces the computational cost of multi-reference-frame motion search. Experimental results show that, compared with the fast algorithm UMHexagonS in the H.264 reference software JM9.6, the algorithm maintains comparable image quality with an almost unchanged bit rate while reducing motion estimation time by about 60% on average.

18.
Multi-Camera Calibration Based on a One-Dimensional Calibration Object (total citations: 4; self-citations: 0; other: 4)
Wang Liang, Wu Fuchao. Acta Automatica Sinica, 2007, 33(3): 225-231
A one-dimensional calibration object consists of three or more collinear points with known inter-point distances. The existing literature shows that camera calibration with such an object is possible only when it undergoes planar motion or rotates about a fixed point. The results of this paper show that when multiple cameras simultaneously observe a one-dimensional calibration object undergoing arbitrary rigid motion, the camera set can be calibrated linearly. A linear calibration algorithm is given, and its result is refined under a maximum-likelihood criterion. Both simulated and real-image experiments show that the proposed algorithm is effective and feasible.

19.
Existing algorithms for camera calibration and metric reconstruction are not appropriate for image sets containing geometrically transformed images for which we cannot apply the camera constraints such as square or zero-skewed pixels. In this paper, we propose a framework to use scene constraints in the form of camera constraints. Our approach is based on image warping using images of parallelograms. We show that the warped image using parallelograms constrains the camera both intrinsically and extrinsically. Image warping converts the calibration problems of transformed images into the calibration problem with highly constrained cameras. In addition, it is possible to determine affine projection matrices from the images without explicit projective reconstruction. We introduce camera motion constraints of the warped image and a new parameterization of an infinite homography using the warping matrix. Combining the calibration and the affine reconstruction results in the fully metric reconstruction of scenes with geometrically transformed images. The feasibility of the proposed algorithm is tested with synthetic and real data. Finally, examples of metric reconstructions are shown from the geometrically transformed images obtained from the Internet.

20.
Photo-consistency estimation is an important part of many image-based modeling techniques. This paper presents a novel radiance-based color calibration method to reduce the uncertainty of photo-consistency estimation across multiple cameras. The idea behind our method is to convert colors into a uniform radiometric color space in which multiple image data are corrected. Experimental results demonstrate that our method achieves a comparable color calibration effect without adjusting camera parameters and is more robust than other existing methods. Additionally, we obtain an automatically determined threshold for the photo-consistency check, which leads to better performance than existing photo-consistency-based reconstruction algorithms.
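One simple flavor of cross-camera color correction, distinct from the paper's radiance-space method, is a least-squares affine transform between RGB samples of two cameras. Sketched here with synthetic data (the mapping and sample counts are made up):

```python
import numpy as np

def fit_color_transform(src_rgb, dst_rgb):
    """Least-squares 3x4 affine color transform mapping one camera's RGB
    samples onto another's: dst ~= M @ [r, g, b, 1]^T."""
    A = np.hstack([src_rgb, np.ones((len(src_rgb), 1))])
    M, *_ = np.linalg.lstsq(A, dst_rgb, rcond=None)
    return M.T                          # 3 x 4

# Hypothetical check: recover a known affine color mapping exactly.
rng = np.random.default_rng(7)
M_true = np.array([[0.9, 0.05, 0.0, 4.0],
                   [0.02, 1.1, 0.01, -3.0],
                   [0.0, 0.03, 0.95, 1.0]])
src = rng.uniform(0, 255, size=(40, 3))
dst = src @ M_true[:, :3].T + M_true[:, 3]

M_est = fit_color_transform(src, dst)
```

A global affine model like this cannot capture per-camera nonlinear response curves, which is the gap a radiance-space calibration such as the paper's is designed to close.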


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23

京公网安备 11010802026262号