Similar Documents
1.
In order to monitor sufficiently large areas of interest for surveillance or event detection, we need to look beyond stationary cameras and employ an automatically configurable network of non-overlapping cameras. These cameras need not have overlapping fields of view and should be allowed to move freely in space. Moreover, features like zooming in and out, readily available in today's security cameras, should be exploited to focus on any particular area of interest if needed. In this paper, a practical framework is proposed to self-calibrate dynamically moving and zooming cameras and determine their absolute and relative orientations, assuming that their relative positions are known. A global linear solution is presented for self-calibrating each zooming/focusing camera in the network. After self-calibration, it is shown that a single automatically computed vanishing point and a line lying on any plane orthogonal to the vertical direction are sufficient to infer the dynamic network configuration. Our method generalizes previous work, which considers restricted camera motions. Using minimal assumptions, we successfully demonstrate promising results on synthetic as well as real data.
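The orientation step above rests on a standard pinhole fact: a vanishing point back-projects to the 3D direction of the parallel lines that produced it. A minimal sketch of that back-projection, assuming intrinsics K are already available from the self-calibration step (function and variable names are illustrative, not the authors'):

```python
import numpy as np

def direction_from_vanishing_point(K, v_img):
    """Back-project a vanishing point v_img = (u, v) to the corresponding
    unit 3D direction in the camera frame: d ~ K^{-1} [u, v, 1]^T."""
    d = np.linalg.inv(K) @ np.array([v_img[0], v_img[1], 1.0])
    return d / np.linalg.norm(d)
```

Applied to the vertical vanishing point, this gives each camera's orientation relative to the vertical, which is the kind of ingredient the configuration step builds on.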

2.
We address the well-known problem of estimating the motion and structure of a plane, but in the case where the visual system is uncalibrated and only a monocular image sequence is available. We first define plane collineations and analyse some of their properties when they are used to study the retinal motion in an uncalibrated image sequence. We show how to relate them to the Euclidean parameters of the scene. In particular, we discuss how to detect and estimate the collineation of the plane at infinity and use this quantity for auto-calibration.

More precisely:

- We have developed a method to robustly estimate any collineation in the image as soon as at least four point correspondences have been established, in particular for points at infinity and thus the collineation of this virtual infinite plane (a minimal estimation sketch follows this list);
- It is shown that, given at least four points of a stationary plane and two stationary points not on the plane (or, equivalently, two planes), we can compute the focus of expansion;
- Going a step further, we have defined a bi-ratio of distances for a point with respect to a plane, which allows us not only to analyse the relative position of this point with respect to the plane but also to quantify this distance;
- Moreover, a necessary and sufficient condition for a collineation to correspond to a stationary plane is given in the affine case;
- It is also shown that, given three views and the plane at infinity, the intrinsic calibration parameters of the camera can be recovered from linear equations.

Robust estimation of collineations and statistical tests are then developed and illustrated by experimental results.
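The estimation step in the first item corresponds to the textbook direct linear transform (DLT) for a plane collineation from four or more correspondences. The sketch below is that standard solver, not the authors' robust estimator:

```python
import numpy as np

def estimate_collineation(src, dst):
    """Estimate the 3x3 collineation H with dst ~ H @ src via the DLT;
    src, dst are (N, 2) point arrays with N >= 4. For conditioning,
    coordinates should be normalized first (omitted in this sketch)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    # The solution is the right singular vector of A with the smallest
    # singular value (null-space direction of the stacked constraints).
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```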

    3.
A calibrated camera is essential for computer vision systems, chiefly because such a camera acts as an angle-measuring device. Once the camera is calibrated, applications such as three-dimensional reconstruction, metrology, or any other task requiring real-world information from video sequences can be envisioned. Motivated by this, we address the problem of calibrating multiple cameras with an overlapping field of view that observe pedestrians walking on uneven terrain. This problem of calibration on uneven terrain has so far not been addressed in the vision community. We automatically estimate vertical and horizontal vanishing points by observing pedestrians in each camera and use the corresponding vanishing points to estimate the infinite homography between the different cameras. This homography provides constraints on the intrinsic (or interior) camera parameters while also enabling us to estimate the extrinsic (or exterior) camera parameters. We test the proposed method on real as well as synthetic data, in addition to a motion-capture dataset, and compare our results with the state of the art.
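Pedestrians supply the vertical vanishing point because each head-to-foot segment images a vertical 3D line. A minimal least-squares sketch of that observation (names are illustrative; the paper's estimator is more elaborate and also recovers horizontal vanishing points):

```python
import numpy as np

def vertical_vanishing_point(heads, feet):
    """Least-squares intersection of the head-to-foot image lines of
    observed pedestrians. Assumes the vanishing point is finite,
    i.e. the camera is tilted with respect to the vertical."""
    lines = []
    for h, f in zip(heads, feet):
        l = np.cross([h[0], h[1], 1.0], [f[0], f[1], 1.0])  # line through both
        lines.append(l / np.linalg.norm(l[:2]))
    # The vanishing point v satisfies l_i . v = 0 for every line;
    # take the null-space direction of the stacked lines.
    _, _, Vt = np.linalg.svd(np.asarray(lines))
    v = Vt[-1]
    return v[:2] / v[2]
```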

    4.
Self-Calibration of Rotating and Zooming Cameras (total citations: 4; self-citations: 0; by others: 4)
In this paper we describe the theory and practice of self-calibration of cameras which are fixed in location and may freely rotate while changing their internal parameters by zooming. The basis of our approach is the so-called infinite homography constraint, which relates the unknown calibration matrices to the computed inter-image homographies. For the calibration to be possible, some constraints must be placed on the internal parameters of the camera. We present various self-calibration methods. First, an iterative non-linear method is described which is very versatile in terms of the constraints that may be imposed on the camera calibration: each of the camera parameters may be assumed known, constant throughout the sequence but unknown, or free to vary. Secondly, we describe a fast linear method which works under the minimal assumption of zero camera skew, or under the more restrictive conditions of square pixels (zero skew and known aspect ratio) or a known principal point. We show experimental results on both synthetic and real image sequences (where ground-truth data was available) to assess the accuracy and stability of the algorithms and to compare the results of applying different constraints on the camera parameters. We also derive an optimal Maximum Likelihood estimator for the calibration and motion parameters. Prior knowledge about the distribution of the estimated parameters (such as the location of the principal point) may also be incorporated via Maximum a Posteriori estimation. We then identify some near-ambiguities that arise under rotational motions, showing that coupled changes of certain parameters are barely observable, making them indistinguishable. Finally, we study the negative effect of radial distortion on the self-calibration process and point out some possible solutions to it. An erratum to this article is available.
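For the special case of a purely rotating camera with constant intrinsics, the infinite homography constraint H ~ K R K^{-1} makes X = K K^T a fixed point of X -> H X H^T, which is linear in the six independent entries of X. The following is a minimal sketch of that linear route under those simplifying assumptions (the paper additionally handles varying parameters, priors, and ML refinement):

```python
import numpy as np

def self_calibrate_rotating(homographies):
    """Linear self-calibration of a rotating camera with constant K.
    Each inter-image homography is first scaled to unit determinant
    (det(K R K^{-1}) = 1), so H X H^T = X holds exactly for X = K K^T.
    Two or more rotations about distinct axes are needed for uniqueness."""
    idx = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
    rows = []
    for H in homographies:
        H = H / np.cbrt(np.linalg.det(H))       # enforce det(H) = 1
        for i, (r, c) in enumerate(idx):
            row = np.zeros(6)
            for k, (a, b) in enumerate(idx):
                coeff = H[r, a] * H[c, b]
                if a != b:                      # X is symmetric
                    coeff += H[r, b] * H[c, a]
                row[k] = coeff
            row[i] -= 1.0                       # (H X H^T - X)_{rc} = 0
            rows.append(row)
    _, _, Vt = np.linalg.svd(np.asarray(rows))  # null-space solution
    x = Vt[-1]
    X = np.array([[x[0], x[1], x[2]],
                  [x[1], x[3], x[4]],
                  [x[2], x[4], x[5]]])
    if X[0, 0] < 0:                             # x is defined up to sign
        X = -X
    # X = K K^T with K upper triangular: the Cholesky factor of X^{-1}
    # is K^{-T}. (Noisy data may need a positive-definite projection.)
    L = np.linalg.cholesky(np.linalg.inv(X))
    K = np.linalg.inv(L).T
    return K / K[2, 2]
```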

    5.
Stereovision is an effective technique for determining the 3D position of a target object from two or more simultaneous CCD video camera views of a scene. Camera calibration is a central issue in finding the position of objects in a stereovision system. This is usually carried out by calibrating each camera independently and then applying a geometric transformation of the external parameters to find the geometry of the stereo setting. After calibration, the distance of various target objects in the scene can be calculated from the CCD video cameras, and recovering the 3D structure from 2D images becomes simpler. However, the process of camera calibration is complicated. Based on the ideal pinhole model of a camera, we describe formulas to calculate the intrinsic parameters that specify the camera's internal characteristics and the extrinsic parameters that describe the spatial relationship between the camera and the world coordinate system. A simple camera calibration method for our CCD video cameras and the corresponding experimental results are also given. This work was presented in part at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16–18, 2002.
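In the pinhole model the two parameter sets combine into a projection matrix, and once both cameras are calibrated a point's 3D position follows by triangulating its two pixel observations. A minimal numpy sketch of these two standard steps (not the paper's specific formulas):

```python
import numpy as np

def projection_matrix(K, R, t):
    """P = K [R | t]: K holds the intrinsic parameters, (R, t) the
    extrinsic pose mapping world points into the camera frame."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one scene point from its pixel
    observations x1, x2 in two calibrated views."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)       # null-space of the constraints
    X = Vt[-1]
    return X[:3] / X[3]               # inhomogeneous 3D point
```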

    6.
Monitoring of large sites requires coordination between multiple cameras, which in turn requires methods for relating events between distributed cameras. This paper tackles the problem of automatic external calibration of multiple cameras in an extended scene, that is, full recovery of their relative 3D positions and orientations. Because the cameras are placed far apart, brightness or proximity constraints cannot be used to match static features, so we instead apply planar geometric constraints to moving objects tracked throughout the scene. By robustly matching and fitting tracked objects to a planar model, we align the scene's ground plane across multiple views and decompose the planar alignment matrix to recover the relative 3D camera and ground-plane positions. We demonstrate this technique both in a controlled lab setting, where we test the effects of errors in the intrinsic camera parameters, and in an uncontrolled outdoor setting. In the latter, we do not assume synchronized cameras, and we show that enforcing geometric constraints enables us to align the tracking data in time. In spite of noise in the intrinsic camera parameters and in the image data, the system successfully transforms multiple views of the scene's ground plane to an overhead view and recovers the relative 3D camera and ground-plane positions.
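A sketch of the two geometric steps using OpenCV as a stand-in for the paper's own robust fitting: RANSAC fits the ground-plane homography to tracked-object correspondences, and a standard homography decomposition yields candidate relative poses and plane normals (physically implausible candidates are then pruned with visibility constraints). The inputs `pts1`, `pts2`, `K1`, `K2` are illustrative:

```python
import cv2
import numpy as np

def align_and_decompose(pts1, pts2, K1, K2):
    """Fit the ground-plane homography between two views with RANSAC,
    then decompose it into candidate relative poses (R, t) and plane
    normals n. pts1/pts2: (N, 2) float32 tracked-object positions."""
    H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    # Normalize with both cameras' intrinsics so the decomposition can
    # assume identity calibration: Hn = K2^{-1} H K1.
    Hn = np.linalg.inv(K2) @ H @ K1
    _, Rs, ts, normals = cv2.decomposeHomographyMat(Hn, np.eye(3))
    return H, mask.ravel().astype(bool), Rs, ts, normals
```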

    7.
In this paper the theoretical and practical feasibility of self-calibration in the presence of varying intrinsic camera parameters is investigated. The paper's main contribution is a self-calibration method which efficiently deals with all kinds of constraints on the intrinsic camera parameters. Within this framework a practical method is proposed which can retrieve a metric reconstruction from image sequences obtained with uncalibrated zooming/focusing cameras. The feasibility of the approach is illustrated on real and synthetic examples. In addition, a theoretical proof is given which shows that the absence of skew in the image plane is sufficient to allow self-calibration. A counting argument is developed which, depending on the set of constraints, gives the minimum sequence length for self-calibration, and a method to detect critical motion sequences is proposed.

    8.
We present a method for active self-calibration of multi-camera systems consisting of pan-tilt-zoom cameras. The main focus of this work is on extrinsic self-calibration using active camera control. Our novel probabilistic approach avoids multi-image point correspondences as far as possible, which allows an implicit treatment of ambiguities. The relative poses are optimized by actively rotating and zooming each camera pair in a way that significantly simplifies the problem of extracting correct point correspondences. In a final step we calibrate the entire system using a minimal number of relative poses, selected on the basis of their uncertainty. We exploit active camera control to estimate consistent translation scales for triplets of cameras, which allows us to estimate missing relative poses in the camera triplets. In addition to this active extrinsic self-calibration, we present an extended method for the rotational intrinsic self-calibration of a camera that exploits the rotation knowledge provided by the camera's pan-tilt unit to robustly estimate the intrinsic camera parameters for different zoom steps, as well as the rotation between the pan-tilt unit and the camera. Quantitative experiments on real data demonstrate the robustness and high accuracy of our approach: we achieve a median reprojection error of 0.95 pixels.

    9.
We present an improved algorithm for two-image camera self-calibration and Euclidean structure recovery, where the effective focal lengths of both cameras are assumed to be the only unknown intrinsic parameters. By using the absolute quadric, it is shown that the effective focal lengths can be computed linearly from two perspective images without imposing scene or motion constraints. Moreover, a quadratic equation derived from the absolute quadric is proposed for solving the parameters of the plane at infinity from two images, which upgrades a projective reconstruction to a Euclidean reconstruction.

    10.
We describe a method to compute the internal parameters (focal length and principal point) of a camera with known position and orientation, based on the observation of two or more conics on a known plane. The conics may even be degenerate (e.g., pairs of lines). The proposed method can be used to re-estimate the internal parameters of a fully calibrated camera after zooming to a new, unknown focal length. It also allows estimating the internal parameters when a second, fully calibrated camera observes the same conics. The parameters estimated with the proposed method are consistent with the output of more traditional procedures that require a larger number of calibration images. A detailed analysis of the geometrical configurations that influence the proposed method is also reported.

    11.
Some aspects of zoom lens camera calibration (total citations: 3; self-citations: 0; by others: 3)
Zoom lens camera calibration is an important and difficult problem for at least two reasons. First, the intrinsic parameters of such a camera change over time, making it difficult to calibrate them on-line. Secondly, the pinhole model for a single-lens system cannot be applied directly to a zoom lens system. In this paper, we address several aspects of this problem, such as determining the principal point by zooming, modeling and calibrating lens distortion and focal length, as well as some practical considerations. Experimental results on calibrating cameras with computer-controlled zoom, focus, and aperture are presented.
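Determining the principal point by zooming exploits the fact that, under an ideal zoom, each image feature moves along a radial line through the principal point. A least-squares sketch of that idea (names are illustrative; the paper's procedure also models distortion):

```python
import numpy as np

def principal_point_from_zoom(tracks):
    """Estimate the principal point as the common intersection of the
    lines traced by features while zooming. `tracks` is a list of
    (N_i, 2) arrays, each holding one feature's positions across zoom."""
    A, b = [], []
    for pts in tracks:
        # Fit the track's dominant direction d by PCA, then constrain
        # the principal point p to lie on the line through the mean.
        mean = pts.mean(axis=0)
        _, _, Vt = np.linalg.svd(pts - mean)
        d = Vt[0]
        n = np.array([-d[1], d[0]])     # line normal
        A.append(n)
        b.append(n @ mean)
    # Solve n_i . p = n_i . mean_i in least squares (needs >= 2 tracks
    # with non-parallel directions).
    p, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return p
```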

    12.

Real-time estimation of crowd size is a central task in civilian surveillance. In this paper we present a novel system for counting people in a crowded scene with overlapping cameras. The system fuses the foreground information of all single views to localize each person present in the scene. The purpose of our fusion strategy is to use the foreground pixels of each single view to improve real-time object association between the cameras of the network. The foreground pixels are obtained using a codebook-based algorithm. We aggregate the resulting silhouettes over the camera network and compute a planar homography projection of each camera's visual hull into the ground plane, the visual hull being the convex hull of the foreground pixels. After the projection into the ground plane, we fuse the obtained polygons using the geometric properties of the scene and the quality of each camera's detections. We also propose a region-based tracking strategy which keeps track of people's movements and identities over time, while tolerating occasional misdetections. This tracking strategy operates on the result of the view fusion and allows the crowd size to be estimated in each frame. Experiments on public datasets proposed for the evaluation of people-counting systems demonstrate the performance of our fusion approach. The results show that the fusion strategy can run in real time and is efficient for data association, and that combining our fusion approach with the proposed tracking improves people counting.
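The per-view step projects the convex hull of foreground pixels into the ground plane through each camera's plane homography. A compact sketch of just that step, assuming a precomputed image-to-ground homography `H_ground` (the full system then fuses the resulting polygons across views):

```python
import numpy as np
from scipy.spatial import ConvexHull

def silhouette_to_ground(fg_pixels, H_ground):
    """Project the convex hull of one view's foreground pixels onto the
    ground plane. fg_pixels: (N, 2) pixel coordinates; H_ground: 3x3
    homography mapping image coordinates to ground-plane coordinates."""
    hull = fg_pixels[ConvexHull(fg_pixels).vertices]          # (M, 2) polygon
    homog = np.column_stack([hull, np.ones(len(hull))]) @ H_ground.T
    return homog[:, :2] / homog[:, 2:3]                       # ground polygon
```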

    13.
Stereo calibration from rigid motions (total citations: 1; self-citations: 0; by others: 1)
We describe a method for calibrating a stereo pair of cameras using general or planar motions. The method consists of upgrading a 3D projective representation to affine and then to Euclidean, without any knowledge of either the motion parameters or the 3D layout. We investigate the algebraic properties relating the projective representation to the plane at infinity and to the intrinsic camera parameters when the camera pair is treated as a moving rigid body. We show that all the computations can be carried out using standard linear resolution techniques. An error analysis reveals the relative importance of the various steps of the calibration process: the projective-to-affine and affine-to-metric upgrades. Extensive experiments with calibrated and natural data confirm the error analysis, as well as the sensitivity study performed with simulated data.

    14.
This work proposes a method of camera self-calibration with varying intrinsic parameters from a sequence of images of an unknown 3D object. The projections of two points of the 3D scene in the image planes are used, together with fundamental matrices, to determine the projection matrices. The approach is based on formulating a nonlinear cost function from a relationship between two points of the scene and their projections in the image planes; minimizing this function enables us to estimate the intrinsic parameters of the different cameras. The strength of the approach lies in relaxing three common constraints of self-calibration systems (a pair of images, a 3D scene, any camera): using a single pair of images yields fewer equations, which reduces the execution time; using a 3D scene removes planarity constraints; and allowing any camera removes the assumption of constant camera parameters. Experimental results on synthetic and real data are presented to demonstrate the performance of the approach in terms of accuracy, simplicity, stability, and convergence.

    15.
The view-independent visualization of 3D scenes is most often based on rendering accurate 3D models or utilizes image-based rendering techniques. To compute the 3D structure of a scene from a moving vision sensor, or to use image-based rendering approaches, we need to be able to estimate the motion of the sensor from the recorded image information with high accuracy, a problem that has been well studied. In this work, we investigate the relationship between camera design and our ability to perform accurate 3D photography by examining the influence of camera design on the estimation of the motion and structure of a scene from video data. By relating the differential structure of the time-varying plenoptic function to different known and new camera designs, we can establish a hierarchy of cameras based upon the stability and complexity of the computations necessary to estimate structure and motion. At the low end of this hierarchy is the standard planar pinhole camera, for which the structure-from-motion problem is non-linear and ill-posed. At the high end is a camera, which we call the full-field-of-view polydioptric camera, for which the motion estimation problem can be solved independently of the depth of the scene, leading to fast and robust algorithms for 3D photography. In between are multiple-view cameras with a large field of view, which we have built, as well as omni-directional sensors.

    16.
In 3D reconstruction, the recovery of the calibration parameters of the cameras is paramount, since it provides metric information about the observed scene, e.g., measures of angles and ratios of distances. Autocalibration enables the estimation of the camera parameters without a calibration device, by instead enforcing simple constraints on the camera parameters. In the absence of information about the internal camera parameters, such as the focal length and the principal point, knowledge of the camera pixel shape is usually the only available constraint. Given a projective reconstruction of a rigid scene, we address the problem of the autocalibration of a minimal set of cameras with known pixel shape and otherwise arbitrarily varying intrinsic and extrinsic parameters. We propose an algorithm that requires only five cameras (the theoretical minimum), thus halving the number of cameras required by previous algorithms based on the same constraint. To this end, we introduce as our basic geometric tool the six-line conic variety (SLCV), consisting of the set of planes intersecting six given lines of 3D space in points of a conic. We show that the set of solutions of the Euclidean upgrading problem for three cameras with known pixel shape can be parameterized in a computationally efficient way. This parameterization is then used to solve autocalibration from five or more cameras, reducing the three-dimensional search space to a two-dimensional one. We provide experiments with real images showing the good performance of the technique.

    17.
International Journal of Computer Mathematics, 2012, 89(14): 3111-3137
Reconstruction of three-dimensional (3D) object structure from multiple images is a fundamental problem in computational vision, and many applications in computer vision require structure information about 3D objects. The objective of this work is to develop a stable method for the 3D reconstruction of an object which works without camera parameters, once the plane at infinity has been obtained from approximate scene information. First, a framework is designed, based on a modification of the auto-calibration procedure, for 3D structure computation using singular value decomposition. In the second part of the work, the ambiguities present at the various stages of 3D reconstruction are analysed. Error norms are proposed and studied to quantify the ambiguity in the reconstruction process. We analyse the effect of the pose difference between camera views and of the focal-length parameters on the reconstruction process, using experiments with simulated and real-world data.

    18.
The 1D radial camera maps all points on a plane containing the principal axis onto the radial line which is the intersection of that plane and the image plane. It is a sufficiently general model to express both central and non-central cameras, since the only assumption it makes is a known center of distortion. In this paper, we study the multi-focal tensors arising from 1D radial cameras. There exist no two-view constraints (like the fundamental matrix) for 1D radial cameras; however, the three-view and four-view cases are interesting. For the four-view case we have the radial quadrifocal tensor, which has 15 d.o.f. and 2 internal constraints. For the three-view case, we have the radial trifocal tensor, which has 7 d.o.f. and no internal constraints. Under the assumption of a purely rotating central camera, the latter can be used for a non-parametric estimation of the radial distortion of a 1D camera. Even in the case of a non-rotating camera it can be used for parametric estimation, assuming a planar scene. Finally, we examine the mixed trifocal tensor, which models the case of two 1D radial cameras and one standard pinhole camera. Of the above radial multifocal tensors, only the radial trifocal tensor is practically useful, since it does not require any knowledge of the scene and is extremely robust. We demonstrate results for it based on real images.

    19.
Aiming at the application of zooming-image distance measurement to mobile robots, the related key technologies are investigated in detail in this paper. First, camera parameter calibration is conducted: the focal length, the optical-center displacement between two focus settings, the principal point, and the aberration coefficients are calculated accurately. Then, robust SIFT-based feature matching is realized using the geometric constraint of zooming images. Finally, the 3D reconstruction model for zooming images is established. Experimental results based on real sample images validate the practicability of the related algorithms.
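A sketch of the matching stage with OpenCV. Since two zoom settings of a fixed-center camera are related by a homography, a RANSAC homography check serves here as a generic stand-in for the paper's zoom-specific geometric constraint:

```python
import cv2
import numpy as np

def match_zoom_pair(img1, img2):
    """SIFT matching between two zoom settings of the same camera,
    pruned by a RANSAC homography check."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    # Lowe's ratio test drops ambiguous descriptor matches.
    good = [m for m, n in cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]
    p1 = np.float32([k1[m.queryIdx].pt for m in good])
    p2 = np.float32([k2[m.trainIdx].pt for m in good])
    # Keep only matches consistent with a single zoom-induced homography.
    H, mask = cv2.findHomography(p1, p2, cv2.RANSAC, 3.0)
    inliers = mask.ravel().astype(bool)
    return p1[inliers], p2[inliers], H
```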

    20.
We describe a new algorithm for obtaining the affine and Euclidean calibration of a camera under general motion. The algorithm exploits the relationships of the horopter curves associated with each pair of cameras to the plane at infinity and the absolute conic. Using these properties, we define cost functions whose minimization by means of general-purpose techniques provides the required calibration. The experiments show the good convergence properties, computational efficiency, and robust performance of the new techniques.
