Similar Literature
20 similar documents found (search time: 687 ms)
1.
A new camera intrinsic-parameter calibration algorithm for depth from defocus is proposed. The algorithm acquires any two images of the same scene with different degrees of defocus by changing the lens aperture number, extracts the difference in blur between the two images, and calibrates the corresponding intrinsic parameters by analyzing the lens imaging geometry. The algorithm removes the restriction of the calibration method proposed by Park in 2006 that one of the images must be in focus, and requires no complex magnification normalization of the images. Both simulated and real experiments verify the effectiveness and accuracy of the algorithm.

2.
The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between depth of field and signal-to-noise ratio (SNR). For a dark scene, the aperture of the lens must be opened up to maintain SNR, which causes the DOF to reduce. Also, today's cameras have DOFs that correspond to a single slab that is perpendicular to the optical axis. In this paper, we present an imaging system that enables one to control the DOF in new and powerful ways. Our approach is to vary the position and/or orientation of the image detector during the integration time of a single photograph. Even when the detector motion is very small (tens of microns), a large range of scene depths (several meters) is captured, both in and out of focus. Our prototype camera uses a micro-actuator to translate the detector along the optical axis during image integration. Using this device, we demonstrate four applications of flexible DOF. First, we describe extended DOF where a large depth range is captured with a very wide aperture (low noise) but with nearly depth-independent defocus blur. Deconvolving a captured image with a single blur kernel gives an image with extended DOF and high SNR. Next, we show the capture of images with discontinuous DOFs. For instance, near and far objects can be imaged with sharpness, while objects in between are severely blurred. Third, we show that our camera can capture images with tilted DOFs (Scheimpflug imaging) without tilting the image detector. Finally, we demonstrate how our camera can be used to realize nonplanar DOFs. We believe flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics.
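The extended-DOF step described above deconvolves the captured image with a single blur kernel. A minimal frequency-domain Wiener-deconvolution sketch (the box kernel and SNR value are illustrative assumptions, not the authors' calibrated kernel):

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr=100.0):
    """Deconvolve `blurred` with `kernel` using a Wiener filter.
    `snr` is an assumed signal-to-noise ratio controlling regularization."""
    H = np.fft.fft2(kernel, s=blurred.shape)  # kernel transfer function
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + 1/snr)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(G * W))

# Tiny noise-free demo: circularly blur a synthetic image with a box kernel
# (kernel anchored at the origin; the deconvolution uses the same convention),
# then restore it. A very large assumed SNR since no noise was added.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
restored = wiener_deconvolve(blurred, kernel, snr=1e8)
```

With a real capture the kernel would come from the camera's calibration and the SNR term would absorb sensor noise; here both are stand-ins.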

3.
Implicit and explicit camera calibration: theory and experiments
By implicit camera calibration, we mean the process of calibrating a camera without explicitly computing its physical parameters. Implicit calibration can be used for both three-dimensional (3-D) measurement and generation of image coordinates. In this paper, we present a new implicit model based on the generalized projective mappings between the image plane and two calibration planes. The back-projection and projection processes are modelled separately to ease the computation of distorted image coordinates from known world points. A set of constraints of perspectivity is derived to relate the transformation parameters of the two calibration planes. Under the assumption of the radial distortion model, we present a computationally efficient method for explicitly correcting the distortion of image coordinates in the frame buffer without involving the computation of camera position and orientation. Combined with any linear calibration technique, this method makes the camera's physical parameters explicit. An extensive experimental comparison of our methods with the classic photogrammetric method and Tsai's (1986) method is made using real images from 15 different depth values, covering 3-D measurement (both absolute and relative errors), the prediction of image coordinates, and the effect of the number of calibration points.

4.
Virtual camera generation is a key technique in virtual advertising systems; a method for generating a static virtual camera is proposed. First, feature points of the playing field are automatically detected in the video image. Then, from the correspondences between the feature points in the image coordinate system and the world coordinate system, the intrinsic and extrinsic parameters of the real camera are computed by camera calibration. Next, the camera's intrinsic and extrinsic parameters are converted into the virtual camera's viewpoint transformation matrix and projection matrix, respectively. Finally, the virtual scene is rendered under this virtual camera and the rendered image is inserted into the video. Experimental results show that the method is simple, effective, and easy to implement.

5.
A method for the calibration of a 3-D laser scanner
The calibration of a three-dimensional digitizer is a very important issue, given that good quality, reliability, accuracy and high repeatability are the features a good digitizer is expected to have. The aim of this paper is to propose a new method for the calibration of a 3-D laser scanner, mainly for robotic applications. The acquisition system consists of a laser emitter and a webcam in fixed relative positions. In addition, a cylindrical lens is fitted to the laser housing so that it projects a plane of light. An optical filter was also used in order to segment the laser stripe from the rest of the scene. For the calibration procedure, a digital micrometer was used to move a target with known dimensions. The calibration method is based on modeling the geometrical relationship between the 3-D coordinates of the laser stripe on the target and its digital coordinates in the image plane. With this method it is possible to calibrate the intrinsic parameters of the video system, the position of the image plane, and the laser plane in a given frame, all at the same time.

6.
Graphical Models, 2001, 63(5):277-303
Camera calibration is the estimation of parameters (both intrinsic and extrinsic) associated with a camera being used for imaging. Given the world coordinates of a number of precisely placed points in a 3D space, camera calibration requires the measurement of the 2D projection of those scene points on the image plane. While the coordinates of the points in space can be known precisely, the image coordinates that are determined from the digital image are often inaccurate and hence noisy. In this paper, we look at the statistics of the behavior of the camera calibration parameters, which are important for stereo matching, when the image plane measurements are corrupted by noise. We derive analytically the behavior of the camera calibration matrix under noisy conditions and further show that the elements of the camera calibration matrix have a Gaussian distribution if the noise introduced into the measurement system is Gaussian. Under certain approximations we derive relationships between the camera calibration parameters and the noisy camera calibration matrix and compare it with Monte Carlo simulations.
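The Monte Carlo comparison described above can be sketched as follows: project known 3D points with a ground-truth projection matrix, perturb the image points with Gaussian noise, and re-estimate the matrix by direct linear transformation (DLT). All numerical values are illustrative; the paper's analytic derivation is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth projection matrix (illustrative values, not from the paper).
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
Rt = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [5.0]])])
P_true = K @ Rt

X = rng.uniform(-1, 1, (50, 3))          # non-coplanar 3D calibration points
Xh = np.hstack([X, np.ones((50, 1))])    # homogeneous coordinates
x = (P_true @ Xh.T).T
x = x[:, :2] / x[:, 2:3]                 # ideal (noise-free) image points

def dlt(Xh, pts):
    """Estimate the 3x4 projection matrix from world/image correspondences."""
    rows = []
    for Xw, (u, v) in zip(Xh, pts):
        rows.append(np.concatenate([Xw, np.zeros(4), -u * Xw]))
        rows.append(np.concatenate([np.zeros(4), Xw, -v * Xw]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    P = Vt[-1].reshape(3, 4)
    return P / P[2, 3]                   # fix the overall scale/sign

# Monte Carlo: perturb image points with Gaussian noise, re-estimate P.
samples = np.array([dlt(Xh, x + rng.normal(0, 0.5, x.shape)).ravel()
                    for _ in range(200)])
mean_P = samples.mean(axis=0).reshape(3, 4)
```

Histogramming each column of `samples` is then a direct empirical check of the claimed Gaussian distribution of the calibration-matrix elements.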

7.
To correct lens distortion when the camera parameters are unknown, a method is proposed that first calibrates the distortion center and then the distortion coefficients. The target is imaged twice at different focal lengths of the lens, and the distortion center is solved from the relative positions of the same target points in the two images. Then, based on the invariance of straight lines under perspective projection, the distortion coefficients are found with a variable-step optimization search. Simulation experiments show that with 25 target points and a noise level of 0.2 pixels, the average error of the distortion center is (0.2243, 0.1636) pixels and the error of the distortion coefficient is 0.28%. Real-image experiments show that the distortion center and coefficients obtained by this method correct images well. The method requires neither calibration of the camera's intrinsic and extrinsic parameters nor knowledge of the world coordinates of the line grid, and is simple and practical.

8.
Traditional depth estimation methods typically exploit the effect of either the variations in internal parameters such as aperture and focus (as in depth from defocus), or variations in extrinsic parameters such as position and orientation of the camera (as in stereo). When operating off-the-shelf (OTS) cameras in a general setting, these parameters influence the depth of field (DOF) and field of view (FOV). While DOF mandates one to deal with defocus blur, a larger FOV necessitates camera motion during image acquisition. As a result, for unfettered operation of an OTS camera, it becomes inevitable to account for pixel motion as well as optical defocus blur in the captured images. We propose a depth estimation framework using calibrated images captured under general camera motion and lens parameter variations. Our formulation seeks to generalize the constrained areas of stereo and shape from defocus (SFD)/focus (SFF) by handling, in tandem, various effects such as focus variation, zoom, parallax and stereo occlusions, all under one roof. One of the associated challenges in such an unrestrained scenario is the problem of removing user-defined foreground occluders in the reference depth map and image (termed inpainting of depth and image). Inpainting is achieved by exploiting the cue from motion parallax to discover (in other images) the correspondence/color information missing in the reference image. Moreover, considering the fact that the observations could be differently blurred, it is important to ensure that the degree of defocus in the missing regions (in the reference image) is coherent with the local neighbours (defocus inpainting).

9.
This paper discusses the principles for the acquisition of a three-dimensional (3-D) computational model of the treatment area of a burn victim for a vision-servo-guided robot which ablates the victim's burned skin tissue by delivering a high-energy laser light to the burned tissue. The medical robotics assistant system consists of: a robot whose end effector is equipped with a laser head, whence the laser beam emanates, and a vision system which is used to acquire the 3-D coordinates of some points on the body surface; 3-D surface modeling routines for generating the surface model of the treatment area; and control and interface hardware and software for control and integration of all the system components. Discussion of the vision and surface modeling component of the medical robotics assistant system is the focus of this paper. The robot-assisted treatment process has two phases: an initial survey phase during which a model of the treatment area on the skin is built and used to plan an appropriate trajectory for the robot in the subsequent treatment phase, during which the laser surgery is performed. During the survey phase, the vision system employs a camera to acquire points on the surface of the patient's body by using the camera to capture the contour traced by a plane of light generated by a low power laser, distinct from the treatment laser. The camera's image is then processed. Selected points on the camera's two-dimensional image frame are used as input to a process that generates 3-D body surface points as the intersection point of the plane of light and the line of sight between the camera's image point and the body surface point. The acquired body surface points are then used to generate a computational model of the treatment area using the non-uniform rational B-splines (NURBS) surface modeling technique. The constructed NURBS surface model is used to generate a treatment plan for the execution of the treatment phase. The robot plan for treatment is discussed in another paper. The prototype of the entire burn treatment system is at an advanced stage of development and tests of the engineering principles on inanimate objects, discussed herein, are being conducted.

10.
11.
The great flexibility of a view camera allows the acquisition of high quality images that would not be possible any other way. Bringing a given object into focus is, however, a long and tedious task, although the underlying optical laws are known. A fundamental parameter is the aperture of the lens entrance pupil because it directly affects the depth of field. The smaller the aperture, the larger the depth of field. However, too small an aperture destroys the sharpness of the image because of diffraction at the pupil edges. Hence, the desired optimal configuration of the camera is such that the object is in focus with the greatest possible lens aperture. In this paper, we show that when the object is a convex polyhedron, an elegant solution to this problem can be found. It takes the form of a constrained optimization problem, for which theoretical and numerical results are given. The optimization algorithm has been implemented on the prototype of a robotised view camera.
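The aperture/DOF trade-off stated above follows from thin-lens geometry. A quick numeric sketch using the standard near/far focus-limit approximations (the focal length, f-numbers, circle of confusion and subject distance below are illustrative values):

```python
def depth_of_field(f, N, c, s):
    """Return (near, far, dof) focus limits, in the same units as f and s.
    f: focal length, N: f-number, c: circle of confusion, s: subject
    distance from the lens. Standard thin-lens approximation."""
    near = s * f**2 / (f**2 + N * c * (s - f))
    far = s * f**2 / (f**2 - N * c * (s - f))
    return near, far, far - near

# A 150 mm view-camera lens focused at 2 m, 0.1 mm circle of confusion:
# stopping down (larger N) widens the in-focus range.
for N in (5.6, 11, 22):
    near, far, dof = depth_of_field(f=150.0, N=N, c=0.1, s=2000.0)
    print(f"f/{N}: in focus from {near:.0f} to {far:.0f} mm (DOF {dof:.0f} mm)")
```

The formulas capture only the geometric blur; the diffraction limit that penalizes very small apertures is a separate wave-optics effect not modeled here.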

12.
Simple and efficient method of calibrating a motorized zoom lens
In this work, three servo motors are used to independently control the aperture, zoom, and focus of our zoom lens. Our goal is to calibrate, efficiently, the camera parameters for all the possible configurations of lens settings. We use a calibration object suitable for zoom lens calibration to deal with the defocusing problem. Instead of calibrating the zoom lens with respect to the three lens settings simultaneously, we perform the monofocal camera calibration, adaptively, over the ranges of the zoom and focus settings while fixing the aperture setting at a preset value. Bilinear interpolation is used to provide the values of the camera parameters for those lens settings where no observations are taken. The adaptive strategy requires the monofocal camera calibration only for the lens settings where the interpolated camera parameters are not accurate enough, and is hence referred to as the calibration-on-demand method. Our experiments show that the proposed calibration-on-demand method can provide accurate camera parameters for all the lens settings of a motorized zoom lens, even though the camera calibration is performed only for a few sampled lens settings.
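The bilinear-interpolation step above can be sketched as follows; the grid of focal-length samples and the lens settings are hypothetical, not the paper's data:

```python
import numpy as np

def bilinear(grid, zooms, focuses, z, f):
    """Bilinearly interpolate a per-setting camera parameter.
    grid[i, j] holds the calibrated value at (zooms[i], focuses[j]);
    (z, f) must lie inside the sampled ranges."""
    i = np.clip(np.searchsorted(zooms, z) - 1, 0, len(zooms) - 2)
    j = np.clip(np.searchsorted(focuses, f) - 1, 0, len(focuses) - 2)
    tz = (z - zooms[i]) / (zooms[i + 1] - zooms[i])
    tf = (f - focuses[j]) / (focuses[j + 1] - focuses[j])
    return ((1 - tz) * (1 - tf) * grid[i, j] + tz * (1 - tf) * grid[i + 1, j]
            + (1 - tz) * tf * grid[i, j + 1] + tz * tf * grid[i + 1, j + 1])

# Hypothetical focal-length samples (pixels) on a 2x2 grid of lens settings.
zooms = np.array([0.0, 1.0])
focuses = np.array([0.0, 1.0])
fx = np.array([[800.0, 820.0], [1500.0, 1540.0]])
print(bilinear(fx, zooms, focuses, 0.5, 0.5))  # value at the grid midpoint
```

In the calibration-on-demand strategy, settings where this interpolated value disagrees too much with a measured calibration would trigger an additional monofocal calibration at that grid cell.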

13.
Some aspects of zoom lens camera calibration
Zoom lens camera calibration is an important and difficult problem, for at least two reasons. First, the intrinsic parameters of such a camera change over time, making it difficult to calibrate them on-line. Second, the pin-hole model for a single-lens system cannot be applied directly to a zoom lens system. In this paper, we address some aspects of this problem, such as determining the principal point by zooming, modeling and calibration of lens distortion and focal length, as well as some practical aspects. Experimental results on calibrating cameras with computer-controlled zoom, focus and aperture are presented.
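One practical reading of "determining the principal point by zooming" is to track features while zooming: each feature sweeps out a line that (ideally) passes through the principal point, which can then be recovered as the least-squares intersection of those lines. A minimal sketch under that assumption (the paper's exact procedure may differ):

```python
import numpy as np

def principal_point(lines):
    """Least-squares intersection of 2D lines, each given as (point, direction).
    Each zoom trajectory of a tracked feature contributes one line."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in lines:
        d = np.asarray(d, float) / np.linalg.norm(d)
        M = np.eye(2) - np.outer(d, d)  # projector onto the line's normal space
        A += M
        b += M @ np.asarray(p, float)
    return np.linalg.solve(A, b)

# Synthetic check: trajectories radiate from an assumed center (315, 242).
c = np.array([315.0, 242.0])
rng = np.random.default_rng(3)
lines = []
for _ in range(20):
    d = rng.normal(size=2)
    p = c + rng.uniform(1, 50) * d / np.linalg.norm(d)
    lines.append((p, d))
print(principal_point(lines))
```

With noisy real tracks the same normal equations simply yield the point minimizing the summed squared distances to all trajectory lines.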

14.
To enable continuous scanning with a structured-light vision system built from a projector and a camera, the spatial relationship between any light plane projected by the projector and the camera's image plane must be computed, which in turn requires the relative pose between the camera's optical center and the projector's optical center. The camera's intrinsic parameters are first obtained; four corner points on the calibration board are then selected as feature points, and their extrinsic parameters are computed using the camera intrinsics, giving the coordinates of the four feature points in the camera coordinate system. The coordinates of the feature points in the projector coordinate system are solved from the projector's own parameters, from which the relative pose between the camera's and the projector's optical centers is computed, completing the structured-light vision calibration. Using the calibrated vision system to measure the distances between corner points on the calibration board, the maximum relative error is 0.277%, showing that the calibration algorithm is applicable to projector-camera structured-light vision systems.

15.
An approach to integrating stereo disparity, camera vergence, and lens focus to exploit their complementary strengths and weaknesses through active control of camera focus and orientations is presented. In addition, the aperture and zoom settings of the cameras are controlled. The result is an active vision system that dynamically and cooperatively interleaves image acquisition with surface estimation. A dense composite map of a single contiguous surface is synthesized by automatically scanning the surface and combining estimates of adjacent, local surface patches. This problem is formulated as one of minimizing a pair of objective functions. The first such function is concerned with the selection of a target for fixation. The second objective function guides the surface estimation process in the vicinity of the fixation point. Calibration parameters of the cameras are treated as variables during optimization, thus making camera calibration an integral, flexible component of surface estimation. An implementation of this method is described, and a performance evaluation of the system is presented. An average absolute error of less than 0.15% in estimated depth was achieved for a large surface having a depth of approximately 2 m.

16.
Real-time and high performance occluded object imaging is a big challenge for many computer vision applications. In recent years, camera array synthetic aperture theory has proved to be a potentially powerful way to solve this problem. However, due to the high cost of complex system hardware, the severe blur of occluded object imaging, and the slow speed of image processing, the existing camera array synthetic aperture imaging algorithms and systems are difficult to apply in practice. In this paper, we present a novel handheld system to handle those challenges. The objective of this work is to design a convenient system for real-time high quality object imaging even under severe occlusion. The main characteristics of our work include: (1) To the best of our knowledge, this is the first real-time handheld system for seeing occluded objects in the synthetic imaging domain using color and depth images. (2) A novel sequential synthetic aperture imaging framework is designed to achieve seamless interaction among multiple novel modules; this framework includes object probability generation, virtual camera array generation, and sequential synthetic aperture imaging. (3) In the virtual camera array generation module, based on the integration of color and depth information, a novel feature set iterative optimization algorithm is presented, which can improve the robustness and accuracy of camera pose estimation even in dynamic occlusion scenes. Experimental results in challenging scenarios demonstrate the superiority of our system both in robustness and efficiency compared against the state-of-the-art algorithms.

17.
In order to calibrate cameras in an accurate manner, lens distortion models have to be included in the calibration procedure. Usually, the lens distortion models used in camera calibration depend on radial functions of image pixel coordinates. Such models are well-known, simple and can be estimated using just image information. However, these models do not take into account an important physical constraint of lens distortion phenomena, namely: the amount of lens distortion induced in an image point depends on the scene point depth with respect to the camera projection plane. In this paper we propose a new accurate depth dependent lens distortion model. To validate this approach, we apply the new lens distortion model to camera calibration in planar view scenarios (that is 3D scenarios where the objects of interest lie on a plane). We present promising experimental results on planar pattern images and on sport event scenarios. Nevertheless, although we emphasize the feasibility of the method for planar view scenarios, the proposed model is valid in general and can be used in any scenario where the point depth can be estimated.
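For reference, the conventional radial model mentioned above, plus an illustrative depth-dependent variant; the functional form k(z) below is an assumption for illustration, not the paper's actual model:

```python
def distort_radial(x, y, k1, k2=0.0, cx=0.0, cy=0.0):
    """Conventional radial model: the distortion applied to a point depends
    only on its image radius from the distortion center (cx, cy)."""
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    s = 1.0 + k1 * r2 + k2 * r2 ** 2
    return cx + (x - cx) * s, cy + (y - cy) * s

def distort_depth_dependent(x, y, z, k1, alpha, cx=0.0, cy=0.0):
    """Illustrative depth-dependent variant: k(z) = k1 * (1 + alpha / z),
    a hypothetical form chosen only to show the extra dependence on the
    scene point depth z."""
    return distort_radial(x, y, k1 * (1.0 + alpha / z), 0.0, cx, cy)

# The same pixel distorts differently for scene points at different depths.
near = distort_depth_dependent(0.3, 0.2, z=1.0, k1=-0.2, alpha=0.5)
far = distort_depth_dependent(0.3, 0.2, z=10.0, k1=-0.2, alpha=0.5)
```

In a purely radial model `near` and `far` would coincide; the depth term is what the paper's physical constraint introduces.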

18.
Multi-sensor fusion is widely used in environment perception for intelligent vehicles, and the spatial calibration of radar and camera is the basis of road-detection techniques with real-time information fusion. For the practical application of intelligent vehicles, a spatial calibration method for a laser radar (lidar) and a camera is proposed. Radar data and image data are acquired using a specially made calibration board; the lidar coordinate system is chosen as the world coordinate system, and the transformation between the image coordinate system and the radar coordinate system is obtained by parameter fitting, thereby achieving spatial registration of the two sensors. The method requires only the calibration board to complete the spatial calibration of the lidar and camera, achieves high calibration accuracy, unifies the world coordinate systems of multiple sensors, and avoids ambiguity in data interpretation during subsequent processing. Experimental results show that the method is simple and accurate and meets system requirements.
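Once the transformation between the lidar and camera frames has been fitted, registering a lidar return with the image is a rigid transform followed by a pinhole projection. A minimal sketch with assumed (not fitted) extrinsics and intrinsics:

```python
import numpy as np

def lidar_to_pixel(p_lidar, R, t, K):
    """Project a lidar point into the image: camera frame = R @ p + t,
    then apply the intrinsic matrix K (pinhole model, no distortion)."""
    p_cam = R @ np.asarray(p_lidar, float) + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Assumed extrinsics/intrinsics for illustration only.
R = np.eye(3)
t = np.array([0.0, -0.3, 0.1])  # e.g. lidar mounted 0.3 m above the camera
K = np.array([[700.0, 0, 320], [0, 700, 240], [0, 0, 1]])
print(lidar_to_pixel([1.0, 0.3, 5.0], R, t, K))
```

In the described method, R and t would be the quantities recovered by fitting correspondences on the calibration board rather than the fixed values used here.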

19.
We present the focal flow sensor. It is an unactuated, monocular camera that simultaneously exploits defocus and differential motion to measure a depth map and a 3D scene velocity field. It does this using an optical-flow-like, per-pixel linear constraint that relates image derivatives to depth and velocity. We derive this constraint, prove its invariance to scene texture, and prove that it is exactly satisfied only when the sensor’s blur kernels are Gaussian. We analyze the inherent sensitivity of the focal flow cue, and we build and test a prototype. Experiments produce useful depth and velocity information for a broader set of aperture configurations, including a simple lens with a pillbox aperture.

20.
In computerized numerical control (CNC) machine tools, verifying the positional agreement between the actual machining setup and its designed three-dimensional (3D) digital model is often a time-consuming and error-prone process. The model mainly contains the workpiece and jigs. A mismatch between them will cause the simulation to fail to detect collisions precisely. This paper presents an on-machine 3D vision system that quickly verifies the similarity between the actual setup and its digital model by real and virtual image processing. The system is described first. Afterwards, a simple on-machine camera calibration process is presented. This calibration process determines all of the camera's parameters with respect to the machine tool's coordinate frame. An accurate mathematical camera model (or virtual camera) is derived according to the actual imaging projection. Both camera-captured real images and system-generated virtual images are compensated to make them theoretically and practically identical. The mathematical equations are derived. Using the virtual image as a reference and superimposing the real image onto it, the operator can intuitively verify the positional agreement between the actual setup and its 3D digital model.
