Similar Documents
20 similar documents retrieved (search time: 906 ms)
1.
To enable timely diagnosis and treatment of fundus diseases in infants, a wide-field retinal imaging system for infants was developed to meet the large field-of-view requirement of such imaging. The system comprises an illumination unit, an imaging unit, and an image acquisition device. A corneal-contact fundus lens was designed with low-aberration, high-refractive-index aspheric elements to satisfy the wide-field requirement. An image acquisition device was also designed, with acquisition and processing software written in C++ for image capture, camera calibration, and geometric distortion correction. Experimental results show that the system, illuminated by multi-ring optical fibers, achieves reasonably sharp wide-field imaging over an 85° field of view, and that the calibration-based geometric distortion correction effectively compensates for the distortion of the retinal imaging light path. The system can thus provide objective, clear evidence for the diagnosis and screening of ophthalmic diseases.
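As an illustration of the calibration-based distortion correction mentioned above (not the paper's actual implementation, which is written in C++), a minimal OpenCV sketch, assuming hypothetical checkerboard image files and a hypothetical captured frame:

```python
# Sketch of calibration-based geometric distortion correction: estimate intrinsics
# and distortion coefficients from checkerboard views, then undistort a frame.
import cv2
import numpy as np

PATTERN = (9, 6)                      # hypothetical inner-corner count of the board
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in ["calib_01.png", "calib_02.png"]:          # hypothetical file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsic matrix K and distortion coefficients (k1, k2, p1, p2, k3)
_, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)

frame = cv2.imread("retina_frame.png")                  # hypothetical capture
undistorted = cv2.undistort(frame, K, dist)             # geometric correction
```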

2.
Efficient and comfortable acquisition of large 3D scenes is an important topic for many current and future applications in the field of robotics, factory and office visualization, 3DTV and cultural heritage. In this paper we present both an omnidirectional stereo vision approach for 3D modeling based on graph cut techniques and a new mobile 3D model acquisition platform where it is employed. The platform comprises a panoramic camera and a 2D laser range scanner for self localization by scan matching. 3D models are acquired just by moving the platform around and recording images at regular intervals. Additionally, we concurrently build 3D models using two supplementary laser range scanners. This enables the investigation of the stereo algorithm's quality by comparing it with the laser scanner based 3D model as ground truth, offering a more objective view of the achieved 3D model quality.

3.
Omnistereo: panoramic stereo imaging   (cited 3 times: 0 self-citations, 3 by others)
An omnistereo panorama consists of a pair of panoramic images, one for the left eye and one for the right eye. The panoramic stereo pair provides a stereo sensation over a full 360 degrees. Omnistereo panoramas can be constructed by mosaicing images from a single rotating camera. This approach also enables control of the stereo disparity, giving a larger baseline for faraway scenes and a smaller baseline for closer scenes. Capturing omnistereo panoramas with a rotating camera makes it impossible to capture dynamic scenes at video rates and limits omnistereo imaging to stationary scenes. We present two possibilities for capturing omnistereo panoramas using optics without any moving parts. A special mirror is introduced such that viewing the scene through this mirror creates the same rays as those used with the rotating camera. A lens for omnistereo panoramas is also introduced, together with the design of the mirror. Omnistereo panoramas can also be rendered by computer graphics methods to represent virtual environments.
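A minimal sketch of the strip-mosaicing idea behind omnistereo panoramas, assuming hypothetical frames from one full camera rotation and an arbitrary strip offset (a larger offset corresponds to a larger stereo baseline); it is illustrative only, not the authors' pipeline:

```python
# Sketch of strip mosaicing for an omnistereo pair: the left-eye panorama stacks
# vertical strips taken to one side of the image centre, the right-eye panorama
# stacks strips taken to the other side.
import numpy as np

def omnistereo_panoramas(frames, offset=40, strip_width=4):
    """frames: list of HxWx3 arrays from one full camera rotation."""
    h, w, _ = frames[0].shape
    cx = w // 2
    left_strips, right_strips = [], []
    for f in frames:
        left_strips.append(f[:, cx + offset: cx + offset + strip_width])
        right_strips.append(f[:, cx - offset - strip_width: cx - offset])
    return np.hstack(left_strips), np.hstack(right_strips)
```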

4.
Efficient visibility computation is a prominent requirement when designing automated camera control techniques for dynamic 3D environments; computer games, interactive storytelling and 3D media applications all need to track 3D entities while ensuring their visibility and delivering a smooth cinematic experience. Addressing this problem requires sampling a large set of potential camera positions and estimating visibility for each of them, which in practice is intractable despite the efficiency of ray-casting techniques on recent platforms. In this work, we introduce a novel GPU-rendering technique to efficiently compute occlusions of tracked targets in Toric Space coordinates, a parametric space designed for cinematic camera control. We then rely on this occlusion evaluation to derive an anticipation map predicting occlusions for a continuous set of cameras over a user-defined time window. We finally design a camera motion strategy exploiting this anticipation map to minimize the occlusions of tracked entities over time. The key features of our approach are demonstrated through comparison with traditionally used ray-casting on benchmark scenes, and through integration in multiple game-like 3D scenes with heavy, sparse and dense occluders.

5.
An occlusion metric for selecting robust camera configurations   (cited 2 times: 0 self-citations, 2 by others)
Vision based tracking systems for surveillance and motion capture rely on a set of cameras to sense the environment. The exact placement or configuration of these cameras can have a profound effect on the achievable tracking quality. Although several factors contribute, occlusion due to moving objects within the scene itself is often the dominant source of tracking error. This work introduces a configuration quality metric based on the likelihood of dynamic occlusion. Since the exact geometry of occluders cannot be known a priori, we use a probabilistic model of occlusion. This model is extensively evaluated experimentally using hundreds of different camera configurations and is found to correlate very closely with the actual probability of feature occlusion. Authors X. Chen and J. Davis were with the Computer Graphics Lab at Stanford University at the time of this research.
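A hedged Monte Carlo sketch of an occlusion-likelihood score for a camera configuration, using a hypothetical random-disc occluder model rather than the paper's probabilistic model:

```python
# Monte Carlo sketch of an occlusion-based configuration score: sample random disc
# occluders, check whether the 2D line of sight from each camera to a target point
# is blocked, and report how often no camera sees the target (lower is better).
import numpy as np

rng = np.random.default_rng(0)

def blocked(cam, target, occluder_xy, radius):
    """True if the segment cam->target passes within `radius` of the occluder centre."""
    d = target - cam
    t = np.clip(np.dot(occluder_xy - cam, d) / np.dot(d, d), 0.0, 1.0)
    closest = cam + t * d
    return np.linalg.norm(occluder_xy - closest) < radius

def occlusion_metric(cameras, target, trials=10000, n_occluders=3, radius=0.3):
    """cameras: Kx2 array of camera positions; target: 2-vector (both NumPy arrays)."""
    failures = 0
    for _ in range(trials):
        occluders = rng.uniform(-5, 5, size=(n_occluders, 2))
        visible = any(
            not any(blocked(c, target, o, radius) for o in occluders)
            for c in cameras
        )
        failures += not visible
    return failures / trials
```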

6.
In this paper, we propose to capture image-based rendering scenes using a novel approach called active rearranged capturing (ARC). Given the total number of available cameras, ARC moves them strategically on the camera plane in order to minimize the sum of squared rendering errors for a given set of light rays to be rendered. Assuming the scene changes slowly, so that the optimized camera locations remain valid at the next time instant, we formulate the problem as a recursive weighted vector quantization problem, which can be solved efficiently. The ARC approach is verified on both synthetic and real-world scenes. In particular, a large self-reconfigurable camera array is built to demonstrate ARC's performance on real-world scenes. The system renders virtual views at 5-10 frames/s, depending on scene complexity, on a moderately equipped computer. Given the virtual viewpoint, the cameras move on a set of rails to perform ARC and improve the rendering quality on the fly.
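A minimal sketch of one weighted vector-quantization (Lloyd-style) update of camera positions, with per-ray weights standing in for rendering-error estimates; the recursive formulation and real-time constraints of ARC are not reproduced:

```python
# Sketch of one weighted vector-quantization update: assign each desired light ray
# to its nearest camera on the camera plane, then move each camera to the
# error-weighted centroid of its assigned rays.
import numpy as np

def arc_update(cameras, ray_xy, weights):
    """cameras: Kx2 float camera positions on the camera plane,
       ray_xy:  Nx2 intersections of the desired light rays with that plane,
       weights: N   per-ray rendering-error estimates (hypothetical inputs)."""
    d2 = ((ray_xy[:, None, :] - cameras[None, :, :]) ** 2).sum(-1)   # N x K distances
    assign = d2.argmin(axis=1)                                       # nearest camera
    new_cams = cameras.copy()
    for k in range(len(cameras)):
        m = assign == k
        if m.any():
            w = weights[m][:, None]
            new_cams[k] = (w * ray_xy[m]).sum(0) / w.sum()           # weighted centroid
    return new_cams
```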

7.
This paper presents an intelligent, automatically controlled camera based on visual feedback. The camera housing contains actuators that change the orientation of the camera, enabling a full rotation around the vertical axis (pan) and 90° around the horizontal axis (tilt). The image acquisition, processing and analysis system, together with a camera driver, is implemented in a Xilinx Spartan-6 FPGA device. An original, innovative reconfigurable system architecture has been developed. The FPGA device is connected directly to eight independently operated SRAM memory banks. A prototype device has been constructed with a real-time tracking algorithm, enabling automatic control of the camera's position. The device has been tested indoors and outdoors. The camera is able to keep a tracked object close to the center of its field of view. The power consumption of the control system is 2 W. The reconfigurable part reaches a computing performance of 3200 MOPS.
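A minimal sketch of the proportional pan/tilt correction implied by keeping the tracked object near the image centre; gains and field-of-view values are hypothetical, and the actual system runs this logic in FPGA hardware rather than software:

```python
# Sketch of a proportional pan/tilt correction that steers the camera so the
# tracked centroid moves toward the image centre.
def pan_tilt_step(cx, cy, width, height, fov_h_deg=60.0, fov_v_deg=45.0, gain=0.5):
    """cx, cy: tracked object centroid in pixels; returns (d_pan, d_tilt) in degrees."""
    err_x = (cx - width / 2) / (width / 2)      # normalised horizontal error, -1..1
    err_y = (cy - height / 2) / (height / 2)
    d_pan = gain * err_x * (fov_h_deg / 2)      # rotate toward the object
    d_tilt = -gain * err_y * (fov_v_deg / 2)    # image y grows downward
    return d_pan, d_tilt
```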

8.
This paper presents a novel compressed sensing (CS) algorithm and camera design for light field video capture using a single-sensor consumer camera module. Unlike microlens light field cameras, which sacrifice spatial resolution to obtain angular information, our CS approach is designed for capturing light field videos with high angular, spatial, and temporal resolution. The compressive measurements required by CS are obtained using a random color-coded mask placed between the sensor and aperture planes. The convolution of the incoming light rays from different angles with the mask results in a single image on the sensor, achieving a significant reduction in the bandwidth required for capturing light field videos. We propose changing the random pattern on the spectral mask between consecutive frames of a video sequence and extracting spatio-angular-spectral-temporal 6D patches. Our CS reconstruction algorithm for light field videos recovers each frame while taking the neighboring frames into account, achieving significantly higher reconstruction quality with reduced temporal incoherencies compared with previous methods. Moreover, a thorough analysis of various sensing models for compressive light field video acquisition is conducted to highlight the advantages of our method. The results show a clear advantage of our method for monochrome sensors as well as sensors with color filter arrays.
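A simplified, monochrome sketch of the coded-mask forward measurement model described above (angular views modulated by a per-frame random mask and summed on the sensor); it does not include the paper's reconstruction algorithm, and the wavelength dimension is omitted:

```python
# Sketch of the coded-mask forward model: each angular view of the light field is
# modulated by a random code and the modulated views sum into one sensor image.
import numpy as np

rng = np.random.default_rng(1)

def coded_measurement(light_field, masks):
    """light_field: A x H x W array (A angular views),
       masks:       A x H x W modulation of each view by the coded mask."""
    return (light_field * masks).sum(axis=0)             # single sensor image

A, H, W = 5 * 5, 64, 64
light_field = rng.random((A, H, W))
masks = (rng.random((A, H, W)) > 0.5).astype(float)      # random binary code per view
sensor_image = coded_measurement(light_field, masks)
```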

9.
Automatically focusing on and seeing an occluded moving object in a cluttered, complex scene is a significant challenge for many computer vision applications. In this paper, we present a novel synthetic aperture imaging approach to solve this problem. The unique characteristics of this work include the following: (1) To the best of our knowledge, this work is the first to simultaneously solve the camera array auto-focusing and occluded moving object imaging problems. (2) A unified framework is designed to achieve seamless interaction between the focusing and imaging modules. (3) In the focusing module, a local and global constraint-based optimization algorithm is presented to dynamically estimate the focus plane of the moving object. (4) In the imaging module, a novel visibility-analysis-based active synthetic aperture imaging approach is proposed to remove the occluder and significantly improve the quality of occluded object imaging. An active camera array system has been set up and evaluated in challenging indoor and outdoor scenes. Extensive experimental results with qualitative and quantitative analyses demonstrate the superiority of the proposed approach compared with state-of-the-art approaches.
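A minimal shift-and-average sketch of synthetic aperture refocusing on a fronto-parallel plane, which is the basic operation behind seeing through occluders; the paper's focus-plane estimation and visibility analysis are not shown:

```python
# Shift-and-average sketch of synthetic aperture refocusing: each camera's image is
# shifted by a disparity proportional to its baseline divided by the focus depth,
# and the shifted images are averaged; content off the focus plane blurs out.
import numpy as np

def synthetic_aperture(images, baselines_xy, depth, focal_px):
    """images: list of HxW arrays; baselines_xy: Nx2 camera offsets (metres);
       depth: focus-plane depth (metres); focal_px: focal length in pixels."""
    acc = np.zeros_like(images[0], dtype=float)
    for img, (bx, by) in zip(images, baselines_xy):
        dx = int(round(focal_px * bx / depth))       # per-camera disparity
        dy = int(round(focal_px * by / depth))
        acc += np.roll(img, shift=(dy, dx), axis=(0, 1))
    return acc / len(images)
```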

10.
Kuang Weijun, 《微型电脑应用》 (Microcomputer Applications), 2011, 27(8): 24-27, 73
A novel dual-camera system for video surveillance is proposed, in which a panoramic camera is combined with a PTZ (pan-tilt-zoom) camera so that targets can be detected and tracked over a wide area while detailed images of them are captured. Motion detection is performed on the images acquired by the panoramic camera; once the position of a moving object is obtained, the PTZ camera is directed at it for detection and analysis, achieving data fusion between the two cameras. The reflective mirror of the panoramic camera was designed, the dual-camera system was calibrated, and experiments in a laboratory environment verified the system's performance.

11.
Bitmask Soft Shadows   (cited 4 times: 0 self-citations, 4 by others)
Recently, several real-time soft shadow algorithms have been introduced which all compute a single shadow map and use its texels to obtain a discrete scene representation. The resulting micropatches are backprojected onto the light source and the light areas they occlude are accumulated to estimate overall light occlusion. This approach ignores patch overlaps, however, which can lead to objectionable artifacts. In this paper, we propose to determine the visibility of the light source with a bit field where each bit tracks the visibility of a sample point on the light source. This approach not only avoids overlap-related artifacts but also offers a solution to the important occluder fusion problem. Hence, it becomes possible to correctly incorporate information from multiple depth maps. In addition, a new interpretation of the shadow map data is suggested which often provides superior visual results. Finally, we show how the search area for potential occluders can be reduced substantially.
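A minimal sketch of the bit-field idea: one bit per light-source sample, with bitwise OR fusing occluders so overlaps are not counted twice (sample count and masks are hypothetical, and the geometric test producing each mask is not shown):

```python
# Sketch of bitmask light-source visibility: each of N sample points on the area
# light owns one bit; every occluder sets the bits of the samples it blocks, and
# bitwise OR fuses occluders without counting overlaps twice.
N_SAMPLES = 64                                   # light samples -> one 64-bit mask

def occlusion_fraction(occluder_masks):
    """occluder_masks: iterable of 64-bit ints, one per back-projected micropatch."""
    blocked = 0
    for m in occluder_masks:
        blocked |= m                             # occluder fusion via OR
    return bin(blocked).count("1") / N_SAMPLES   # fraction of the light occluded

# Two overlapping occluders that each block the same half of the light:
print(occlusion_fraction([0x00000000FFFFFFFF, 0x00000000FFFFFFFF]))   # 0.5, not 1.0
```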

12.
We introduce a new approach to capturing refraction in transparent media, which we call light field background oriented Schlieren photography. By optically coding the locations and directions of light rays emerging from a light field probe, we can capture changes of the refractive index field between the probe and a camera or an observer. Our prototype capture setup consists of inexpensive off-the-shelf hardware, including inkjet-printed transparencies, lenslet arrays, and a conventional camera. By carefully encoding the color and intensity variations of 4D light field probes, we show how to code both spatial and angular information of refractive phenomena. Such coding schemes are demonstrated to allow a new, single-image approach to reconstructing transparent surfaces, such as thin solids or surfaces of fluids. The captured visual information is used to reconstruct refractive surface normals and a sparse set of control points independently from a single photograph.

13.
A Solution for Recognizing and Tracking a Single Moving Object in Images   (cited 2 times: 0 self-citations, 2 by others)
A solution is proposed for recognizing and tracking a single moving object in images. For image segmentation, an implementation based on watershed segmentation from mathematical morphology is proposed; over-segmentation is largely resolved by an algorithm that merges transitional regions according to the characteristics of the single moving object, yielding fully automatic segmentation without any manually set thresholds. For target extraction, a method is proposed for automatically extracting the single moving object from consecutive frames, which handles both a fixed camera and a camera that moves with the object. For target tracking, a mask is applied to the matching template and adjusted automatically, making the tracking of the moving object more robust.
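A minimal sketch of the masked template-matching step using OpenCV, where pixels outside the object mask are ignored during matching; the mask construction and automatic adjustment described above are not shown:

```python
# Sketch of masked template matching: pixels where the mask is zero do not
# contribute to the match score, so background inside the template's bounding box
# does not disturb tracking.
import cv2

def track(frame_gray, template_gray, template_mask):
    """template_mask: uint8, non-zero on pixels that belong to the moving object."""
    res = cv2.matchTemplate(frame_gray, template_gray,
                            cv2.TM_CCORR_NORMED, mask=template_mask)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    return max_loc, max_val        # top-left corner of the best match and its score
```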

14.
Liu Dongdong, 《微型电脑应用》 (Microcomputer Applications), 2012, 28(3): 43-45, 68-69
A multi-camera surveillance network based on panoramic vision is designed. The panoramic camera has a wide field of view and can detect and track targets over a large area; the pan-tilt camera, whose viewing direction has several degrees of freedom, can capture high-resolution images of targets. By coordinating the panoramic and pan-tilt cameras through multi-sensor data fusion, a hierarchical tracking algorithm, and a multi-camera scheduling algorithm, the system detects and tracks multiple moving targets over a wide area and captures clear images of them. Experiments verify the effectiveness and soundness of the system.

15.
In this paper we present a method for the calibration of multiple cameras based on the extraction and use of the physical characteristics of a one-dimensional invariant pattern defined by four collinear markers. The advantages of this kind of pattern stand out in two key steps of the calibration process. In the initial step, concerned with capturing sample points, the proposed method uses a new technique for capturing and recognizing a robust sample of projective-invariant patterns, which allows more than one invariant pattern to be captured simultaneously in the tracking area and each pattern, as well as each of its markers, to be recognized individually. This process is executed in real time while the sample of calibration points is captured by the cameras of our system, and it yields a larger and more robust set of sample points than other patterns used in multi-camera calibration. In the final step, concerned with optimizing the camera parameters, we exploit the collinearity of the invariant pattern and add it to the optimization model, which gives better results in the computation of the camera parameters. We present the results obtained when calibrating two multi-camera systems with the proposed method and compare them with other methods from the literature.
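A minimal sketch of recognizing a four-marker collinear pattern via its cross-ratio, the classical projective invariant of four collinear points; this is a standard construction and may differ from the paper's exact descriptor:

```python
# Sketch of pattern recognition by cross-ratio: the cross-ratio of four collinear
# points is preserved under perspective projection, so an observed quadruple can be
# matched against the known ratio of a physical marker pattern.
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Points as 2D image coordinates, assumed (nearly) collinear and ordered."""
    d = lambda a, b: np.linalg.norm(np.asarray(a) - np.asarray(b))
    return (d(p1, p3) * d(p2, p4)) / (d(p1, p4) * d(p2, p3))

def matches_pattern(observed_points, reference_cr, tol=0.02):
    return abs(cross_ratio(*observed_points) - reference_cr) < tol
```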

16.
A Cortex-A8-based remote video moving-object detection system is designed. The system consists of a video acquisition end built around a Cortex-A8 core and a moving-object detection end running on VS2015 with OpenCV 3.2. The acquisition end uses an S5PV210 chip as its processor and a USB camera for video capture; a Linux operating system is set up to encode the video data with H.264, and the encoded data are packetized with RTP and transmitted over the network. On a PC, the video data are received and decoded with FFMPEG, an improved ViBe algorithm is implemented using functions from the OpenCV library, and the improved ViBe algorithm is used to detect moving objects. Testing shows that the system effectively reduces the amount of video data and yields clear images of the moving objects.
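A minimal sketch of the detection stage on decoded frames. ViBe is not shipped with stock OpenCV, so MOG2 background subtraction is used here as a stand-in; the H.264/RTP transport and the paper's ViBe improvements are not reproduced, and the stream source name is hypothetical:

```python
# Sketch of motion detection on decoded frames: background subtraction, mask
# cleanup, and bounding boxes around the remaining foreground blobs.
import cv2

cap = cv2.VideoCapture("rtp_stream.sdp")          # hypothetical stream descriptor
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)                  # foreground mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN,
                          cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]   # OpenCV 3.x / 4.x
    for c in contours:
        if cv2.contourArea(c) > 200:              # drop small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```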

17.
In this paper, we present a three-dimensional (3D) model-based video coding scheme for streaming static-scene video compactly while also enabling temporal and spatial scalability according to network or terminal capability and providing 3D functionality. The proposed format is based on encoding the sequence of reconstructed models using second-generation wavelets and efficiently multiplexing the resulting geometric, topological, texture, and camera motion binary representations. The wavelet decomposition can be adaptive in order to fit the images and scene content. To ensure temporal scalability, the representation is based on a common connectivity for all 3D models, which also allows straightforward morphing between successive models, ensuring visual continuity at no additional cost. The method proves to be better than previous methods for video encoding of static scenes, even better than state-of-the-art video coders such as H.264 (also known as MPEG-4 AVC). Other applications of our approach are smoothing the camera path to suppress jitter from hand-held acquisition, and the fast transmission and real-time visualization of virtual environments obtained by video capture, for virtual or augmented reality and interactive walk-throughs in photo-realistic 3D environments around the original camera path.
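A minimal sketch of the camera-path-smoothing application mentioned above, low-pass filtering per-frame camera positions with a Gaussian kernel; this is a generic approach, not the paper's method, and rotation smoothing on SO(3) is not shown:

```python
# Sketch of camera-path smoothing for jitter suppression: convolve each coordinate
# of the per-frame camera centres with a normalised Gaussian kernel.
import numpy as np

def smooth_path(positions, sigma=2.0):
    """positions: N x 3 camera centres, one per frame."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    padded = np.pad(positions, ((radius, radius), (0, 0)), mode="edge")
    return np.stack([np.convolve(padded[:, i], kernel, mode="valid")
                     for i in range(positions.shape[1])], axis=1)
```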

18.
It is a well known classical result that, given the image projections of three known world points, it is possible to solve for the pose of a calibrated perspective camera with up to four pairs of solutions. We solve the generalised problem where the camera is allowed to sample rays in some arbitrary but known fashion and is not assumed to perform a central perspective projection. That is, given three back-projected rays that emanate from a camera or multi-camera rig in an arbitrary but known fashion, we seek the possible poses of the camera such that the three rays meet three known world points. We show that the generalised problem has up to eight solutions that can be found as the intersections between a circle and a ruled quartic surface. A minimal and efficient constructive numerical algorithm is given to find the solutions. The algorithm derives an octic polynomial whose roots correspond to the solutions. In the classical case, when the three rays are concurrent, the ruled quartic surface and the circle possess a reflection symmetry such that their intersections come in symmetric pairs. This manifests itself in the vanishing of the odd-order terms of the octic polynomial; as a result, the up to four pairs of solutions can be found in closed form. The proposed algorithm can be used to solve for the pose of any type of calibrated camera or camera rig. The intended use for the algorithm is in a hypothesise-and-test architecture.

19.
More and more processing of visual information is nowadays done by computers, but the images captured by conventional cameras are still based on the pinhole principle inspired by our own eyes. This principle, though, is not necessarily the optimal image-formation principle for automated processing of visual information. Each camera samples the space of light rays according to some pattern. If we understand the structure of the space formed by the light rays passing through a volume of space, we can determine the camera, or in other words the sampling pattern of light rays, that is optimal with regard to a given task. In this work we analyze the differential structure of the space of time-varying light rays described by the plenoptic function and use this analysis to relate the rigid motion of an imaging device to the derivatives of the plenoptic function. The results can be used to define a hierarchy of camera models with respect to the structure-from-motion problem and to formulate a linear, scene-independent estimation problem for the rigid motion of the sensor purely in terms of the captured images.

20.
Occluder Shadows for Fast Walkthroughs of Urban Environments   (cited 1 time: 0 self-citations, 1 by others)
This paper describes a new algorithm that employs image-based rendering for fast occlusion culling in complex urban environments. It exploits graphics hardware to render and automatically combine a relatively large set of occluders. The algorithm is fast to calculate and therefore also useful for scenes of moderate complexity and walkthroughs with over 20 frames per second. Occlusion is calculated dynamically and does not rely on any visibility precalculation or occluder preselection. Speed-ups of one order of magnitude can be obtained.
