Similar Articles
20 similar articles found (search time: 31 ms)
1.
To overcome the dynamic range limitations of images taken with regular consumer cameras, several methods exist for creating high dynamic range (HDR) content. Current low-budget solutions apply temporal exposure bracketing, which is not applicable to dynamic scenes or HDR video. In this article, a framework is presented that utilizes two cameras to realize spatial exposure bracketing, in which the different exposures are distributed among the cameras. Such a setup allows for HDR images of dynamic scenes and HDR video due to its frame-by-frame operating principle, but faces challenges in the stereo matching and HDR generation steps. The modules in this framework are therefore selected to alleviate these challenges and to properly handle under- and oversaturated regions. In comparison to existing work, the camera response calculation is shifted to an offline process, and masking with a saturation map before the actual HDR generation is proposed. The first aspect enables the use of more complex camera setups with different sensors and provides robust camera responses. The second ensures that only necessary pixel values are used from the additional camera view and thus reduces errors in the final HDR image. The resulting HDR images are compared with the quality metric HDR-VDP-2, and numerical results are given for the first time. For the Middlebury test images, an average gain of 52 points on a 0-100 mean opinion score is achieved in comparison to temporal exposure bracketing with camera motion. Finally, HDR video results are provided.
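The saturation-map masking described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; the function name, thresholds, and exposure-time handling are assumptions, and the auxiliary view is taken to be already aligned by the stereo-matching stage.

```python
import numpy as np

def merge_with_saturation_mask(main_ldr, aux_ldr, t_main, t_aux,
                               lo=0.05, hi=0.95):
    """Merge two linearized exposures ([0, 1]) into a radiance map.

    The saturation map marks pixels where the main view is under- or
    overexposed; only there are values pulled from the auxiliary view.
    """
    main_rad = main_ldr / t_main   # divide by exposure time -> radiance
    aux_rad = aux_ldr / t_aux
    sat = (main_ldr < lo) | (main_ldr > hi)   # saturation map
    hdr = np.where(sat, aux_rad, main_rad)    # aux only where necessary
    return hdr, sat
```

Restricting the auxiliary view to saturated pixels is exactly what limits the impact of stereo-matching errors on the final HDR image.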

2.
Multiview images captured by multicamera systems are generally not uniform in the colour domain. In this paper, we propose a novel colour correction method for multicamera systems that can (i) be applied not only to dense multicamera systems but also to sparse multicamera configurations and (ii) obtain an average colour pattern among all cameras. Our proposed colour correction method starts from any camera on the array and proceeds sequentially, following a certain path, over pairs of cameras until it reaches the starting point, triggering several iterations. The iteration stops when the correction applied to the images becomes small enough. We propose to calculate the colour correction transformation based on energy minimisation, using dynamic programming over a nonlinearly weighted Gaussian-based kernel density function of geometrically corresponding feature points, obtained by a modified scale invariant feature transform (SIFT) method, from several time instances and their Gaussian-filtered images. This approach guarantees convergence of the iteration procedure without any visible colour distortion. The colour correction is done for each colour channel independently, and the process is entirely automatic once the parameters have been estimated by the algorithm. Experimental results show that the proposed iteration-based algorithm can colour-correct dense and sparse multicamera systems alike. The correction always converges to the average colour intensity among viewpoints and outperforms the conventional method.
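The ring-shaped iteration with a convergence test can be sketched as follows. This is only an illustration of the loop structure: the paper's kernel-density energy minimisation is replaced here by a simple per-channel gain toward the pairwise mean, and all names are hypothetical.

```python
import numpy as np

def ring_color_correct(images, max_iters=50, tol=1e-3):
    """Sequentially correct camera pairs around a ring until the applied
    correction becomes small enough (the stopping rule described above)."""
    imgs = [img.astype(float).copy() for img in images]
    n = len(imgs)
    for _ in range(max_iters):
        max_update = 0.0
        for i in range(n):
            j = (i + 1) % n   # next camera along the ring path
            target = 0.5 * (imgs[i].mean(axis=(0, 1)) +
                            imgs[j].mean(axis=(0, 1)))
            gain = target / np.maximum(imgs[j].mean(axis=(0, 1)), 1e-8)
            imgs[j] *= gain   # per-channel correction, channels independent
            max_update = max(max_update, float(np.abs(gain - 1.0).max()))
        if max_update < tol:  # correction small enough: stop iterating
            break
    return imgs
```

Even this crude surrogate converges to a common average intensity on constant test images, which is the behaviour the paper guarantees for its full formulation.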

3.
A marker-based method for dynamic 3-D measurement of helicopter rotors   (Cited: 2, self-citations: 1, other citations: 1)
To acquire the dynamic 3-D coordinates of a helicopter rotor quickly and conveniently, a dynamic 3-D measurement method based on coded markers and stereo vision is proposed. First, stereo calibration and image rectification are performed using a planar calibration method and the Bouguet rectification algorithm. Second, based on the shape and grey-level characteristics of the markers, image binarization and connected-component labelling are used to detect and locate the centres of the coded markers and determine their pixel coordinates. The coded markers are then decoded with a neighbourhood-scanning method to obtain their code values, which are used to match corresponding markers between the left and right views. Finally, the 3-D coordinates of the markers are computed from the projection equations, realizing dynamic 3-D measurement of the helicopter rotor. Simulation experiments show that the method achieves high accuracy, with an actual measurement error below 0.3 mm, meeting the requirements of dynamic rotor measurement.
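The final step, recovering 3-D marker coordinates from the projection equations of the two views, can be sketched with standard linear (DLT) triangulation. This is a generic textbook illustration, not the paper's exact formulation; the projection matrices and point coordinates are assumed inputs.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one matched marker centre.

    P1, P2: 3x4 projection matrices of the calibrated stereo pair.
    x1, x2: (u, v) pixel coordinates of the same coded marker in the
    left and right views (matched via the decoded code values).
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: right singular vector of A.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```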

4.
In this paper, we present an automatic foreground object detection method for videos captured by freely moving cameras. While we focus on extracting a single foreground object of interest throughout a video sequence, our approach requires neither training data nor user interaction. Based on SIFT correspondences across video frames, we construct robust SIFT trajectories in terms of a calculated foreground feature point probability. This probability is able to determine candidate foreground feature points in each frame without user interaction such as parameter or threshold tuning. Furthermore, we propose a probabilistic consensus foreground object template (CFOT), which is directly applied to the input video for moving object detection via template matching. Our CFOT can be used to detect the foreground object in videos captured by a fast moving camera, even if the contrast between the foreground and background regions is low. Moreover, our proposed method can be generalized to foreground object detection in dynamic backgrounds, and is robust to viewpoint changes across video frames. The contribution of this paper is threefold: (1) we provide a robust decision process to detect the foreground object of interest in videos with contrast and viewpoint variations; (2) our proposed method builds longer SIFT trajectories, which is shown to be robust and effective for object detection tasks; and (3) the construction of our CFOT is not sensitive to the initial estimation of the foreground region of interest, while its use achieves excellent foreground object detection results on real-world video data.

5.
Stereoscopic and high dynamic range (HDR) imaging are two methods that enhance video content by respectively improving depth perception and light representation. A large body of research has looked into each of these technologies independently, but very little work has attempted to combine them due to limitations in capture and display; HDR video capture (for a wide range of exposure values over 20 f-stops) is not yet commercially available and few prototype HDR video cameras exist. In this work we propose techniques which facilitate stereoscopic high dynamic range (SHDR) video capture by using an HDR and LDR camera pair. Three methods are proposed: one based on generating the missing HDR frame by warping the existing one using a disparity map; increasing the range of LDR video using a novel expansion operator; and a hybrid of the two where expansion is used for pixels within the LDR range and warping for the rest. Generated videos were compared to the ground truth SHDR video captured using two HDR video cameras. Results show little overall error and demonstrate that the hybrid method produces the least error of the presented methods.
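The hybrid rule, expansion for in-range pixels and warping for the rest, can be sketched as below. The expansion operator shown is a toy inverse-gamma-plus-scale stand-in, not the paper's novel operator; thresholds and the peak luminance are illustrative assumptions, and the warped HDR frame is taken as given.

```python
import numpy as np

def expand_ldr(ldr, max_lum=1000.0, gamma=2.2):
    # Toy expansion operator: undo display gamma, then scale to a target
    # peak luminance. It only shows the general shape such operators take.
    return np.power(np.clip(ldr, 0.0, 1.0), gamma) * max_lum

def hybrid_shdr(ldr, warped_hdr, lo=0.02, hi=0.98, max_lum=1000.0):
    # Hybrid method: expand pixels within the usable LDR range, and fall
    # back to the disparity-warped HDR frame for the saturated rest.
    in_range = (ldr >= lo) & (ldr <= hi)
    return np.where(in_range, expand_ldr(ldr, max_lum), warped_hdr)
```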

6.
贺理  陈果  郭宏  金伟其 《红外技术》2020,42(4):340-347
高动态范围成像技术能够全面有效反映场景信息,有利于在高动态范围场景下获得高质量的成像。但当前常用的基于单台相机的多次曝光融合方法在动态场景下易出现“鬼影”问题,基于多个传感器同时曝光的系统复杂且价格昂贵,基于单幅低动态范围图像的拓展方法易丢失欠曝光或过曝光区域的细节信息,且多用于较好的照明条件。针对低照度动态场景成像,研究了一种基于双通道低照度CMOS相机的高动态范围图像融合方法,对双通道CMOS相机采集低照度动态场景两幅不同曝光图像,依据累计直方图拓展原则分别进行动态范围拓展,并采用像素级融合方法对动态范围拓展的序列图像进行融合。实验表明,动态范围拓展融合方法可满足低照度动态场景下获取高动态范围图像的应用要求,获得更佳的成像质量。  相似文献   

7.
In fingerprint recognition systems, feature extraction is an important part because of its impact on the final performance of the overall system, particularly for low-quality images, which pose significant challenges to traditional fingerprint feature extraction methods. In this work, we make two major contributions. First, a novel feature extraction method for low-quality fingerprint images is proposed, which mimics magnetic energy attracting iron filings: image energies attract uniformly distributed points to form the final features that describe a fingerprint. Second, we created a new low-quality fingerprint image database to evaluate the proposed method. We used a mobile phone camera to capture the fingerprints of 136 different persons, with five samples each, for a total of 680 fingerprint images. To match the computed features, we used dynamic time warping and evaluated the performance of our system with a k-nearest neighbor classifier. Further, we represent the features by their probability density functions to evaluate the method with some other classifiers. The highest identification accuracy recorded across several experiments reached 95.11% on our in-house database. The experimental results show that the proposed method can serve as a general feature extraction method for other applications.
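The matching stage relies on classic dynamic time warping (DTW), which can be sketched in pure Python; this is the standard textbook recurrence, not code from the paper, applied here to 1-D feature sequences.

```python
def dtw(a, b):
    """Dynamic-time-warping distance between two 1-D feature sequences.

    Builds the cumulative-cost table d[i][j] = cost(a[i-1], b[j-1]) plus
    the cheapest of the three allowed predecessor cells.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],       # insertion
                                 d[i][j - 1],       # deletion
                                 d[i - 1][j - 1])   # match
    return d[n][m]
```

DTW's tolerance to local stretching is what makes it suitable for features extracted from fingerprints captured at varying scales and qualities.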

8.
Multiple images with different exposures are used to produce a high dynamic range (HDR) image. A high-sensitivity setting is sometimes needed for capturing images in low-light conditions, as in an indoor room. However, current digital cameras do not produce a high-quality HDR image when noise occurs in low-light conditions or at high-sensitivity settings. In this paper, we propose a noise reduction method for generating HDR images from a set of low dynamic range (LDR) images with different exposures, in which ghost artifacts are effectively removed using image registration and local motion information. In the high-sensitivity setting, motion information is used in generating the HDR image. We analyze the characteristics of the proposed method and compare the performance of the proposed and existing HDR image generation methods, using Reinhard et al.'s global tone mapping method to display the final HDR images. Experiments with several sets of test LDR images with different exposures show that the proposed method outperforms existing methods in terms of visual quality and computation time.
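One common way to turn local motion information into a ghost-suppression weight is to down-weight pixels whose registered exposure disagrees with the reference after exposure compensation. The sketch below illustrates that general idea only; the weighting form, names, and tolerance are assumptions, not the paper's method.

```python
import numpy as np

def ghost_weights(ref, aligned, ratio, tol=0.1):
    """Weight in [0, 1] per pixel of a registered LDR frame.

    ref, aligned: linearized intensities in [0, 1]; ratio is the exposure
    ratio aligned/ref. Pixels that deviate from the exposure-compensated
    prediction (likely local motion) receive weight near zero.
    """
    predicted = np.clip(ref * ratio, 0.0, 1.0)
    return np.exp(-((aligned - predicted) ** 2) / (2.0 * tol ** 2))
```

During the merge, multiplying each frame's contribution by such a weight keeps moving-object pixels from several exposures out of the same HDR pixel, which is what produces ghosts.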

9.
郭汉洲  郭立红  吕游 《液晶与显示》2016,31(11):1070-1078
To address SIFT's weakness in describing the geometric features of objects under viewpoint changes, this paper proposes a local feature extraction method that combines maximally stable extremal region (MSER) detection with an improved second-order gradient histogram descriptor. First, a new maximal-stability criterion improves detection for irregularly shaped regions and under blur. Then, the improved second-order gradient histogram is used to extract a local descriptor for each MSER region; Gaussian weighting accounts for the influence of different pixels on the region centre, improving stability. Finally, the algorithm is validated by matching standard test images and real captured images. Experimental results show that the method still achieves a matching rate above 70% under viewpoint changes, outperforming SIFT. Compared with traditional methods, its detection is more stable, it remains effective for irregularly shaped regions, and it is suitable for object matching under viewpoint changes.

10.
A method for recovering the field-of-view (FOV) boundary lines of multiple cameras is studied, based on Harris corner detection and homography matrices. Harris corners are extracted as image features; feature points are matched between images with overlapping regions, and the homography between the images is computed from the matched points; finally, the FOV boundary lines are computed from the image border points and the inter-image homographies. The method recovers the camera FOV boundary lines accurately and robustly.
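The homography-from-matches step can be sketched with the standard direct linear transform (DLT); this is a generic illustration under the assumption of at least four noise-free correspondences, not the paper's implementation, and omits the normalization and robust estimation a practical system would add.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous)
    from >= 4 matched point pairs, via SVD of the DLT system."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_point(H, p):
    """Apply H to a 2-D point, e.g. an image border point whose mapped
    position traces the FOV boundary line in the other view."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```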

11.
Non-parametric geometric correction and stitching for panoramic video cameras   (Cited: 7, self-citations: 7, other citations: 0)
A new geometric correction and stitching method for multi-camera panoramic video is proposed. The panoramic camera rig first captures a sparse grid of feature points in a standard calibration field; adaptive mesh subdivision then refines the correspondence between the captured images and the calibration-field coordinates to pixel level, so that geometric correction and stitching of the panoramic video can be obtained by direct mapping. As long as the relative positions of the camera array remain fixed, a single calibration can be reused many times. The method avoids the feature detection, matching, and parameter optimisation steps of ordinary stitching, and its advantages are most apparent with strongly distorted wide-angle lenses and small overlap between adjacent cameras; it also transfers easily to similar applications. A prototype system verifies the method's effectiveness.

12.
Interactive 3-D Video Representation and Coding Technologies   (Cited: 5, self-citations: 0, other citations: 5)
Interactivity, in the sense of being able to explore and navigate audio-visual scenes by freely choosing viewpoint and viewing direction, is a key feature of new and emerging audio-visual media. This paper gives an overview of suitable technology for such applications, with a focus on international standards, which are beneficial for consumers, service providers, and manufacturers. We first give a general classification and overview of interactive scene representation formats as commonly used in the computer graphics literature. Then, we describe popular standard formats for interactive three-dimensional (3-D) scene representation and creation of virtual environments, the virtual reality modeling language (VRML) and the MPEG-4 BInary Format for Scenes (BIFS), with some examples. Recent extensions to MPEG-4 BIFS, the Animation Framework eXtension (AFX), providing advanced computer graphics tools, are explained and illustrated. New technologies mainly targeted at reconstruction, modeling, and representation of dynamic real-world scenes are further studied. The user shall be able to navigate photorealistic scenes within certain restrictions, which can be roughly defined as 3-D video. Omnidirectional video is an extension of the planar two-dimensional (2-D) image plane to a spherical or cylindrical image plane; any 2-D view in any direction can be rendered from this overall recording to give the user the impression of looking around. In interactive stereo, two views, one for each eye, are synthesized to provide the user with an adequate depth cue of the observed scene. Head-motion parallax viewing can be supported in a certain operating range if sufficient depth or disparity data are delivered with the video data. In free viewpoint video, a dynamic scene is captured by a number of cameras, and the input data are transformed into a special data representation that enables interactive navigation through the dynamic scene environment.

13.
HDR imaging was developed to reproduce real scenes by acquiring a large dynamic range. In general, an HDR image is constructed from several images of different exposures, according to the exposure value of a digital camera. Before a single HDR image is constructed, each input image is calibrated using the camera response function (CRF) to convert its image data to scene radiance. To find the CRF, conventional methods require special apparatus and reference targets, or several exposure images. This paper proposes a new HDR blending algorithm that uses only dual-exposure images. The proposed algorithm is based on the least squares method and includes spatial and intensity weighting functions. Each weighting function is used to reduce error points and improve CRF computation accuracy. In addition, a constraint is added to correct white balance at the brightness level. The rendering results show that the proposed algorithm is superior to the conventional algorithm.
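The role an intensity weighting function plays in CRF-based blending can be sketched as follows. The triangle ("hat") weight and the weighted log-radiance average are the common Debevec-style building blocks, used here as a hedged stand-in for the paper's spatial and intensity weighting functions; all names are illustrative.

```python
import numpy as np

def hat_weight(z, lo=0.0, hi=1.0):
    # Trust mid-tones; distrust values near the saturation limits.
    mid = 0.5 * (lo + hi)
    return np.where(z <= mid, z - lo, hi - z)

def radiance_from_pair(z1, z2, t1, t2, g):
    """Weighted radiance estimate from a dual-exposure pair.

    z1, z2: linear-domain pixel values at exposure times t1, t2.
    g: already-fitted log inverse response (a callable), so that
    g(z) - log(t) estimates log radiance from one exposure.
    """
    w1, w2 = hat_weight(z1), hat_weight(z2)
    num = w1 * (g(z1) - np.log(t1)) + w2 * (g(z2) - np.log(t2))
    return np.exp(num / np.maximum(w1 + w2, 1e-8))
```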

14.
A particular panoramic vision system is studied, consisting of a catadioptric assembly formed by a hyperboloidal mirror and a spherical mirror combined with an ordinary perspective camera. The imaging model of this omnidirectional camera and the computation of its epipolar curves are derived, the properties of the epipolar curves are analysed, and the theory is verified experimentally. The results show that the proposed computation obtains the epipolar curves quickly and accurately, which aids the search for corresponding points in stereo matching of panoramic images.

15.
侯幸林, 罗海波, 周培培. 《红外与激光工程》, 2017, 46(7): 726001
When fusing multiple low-dynamic-range (LDR) images into a high-dynamic-range image, traditional methods use a simplistic strategy for choosing the exposure times of the LDR captures, so the captured images carry substantial redundancy and fusion efficiency suffers badly. A multi-exposure control method based on maximising local information entropy is proposed. The relationship between the entropy of an LDR scene image and exposure time is analysed: for a low-dynamic-range scene, image entropy first increases and then decreases with exposure time, reaching a maximum at some exposure. For a high-dynamic-range scene, the exposure time range of the scene is first determined from the approximately linear relationship between mean grey response and exposure time; the scene is then divided into several low-dynamic-range regions according to the image histogram; finally, with maximum entropy as the optimisation objective, a one-dimensional search finds the optimal exposure time for each low-dynamic-range region, until all regions have been assigned one. By linking local entropy to exposure time, the method optimises exposure per region in a targeted way and avoids the drawbacks of traditional exposure control. Experiments confirm that fusing images acquired this way yields good results.
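The entropy objective and the one-dimensional search can be sketched as below. This assumes, as the abstract observes, that entropy is unimodal in exposure time for a low-dynamic-range region; the ternary search is one possible 1-D search, not necessarily the paper's, and `capture(t)` is a hypothetical stand-in for grabbing a frame at exposure t.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram (the search target)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def best_exposure(capture, t_lo, t_hi, iters=40):
    """Ternary search for the entropy-maximising exposure time, assuming
    a unimodal entropy-vs-time curve over [t_lo, t_hi]."""
    for _ in range(iters):
        m1 = t_lo + (t_hi - t_lo) / 3.0
        m2 = t_hi - (t_hi - t_lo) / 3.0
        if entropy(capture(m1)) < entropy(capture(m2)):
            t_lo = m1   # peak lies right of m1
        else:
            t_hi = m2   # peak lies left of m2
    return 0.5 * (t_lo + t_hi)
```

In the full method this search would run once per low-dynamic-range region rather than on the whole frame.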

16.
Ghosting artifacts due to misaligned imaging and the missing content of moving regions are major challenges when synthesizing high dynamic range (HDR) images from multiple low dynamic range (LDR) images with different exposures in dynamic scenes. The HDR reconstruction model should therefore align the LDR features and restore the missing content without artifacts. In this paper, a new dual-branch recursive band reconstruction network for high dynamic range imaging (DRBR-HDR) is proposed to generate credible results in regions of missing content. It not only uses global features as supplementary information to help local features from different receptive fields achieve efficient feature alignment, but also designs a series of coarse-to-fine band representations to better repair missing areas during recursion. In addition, we introduce an interactive attention mechanism for the local branches to alleviate ghosting artifacts. The experimental results demonstrate that DRBR-HDR achieves state-of-the-art performance compared with prevailing HDR reconstruction methods in various challenging scenes. Index terms: inverse tone mapping, band reconstruction, global features, high dynamic range images.

17.
18.
The extraction of depth information from dynamic scenes is an intriguing topic because of its prospective role in many applications, including free viewpoint and 3D video systems. Time-of-flight (ToF) range cameras allow the acquisition of depth maps at video rate, but they are characterized by a limited resolution, especially compared with standard color cameras. This paper presents a super-resolution method for depth maps that exploits side information from a standard color camera: the proposed method uses a segmented version of the high-resolution color image acquired by the color camera to identify the main objects in the scene, and a novel surface prediction scheme to interpolate the depth samples provided by the ToF camera. Effective solutions are provided for critical issues such as the joint calibration of the two devices and the unreliability of the acquired data. Experimental results on both synthetic and real-world scenes show that the proposed method obtains a more accurate interpolation than standard interpolation approaches and state-of-the-art joint depth and color interpolation schemes.

19.
An Introduction to Distributed Smart Cameras   (Cited: 2, self-citations: 0, other citations: 2)
Distributed smart cameras (DSCs) are real-time distributed embedded systems that perform computer vision using multiple cameras. This new approach has emerged thanks to a confluence of simultaneous advances in four key disciplines: computer vision, image sensors, embedded computing, and sensor networks. Processing images in a network of distributed smart cameras introduces several complications. However, we believe that the problems DSCs solve are much more important than the challenges of designing and building a distributed video system. We argue that distributed smart cameras represent key components for future embedded computer vision systems and that smart cameras will become an enabling technology for many new applications. We summarize smart camera technology and applications, discuss current trends, and identify important research challenges.

20.
A vehicle re-identification method based on feature fusion and the L-M algorithm   (Cited: 1, self-citations: 0, other citations: 1)
Vehicle re-identification is the task of matching the same vehicle across images captured under different conditions in a video surveillance system. Because images of the same vehicle from different cameras differ considerably and a single feature cannot describe them stably, multiple features are fused to extract vehicle features: the HSV and LBP features of the vehicle image are combined, and singular value decomposition is applied to the fused feature matrix to extract the feature values. Because standard back-propagation (BP) training of the re-identification model converges slowly and with limited accuracy, the Levenberg-Marquardt adaptive adjustment algorithm is used to optimise the BP neural network. Experimental results show an identification rate of 97.5% for same-vehicle recognition, with good robustness to both illumination and viewpoint changes.
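The fusion-then-SVD step can be sketched as follows; this is an illustrative reading of the abstract, with assumed descriptor shapes and an assumed number of retained singular values, not the paper's implementation.

```python
import numpy as np

def fused_feature(hsv_hist, lbp_hist, k=8):
    """Stack the HSV and LBP descriptors of a vehicle image into a matrix
    and keep the leading singular values as the fused feature vector."""
    m = np.vstack([hsv_hist, lbp_hist])
    s = np.linalg.svd(m, compute_uv=False)  # singular values, descending
    return s[:k]
```

The resulting compact vector would then be what the L-M-trained BP network consumes for same-vehicle classification.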
