Similar Documents
A total of 20 similar documents were found (search time: 24 ms).
1.
In this paper, we present a new method for removing shadows from images. First, shadows are detected by interactive brushing assisted with a Gaussian Mixture Model. Second, the detected shadows are removed using an adaptive illumination transfer approach that accounts for the reflectance variation of the image texture. The contrast and noise levels of the result are then improved with a multi-scale illumination transfer technique. Finally, any visible shadow boundaries in the image can be eliminated based on our Bayesian framework. We also extend our method to video data and achieve temporally consistent shadow-free results.
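As a rough illustration of the brush-assisted detection step described above, the sketch below fits a two-component Gaussian Mixture Model to user-brushed pixel samples and labels the whole image with it; the CIELab feature space, the component count, and the function names are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of GMM-assisted shadow detection from brushed samples.
# Assumptions: pixels are described by 3-D CIELab features and two mixture
# components (shadow vs. lit) suffice; the paper may use a richer model.
import numpy as np
from sklearn.mixture import GaussianMixture

def detect_shadow_mask(image_lab, shadow_samples, lit_samples):
    """image_lab: HxWx3 float array; *_samples: Nx3 arrays of brushed pixels."""
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    gmm.fit(np.vstack([shadow_samples, lit_samples]))
    # The component with the lower mean lightness (L channel) is taken as shadow.
    shadow_component = int(np.argmin(gmm.means_[:, 0]))
    labels = gmm.predict(image_lab.reshape(-1, 3))
    return labels.reshape(image_lab.shape[:2]) == shadow_component
```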

2.

The appearance of an object depends on both the viewpoint from which it is observed and the light sources by which it is illuminated. If the appearance of two objects is never identical for any pose or lighting conditions, then, in theory, the objects can always be distinguished or recognized. The question arises: What is the set of images of an object under all lighting conditions and pose? In this paper, we consider only the set of images of an object under variable illumination, including multiple, extended light sources and shadows. We prove that the set of n-pixel images of a convex object with a Lambertian reflectance function, illuminated by an arbitrary number of point light sources at infinity, forms a convex polyhedral cone in R^n and that the dimension of this illumination cone equals the number of distinct surface normals. Furthermore, the illumination cone can be constructed from as few as three images. In addition, the set of n-pixel images of an object of any shape and with a more general reflectance function, seen under all possible illumination conditions, still forms a convex cone in R^n. Extensions of these results to color images are presented. These results immediately suggest certain approaches to object recognition. Throughout, we present results demonstrating the illumination cone representation.
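For reference, the central result can be stated compactly in the standard notation for the illumination cone (this restatement and its symbols are ours, not quoted from the paper): with B the matrix whose rows are the albedo-scaled surface normals and s a distant point light source,

```latex
x(s) \;=\; \max(Bs,\,0), \qquad B \in \mathbb{R}^{n\times 3},\; s \in \mathbb{R}^{3},
\qquad
\mathcal{C} \;=\; \Bigl\{\, \sum_i \max(Bs_i,\,0) \;:\; s_i \in \mathbb{R}^{3} \Bigr\} \;\subset\; \mathbb{R}^{n},
```

so the set of images under arbitrary combinations of point sources at infinity is closed under nonnegative combinations, which is exactly the convex-cone property claimed above.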


3.
Most active optical range sensors record, simultaneously with the range image, the amount of light reflected at each measured surface location: this information forms what is called a range intensity image, also known as a reflectance image. This paper proposes a method that uses this type of image to correct the color information of a textured 3D model. This color information is usually obtained from color images acquired with a digital camera. The lighting conditions for the color images are usually not controlled, so this color information may not be accurate. On the other hand, the illumination condition for the range intensity image is known, since it is obtained from a controlled lighting and observation configuration, as required for active optical range measurement. The paper describes a method for combining the two sources of information to compensate for the uncontrolled illumination of the color images. A reference range intensity image is first obtained by considering factors such as sensor properties, or the distance and relative orientation of the measured surface. The color image of the corresponding surface portion is then corrected using this reference range intensity image. A B-spline interpolation technique is applied to reduce the noise of range intensity images. Finally, a method for estimating the illumination color is applied to compensate for the light source color. Experiments show the effectiveness of the correction method using range intensity images.

4.
Variation in illumination conditions caused by weather, time of day, etc., makes it difficult to build video surveillance systems for real-world scenes. In particular, cast shadows produce troublesome effects, typically for object tracking from a fixed viewpoint, since they cause the appearance of objects to vary depending on whether they are inside or outside the shadow. In this paper, we handle such appearance variations by removing shadows in the image sequence. This can be considered a preprocessing stage that leads to robust video surveillance. To achieve this, we propose a framework based on the idea of intrinsic images. Unlike previous methods of deriving intrinsic images, we derive time-varying reflectance images and corresponding illumination images from a sequence of images instead of assuming a single reflectance image. Using the obtained illumination images, we normalize the input image sequence in terms of incident lighting distribution to eliminate shadowing effects. We also propose an illumination normalization scheme that can potentially run in real time, utilizing an illumination eigenspace, which captures the illumination variation due to weather, time of day, etc., and a shadow interpolation method based on shadow hulls. This paper describes the theory of the framework with simulation results and shows its effectiveness with object tracking results on real scene data sets.
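In symbols, the per-frame normalization described above can be summarized as follows (notation chosen here for illustration):

```latex
I_t(x) \;=\; R_t(x)\,L_t(x)
\qquad\Longrightarrow\qquad
\hat{I}_t(x) \;=\; \frac{I_t(x)}{L_t(x)} \;=\; R_t(x),
```

where I_t is the input frame, L_t the derived illumination image, R_t the time-varying reflectance image, and the normalized frame is free of shadowing effects up to the accuracy of the decomposition.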

5.
Reflectance based object recognition
Neighboring points on a smoothly curved surface have similar surface normals and illumination conditions. Therefore, their brightness values can be used to compute the ratio of their reflectance coefficients. Based on this observation, we develop an algorithm that estimates a reflectance ratio for each region in an image with respect to its background. The algorithm is efficient as it computes ratios for all image regions in just two raster scans. The region reflectance ratio represents a physical property that is invariant to illumination and imaging parameters. Several experiments are conducted to demonstrate the accuracy and robustness of the ratio invariant. The ratio invariant is used to recognize objects from a single brightness image of a scene. Object models are automatically acquired and represented using a hash table. Recognition and pose estimation algorithms are presented that use ratio estimates of scene regions as well as their geometric properties to index the hash table. The result is a hypothesis for the existence of an object in the image. This hypothesis is verified using the ratios and locations of other regions in the scene. This approach to recognition is effective for objects with printed characters and pictures. Recognition experiments are conducted on images with illumination variations, occlusions, and shadows. The paper concludes with a discussion of the simultaneous use of reflectance and geometry for visual perception.
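For context, the region-to-background ratio invariant exploited here is commonly written as follows (the estimator actually used in the paper may differ in detail):

```latex
r \;=\; \frac{I_1 - I_2}{I_1 + I_2} \;\approx\; \frac{\rho_1 - \rho_2}{\rho_1 + \rho_2},
```

where I_1 and I_2 are the brightness values of two neighboring points (region and background) that share nearly the same surface normal and illumination, and ρ_1, ρ_2 are their reflectance coefficients; the common illumination factor cancels, which is what makes r invariant to lighting and imaging parameters.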

6.
While laser scanners can produce a high-precision 3D shape of a real object, appearance information about the object has to be captured by an image sensor such as a digital camera. This paper proposes a novel and simple technique for colorizing 3D geometric models based on laser reflectivity. Laser scanners capture the range data of a target object, and the power of the reflected laser is simultaneously obtained as a by-product of the range data. The reflectance image, which is a collection of laser reflectance values depicted as a grayscale image, contains rich appearance information about the target object. The proposed technique is an alternative to texture mapping, which has been widely used to realize photo-realistic 3D modeling but requires strict alignment between range data and texture images. The proposed technique first colorizes a reflectance image based on the similarity between the color and reflectance images. Then the appearance information (color and texture) is added to a 3D model by transferring the color in the colorized reflectance image to the corresponding range image. Experiments and comparisons between texture mapping and the proposed technique demonstrate the validity of the proposed technique.

7.
We are developing a new reflective display technology, based on total internal reflection (TIR), which achieves a large difference between the maximum and minimum reflectance values. This yields a surface with greatly improved legibility under a wide range of illumination conditions. Such devices can display an image with a maximum reflectance ranging from 55% under uniform background illumination to over 85% under common non-uniform illumination conditions, even at viewing angles greater than 80° from the surface normal. This approach, which we call 'CLEAR' (Charged Liquid Electro-Active Response), uses polymeric microstructures to efficiently redirect incoming ambient light back toward the viewer. One advantage of this technique is the ability to achieve bright, full-color images over a wide range of viewing angles, much like ink printed on paper. The displayed image can be updated rapidly, since switching from the reflective state to the absorptive state requires only about half a micron of motion of absorbing material into the evanescent field associated with TIR. This new approach offers significant advantages in a number of reflective display applications.

8.
Topographic radiometric correction of TM remote sensing images
The total solar radiation received by each ground pixel is analyzed and computed from three components: direct solar radiation, diffuse sky radiation, and the additional radiation reflected from adjacent terrain. On this basis, a model for recovering the true surface reflectance is established to perform topographic radiometric correction. The algorithm is implemented in Interactive Data Language (IDL), combined with the 6S atmospheric correction model and a digital elevation model (DEM). Experiments on TM remote sensing images of mountainous areas around Beijing show that the method can effectively remove topographic effects from satellite imagery and provide more faithful information for subsequent image processing.
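A hedged sketch of the kind of reflectance-recovery model the abstract describes; the exact expression used in the paper (which also involves the 6S atmospheric model and the DEM) is not given, so the symbols and the Lambertian form below are assumptions:

```latex
E(x) \;=\; E_{\mathrm{dir}}(x) + E_{\mathrm{dif}}(x) + E_{\mathrm{adj}}(x),
\qquad
\rho(x) \;\approx\; \frac{\pi\,L(x)}{E(x)},
```

where E_dir, E_dif, and E_adj are the direct solar, diffuse sky, and adjacent-terrain components of the irradiance reaching pixel x, L(x) is the atmospherically corrected ground-leaving radiance, and ρ(x) is the recovered surface reflectance.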

9.
In research on skin color detection, face recognition, and image and video retrieval, many algorithms are based on analyzing the color features of images. However, when an image suffers from a color cast, the performance of these algorithms degrades noticeably or even fails entirely. Moreover, existing color cast correction algorithms introduce additional prior information about the color-cast image and are therefore quite limited in practice. To address this, an algorithm is proposed that detects and automatically corrects color casts given only the color-cast image itself. The algorithm first extracts and analyzes histogram features of the color-cast image in each RGB channel, then detects the cast channels with reference to these features, and achieves color balance among the channels by adjusting the intensity distributions of the cast or non-cast channels. Experiments show that, even under severe color casts, the corrected image is visually nearly consistent with the original cast-free image, and the algorithm is widely applicable.
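The abstract does not spell out the histogram features or the adjustment rule, so the sketch below substitutes a simple per-channel mean comparison and rescaling as a stand-in for the described detection-and-balancing steps; the function and parameter names are hypothetical.

```python
# Hypothetical sketch of color-cast detection and channel rebalancing.
# A channel whose mean deviates strongly from the overall gray level is
# flagged as cast and its intensity distribution is rescaled toward it.
import numpy as np

def correct_color_cast(img, threshold=10.0):
    """img: HxWx3 uint8 RGB image; returns a balanced float image in [0, 255]."""
    img = img.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)   # crude per-channel histogram feature
    gray = means.mean()                        # common reference level
    out = img.copy()
    for c in range(3):
        if abs(means[c] - gray) > threshold:   # channel flagged as carrying a cast
            out[..., c] *= gray / means[c]     # rebalance its intensity distribution
    return np.clip(out, 0.0, 255.0)
```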

10.
Objective: Most existing low-light image enhancement algorithms amplify noise and, when applied to extremely low-light images, suffer from insufficient brightness improvement and color distortion. A Retinex (retina cortex)-based enhancement and denoising method is therefore proposed. Method: To enhance extremely low-light images, the global illumination of the scene is first estimated using the dark channel prior; if the illumination is below 0.5, an initial illumination correction is applied to the image. Second, a sequential Retinex decomposition model is proposed so that the noise in the low-light image is entirely contained in the reflectance component; based on the decomposition result, an enhanced noisy image is obtained via Gamma correction. Finally, a denoising mechanism based on dual complementary internal and external prior constraints is proposed: an internal prior constraint is built for the reflectance component using non-local self-similarity, and an external prior constraint is built for the enhanced noisy image using deep learning, so that the internal and external constraints restrain each other. Results: The proposed algorithm is compared with six algorithms on 140 ordinary low-light images and 162 extremely low-light images (with normally exposed reference images), using both subjective visual assessment and objective metrics. The results show clear advantages in brightness improvement, color fidelity, and denoising. For ordinary low-light images, the BTMQI (blind tone-mapped quality index) and NIQE (natural image quality evaluator) metrics both reach the second-best values; for extremely low-light images...
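As an illustration of the first stage only (dark-channel-based global illumination estimation with a conditional initial correction), a hedged sketch is given below; the patch size, the correction curve, and the later Retinex decomposition and denoising stages are assumptions rather than the paper's procedure.

```python
# Hypothetical sketch of the initial illumination check and correction.
# The dark channel (per-pixel channel minimum, min-filtered over a patch)
# is averaged as a crude global illumination estimate; if it is below 0.5,
# the image is brightened with an assumed gamma curve.
import numpy as np
from scipy.ndimage import minimum_filter

def initial_light_correction(img, patch=15):
    """img: HxWx3 float RGB in [0, 1]."""
    dark = minimum_filter(img.min(axis=2), size=patch)
    global_light = float(dark.mean())
    if global_light < 0.5:
        gamma = 0.5 + global_light        # assumed curve: darker scene -> stronger boost
        img = np.power(img, gamma)
    return img, global_light
```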

11.
A Variational Framework for Retinex
Retinex theory addresses the problem of separating the illumination from the reflectance in a given image and thereby compensating for non-uniform lighting. This is in general an ill-posed problem. In this paper we propose a variational model for the Retinex problem that unifies previous methods. Similar to previous algorithms, it assumes spatial smoothness of the illumination field. In addition, knowledge of the limited dynamic range of the reflectance is used as a constraint in the recovery process. A penalty term is also included, exploiting a priori knowledge of the nature of the reflectance image. The proposed formulation adopts a Bayesian viewpoint of the estimation problem, which leads to an algebraic regularization term that contributes to better conditioning of the reconstruction problem. Based on the proposed variational model, we show that the illumination estimation problem can be formulated as a Quadratic Programming optimization problem. An efficient multi-resolution algorithm is proposed. It exploits the spatial correlation in the reflectance and illumination images. Applications of the algorithm to various color images yield promising results.
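The variational model described above is usually stated as the following constrained functional (with s the log input image, l the log illumination, and r = s − l the log reflectance); this restatement follows the general literature on the framework rather than quoting the paper:

```latex
\min_{l}\;\int_{\Omega}\Bigl(|\nabla l|^{2} \;+\; \alpha\,(l-s)^{2} \;+\; \beta\,|\nabla(l-s)|^{2}\Bigr)\,dx\,dy
\qquad \text{s.t.}\quad l \ge s,\ \ \langle \nabla l,\vec{n}\rangle = 0 \ \text{on}\ \partial\Omega,
```

where the first term enforces spatial smoothness of the illumination, the constraint l ≥ s encodes the limited dynamic range of the reflectance (r ≤ 0), and the remaining terms act as the penalty/regularization discussed in the abstract; discretizing this objective yields the quadratic programming problem mentioned above.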

12.
A dodging method for digital aerial images
Addressing the causes of uneven illumination in single digital aerial images, this paper investigates in depth the application of the Mask dodging technique to digital aerial imagery and proposes a concrete processing workflow and implementation for dodging such images. Experiments show that the method overcomes the shortcomings of mathematical-model-based approaches, is widely applicable, and achieves satisfactory results in removing uneven illumination from digital aerial images, thereby effectively solving the color balance problem of single digital aerial images.
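A minimal sketch of the Mask dodging idea described above, assuming a Gaussian low-pass filter as the "background" (uneven illumination) estimate; the kernel size and offset choice are illustrative, not the paper's parameters.

```python
# Hypothetical sketch of Mask dodging for a single grayscale aerial image:
# subtract a heavily blurred background image and re-add a constant offset
# so that the low-frequency illumination unevenness is removed.
import numpy as np
from scipy.ndimage import gaussian_filter

def mask_dodging(img, sigma=50.0, offset=None):
    """img: 2-D float array in [0, 255]."""
    background = gaussian_filter(img, sigma=sigma)   # low-frequency illumination estimate
    offset = img.mean() if offset is None else offset
    return np.clip(img - background + offset, 0.0, 255.0)
```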

13.
We describe a method of learning generative models of objects from a set of images of the object under different, and unknown, illumination. Such a model allows us to approximate the objects' appearance under a range of lighting conditions. This work is closely related to photometric stereo with unknown light sources and, in particular, to the use of Singular Value Decomposition (SVD) to estimate shape and albedo from multiple images up to a linear transformation (Hayakawa, 1994). Firstly, we analyze and extend the SVD approach to this problem. We demonstrate that it applies to objects for which the dominant imaging effects are Lambertian reflectance with a distant light source and a background ambient term. To determine that this is a reasonable approximation, we calculate the eigenvectors of the SVD on a set of real objects under varying lighting conditions and demonstrate that the first few eigenvectors account for most of the data, in agreement with our predictions. We then analyze the linear ambiguities in the SVD approach and demonstrate that previous methods proposed to resolve them (Hayakawa, 1994) are only valid under certain conditions. We discuss alternative possibilities and, in particular, demonstrate that knowledge of the object class is sufficient to resolve this problem. Secondly, we describe the use of surface consistency for putting constraints on the possible solutions. We prove that this constraint reduces the ambiguities to a subspace called the generalized bas-relief ambiguity (GBR), which is inherent in the Lambertian reflectance function (and which can be shown to exist even if attached and cast shadows are present (Belhumeur et al., 1997)). We demonstrate the use of surface consistency to solve for the shape and albedo up to a GBR and describe, and implement, a variety of additional assumptions to resolve the GBR. Thirdly, we demonstrate an iterative algorithm that can detect and remove some attached shadows from the objects, thereby increasing the accuracy of the reconstructed shape and albedo.
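In the notation commonly used for this problem (chosen here for illustration, not quoted from the paper): stacking the m images as columns of J, the factorization and its linear ambiguity read

```latex
J \;\approx\; B\,S, \qquad B \in \mathbb{R}^{n\times 3},\; S \in \mathbb{R}^{3\times m},
\qquad (B,S)\;\mapsto\;(BA,\;A^{-1}S)\ \ \text{for any invertible } A,
```

and the surface-consistency (integrability) constraint restricts the remaining ambiguity to the three-parameter generalized bas-relief family, generated by matrices of the form

```latex
G(\mu,\nu,\lambda)\;=\;\begin{pmatrix}1&0&0\\ 0&1&0\\ \mu&\nu&\lambda\end{pmatrix},\qquad \lambda\neq 0.
```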

14.
This paper describes an approach for estimating the effect of factors influencing the determination of forest boundaries on medium-resolution satellite images. Forest edges were delineated on a Landsat Thematic Mapper (TM) image acquired in late winter under plain snow cover conditions. The study investigated the best Landsat TM spectral band and threshold value for the detection of forest edges in winter images. It was hypothesized that shadows cast by trees at forest edges on the bright snow of the surrounding open area make north- or north-west-facing forest edges less sharp than edges facing in other directions. If this holds true for medium-resolution Landsat TM satellite images, forest area change studies should carefully consider images taken under different atmospheric and solar elevation conditions in order to distinguish real changes at forest edges from those stemming from different conditions of solar illumination. The results of the study show that there is no significant shadow effect, as the reflectance contrast at forest edges exposed in different azimuthal directions does not differ on Landsat TM winter images under plain snow cover conditions. Landsat TM bands 2-4 are all equally good for detecting the forest edge location at an average value of reflectance. These results are valid for forest edges that have remained stable for several decades.

15.
Color from black and white
Color constancy can be achieved by analyzing the chromatic aberration in an image. Chromatic aberration spatially separates light of different wavelengths and this allows the spectral power distribution of the light to be extracted. This is more information about the light than is registered by the cones of the human visual system or by a color television camera; and, using it, we show how color constancy, the separation of reflectance from illumination, can be achieved. As examples, we consider grey-level images of (a) a colored dot under unknown illumination, and (b) an edge between two differently colored regions under unknown illumination. Our first result is that in principle we can determine completely the spectral power distribution of the reflected light from the dot or, in the case of the color edge, the difference in the spectral power distributions of the light from the two regions. By employing a finite-dimensional linear model of illumination and surface reflectance, we obtain our second result, which is that the spectrum of the reflected light can be uniquely decomposed into a component due to the illuminant and another component due to the surface reflectance. This decomposition provides the complete spectral reflectance function, and hence color, of the surface as well as the spectral power distribution of the illuminant. Up to the limit of the accuracy of the finite-dimensional model, this effectively solves the color constancy problem.
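The finite-dimensional linear models mentioned above are typically written as follows (the basis functions E_j, S_i and dimensions m, k are chosen per application; this is a generic restatement, not the paper's exact formulation):

```latex
E(\lambda) \;=\; \sum_{j=1}^{m}\varepsilon_j\,E_j(\lambda), \qquad
S(\lambda) \;=\; \sum_{i=1}^{k}\sigma_i\,S_i(\lambda), \qquad
C(\lambda) \;=\; E(\lambda)\,S(\lambda),
```

where C(λ) is the spectral power distribution of the reflected light recovered from the chromatic aberration, and solving for the coefficients ε_j and σ_i yields the decomposition into illuminant and surface reflectance claimed in the abstract.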

16.
Captured reflectance fields tend to provide a relatively coarse sampling of the incident light directions. As a result, sharp illumination features, such as highlights or shadow boundaries, are poorly reconstructed during relighting; highlights are disconnected, and shadows show banding artefacts. In this paper, we propose a novel interpolation technique for 4D reflectance fields that reconstructs plausible images even for non-observed light directions. Given a sparsely sampled reflectance field, we can effectively synthesize images as they would have been obtained from denser sampling. The processing pipeline consists of three steps: (1) segmentation of regions where intermediate lighting cannot be obtained by blending, (2) appropriate flow algorithms for highlights and shadows, and (3) a final reconstruction technique that uses image-based priors to faithfully correct errors that might be introduced by the segmentation or flow step. The algorithm reliably reproduces scenes that contain specular highlights, interreflections, shadows or caustics.

17.
Intrinsic images are a mid-level representation of an image that decomposes the image into reflectance and illumination layers. The reflectance layer captures the color/texture of surfaces in the scene, while the illumination layer captures shading effects caused by interactions between scene illumination and surface geometry. Intrinsic images have a long history in computer vision and, more recently, in computer graphics, and have been shown to be a useful representation for tasks ranging from scene understanding and reconstruction to image editing. In this report, we review and evaluate past work on this problem. Specifically, we discuss each work in terms of the priors it imposes on the intrinsic image problem. We introduce a new synthetic ground-truth dataset that we use to evaluate the validity of these priors and the performance of the methods. Finally, we evaluate the performance of the different methods in the context of image-editing applications.
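The decomposition surveyed in this report is usually written as

```latex
I(x) \;=\; R(x)\,S(x)
\quad\Longleftrightarrow\quad
\log I(x) \;=\; \log R(x) + \log S(x),
```

where I is the observed image, R the reflectance layer, and S the shading/illumination layer; since one observation per pixel must determine two unknowns, the priors discussed in the report are exactly the additional constraints placed on R and S to make this ill-posed split tractable.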

18.
In this paper, we present a novel image-based technique that transfers illumination from a source face image to a target face image based on the Logarithmic Total Variation (LTV) model. Our method does not require any prior information regarding the lighting conditions or the 3D geometries of the underlying faces. We first use a Radial Basis Functions (RBF)-based deformation technique to align key facial features of the reference 2D face with those of the target face. Then, we employ the LTV model to factorize each of the two aligned face images into an illumination-dependent component and an illumination-invariant component. Finally, illumination transfer is achieved by replacing the illumination-dependent component of the target face with that of the reference face. We tested our technique on numerous grayscale and color face images from various face datasets, including the Yale Face Database, as well as on the application of illumination-preserved face coloring.
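Schematically (with notation chosen here), after RBF alignment each face image is factored by the LTV model and the illumination-dependent parts are swapped:

```latex
f_{\mathrm{ref}} = u_{\mathrm{ref}}\,v_{\mathrm{ref}}, \qquad
f_{\mathrm{tgt}} = u_{\mathrm{tgt}}\,v_{\mathrm{tgt}}, \qquad
f'_{\mathrm{tgt}} \;=\; u_{\mathrm{ref}}\,v_{\mathrm{tgt}},
```

where u denotes the illumination-dependent (large-scale) component and v the illumination-invariant (small-scale) component produced by the LTV factorization, so the relit target keeps its own identity-bearing detail while adopting the reference lighting.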

19.
In this paper we present a new practical camera characterization technique to improve color accuracy in high dynamic range (HDR) imaging. Camera characterization refers to the process of mapping device-dependent signals, such as digital camera RAW images, into a well-defined color space. This is a well-understood process for low dynamic range (LDR) imaging and is part of most digital cameras, usually mapping from the raw camera signal to the sRGB or Adobe RGB color space. This paper presents an efficient and accurate characterization method for high dynamic range imaging that extends previous methods originally designed for LDR imaging. We demonstrate that our characterization method is very accurate even in unknown illumination conditions, effectively turning a digital camera into a measurement device that measures physically accurate radiance values, both in terms of luminance and color, rivaling more expensive measurement instruments.
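As a simplified illustration of what characterization means in practice, the sketch below fits a 3×3 matrix from linear RAW responses to CIE XYZ by least squares using measured chart patches; the paper's HDR-specific pipeline (exposure merging, weighting, and its particular target space) is not reproduced, and the function names are hypothetical.

```python
# Hypothetical sketch of fitting and applying a 3x3 camera characterization matrix.
import numpy as np

def fit_characterization_matrix(raw_rgb, ref_xyz):
    """raw_rgb, ref_xyz: Nx3 arrays of linear patch measurements."""
    X, *_ = np.linalg.lstsq(raw_rgb, ref_xyz, rcond=None)  # solves raw_rgb @ X ~= ref_xyz
    return X.T                                             # M such that xyz ~= M @ raw

def apply_characterization(M, raw_pixels):
    """raw_pixels: Nx3 linear RAW values; returns Nx3 estimated XYZ values."""
    return raw_pixels @ M.T
```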

20.
Specularities often confound algorithms designed to solve computer vision tasks such as image segmentation, object detection, and tracking. These tasks usually require color image segmentation to partition an image into regions, where each region corresponds to a particular material. Due to discontinuities resulting from shadows and specularities, a single material is often segmented into several sub-regions. In this paper, a specularity detection and removal technique is proposed that requires no camera calibration or other a priori information regarding the scene. The approach specifically addresses detecting and removing specularities in facial images. The image is first processed by the Luminance Multi-Scale Retinex [B.V. Funt, K. Barnard, M. Brockington, V. Cardei, Luminance-Based Multi-Scale Retinex, AIC'97, Kyoto, Japan, May 1997]. Second, potential specularities are detected and a wavefront is generated outwards from the peak of the specularity to its boundary or until a material boundary has been reached. Upon attaining the specularity boundary, the wavefront contracts inwards while coloring in the specularity until the latter no longer exists. The third step is discussed in a companion paper [M.D. Levine, J. Bhattacharyya, Removing shadows, Pattern Recognition Letters, 26 (2005) 251-265] where a method for detecting and removing shadows has also been introduced. The approach involves training Support Vector Machines to identify shadow boundaries based on their boundary properties. The latter are used to identify shadowed regions in the image and then assign to them the color of non-shadow neighbors of the same material as the shadow. Based on these three steps, we show that more meaningful color image segmentations can be achieved by compensating for illumination using the Illumination Compensation Method proposed in this paper. It is also demonstrated that the accuracy of facial skin detection improves significantly when this illumination compensation approach is used. Finally, we show how illumination compensation can increase the accuracy of face recognition.


