Similar documents
20 similar documents found (search time: 15 ms)
1.
Palette-based image decomposition has attracted increasing attention in recent years. A specific class of approaches has been proposed based on RGB-space geometry; these methods construct convex hulls whose vertices act as palette colors. However, such palettes are not guaranteed to contain representative colors that actually appear in the image, which makes editing palette colors for recoloring less intuitive and less predictable. We therefore propose an improved geometric approach to address this issue. We use a polyhedron, not necessarily a convex hull, in RGB space to represent the color palette. We then formulate palette extraction as an optimization problem that can be solved in a few seconds. Our palette has a higher degree of representativeness while maintaining a level of accuracy similar to previous methods. For layer decomposition, we compute layer opacities via simple mean value coordinates, which provides instant feedback without precomputation. We demonstrate our method for image recoloring on a variety of examples. Compared with state-of-the-art works, our approach is generally more intuitive and efficient, with fewer artifacts.
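A minimal sketch (not the authors' implementation) of the recoloring step common to palette-based decompositions: once per-pixel opacities over the palette colors are known, recoloring reduces to re-mixing the edited palette with the same weights. The weight computation itself (mean value coordinates over the palette polyhedron) is omitted and assumed given.

    import numpy as np

    def recolor(weights, new_palette):
        """Re-mix an image from per-pixel palette weights.

        weights     : (H, W, K) array of per-pixel opacities over K palette colors
                      (assumed to come from e.g. mean value coordinates).
        new_palette : (K, 3) array of edited RGB palette colors in [0, 1].
        Returns an (H, W, 3) recolored image.
        """
        out = np.einsum("hwk,kc->hwc", weights, new_palette)
        return np.clip(out, 0.0, 1.0)

    # Toy usage: two palette colors, a 1x2 image blended 70/30 and 20/80.
    w = np.array([[[0.7, 0.3], [0.2, 0.8]]])
    palette = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # red, blue
    print(recolor(w, palette))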

2.
This paper presents a new two-step color transfer method consisting of color mapping and detail preservation. To map source colors to target colors, which come from an image or a palette, the proposed similarity-preserving color mapping algorithm uses the similarities between pixel colors and dominant colors, as existing algorithms do, and additionally emphasizes the similarities among source image pixel colors. Detail preservation is performed by an L0 gradient-preserving algorithm, which relaxes the large gradients of the sparse pixels along color region boundaries and preserves the small gradients of pixels within color regions. The proposed method preserves source image color similarity and image details well. Extensive experiments demonstrate that the proposed approach achieves state-of-the-art visual performance.
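A hedged sketch of the color-mapping idea only: each pixel is softly assigned to the source dominant colors and mapped by blending the corresponding target colors. The Gaussian similarity and the sigma parameter are illustrative assumptions, not the paper's exact model, and the detail-preservation step is omitted.

    import numpy as np

    def map_colors(image, src_dominant, tgt_dominant, sigma=0.15):
        """Soft similarity-based color mapping sketch.

        image        : (H, W, 3) source image in [0, 1].
        src_dominant : (K, 3) dominant colors of the source image.
        tgt_dominant : (K, 3) corresponding target colors (from an image or palette).
        """
        h, w, _ = image.shape
        px = image.reshape(-1, 1, 3)                          # (N, 1, 3)
        d2 = np.sum((px - src_dominant[None]) ** 2, axis=2)   # (N, K) squared distances
        sim = np.exp(-d2 / (2.0 * sigma ** 2))
        sim /= sim.sum(axis=1, keepdims=True) + 1e-12         # normalized similarities
        mapped = sim @ tgt_dominant                           # (N, 3) blended target colors
        return np.clip(mapped.reshape(h, w, 3), 0.0, 1.0)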

3.
Fast image retrieval using color-spatial information
In this paper, we present an image retrieval system that employs both the color and spatial information of images to facilitate the retrieval process. The basic unit used in our technique is a single-colored cluster, which bounds a homogeneous region of that color in an image. Two clusters from two images are similar if they are of the same color and overlap in the image space. The number of clusters that can be extracted from an image can be very large, and it affects the accuracy of retrieval. We study the effect of the number of clusters on retrieval effectiveness to determine an appropriate value for "optimal" performance. To facilitate efficient retrieval, we also propose a multi-tier indexing mechanism called the Sequenced Multi-Attribute Tree (SMAT). We implemented a two-tier SMAT, where the first layer is used to prune away clusters of different colors, while the second layer discriminates clusters of different spatial locality. We conducted an experimental study on an image database consisting of 12,000 images. Our results show the effectiveness of the proposed color-spatial approach and the efficiency of the proposed indexing mechanism. Received August 1, 1997 / Accepted December 9, 1997
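A small sketch of the two-stage matching idea behind the two-tier index: clusters are compared first by color and then by spatial overlap. The Cluster representation (quantized color id plus a normalized bounding box) and the exhaustive double loop are simplifications introduced here; the SMAT index that avoids this loop is not reproduced.

    from dataclasses import dataclass

    @dataclass
    class Cluster:
        """A single-colored cluster: a quantized color id plus its bounding box."""
        color_id: int
        x0: float
        y0: float
        x1: float
        y1: float

    def overlap(a: Cluster, b: Cluster) -> float:
        """Area of the bounding-box intersection (0 if the boxes are disjoint)."""
        w = max(0.0, min(a.x1, b.x1) - max(a.x0, b.x0))
        h = max(0.0, min(a.y1, b.y1) - max(a.y0, b.y0))
        return w * h

    def similarity(query, candidate):
        """Score two images by their cluster lists: same color first, overlap second."""
        score = 0.0
        for q in query:
            for c in candidate:
                if q.color_id == c.color_id:      # tier 1: same color
                    score += overlap(q, c)        # tier 2: spatial overlap
        return score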

4.
Removing specular highlights from an image is a fundamental research problem in computer vision and computer graphics. While various methods have been proposed, they typically do not work well for real-world images due to the presence of rich textures, complex materials, hard shadows, occlusions, colored illumination, etc. In this paper, we present a novel specular highlight removal method for real-world images. Our approach is based on two observations about real-world images: (i) the specular highlight is often small in size and sparse in distribution; (ii) the remaining diffuse image can be represented as a linear combination of a small number of basis colors with sparse encoding coefficients. Based on these two observations, we design an optimization framework for simultaneously estimating the diffuse and specular highlight images from a single image. Specifically, we recover the diffuse components of regions with specular highlights by encouraging sparseness of the encoding coefficients using the L0 norm. Moreover, the encoding coefficients and the specular highlight are also constrained to be non-negative, according to the additive color mixing theory and the definition of illumination, respectively. Extensive experiments have been performed on a variety of images to validate the effectiveness of the proposed method and its superiority over previous methods.
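A hedged reconstruction of the kind of objective the abstract describes; the symbols and the single penalty weight below are introduced here for illustration and are not the authors' exact formulation. The input I is split into a diffuse part expressed by non-negative sparse coefficients W over a small basis-color dictionary B, plus a non-negative specular layer S:

    \min_{W \ge 0,\; S \ge 0} \;\; \bigl\| I - (W B + S) \bigr\|_F^2 \;+\; \lambda \,\| W \|_0

Rows of W hold the per-pixel encoding coefficients, rows of B are the basis colors, and S is the specular highlight image; the L0 term and the non-negativity constraints encode the two observations stated in the abstract.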

5.
Iridescence is a natural phenomenon that is perceived as gradual color changes, depending on the view and illumination direction. Prominent examples are the colors seen in oil films and soap bubbles. Unfortunately, iridescent effects are particularly difficult to recreate in real-time computer graphics. We present a high-quality real-time method for rendering iridescent effects under image-based lighting. Previous methods model dielectric thin films of varying thickness on top of an arbitrary micro-facet model with a conducting or dielectric base material, and evaluate the resulting reflectance term, responsible for the iridescent effects, only for a single direction when using real-time image-based lighting. This leads to bright halos at grazing angles and over-saturated colors on rough surfaces, causing an unnatural appearance that is not observed in ground-truth data. We address this problem by taking the distribution of light directions, given by the environment map and the surface roughness, into account when evaluating the reflectance term. In particular, our approach prefilters the first and second moments of the light direction, which are used to evaluate a filtered version of the reflectance term. We show that the visual quality of our approach is superior to that of previous methods, while having only a small negative impact on performance.

6.
Monte Carlo path tracing techniques can generate stunning visualizations of medical volumetric data. In a clinical context, such renderings have turned out to be valuable for communication, education, and diagnosis. Because a large number of computationally expensive lighting samples is required to converge to a smooth result, progressive rendering is the only option for interactive settings: low-sampled, noisy images are shown while the user explores the data, and as soon as the camera is at rest, the view is progressively refined. During interaction, the visual quality is low, which strongly impedes the user's experience. Even worse, when a data set is explored in virtual reality, the camera is never at rest, leading to constantly low image quality and strong flickering. In this work, we present an approach that brings volumetric Monte Carlo path tracing to the interactive domain by reusing samples over time. To this end, we transfer the idea of temporal antialiasing from surface rendering to volume rendering. We show how to reproject volumetric ray samples even though they cannot be pinned to a particular 3D position, present an improved weighting scheme that makes longer history trails possible, and define an error accumulation method that down-weights less appropriate older samples. Furthermore, we exploit reprojection information to adaptively determine the number of newly generated path tracing samples for each individual pixel. Our approach is designed for static medical data with both volumetric and surface-like structures. It achieves good-quality volumetric Monte Carlo renderings with little noise and is also usable in a VR context.
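A simplified sketch of one temporal-reuse step, assuming the history buffer has already been reprojected into the current frame. The history clamp and the error-based down-weighting below are illustrative stand-ins for the paper's improved weighting and error accumulation schemes, not their exact definitions.

    import numpy as np

    def temporal_accumulate(new_sample, history, history_len, error, max_len=32.0):
        """Blend the new path-tracing estimate with the reprojected history.

        new_sample  : (H, W, 3) current-frame estimate.
        history     : (H, W, 3) reprojected accumulated result from prior frames.
        history_len : (H, W) effective number of accumulated samples per pixel.
        error       : (H, W) accumulated error estimate; large errors shorten the
                      history, down-weighting less appropriate older samples.
        """
        trimmed_len = np.minimum(history_len, max_len) / (1.0 + error)
        alpha = 1.0 / (trimmed_len + 1.0)                 # blend weight of the new sample
        out = (1.0 - alpha)[..., None] * history + alpha[..., None] * new_sample
        return out, trimmed_len + 1.0                     # image and updated history length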

7.
In this paper, we propose a PatchMatch-based Multi-View Stereo (MVS) algorithm which can efficiently estimate geometry for textureless areas. Conventional PatchMatch-based MVS algorithms estimate depth and normal hypotheses mainly by optimizing photometric consistency metrics between a patch in the reference image and its projections onto other images. Photometric consistency works well in textured regions but cannot discriminate textureless regions, which makes geometry estimation for textureless regions difficult. To address this issue, we introduce local consistency. Based on the assumption that neighboring pixels with similar colors likely belong to the same surface and share approximately the same depth-normal values, local consistency guides depth and normal estimation using geometry from neighboring pixels with similar colors. To speed up the convergence of pixelwise local consistency across the image, we further introduce a pyramid architecture, similar to previous work, which also provides coarse estimates at upper levels. We validate the effectiveness of our method on the ETH3D benchmark and the Tanks and Temples benchmark. Results show that our method outperforms the state of the art.
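A hedged sketch of how a local-consistency term can be mixed into a PatchMatch hypothesis cost: neighbors with similar colors contribute more to a geometry-agreement penalty. The function names, the bilateral-style weight, and the mixing weight lam are illustrative assumptions, not the paper's exact cost.

    import numpy as np

    def combined_cost(photo_cost, depth, normal, nbr_depths, nbr_normals,
                      color, nbr_colors, sigma_c=0.1, lam=0.5):
        """Per-pixel hypothesis cost mixing photometric and local consistency.

        photo_cost              : scalar photometric matching cost of the hypothesis.
        depth, normal           : current hypothesis (float, unit (3,) array).
        nbr_depths, nbr_normals : arrays of neighboring-pixel estimates.
        color, nbr_colors       : (3,) pixel color and (M, 3) neighbor colors.
        """
        w = np.exp(-np.sum((nbr_colors - color) ** 2, axis=1) / (2 * sigma_c ** 2))
        geom = np.abs(nbr_depths - depth) + (1.0 - nbr_normals @ normal)
        local = np.sum(w * geom) / (np.sum(w) + 1e-12)   # similar-colored neighbors dominate
        return photo_cost + lam * local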

8.
Diffusion curves [OBW*08] provide a flexible tool to create smooth-shaded images from curves with assigned colors. The resulting image is typically computed by solving a Poisson equation that diffuses the curve colors into the interior of the image. In this paper, we present a new method for solving diffusion curves using ray tracing. Our approach is analogous to final gathering in global illumination: the curves define source radiance whose visible contribution is integrated at a shading pixel to produce a color using stochastic ray tracing. Compared to previous work, the main benefit of our method is that it provides artists with extended flexibility in achieving desired image effects. Specifically, we introduce generalized curve colors called shaders that allow the seamless integration of diffusion curves with classic 2D graphics, including vector graphics (e.g., gradient fills) and raster graphics (e.g., patterns and textures). We also introduce several extended curve attributes to customize the contribution of each curve. In addition, our method allows any pixel in the image to be evaluated independently, without solving the entire image globally (as required by a Poisson-based approach). Finally, we present a GPU-based implementation that generates solution images at interactive rates, enabling dynamic curve editing. Results show that our method can easily produce a variety of desirable image effects.
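A small CPU sketch of the final-gathering analogy: from each pixel, shoot random rays, take the color of the nearest curve hit per ray, and average. Curves are approximated here as flat-colored line segments, and shaders, extended attributes, and the GPU implementation are all omitted; pixels with a ray that misses every curve simply contribute black for that ray.

    import numpy as np

    def shade_pixel(p, segments, colors, n_rays=64, rng=np.random.default_rng(0)):
        """Estimate a pixel color by stochastic gathering from curve segments.

        p        : (2,) pixel position.
        segments : (M, 2, 2) curve segments as endpoint pairs (a, b).
        colors   : (M, 3) flat color per segment.
        """
        accum = np.zeros(3)
        for _ in range(n_rays):
            theta = rng.uniform(0.0, 2.0 * np.pi)
            d = np.array([np.cos(theta), np.sin(theta)])
            best_t, best_c = np.inf, np.zeros(3)
            for (a, b), c in zip(segments, colors):
                e = b - a
                denom = d[0] * (-e[1]) - d[1] * (-e[0])   # solve p + t*d = a + s*e
                if abs(denom) < 1e-12:
                    continue
                t = ((a[0] - p[0]) * (-e[1]) - (a[1] - p[1]) * (-e[0])) / denom
                s = (d[0] * (a[1] - p[1]) - d[1] * (a[0] - p[0])) / denom
                if t > 1e-6 and 0.0 <= s <= 1.0 and t < best_t:
                    best_t, best_c = t, c                 # keep the nearest hit
            accum += best_c
        return accum / n_rays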

9.
This paper proposes a deep learning-based image tone enhancement approach that can maximally enhance the tone of an image while preserving naturalness. Our approach does not require ground-truth images carefully produced by human experts for training. Instead, we train a deep neural network to mimic the behavior of a classical filtering method that produces drastic but possibly unnatural-looking tone enhancement results. To preserve naturalness, we adopt the generative adversarial network (GAN) framework as a regularizer. To suppress artifacts caused by the generative nature of the GAN framework, we also propose an imbalanced cycle-consistency loss. Experimental results show that our approach effectively enhances the tone and contrast of an image while preserving naturalness, compared to previous state-of-the-art approaches.
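An illustrative value-level sketch of how such a combined loss could be assembled. Reading "imbalanced" as asymmetric weights on the two cycle directions is an assumption made for this sketch, as are the weights and argument names; the paper's exact loss definition is not reproduced here.

    import numpy as np

    def imbalanced_cycle_loss(x, x_cycle, y, y_cycle, adversarial_term,
                              lam_forward=10.0, lam_backward=1.0):
        """Combine an adversarial (naturalness) term with asymmetric cycle terms.

        x, x_cycle : input image and its forward-then-backward reconstruction.
        y, y_cycle : an enhanced-domain image and its backward-then-forward
                     reconstruction.
        adversarial_term : scalar GAN loss value for the generated image.
        """
        cyc_fwd = np.mean(np.abs(x_cycle - x))   # strongly weighted cycle direction
        cyc_bwd = np.mean(np.abs(y_cycle - y))   # weakly weighted cycle direction
        return adversarial_term + lam_forward * cyc_fwd + lam_backward * cyc_bwd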

10.
We present a new outlier removal technique for gradient-domain path tracing (G-PT), which computes image gradients as well as colors. Our approach rejects gradient outliers whose estimated errors are much higher than those of the other gradients, improving reconstruction quality for G-PT. We formulate the outlier removal problem as a least trimmed squares optimization, which employs only a subset of the gradients so that the final image can be reconstructed without the gradient outliers. In addition, we design the outlier removal process so that the chosen subset of gradients maintains connectivity between pixels through gradients, preventing pixels from becoming isolated. Lastly, the optimal number of inlier gradients is estimated to minimize the reconstruction error. We demonstrate that our reconstruction, which robustly rejects gradient outliers, produces visually and numerically improved results compared to the previous screened Poisson reconstruction that uses all the gradients.
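A minimal sketch of the trimming step at the heart of a least-trimmed-squares selection: keep only the gradients with the smallest estimated errors. The connectivity constraint and the estimation of the optimal inlier count from the paper are omitted; the fixed inlier_fraction parameter is purely illustrative.

    import numpy as np

    def trim_gradient_outliers(grad_errors, inlier_fraction=0.9):
        """Return a boolean inlier mask over the image gradients.

        grad_errors : (N,) estimated error of each gradient sample.
        """
        n_keep = int(np.ceil(inlier_fraction * grad_errors.size))
        threshold = np.partition(grad_errors, n_keep - 1)[n_keep - 1]
        return grad_errors <= threshold   # inliers feed the (screened Poisson) reconstruction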

11.
Objective: Many non-photorealistic rendering methods have been designed to simulate oil painting, watercolor, ink wash, and other styles, but few algorithms generate colored sketches. Targeting this problem and building on previous work, we improve a colored-sketch simulation method that combines line integral convolution (LIC) with duotone mapping. Method: The color image is first segmented using K-means clustering; two base colors are assigned to each region by computing color differences, and the density of each color is computed with duotone mapping. LIC is then used to generate sketch textures for the two base-color layers, and the two texture layers are blended to produce a color texture. Meanwhile, sketch contour lines are generated with a neon transform. Finally, the contours are blended with the color texture to obtain the colored-sketch result. Results: Experiments show that the method achieves automatic, real-time conversion from color images to colored-pencil drawings. Conclusion: The method simulates the real colored-pencil drawing process from both the contour and the texture perspectives. The K-means-based segmentation better reflects the color distribution of the input image, and the strategy of assigning base colors by color-difference computation improves the efficiency of this step and meets real-time requirements. Because media such as chalk and crayon mix colors similarly to colored pencils, the layer-stacking scheme of this paper can be extended to the simulation of other drawing media.
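A sketch of the first stage described above: K-means segmentation of the color image followed by choosing two base colors per region. Picking the darkest and brightest cluster members as the pair is an illustrative simplification of the paper's color-difference criterion; the duotone density, LIC texture, and contour steps are not shown.

    import numpy as np
    from sklearn.cluster import KMeans

    def segment_and_pick_base_colors(image, n_regions=8, seed=0):
        """Segment an RGB image and pick two base colors per region.

        image : (H, W, 3) float RGB image in [0, 1].
        Returns (labels, base_colors): labels (H, W), base_colors (n_regions, 2, 3).
        """
        h, w, _ = image.shape
        pixels = image.reshape(-1, 3)
        labels = KMeans(n_clusters=n_regions, n_init=10,
                        random_state=seed).fit_predict(pixels)
        base_colors = np.zeros((n_regions, 2, 3))
        for r in range(n_regions):
            region = pixels[labels == r]
            lum = region @ np.array([0.299, 0.587, 0.114])   # simple luminance
            base_colors[r, 0] = region[np.argmin(lum)]       # dark base color
            base_colors[r, 1] = region[np.argmax(lum)]       # light base color
        return labels.reshape(h, w), base_colors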

12.
The paper considers the problem of illuminant estimation: given an image of a scene recorded under an unknown light, how can we recover an estimate of that light? Obtaining such an estimate is a central part of solving the color constancy problem, so the work presented has applications in fields such as color-based object recognition and digital photography. Rather than attempting to recover a single estimate of the illuminant, we instead set out to recover a measure of the likelihood that each of a set of possible illuminants was the scene illuminant. We begin by determining which image colors can occur (and how these colors are distributed) under each of a set of possible lights, and we discuss how this knowledge can be obtained for a given camera. We then correlate this information with the colors in a particular image to obtain a measure of the likelihood that each of the possible lights was the scene illuminant. Finally, we use this likelihood information to choose a single light as an estimate of the scene illuminant. Computation is expressed and performed in a generic correlation framework that we develop. We propose a new probabilistic instantiation of this correlation framework and show that it delivers very good color constancy on both synthetic and real images. We further show that the proposed framework is rich enough to express many existing algorithms: the gray-world and gamut-mapping algorithms are presented in this framework, and we also explore the relationship of these algorithms to other probabilistic and neural network approaches to color constancy.
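A compact sketch of the correlation step under the probabilistic reading of the framework: correlate the set of chromaticities observed in the image with precomputed per-illuminant log-likelihoods and pick the most likely light. Building the per-camera likelihood tables is assumed to be done offline, and the binning scheme here is illustrative.

    import numpy as np

    def estimate_illuminant(image_chroma_hist, log_likelihoods):
        """Score each candidate illuminant and return the best one.

        image_chroma_hist : (B,) histogram of image chromaticities over B bins.
        log_likelihoods   : (L, B) log p(chromaticity bin | illuminant) per light.
        Returns (scores, index_of_best_illuminant).
        """
        observed = (image_chroma_hist > 0).astype(float)   # which chromaticities occur
        scores = log_likelihoods @ observed                # correlate with observed bins
        return scores, int(np.argmax(scores))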

13.
14.
Recent work has shown that distributing Monte Carlo errors as blue noise in screen space improves the perceptual quality of rendered images. However, obtaining such distributions remains an open problem with high sample counts and high-dimensional rendering integrals. In this paper, we introduce a temporal algorithm that aims to overcome these limitations. Our algorithm is applicable whenever multiple frames are rendered, typically for animated sequences or interactive applications. It locally permutes the pixel sequences (represented by their seeds) to improve the error distribution across frames. Our approach works regardless of the sample count or the dimensionality and significantly improves the images in low-varying screen-space regions under coherent motion. Furthermore, it adds negligible overhead compared to the rendering time. Note: our supplemental material provides more results with interactive comparisons against previous work.
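A hedged sketch of a per-tile seed permutation: seeds are reordered so that pixels whose previous-frame values rank low or high land where a blue-noise mask is low or high, pushing the error distribution toward blue noise across frames. This is a simplified reading of the idea, not the authors' exact retargeting scheme, and the tile size, mask, and rank-matching rule are assumptions.

    import numpy as np

    def permute_tile_seeds(prev_values, seeds, blue_noise_mask):
        """Permute the seeds of one tile to match a blue-noise ranking.

        prev_values     : (T, T) previous-frame pixel estimates for the tile.
        seeds           : (T, T) per-pixel random seeds used by the renderer.
        blue_noise_mask : (T, T) precomputed blue-noise dither mask.
        """
        order_vals = np.argsort(prev_values.ravel())       # pixel ranks by value
        order_mask = np.argsort(blue_noise_mask.ravel())   # target ranks from the mask
        permuted = np.empty(seeds.size, dtype=seeds.dtype)
        permuted[order_mask] = seeds.ravel()[order_vals]   # match rank for rank
        return permuted.reshape(seeds.shape)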

15.
Estimating the correspondence between images using optical flow is a key component of image fusion. However, computing optical flow between a pair of facial images that include backgrounds is challenging due to large differences in illumination, texture, color, and background. To improve optical flow results for image fusion, we propose a novel flow estimation method, wavelet flow, which can handle both the face and the background in the input images. The key idea is that instead of computing flow directly between the input image pair, we estimate the image flow by incorporating multi-scale image transfer and optical-flow-guided wavelet fusion. Multi-scale image transfer helps preserve the background and lighting detail of the input, while optical-flow-guided wavelet fusion produces a series of intermediate images for further optimization of the fusion quality. Our approach significantly improves the performance of the optical flow algorithm and provides more natural fusion results for both faces and backgrounds. We evaluate our method on a variety of datasets to show that it clearly outperforms existing approaches.

16.
In this paper we propose a time-series matching-based approach that provides interactive boundary image matching with noise control for large-scale image databases. To achieve the noise reduction effect in boundary image matching, we exploit the moving average transform from time-series matching. We are motivated by a simple intuition: the moving average transform might reduce the noise of boundary images just as it reduces the noise of time-series data. To confirm this intuition, we first propose the notion of k-order image matching, which applies the moving average transform to boundary image matching. A boundary image can be represented as a sequence in the time-series domain, and k-order image matching identifies similar boundary images in this domain by comparing the k-moving-average transformed sequences. We then propose an index-based method that efficiently performs k-order image matching on a large image database and formally prove its correctness. We also formally analyze the relationship between orders and their matching results and present an interactive approach for controlling the noise reduction effect. Experimental results show that k-order image matching exploits the noise reduction effect well and that our index-based method outperforms the sequential scan by one to two orders of magnitude. These results indicate that k-order image matching and its index-based solution provide a very practical way of realizing noise-controlled boundary image matching. To the best of our knowledge, the proposed interactive approach for large-scale image databases is the first attempt to solve the noise control problem in the time-series domain, rather than the image domain, by exploiting efficient time-series matching techniques. Thus, our approach can be widely used to remove other types of distortions in image matching.
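A minimal sketch of k-order matching for two boundaries of equal length: convert each closed boundary to a centroid-distance sequence, apply a circular k-moving-average transform, and compare with Euclidean distance. The centroid-distance representation is one common choice assumed here; the index structure for large databases is not shown.

    import numpy as np

    def k_order_distance(boundary_a, boundary_b, k):
        """Distance between two boundary images after the k-moving-average transform.

        boundary_a, boundary_b : (N, 2) boundary point sequences of equal length.
        k                      : moving-average window (k = 1 means no smoothing).
        """
        def to_sequence(boundary):
            centroid = boundary.mean(axis=0)
            return np.linalg.norm(boundary - centroid, axis=1)

        def moving_average(seq, window):
            idx = (np.arange(len(seq))[:, None] + np.arange(window)[None, :]) % len(seq)
            return seq[idx].mean(axis=1)          # circular k-moving average

        sa = moving_average(to_sequence(boundary_a), k)
        sb = moving_average(to_sequence(boundary_b), k)
        return float(np.linalg.norm(sa - sb))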

17.
This work revisits the Shock Filters of Osher and Rudin [OR90] and shows how the proposed filtering process can be interpreted as the advection of image values along flow-lines. Using this interpretation, we obtain an efficient implementation that only requires tracing flow-lines and re-sampling the image. We show that the approach is stable, allowing the use of arbitrarily large time steps without requiring a linear solve. Furthermore, we demonstrate the robustness of the approach by extending it to the processing of signals on meshes in 3D.
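A small sketch of the advection reading of a shock filter: each pixel is re-sampled a short step along the flow-line (the gradient direction, with the sign taken from a smoothed Laplacian). This is a single semi-Lagrangian step written for illustration; the paper's longer-range flow-line tracing and mesh extension are not reproduced.

    import numpy as np
    from scipy import ndimage

    def shock_filter_step(img, dt=0.4, sigma=1.5):
        """One advection step of a shock filter on a grayscale float image."""
        smooth = ndimage.gaussian_filter(img, sigma)
        sign = np.sign(ndimage.laplace(smooth))        # which side of the edge we are on
        gy, gx = np.gradient(smooth)
        mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-12
        ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
        # Semi-Lagrangian backtrace: sample against the advection velocity,
        # which sharpens values toward the nearer side of the edge.
        ys_new = ys - dt * sign * gy / mag
        xs_new = xs - dt * sign * gx / mag
        return ndimage.map_coordinates(img, [ys_new, xs_new], order=1, mode='nearest')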

18.
Reproducing the appearance of real-world materials using current printing technology is problematic. The reduced number of available inks defines the printer's limited gamut, creating distortions in the printed appearance that are hard to control. Gamut mapping refers to the process of bringing an out-of-gamut material appearance into the printer's gamut while minimizing such distortions as much as possible. We present a novel two-step gamut mapping algorithm that allows users to specify which perceptual attribute of the original material they want to preserve (such as brightness or roughness). In the first step, we work in the low-dimensional intuitive appearance space recently proposed by Serrano et al. [SGM*16] and adjust achromatic reflectance via an objective function that strives to preserve certain attributes. From this intermediate representation, we then perform an image-based optimization that includes color information to bring the BRDF into gamut. We show, both objectively and through a user study, how our method yields superior results compared to the state of the art, with the additional advantage that the user can specify which visual attributes should be preserved. Moreover, we show how this approach can also be used for attribute-preserving material editing.

19.
Unsupervised hierarchical color image segmentation using fuzzy-correlation graph cuts
Objective: Threshold-based segmentation methods can partition an image into homogeneous regions according to pixel information. Among them, the widely used maximum fuzzy correlation method has attracted attention because it measures the appropriateness of a partition with fuzzy correlation and yields good segmentation results. However, it requires the number of partitions to be specified in advance, its thresholded results contain isolated noise, and it cannot be applied to color images. To address these problems, we propose an unsupervised hierarchical segmentation strategy based on fuzzy-correlation graph cuts. Method: The algorithm first divides the image into superpixels to improve the efficiency of hierarchical segmentation. It then combines a fast fuzzy correlation algorithm with graph cuts to form a fuzzy-correlation graph-cut 2-partition operator, which keeps segmentation efficient while removing the isolated noise produced by single-threshold segmentation. Finally, a top-down hierarchical segmentation strategy is designed: the 2-partition operator selects suitable regions and channels and iteratively segments the superpixels hierarchically until the algorithm converges, so the number of partitions is determined automatically. Results: Tests on 300 images from the Berkeley segmentation database show that the algorithm segments color images effectively, with accuracy better than Ncut and JSEG and running time improved by nearly 20% over these two methods. Conclusion: The algorithm provides guidance for applying the maximum fuzzy correlation algorithm to unsupervised color image segmentation and can be used in object detection and recognition.

20.
Data visualization can accelerate data processing so that enormous quantities of data can be utilized effectively. Visualization establishes image-based communication between people and data, as well as among people, helping observers uncover information hidden in the data and providing a tool for discovering and understanding scientific laws. To address the problem of displaying multiple images and multi-modality images in the field of remote sensing, an interactive colour visualization method for hyperspectral imagery (HSI) is proposed in this article. The method visualizes the complex information of the original HSI data through different fusion results of multiple images in a colour space, under the interactive control of the observer. By gradually adjusting predetermined points, observers can obtain a relatively satisfying image blending mode, output an image in which the target of interest is clearer, and obtain the corresponding mixing coefficients of the images. The proposed method also overcomes the limitation of traditional visualization methods, which display information from only three bands in one image, and supports purpose-driven information mining in HSI according to the demands of users. In addition, the approach is applicable to the visualization of other types of multi-modal imagery.
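A minimal sketch of the underlying blending operation: several co-registered single-band (or single-modality) images are combined into one RGB image using mixing coefficients. The interactive control-point mechanism described in the abstract is reduced here to a fixed coefficient matrix supplied by the user.

    import numpy as np

    def blend_to_rgb(band_images, mix_coeffs):
        """Blend a stack of grayscale images into one RGB image.

        band_images : (N, H, W) co-registered grayscale images in [0, 1].
        mix_coeffs  : (N, 3) per-image weights for the R, G, B channels.
        """
        rgb = np.tensordot(mix_coeffs, band_images, axes=([0], [0]))  # (3, H, W)
        rgb /= max(rgb.max(), 1e-12)                                  # normalize for display
        return np.clip(np.moveaxis(rgb, 0, -1), 0.0, 1.0)             # (H, W, 3)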


