Similar Documents
20 similar documents found (search time: 343 ms)
1.
We present an example-based approach for radiometrically linearizing photographs that takes as input a radiometrically linear exemplar image and a regular, uncalibrated target image of the same scene, possibly from a different viewpoint and/or under different lighting. The output of our method is a radiometrically linearized version of the target image. Modeling the change in appearance of a small image patch seen from a different viewpoint and/or under different lighting as a linear 1D subspace allows us to recast radiometric transfer in a form similar to classic radiometric calibration from exposure stacks. The resulting radiometric transfer method is lightweight and easy to implement. We demonstrate the accuracy and validity of our method on a variety of scenes.
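The abstract leaves the camera response model unspecified; below is a minimal sketch, assuming a single-gamma response and pre-matched patches, of how the 1D-subspace model reduces radiometric transfer to a linear least-squares fit (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def fit_inverse_gamma(target_patches, exemplar_patches, eps=1e-6):
    # Fit one inverse-response exponent gamma so each linearized target patch
    # g(I_k) = I_k**gamma is a scalar multiple s_k of its linear exemplar patch
    # E_k (the 1D-subspace model). In log space this is linear in gamma and
    # log s_k:  gamma * log I = log s_k + log E.
    rows, rhs = [], []
    K = len(target_patches)
    for k, (I, E) in enumerate(zip(target_patches, exemplar_patches)):
        I = np.clip(np.ravel(I), eps, 1.0)
        E = np.clip(np.ravel(E), eps, 1.0)
        for li, le in zip(np.log(I), np.log(E)):
            row = np.zeros(1 + K)
            row[0], row[1 + k] = li, -1.0
            rows.append(row)
            rhs.append(le)
    sol, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return sol[0], np.exp(sol[1:])          # gamma, per-patch scales

# Toy check: patches observed through a 1/2.2 response should recover gamma ~ 2.2.
rng = np.random.default_rng(0)
E = [rng.uniform(0.05, 1.0, 50) for _ in range(4)]
scales = [0.5, 0.7, 0.9, 1.0]
I = [(s * e) ** (1 / 2.2) for s, e in zip(scales, E)]
gamma, s_hat = fit_inverse_gamma(I, E)
print(round(float(gamma), 2), np.round(s_hat, 2))   # ~2.2, ~[0.5 0.7 0.9 1.0]
```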

2.
We present a method for synthesizing fluid animation from a single image, using a fluid video database. The user inputs a target painting or photograph of a fluid scene along with an alpha matte that isolates the fluid region of interest in the scene. Our approach allows the user to generate a fluid animation from the input image and to enter a few additional commands about fluid orientation or speed. Employing the database of fluid examples, the core algorithm in our method then automatically assigns fluid videos to each part of the target image. Our method can therefore deal with various paintings and photographs of rivers, waterfalls, fire, and smoke. The resulting animations demonstrate that our method is more powerful and efficient than our prior work.
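As a rough illustration of the assignment step (the paper's actual appearance descriptors and matching criterion are not given here), one could match each fluid-region patch to its nearest database clip in descriptor space:

```python
import numpy as np

def assign_fluid_clips(target_patch_descs, clip_descs):
    # Toy assignment: for each fluid-region patch of the target image, pick the
    # database clip whose descriptor (e.g., a color histogram placeholder) is
    # nearest in L2 distance.
    T = np.stack(target_patch_descs)    # (P, D) patch descriptors
    C = np.stack(clip_descs)            # (N, D) clip descriptors
    d = ((T[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)             # index of the best clip per patch
```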

3.
This paper proposes a new approach for color transfer between two images. Our method is unique in its consideration of the scene illumination and the constraint that the mapped image must be within the color gamut of the target image. Specifically, our approach first performs a white-balance step on both images to remove color casts caused by different illuminations in the source and target image. We then align each image to share the same 'white axis' and perform a gradient-preserving histogram matching technique along this axis to match the tone distribution between the two images. We show that this illuminant-aware strategy gives a better result than directly working with the luminance channels of the original source and target images, as many previous methods do. Afterwards, our method performs a full gamut-based mapping technique rather than processing each channel separately. This guarantees that the colors of our transferred image lie within the target gamut. Our experimental results show that this combined illuminant-aware and gamut-based strategy produces more compelling results than previous methods. We detail our approach and demonstrate its effectiveness on a number of examples.
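For intuition, a plain 1D histogram matching step along a shared axis might look like the sketch below; note that the paper's variant additionally preserves gradients, which this sketch omits:

```python
import numpy as np

def match_histogram_1d(source_vals, target_vals):
    # Plain monotone histogram matching: remap source values so their
    # empirical CDF matches the target's, via quantile lookup.
    s = np.asarray(source_vals, float).ravel()
    t = np.asarray(target_vals, float).ravel()
    ranks = np.argsort(np.argsort(s))             # rank of each source value
    quantiles = (ranks + 0.5) / s.size            # empirical CDF positions
    t_sorted = np.sort(t)
    matched = np.interp(quantiles, (np.arange(t.size) + 0.5) / t.size, t_sorted)
    return matched.reshape(np.shape(source_vals))
```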

4.
This paper introduces a method for automatically generating a picture maze from two different images. The process begins by extracting salient contours and edge tangent flow information from the primary image in order to build the overall maze. Mazes can thus be created whose passages flow in the main edge directions and whose walls effectively represent an abstract version of the primary image. Furthermore, our proposed approach makes it possible to use the maze's solution path to illustrate the main features of the secondary image, while keeping that image's motif concealed until the maze is solved. The contour features and intensity of the secondary image are also incorporated into our method to determine the areas of the maze to be shaded, by allowing the solution path to pass through them. Moreover, an experiment confirms that solution paths can be successfully hidden from participants in mazes generated using our method.

5.
We introduce a novel method for enabling stereoscopic viewing of a scene from a single pre-segmented image. Rather than attempting full 3D reconstruction or accurate depth map recovery, we hallucinate a rough approximation of the scene's 3D model using a number of simple depth and occlusion cues and shape priors. We begin by depth-sorting the segments, each of which is assumed to represent a separate object in the scene, resulting in a collection of depth layers. The shapes and textures of the partially occluded segments are then completed using symmetry and convexity priors. Next, each completed segment is converted to a union of generalized cylinders, yielding a rough 3D model for each object. Finally, the object depths are refined using an iterative ground fitting process. The hallucinated 3D model of the scene may then be used to generate a stereoscopic image pair, or to produce images from novel viewpoints within a small neighborhood of the original view. Despite the simplicity of our approach, we show that it compares favorably with state-of-the-art depth ordering methods. A user study shows that our method produces more convincing stereoscopic images than existing semi-interactive and automatic single-image depth recovery methods.

6.
We present a novel image resizing method which attempts to ensure that important local regions undergo a geometric similarity transformation while, at the same time, preserving image edge structure. To accomplish this, we define handles to describe both local regions and image edges, and assign a weight to each handle based on an importance map for the source image. Inspired by conformal energy, which is widely used in geometry processing, we construct a novel quadratic distortion energy to measure the shape distortion of each handle. The resizing result is obtained by minimizing the weighted sum of the quadratic distortion energies of all handles. Compared to previous methods, our method diffuses distortion better in all directions and preserves important image edges well. The method is efficient and offers a closed-form solution.
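The abstract does not spell out the energy; one plausible reading, stated here as an assumption rather than the paper's formula, is that each handle's distortion is its deviation from the best-fitting 2D similarity transform, summed with importance weights:

```latex
% Hedged reading of the per-handle distortion term: deviation of the mapped
% vertices v_i' of handle h from the best-fitting similarity transform
% (rotation+scale encoded by a, b, plus translation t) of the originals v_i.
E(h) \;=\; \min_{a,\,b,\,t}\; \sum_{v_i \in h}
  \left\| \begin{pmatrix} a & -b \\ b & a \end{pmatrix} v_i + t \;-\; v_i' \right\|^2,
\qquad
E_{\mathrm{total}} \;=\; \sum_h w_h\, E(h).
```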

7.
Directors employ a process called "color grading" to add color styles to feature films. Color grading is used for a number of reasons, such as accentuating a certain emotion or expressing the signature look of a director. We collect a database of feature film clips and label them with tags such as director, emotion, and genre. We then learn a model that maps from the low-level color and tone properties of film clips to the associated labels. This model allows us to examine a number of common hypotheses on how color is used to achieve particular goals, such as evoking specific emotions. We also describe a method to apply our learned color styles to new images and videos. Along with our analysis of color grading techniques, we demonstrate a number of images and videos that are automatically filtered to resemble certain film styles.
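A toy version of the learning step might extract simple color/tone statistics per clip and fit an off-the-shelf classifier; the features, labels, and model below are placeholders, not the paper's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def color_tone_features(img):
    # Toy stand-in for the paper's descriptors: per-channel means, a
    # saturation proxy, and luminance contrast (img is HxWx3 in [0,1]).
    means = img.reshape(-1, 3).mean(axis=0)
    sat = (img.max(axis=2) - img.min(axis=2)).mean()
    lum = img @ np.array([0.299, 0.587, 0.114])
    return np.concatenate([means, [sat, lum.std()]])

# Hypothetical pipeline: clips -> features -> tags (here random placeholders).
rng = np.random.default_rng(1)
clips = [rng.uniform(0, 1, (32, 32, 3)) for _ in range(40)]
labels = rng.integers(0, 2, 40)
X = np.stack([color_tone_features(c) for c in clips])
model = LogisticRegression(max_iter=1000).fit(X, labels)
print(model.score(X, labels))   # training accuracy on the toy data
```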

8.
We present a simple and effective algorithm to transfer deformation between surface meshes with multiple components. The algorithm automatically computes spatial relationships between components of the target object, builds correspondences between source and target, and finally transfers deformation of the source onto the target while preserving cohesion between the target's components. We demonstrate the versatility of our approach on various complex models.

9.
This paper introduces a framework that can extract an alpha matte from a single image with Fresnel reflection, and that can composite other objects into the image such that plausible reflections are included. Our method handles reflections in a plane with small undulations, for example, a water surface with waves or a glossy tabletop. During the matting stage, our method first estimates the transmission color, which is assumed to be uniform, and then calculates a reflection image and alpha matte based on user markup. However, accurate extraction of the matte becomes challenging when a plane has small undulations, because these create perturbations in the matte. We therefore propose a filter that can refine the matte effectively. In the compositing stage, the reflection of a composited object is synthesized by ray tracing in real time. We demonstrate the effectiveness of our method through comparisons with ground-truth data and results using natural images as inputs.
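A hedged sketch of the matting model the abstract implies (a uniform transmission color blended with a reflection image by the alpha matte), together with a per-pixel least-squares inversion:

```python
import numpy as np

def composite_reflection(reflection, alpha, transmission_color):
    # observed = alpha * reflection + (1 - alpha) * uniform transmission color
    # reflection: HxWx3, alpha: HxW in [0,1], transmission_color: length-3.
    a = alpha[..., None]
    return a * reflection + (1.0 - a) * np.asarray(transmission_color)

def solve_alpha(observed, reflection, transmission_color, eps=1e-6):
    # Invert the same model per pixel, least squares across the 3 channels:
    # observed - T = alpha * (reflection - T).
    T = np.asarray(transmission_color)
    num = ((observed - T) * (reflection - T)).sum(axis=2)
    den = ((reflection - T) ** 2).sum(axis=2) + eps
    return np.clip(num / den, 0.0, 1.0)
```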

10.
This paper investigates a new approach for color transfer. Rather than transferring color from one image to another globally, we propose a system with a stroke-based user interface that provides a direct indication mechanism, and we further present a multiple local color transfer method. With our system the user can easily enhance a defective (source) photo by referring to other good-quality (target) images, simply by drawing a few strokes; the system then performs the multiple local color transfer automatically. The system consists of two major steps. First, the user draws strokes on the source and target images to indicate corresponding regions, as well as the regions he or she wants to preserve. The regions to be preserved are masked out using an improved graph-cuts algorithm. Second, a multiple local color transfer method transfers color from the target image(s) to the source image through gradient-guided, pixel-wise color transfer functions. Finally, the defective (source) image is enhanced seamlessly by multiple local color transfer based on good-quality (target) examples, through an interactive and intuitive stroke-based user interface.
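As a simplified stand-in for the gradient-guided, pixel-wise transfer functions (which are not specified here), a Reinhard-style mean/std match between the stroke-indicated regions conveys the idea:

```python
import numpy as np

def local_color_transfer(src, tgt, src_mask, tgt_mask):
    # Match per-channel mean/std of the masked source region to the masked
    # target region, leaving unmasked pixels untouched. src, tgt: HxWx3 in
    # [0,1]; masks: boolean HxW arrays from the user's strokes.
    out = src.astype(float).copy()
    for c in range(3):
        s = src[..., c][src_mask]
        t = tgt[..., c][tgt_mask]
        out[..., c][src_mask] = (s - s.mean()) / (s.std() + 1e-6) * t.std() + t.mean()
    return np.clip(out, 0.0, 1.0)
```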

11.
Recent color transfer methods use local information to learn the transformation from a source to an exemplar image, and then transfer this appearance change to a target image. These solutions achieve very successful results for general mood changes, e.g., changing the appearance of an image from "sunny" to "overcast". However, such methods have a hard time creating new image content, such as leaves on a bare tree. Texture transfer, on the other hand, can synthesize such content but tends to destroy image structure. We propose the first algorithm that unifies color and texture transfer, outperforming both by leveraging their respective strengths. A key novelty in our approach resides in teasing apart appearance changes that can be modeled simply as changes in color from those that require new image content to be generated. Our method starts with an analysis phase which evaluates the success of color transfer by comparing the exemplar with the source. This analysis then drives a selective, iterative texture transfer algorithm that simultaneously predicts the success of color transfer on the target and synthesizes new content where needed. We demonstrate our unified algorithm by transferring large temporal changes between photographs, such as change of season (e.g., leaves on bare trees or piles of snow on a street) and flooding.
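A minimal sketch of the analysis phase, under the assumption that a cheap global color transfer plus a per-patch residual test suffices to flag where new content is needed (the mean-shift transfer, patch size, and threshold are all placeholders):

```python
import numpy as np

def transfer_failure_map(source, exemplar, patch=8, thresh=0.08):
    # Apply a crude global color transfer (mean shift) from exemplar to
    # source, then flag patches whose residual against the exemplar stays
    # high: these are candidates for texture transfer rather than recoloring.
    shifted = source + (exemplar.mean(axis=(0, 1)) - source.mean(axis=(0, 1)))
    err = np.abs(shifted - exemplar).mean(axis=2)
    H, W = err.shape
    flags = np.zeros((H // patch, W // patch), bool)
    for i in range(flags.shape[0]):
        for j in range(flags.shape[1]):
            block = err[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            flags[i, j] = block.mean() > thresh
    return flags
```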

12.
This paper introduces a method for automatically generating continuous line illustrations, drawings consisting of a single line, from a given input image. Our approach begins by inferring a graph from a set of edges extracted from the image in question and obtaining a path that traverses all edges of that graph. The resulting path is then subjected to a series of post-processing operations to transform it into a continuous line drawing. Moreover, our approach allows us to control the amount of detail portrayed in our line illustrations, which is particularly useful for simplifying the overall illustration while still retaining its most significant features. We also present several experimental results to demonstrate that our approach can automatically synthesize continuous line illustrations comparable to those of some contemporary artists.
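The abstract does not name the traversal algorithm, but a path visiting every edge exactly once is an Eulerian path, which Hierholzer's algorithm computes when at most two vertices have odd degree; a self-contained sketch follows (the paper's graph may require preprocessing to meet this condition):

```python
from collections import defaultdict

def eulerian_path(edges):
    # Hierholzer's algorithm on an undirected, connected graph with 0 or 2
    # odd-degree vertices: returns a vertex sequence using every edge once.
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    odd = [v for v in adj if len(adj[v]) % 2 == 1]
    stack = [odd[0] if odd else next(iter(adj))]   # start at an odd vertex
    path = []
    while stack:
        v = stack[-1]
        if adj[v]:                                  # unused edge remains at v
            u = adj[v].pop()
            adj[u].remove(v)
            stack.append(u)
        else:                                       # dead end: emit vertex
            path.append(stack.pop())
    return path[::-1]

print(eulerian_path([(0, 1), (1, 2), (2, 0), (0, 3)]))  # [0, 2, 1, 0, 3]
```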

13.
We propose 2D stick figures as a unified medium for visualizing and searching for human motion data. Stick figures can express a wide range of human motion, and they are easy to draw even for people without professional training. In our interface, the user can browse overall motion by viewing stick figure images generated from the database, and can retrieve motions directly by using sketched stick figures as an input query. We started with a preliminary survey to observe how people draw stick figures. Based on the rules observed in this user study, we developed an algorithm that converts motion data to a sequence of stick figures. A feature-based comparison method between stick figures provides interactive, progressive search, assisting the user's sketching by showing the current retrieval result at each stroke. We demonstrate the utility of the system with a user study in which participants retrieved example motion segments from a database of 102 motion files using our interface.
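The paper's stick figure features are not described here; as a toy comparison, one could normalize 2D joint positions for translation and scale and take an L2 distance:

```python
import numpy as np

def stick_figure_distance(a, b):
    # a, b: (J, 2) arrays of 2D joint positions for the same skeleton.
    # Normalize out translation and overall scale, then compare directly.
    def norm(p):
        p = p - p.mean(axis=0)
        return p / (np.linalg.norm(p) + 1e-9)
    return float(np.linalg.norm(norm(a) - norm(b)))
```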

14.
15.
Modeling realistic garments is essential for online shopping and many other applications, including virtual characters. Most existing methods require either a multi-camera capture setup or a restricted mannequin pose. We address the garment modeling problem from a single input image. We design an all-pose garment outline interpretation and a shading-based detail modeling algorithm. Our method first estimates the mannequin pose and body shape from the input image. It then interprets the garment outline with an oriented facet determined by the mannequin pose to generate the initial 3D garment model. Shape details such as folds and wrinkles are modeled by shape-from-shading techniques to improve the realism of the garment model. Our method achieves result quality comparable to prior methods from just a single image, significantly improving the flexibility of garment modeling.

16.
Mappings between color spaces are ubiquitous in image processing problems such as gamut mapping, decolorization, and image optimization for color-blind people. Simple color transformations often result in information loss and ambiguities, and one wishes to find an image-specific transformation that preserves as much as possible the structure of the original image in the target color space. In this paper, we propose Laplacian colormaps, a generic framework for structure-preserving color transformations between images. We use the image Laplacian to capture the structural information, and show that if the color transformation between two images preserves the structure, the respective Laplacians have similar eigenvectors, or in other words, are approximately jointly diagonalizable. Employing the relation between joint diagonalizability and commutativity of matrices, we use Laplacian commutativity as a criterion of color mapping quality and minimize it w.r.t. the parameters of a color transformation to achieve optimal structure preservation. We show numerous applications of our approach, including color-to-gray conversion, gamut mapping, multispectral image fusion, and image optimization for color-deficient viewers.
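The commutativity criterion itself is easy to state in code; the sketch below assumes the two image Laplacians are given as dense matrices and measures the Frobenius norm of their commutator:

```python
import numpy as np

def commutativity_cost(L1, L2):
    # If the color transform preserves structure, the Laplacians of the
    # original and mapped images approximately commute: L1 L2 ~ L2 L1.
    C = L1 @ L2 - L2 @ L1
    return np.linalg.norm(C, 'fro')

A = np.diag([1.0, 2.0, 3.0])
print(commutativity_cost(A, A))                 # 0.0: a matrix commutes with itself
print(commutativity_cost(A, np.ones((3, 3))))   # > 0: these do not commute
```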

17.
Decomposing an input image into its intrinsic shading and reflectance components is a long-standing ill-posed problem. We present a novel algorithm that requires no user strokes and works on a single image. Based on simple assumptions about reflectance and luminance, we first find clusters of similar reflectance in the image and build a linear system describing the connections and relations between them. Our assumptions are less restrictive than widely adopted Retinex-based approaches, and can be further relaxed in conflicting situations. The resulting system is robust even in the presence of areas where our assumptions do not hold. We show a wide variety of results, including natural images, objects from the MIT dataset, and texture images, along with several applications, proving the versatility of our method.
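One hedged reading of the linear system: pixels within a cluster share one log-reflectance, and shading should vary smoothly across cluster boundaries, giving one equation per neighboring pixel pair (the indexing scheme and the scale-fixing constraint below are illustrative, not the paper's exact formulation):

```python
import numpy as np

def solve_cluster_reflectance(log_img, labels, neighbor_pairs, n_clusters):
    # log_img: flat 1D log-intensities; labels: cluster id per pixel;
    # neighbor_pairs: (p, q) flat-index pairs of adjacent pixels.
    # Model: i = r + s (log domain). Smooth shading across a boundary gives
    # i_p - r_{c(p)} ~ i_q - r_{c(q)}, i.e. r_{c(p)} - r_{c(q)} = i_p - i_q.
    rows, rhs = [], []
    for p, q in neighbor_pairs:
        cp, cq = labels[p], labels[q]
        if cp == cq:
            continue
        row = np.zeros(n_clusters)
        row[cp], row[cq] = 1.0, -1.0
        rows.append(row)
        rhs.append(log_img[p] - log_img[q])
    rows.append(np.ones(n_clusters))    # fix the global scale ambiguity
    rhs.append(0.0)
    r, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return r                            # one log-reflectance per cluster
```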

18.
We present a semi-automatic method for reconstructing flower models from a single photograph. Such reconstruction is challenging since the 3D structure of a flower can appear ambiguous in projection. However, a flower head typically consists of petals embedded in 3D space that share similar shapes and form a certain level of regular structure. Our technique exploits these assumptions by first fitting a cone and subsequently a surface of revolution to the flower structure, and then computing individual petal shapes from their projections in the photo. Flowers with multiple layers of petals are handled by processing the layers separately. Occlusions are dealt with both within and between petal layers. We show that our method allows users to quickly generate a variety of realistic 3D flowers from photographs and to animate an image using the underlying models reconstructed by our method.

19.
We present photon beam diffusion, an efficient numerical method for accurately rendering translucent materials. Our approach interprets incident light as a continuous beam of photons inside the material. Numerically integrating diffusion from such extended sources has long been assumed computationally prohibitive, leading to the ubiquitous single-depth dipole approximation and the recent analytic sum-of-Gaussians approach employed by Quantized Diffusion. In this paper, we show that numerical integration of the extended beam is not only feasible, but provides increased speed, flexibility, numerical stability, and ease of implementation, while retaining the benefits of previous approaches. We leverage the improved diffusion model, but propose an efficient and numerically stable Monte Carlo integration scheme that gives equivalent results using only 3-5 samples instead of 20-60 Gaussians as in previous work. Our method can account for finite and multi-layer materials, and additionally supports directional incident effects at surfaces. We also propose a novel diffuse exact single-scattering term which can be integrated in tandem with the multi-scattering approximation. Our numerical approach furthermore allows us to easily correct inaccuracies of the diffusion model and even combine it with more general Monte Carlo rendering algorithms. We provide practical details necessary for efficient implementation, and demonstrate the versatility of our technique by incorporating it on top of several rendering algorithms in both research and production rendering systems.
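A structural sketch of the few-sample estimator, with a placeholder kernel standing in for the actual diffusion profile and an exponential importance distribution along the beam (the constants and the kernel are illustrative assumptions, not the paper's):

```python
import numpy as np

def beam_diffusion_estimate(sigma_t, depth_max, query_r, Rd, n_samples=5):
    # Monte Carlo estimate of the beam integral
    #   integral_0^depth_max  exp(-sigma_t * t) * Rd(dist) dt,
    # with samples importance-distributed ~ exp(-sigma_t * t) on [0, depth_max].
    rng = np.random.default_rng(7)
    u = rng.uniform(0, 1, n_samples)
    cdf_max = 1.0 - np.exp(-sigma_t * depth_max)
    t = -np.log(1.0 - u * cdf_max) / sigma_t        # inverse-CDF sampling
    pdf = sigma_t * np.exp(-sigma_t * t) / cdf_max
    d = np.sqrt(query_r ** 2 + t ** 2)              # beam point -> exit point
    weights = np.exp(-sigma_t * t) * Rd(d)          # attenuation x kernel
    return float(np.mean(weights / pdf))

# Toy kernel: isotropic falloff standing in for the true diffusion profile.
print(beam_diffusion_estimate(1.0, 10.0, 0.5, Rd=lambda d: np.exp(-d) / (d + 1e-3)))
```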

20.
3D garment capture is an important component of various applications such as free-viewpoint video, virtual avatars, online shopping, and virtual cloth fitting. Due to the complexity of the deformations, capturing 3D garment shapes requires controlled and specialized setups. A viable alternative is image-based garment capture. Capturing 3D garment shapes from a single image, however, is a challenging problem, and current solutions come with assumptions on the lighting, camera calibration, complexity of the human or mannequin poses considered, and, more importantly, a stable physical state for the garment and the underlying human body. In addition, most of these works require manual interaction and exhibit high run-times. We propose a new technique that overcomes these limitations, making garment shape estimation from an image a practical approach for dynamic garment capture. Starting from synthetic garment shape data generated through physically based simulations of various human bodies in complex poses obtained from Mocap sequences, and rendered under varying camera positions and lighting conditions, our novel method learns a mapping from rendered garment images to the underlying 3D garment model. This is achieved by training Convolutional Neural Networks (CNNs) to estimate 3D vertex displacements from a template mesh with a specialized loss function. We illustrate that this technique is able to recover the global shape of dynamic 3D garments from a single image, under varying factors such as challenging human poses, self-occlusions, various camera poses, and lighting conditions, at interactive rates. Results improve further when more than one view is integrated. Additionally, we show applications of our method to videos.
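A minimal stand-in for the learning setup, assuming fixed-size rendered images and a fixed template vertex count; the architecture and the plain MSE loss are placeholders for the paper's specialized ones:

```python
import torch
import torch.nn as nn

class GarmentNet(nn.Module):
    # Small CNN regressing per-vertex 3D displacements of a template mesh
    # from a rendered garment image (illustrative, not the published network).
    def __init__(self, n_vertices):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 4 * 4, n_vertices * 3)

    def forward(self, img):                        # img: (B, 3, H, W)
        return self.head(self.features(img)).view(img.shape[0], -1, 3)

net = GarmentNet(n_vertices=500)
img = torch.randn(2, 3, 64, 64)                    # placeholder rendered images
target = torch.randn(2, 500, 3)                    # template-mesh displacements
loss = nn.functional.mse_loss(net(img), target)    # stand-in for the paper's loss
loss.backward()
print(loss.item())
```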
