Similar Articles
20 similar articles found (search time: 0 ms)
1.
We present a new technique to jointly MIP-map BRDF and normal maps. Starting by generating an instant BRDF map, our technique builds its MIP-mapped versions with a highly efficient algorithm that interpolates von Mises-Fisher (vMF) distributions. In our BRDF MIP-maps, each pixel stores a vMF mixture approximating the average of all BRDF lobes from the finest level. Our method is capable of jointly MIP-mapping BRDF and normal maps, even with high-frequency variations, in real time while preserving high-quality reflectance details. Further, it is very fast, easy to implement, and requires no precomputation.
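The abstract does not give the lobe-merging formula, so as a rough, hypothetical illustration of the general idea (not the paper's algorithm), the Python/NumPy sketch below fits a single vMF lobe to each 2x2 block of child lobes using the standard mean-resultant-length approximation; the A(kappa) weighting and the map layout are assumptions.

import numpy as np

def fit_vmf(directions, weights):
    """Fit one von Mises-Fisher lobe (mean direction mu, concentration kappa) to a
    set of unit vectors, via the mean-resultant-length approximation for d = 3:
    kappa ~= r * (3 - r^2) / (1 - r^2)."""
    d = np.asarray(directions, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    mean_vec = (w[:, None] * d).sum(axis=0) / w.sum()  # weighted mean of unit vectors
    r = min(np.linalg.norm(mean_vec), 1.0 - 1e-7)      # mean resultant length in [0, 1)
    mu = mean_vec / max(r, 1e-8)                       # mean direction
    kappa = r * (3.0 - r * r) / (1.0 - r * r)          # concentration estimate
    return mu, kappa

def downsample_vmf_level(mu_map, kappa_map):
    """Build one coarser MIP level: merge each 2x2 block of vMF lobes into a single
    lobe. mu_map: (H, W, 3) unit mean directions; kappa_map: (H, W) concentrations;
    H and W are assumed even."""
    H, W, _ = mu_map.shape
    out_mu = np.zeros((H // 2, W // 2, 3))
    out_kappa = np.zeros((H // 2, W // 2))
    for y in range(0, H, 2):
        for x in range(0, W, 2):
            mus = mu_map[y:y + 2, x:x + 2].reshape(-1, 3)
            kappas = np.maximum(kappa_map[y:y + 2, x:x + 2].reshape(-1), 1e-4)
            # Weight each child by A(kappa) = coth(kappa) - 1/kappa, the expected
            # resultant length of a vMF sample, so sharper lobes pull harder.
            a = 1.0 / np.tanh(kappas) - 1.0 / kappas
            out_mu[y // 2, x // 2], out_kappa[y // 2, x // 2] = fit_vmf(mus, a)
    return out_mu, out_kappa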

2.
Particle-based simulation techniques, like the discrete element method or molecular dynamics, are widely used in many research fields. In real-time explorative visualization it is common to render the resulting data using opaque spherical glyphs with local lighting only. Due to massive overlaps, however, inner structures of the data are often occluded, rendering visual analysis impossible. Furthermore, local lighting is not sufficient, as several important features like complex shapes, holes, rifts or filaments cannot be perceived well. To address both problems we present a new technique that jointly supports transparency and ambient occlusion in a consistent illumination model. Our approach is based on the emission-absorption model of volume rendering. We provide analytic solutions to the volume rendering integral for several density distributions within a spherical glyph. Compared to constant transparency, our approach preserves the three-dimensional impression of the glyphs much better. We approximate ambient illumination with a fast hierarchical voxel cone-tracing approach, which builds on a new real-time voxelization of the particle data. Our implementation achieves interactive frame rates for millions of static or dynamic particles without any preprocessing. We illustrate the merits of our method on real-world data sets, gaining several new insights.
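For the simplest of the density distributions mentioned above, a constant density inside the glyph, the volume rendering integral has an elementary closed form. The sketch below (plain Python; the function name and interface are hypothetical, not the paper's) evaluates that transmittance along a ray.

import math

def sphere_ray_transmittance(ray_o, ray_d, center, radius, sigma_t):
    """Analytic transmittance of a ray through a constant-density spherical glyph
    under the emission-absorption model: T = exp(-sigma_t * chord_length).
    ray_d is assumed normalized. Returns 1.0 when the ray misses the sphere."""
    oc = [o - c for o, c in zip(ray_o, center)]
    b = sum(d * e for d, e in zip(ray_d, oc))          # projection of oc onto ray_d
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - c                                   # quarter of the discriminant
    if disc <= 0.0:
        return 1.0                                     # ray misses: fully transparent
    chord = 2.0 * math.sqrt(disc)                      # path length inside the sphere
    return math.exp(-sigma_t * chord)

# Example: a ray passing straight through the glyph centre.
alpha = 1.0 - sphere_ray_transmittance((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0, 0.8)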

3.
Photorealistic rendering of real-world environments is important in a range of different areas, including visual special effects, interior/exterior modelling, architectural modelling, cultural heritage, computer games and automotive design. Currently, rendering systems are able to produce photorealistic simulations of the appearance of many real-world materials. In the real world, the perceived appearance of an object depends on the lighting and on the object's material and surface characteristics: how the surface interacts with light, how light is reflected, scattered or absorbed by the surface, and the impact these interactions have on material appearance. To reproduce this, it is necessary to understand how materials interact with light, which is why the representation and acquisition of material models has become such an active research area. This survey of the state of the art in BRDF representation and acquisition presents an overview of BRDF (Bidirectional Reflectance Distribution Function) models used to represent surface and material reflection characteristics, and describes current acquisition methods for the capture and rendering of photorealistic materials.

4.
We present a near-instant method for acquiring facial geometry and reflectance using a set of commodity DSLR cameras and flashes. Our setup consists of twenty-four cameras and six flashes which are fired in rapid succession with subsets of the cameras. Each camera records only a single photograph and the total capture time is less than the 67ms blink reflex. The cameras and flashes are specially arranged to produce an even distribution of specular highlights on the face. We employ this set of acquired images to estimate diffuse color, specular intensity, specular exponent, and surface orientation at each point on the face. We further refine the facial base geometry obtained from multi-view stereo using estimated diffuse and specular photometric information. This allows final submillimeter surface mesostructure detail to be obtained via shape-from-specularity. The final system uses commodity components and produces models suitable for authoring high-quality digital human characters.

5.
We present a robust, unbiased technique for intelligent light-path construction in path-tracing algorithms. Inspired by existing path-guiding algorithms, our method learns an approximate representation of the scene's spatio-directional radiance field in an unbiased and iterative manner. To that end, we propose an adaptive spatio-directional hybrid data structure, referred to as SD-tree, for storing and sampling incident radiance. The SD-tree consists of an upper part—a binary tree that partitions the 3D spatial domain of the light field—and a lower part—a quadtree that partitions the 2D directional domain. We further present a principled way to automatically budget training and rendering computations to minimize the variance of the final image. Our method does not require tuning hyperparameters, although we allow limiting the memory footprint of the SD-tree. The aforementioned properties, its ease of implementation, and its stable performance make our method compatible with production environments. We demonstrate the merits of our method on scenes with difficult visibility, detailed geometry, and complex specular-glossy light transport, achieving better performance than previous state-of-the-art algorithms.
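To make the directional half of such a structure concrete, here is a toy Python sketch of a flux-weighted quadtree over the unit square, standing in for the 2D directional domain; the class layout, child ordering and helper names are assumptions for illustration and omit the adaptive refinement the paper describes.

import random

class QuadNode:
    """Leaf-or-internal node of a directional quadtree over the unit square
    (a stand-in for the mapped 2D directional domain). Each node stores the
    radiance (flux) accumulated in its cell."""
    def __init__(self, flux=0.0, children=None):
        self.flux = flux          # total recorded radiance in this cell
        self.children = children  # list of 4 QuadNodes, or None for a leaf

def sample_point(node, x=0.0, y=0.0, size=1.0):
    """Pick a point in the unit square with probability proportional to leaf flux:
    descend, choosing each child with probability flux_child / flux_parent, then
    sample uniformly inside the chosen leaf."""
    while node.children is not None:
        total = sum(max(c.flux, 1e-12) for c in node.children)
        u = random.random() * total
        for i, child in enumerate(node.children):
            u -= max(child.flux, 1e-12)
            if u <= 0.0:
                break
        size *= 0.5
        x += (i % 2) * size       # children laid out row-major: 0 1 / 2 3
        y += (i // 2) * size
        node = child
    return x + random.random() * size, y + random.random() * size

def pdf(node, px, py, x=0.0, y=0.0, size=1.0):
    """Probability density of the point (px, py) under the same sampling scheme."""
    density = 1.0
    while node.children is not None:
        total = sum(max(c.flux, 1e-12) for c in node.children)
        size *= 0.5
        i = (1 if px >= x + size else 0) + (2 if py >= y + size else 0)
        child = node.children[i]
        density *= 4.0 * max(child.flux, 1e-12) / total  # area shrinks by 4 per level
        if px >= x + size: x += size
        if py >= y + size: y += size
        node = child
    return density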

6.
This paper aims at rendering, in real time, the interactive visual effects inherent to the complex interactions between trees and rain, in order to increase the realism of natural rainy scenes. Such a complex phenomenon involves a great number of physical processes influenced by various interlinked factors, and its rendering represents a significant challenge in computer graphics. We approach this problem by introducing an original method to render drops dripping from leaves after raindrops are intercepted by the foliage. Our method introduces a new hydrological model representing interactions between rain and foliage through a phenomenological approach. Our model reduces the complexity of the phenomenon by representing multiple dripping drops with a new fully functional form evaluated per pixel on the fly, providing improved control over density and physical properties. Furthermore, an efficient real-time rendering scheme, taking full advantage of the latest GPU hardware capabilities, allows the rendering of a large number of dripping drops even for complex scenes.

7.
This paper presents a time-varying, multi-layered, biophysically based model of the optical properties of human skin, suitable for simulating appearance changes due to aging. We have identified the key aspects that cause such changes, both in terms of the structure of skin and its chromophore concentrations, and rely on the extensive medical and optical tissue literature for accurate data. Our model can be expressed in terms of biophysical parameters, optical parameters commonly used in graphics and rendering (such as spectral absorption and scattering coefficients), or, more intuitively, higher-level parameters such as age, gender, skin care or skin type. It can be used with any rendering algorithm that uses diffusion profiles, and it allows us to automatically simulate different types of skin at different stages of aging, avoiding the need for artistic input or costly capture processes. While the presented skin model is inspired by tissue optics studies, we also provide a simplified version valid for non-diagnostic applications.

8.
Renderings of animation sequences with physics-based Monte Carlo light transport simulations are exceedingly costly to generate frame by frame, yet much of this computation is highly redundant due to the strong coherence in space, time and among samples. A promising approach pursued in prior work entails subsampling the sequence in space, time, and number of samples, followed by image-based spatio-temporal upsampling and denoising. These methods can provide significant performance gains, though major issues remain: first, in a multiple scattering simulation, the final pixel color is the composite of many different light transport phenomena, and this conflicting information causes artifacts in image-based methods. Second, motion vectors are needed to establish correspondence between the pixels in different frames, but it is unclear how to obtain them for most kinds of light paths (e.g. an object seen through a curved glass panel). To reduce these ambiguities, we propose a general decomposition framework, where the final pixel color is separated into components corresponding to disjoint subsets of the space of light paths. Each component is accompanied by motion vectors and other auxiliary features such as reflectance and surface normals. The motion vectors of specular paths are computed using a temporal extension of manifold exploration and the remaining components use a specialized variant of optical flow. Our experiments show that this decomposition leads to significant improvements in three image-based applications: denoising, spatial upsampling, and temporal interpolation.

9.
We present a new approach to microfacet-based BSDF importance sampling. Previously proposed sampling schemes for popular analytic BSDFs typically begin by choosing a microfacet normal at random in a way that is independent of the direction of the incident light. Sampling the full BSDF with these normals requires arbitrarily large sample weights, leading to possible fireflies. Additionally, at grazing angles nearly half of the sampled normals face away from the incident ray and must be rejected, making the sampling scheme inefficient. Instead, we show how to use the distribution of visible normals directly to generate samples, where normals are weighted by their projection factor toward the incident direction. In this way, no backfacing normals are sampled and the sample weights contain only the shadowing factor of outgoing rays (and additionally a Fresnel term for conductors). Arbitrarily large sample weights are avoided and variance is reduced. Since the BSDF depends on the microsurface model, we describe our sampling algorithm for two models: the V-cavity and the Smith models. We demonstrate results for both isotropic and anisotropic rough conductors and dielectrics with Beckmann and GGX distributions.
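For the GGX/Smith case, visible-normal sampling is often written today in the spherical-cap form popularized by Heitz (2018); the Python sketch below follows that later formulation rather than the exact routine of the paper above, and the tangent-space convention (z as the geometric normal, v.z > 0) is an assumption.

import math

def sample_ggx_vndf(v, alpha_x, alpha_y, u1, u2):
    """Sample a microfacet normal from the GGX distribution of visible normals,
    for a view direction v given in tangent space."""
    # 1. Stretch the view vector so the isotropic alpha = 1 case can be sampled.
    vx, vy, vz = alpha_x * v[0], alpha_y * v[1], v[2]
    inv_len = 1.0 / math.sqrt(vx * vx + vy * vy + vz * vz)
    vh = (vx * inv_len, vy * inv_len, vz * inv_len)

    # 2. Build an orthonormal frame (t1, t2, vh) around the stretched view vector.
    lensq = vh[0] * vh[0] + vh[1] * vh[1]
    if lensq > 1e-12:
        inv = 1.0 / math.sqrt(lensq)
        t1 = (-vh[1] * inv, vh[0] * inv, 0.0)
    else:
        t1 = (1.0, 0.0, 0.0)
    t2 = (vh[1] * t1[2] - vh[2] * t1[1],
          vh[2] * t1[0] - vh[0] * t1[2],
          vh[0] * t1[1] - vh[1] * t1[0])

    # 3. Sample a point on a half-disc warped toward the visible hemisphere.
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    p1 = r * math.cos(phi)
    p2 = r * math.sin(phi)
    s = 0.5 * (1.0 + vh[2])
    p2 = (1.0 - s) * math.sqrt(max(0.0, 1.0 - p1 * p1)) + s * p2

    # 4. Project onto the hemisphere and unstretch to obtain the sampled normal.
    p3 = math.sqrt(max(0.0, 1.0 - p1 * p1 - p2 * p2))
    nh = tuple(p1 * t1[i] + p2 * t2[i] + p3 * vh[i] for i in range(3))
    nx, ny, nz = alpha_x * nh[0], alpha_y * nh[1], max(1e-6, nh[2])
    inv_len = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx * inv_len, ny * inv_len, nz * inv_len)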

10.
Natural-looking insect animation is very difficult to simulate. The fast movement and small scale of insects often defeat standard motion capture techniques, while manual key-framing and physics-driven methods require significant time and effort because of the insects' delicate structure, which prevents practical applications. In this paper, we address this challenge by presenting a two-level control framework that automates the modeling and authoring of insect locomotion. On the top level, we design a Triangle Placement Engine that automatically determines the location and orientation of the insects' foot contacts, given a user-defined trajectory and settings including speed, load, path and terrain. On the low level, we relate a Central Pattern Generator to the triangle profiles, with the assistance of a Controller Look-Up Table, to quickly simulate the physically based movement of insects. With our approach, animators can directly author insect behavior across a wide locomotion repertoire, including walking along a specified path or on uneven terrain, dynamically adjusting to external perturbations, and collectively transporting prey back to the nest.

11.
Many-light rendering, which converts complex global illumination computations into a simple sum of the illumination from virtual point lights (VPLs), has become increasingly popular for predictive rendering in recent years. A huge number of VPLs are usually required for predictive rendering, at the cost of extensive computation time. While previous methods achieve significant speedups by clustering VPLs, none of them can estimate the total error introduced by clustering. This drawback forces users into tedious trial-and-error processes to obtain rendered images with reliable accuracy. In this paper, we propose an error estimation framework for many-light rendering. Our method transforms VPL clustering into stratified sampling combined with confidence intervals, which enables the user to estimate the error due to clustering without the costly computation of summing the illumination from all the VPLs. Our estimation framework can handle arbitrary BRDFs and is accelerated by visibility caching, both of which make our method more practical. The experimental results demonstrate that our method estimates the error much more accurately than the previous clustering method.
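As a simplified, hypothetical illustration of the statistics involved (not the paper's framework), the sketch below treats each VPL cluster as a stratum, samples a few representatives per cluster, and reports a stratified total with a Gaussian confidence interval; plain numbers stand in for evaluated per-VPL shading contributions.

import math
import random

def stratified_estimate(clusters, samples_per_cluster=4, z=1.96):
    """Estimate the sum of per-VPL contributions over clustered VPLs together with
    a confidence interval, by treating each cluster as a stratum. `clusters` is a
    list of lists of per-VPL contributions."""
    total, var = 0.0, 0.0
    for stratum in clusters:
        n_h = len(stratum)                       # stratum (cluster) size
        m = min(samples_per_cluster, n_h)
        picks = random.sample(stratum, m)        # sample a few VPLs from the cluster
        mean = sum(picks) / m
        total += n_h * mean                      # scale back up to the full cluster
        if m > 1:
            s2 = sum((p - mean) ** 2 for p in picks) / (m - 1)
            # finite-population correction, since we sample without replacement
            var += n_h * n_h * s2 / m * (1.0 - m / n_h)
    half_width = z * math.sqrt(var)
    return total, (total - half_width, total + half_width)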

12.
In this paper, we present an on-line, real-time, physics-based approach to motion control with contact repositioning, based on a low-dimensional dynamics model and example motion data. Our approach first generates a reference motion at run time, according to an on-line user request, by transforming an example motion extracted from a motion library. Guided by the reference motion, it repeatedly generates an optimal control policy, one small time window at a time, over a sequence of partially overlapping windows, each covering a couple of footsteps of the reference motion; this supports on-line performance. On top of this, our system dynamics and problem formulation allow us to derive closed-form derivative functions by exploiting the low-dimensional dynamics model together with the example motion data. These derivative functions and their sparse structures facilitate real-time performance. Our approach also allows contact foot repositioning, so that the character can robustly respond to external perturbations or environmental changes and can effectively perform locomotion tasks such as stepping on stones.

13.
We address several limitations of the sampling-based motion control method of Liu et al. [LYvdP*10]. The key insight is to learn from past control reconstruction trials through sample distribution adaptation. Coupled with a sliding window scheme for better performance and an averaging method for noise reduction, the improved algorithm can efficiently construct open-loop controls of good quality for long and challenging reference motions. Our ideas are intuitive and the implementations are simple. We compare the improved algorithm with the original algorithm both qualitatively and quantitatively, and demonstrate its effectiveness on a variety of motions ranging from stylized walking and dancing to gymnastics and martial arts routines.

14.
Traditionally, Lagrangian fields such as finite-time Lyapunov exponents (FTLE) are precomputed on a discrete grid and ray cast afterwards. This, however, introduces both grid discretization errors and sampling errors during ray marching. In this work, we apply a progressive, view-dependent, Monte Carlo-based approach to the visualization of such Lagrangian fields in time-dependent flows. Our approach avoids grid discretization and ray marching errors completely, is consistent, and has a low memory consumption. The system provides noisy previews that converge over time to an accurate, high-quality visualization. Compared to traditional approaches, the proposed system avoids explicitly predefined fieldline seeding structures and uses a Monte Carlo sampling strategy named Woodcock tracking to distribute samples along the view ray. Accelerating this sampling strategy requires local upper bounds for the FTLE values, which we progressively acquire during rendering. Our approach is tailored for high-quality visualizations of complex FTLE fields and is guaranteed to faithfully represent detailed ridge surface structures as indicators for Lagrangian coherent structures (LCS). We demonstrate the effectiveness of our approach on a set of analytic test cases and real-world numerical simulations.
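Woodcock (delta) tracking itself is standard; a minimal sketch of its core loop follows. The sigma_at callback and the interval parameters are generic assumptions: in the setting above the extinction would be derived from FTLE values and the majorant acquired progressively during rendering.

import math
import random

def woodcock_track(sigma_at, sigma_max, t_min, t_max):
    """Sample a tentative interaction distance along a ray through a heterogeneous
    medium with Woodcock (delta) tracking. sigma_at(t) returns the local extinction
    at ray parameter t and sigma_max is an upper bound (majorant) over [t_min, t_max].
    Returns a distance in [t_min, t_max], or None if the ray leaves the interval."""
    t = t_min
    while True:
        t -= math.log(1.0 - random.random()) / sigma_max   # free flight with majorant
        if t >= t_max:
            return None                                    # escaped the interval
        if random.random() < sigma_at(t) / sigma_max:      # accept a real collision
            return t
        # otherwise: null collision, keep marching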

15.
Procedural shaders are a vital part of modern rendering systems. Despite their prevalence, however, procedural shaders remain sensitive to aliasing any time they are sampled at a rate below the Nyquist limit. Antialiasing is typically achieved through numerical techniques like supersampling or by precomputing integrals stored in mipmaps. This paper explores the problem of analytically computing a band-limited version of a procedural shader as a continuous function of the sampling rate. There is currently no known way of analytically computing these integrals in general. We explore the conditions under which exact solutions are possible and develop several approximation strategies for when they are not. Compared to supersampling methods, our approach produces shaders that are less expensive to evaluate and closer to ground truth in many cases. Compared to mipmapping or precomputation, our approach produces shaders that support an arbitrary bandwidth parameter and require less storage. We evaluate our method on a range of spatially-varying shader functions, automatically producing antialiased versions that have error comparable to 4×4 multisampling but can be over an order of magnitude faster. While not complete, our approach is a promising first step toward this challenging goal and indicates a number of interesting directions for future work.
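A textbook-style example of the kind of closed-form band-limiting described above (not taken from the paper): box-filtering a sine-stripe shader over a footprint of width w has the exact solution sin(x)·sinc(w/2), sketched here in Python.

import math

def sine_shader(x):
    """A toy 1D procedural shader: a pure sine stripe pattern."""
    return 0.5 + 0.5 * math.sin(x)

def sine_shader_bandlimited(x, w):
    """Closed-form box-filtered version of sine_shader over a footprint of width w:
    (1/w) * integral of sin(t) over [x - w/2, x + w/2] = sin(x) * sinc(w/2),
    so the stripe contrast fades smoothly as the pixel footprint w grows."""
    if w <= 0.0:
        return sine_shader(x)
    sinc = math.sin(0.5 * w) / (0.5 * w)
    return 0.5 + 0.5 * math.sin(x) * sinc

# As w -> 0 the filtered shader matches the original; as w grows it converges to
# the mean value 0.5, which is exactly the behaviour supersampling approximates.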

16.
We present Forward Light Cuts, a novel approach to real-time global illumination using forward rendering techniques. We focus on unshadowed diffuse interactions for the first indirect light bounce in the context of large models, such as the complex scenes usually encountered in CAD application scenarios. Our approach efficiently generates and uses a multiscale radiance cache by exploiting the geometry-specific stages of the graphics pipeline, namely the tessellator unit and the geometry shader. To do so, we associate virtual point lights with the scene's triangles and design a stochastic decimation process chained with a partitioning strategy that accounts for both nearby strong light reflections and distant regions from which numerous virtual point lights collectively contribute strongly to the end pixel. Our probabilistic solution is supported by a mathematical analysis and a number of experiments covering a wide range of application scenarios. As a result, our algorithm requires no precomputation of any kind, is compatible with dynamic viewpoints, lighting conditions, geometry and materials, and scales to tens of millions of polygons on current graphics hardware.

17.
Texture atlases are commonly used as representations for mesh parameterizations in numerous applications, including texture and normal mapping. Packing is therefore an important post-processing step that tries to place and orient the individual parameterizations so that the available space is used as efficiently as possible. However, since packing is NP-hard, only heuristics can be used in practice to find near-optimal solutions. In this publication we introduce the new search space of modulo-valid packings. The key idea is to allow the texture charts to wrap around in the atlas. Utilizing this search space, we propose a new algorithm for automatically packing texture atlases. In the evaluation section we show that our algorithm achieves solutions with a significantly higher packing efficiency than the state of the art, especially for complex packing problems.
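The wrap-around placement test at the heart of this idea can be illustrated with a small occupancy-grid sketch; the chart representation and function names are hypothetical, and the paper's actual packing algorithm is considerably more involved.

def can_place_modulo(atlas, chart, ox, oy):
    """Test whether a chart (a set of occupied texels) can be placed at offset
    (ox, oy) in a fixed-size atlas when texels are allowed to wrap around the
    atlas borders (indices taken modulo the atlas resolution)."""
    h, w = len(atlas), len(atlas[0])
    for (cx, cy) in chart:
        if atlas[(oy + cy) % h][(ox + cx) % w]:
            return False                      # would overlap an already packed chart
    return True

def place_modulo(atlas, chart, ox, oy):
    """Mark the chart's texels as occupied, wrapping around the atlas borders."""
    h, w = len(atlas), len(atlas[0])
    for (cx, cy) in chart:
        atlas[(oy + cy) % h][(ox + cx) % w] = True

# Example: a 4x4 atlas; an L-shaped chart placed at (3, 3) wraps onto the opposite
# borders, which is exactly the kind of placement a modulo-valid packing permits.
atlas = [[False] * 4 for _ in range(4)]
chart = [(0, 0), (1, 0), (0, 1)]
if can_place_modulo(atlas, chart, 3, 3):
    place_modulo(atlas, chart, 3, 3)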

18.
We present a novel appearance model for paper. Based on our appearance measurements for matte and glossy paper, we find that paper exhibits a combination of subsurface scattering, specular reflection, retroreflection, and surface sheen. Classic microfacet and simple diffuse reflection models cannot simulate the double-sided appearance of a thin layer. Our novel BSDF model matches our measurements for paper and accounts for both reflection and transmission properties. At the core of the BSDF model is a method for converting a multi-layer subsurface scattering model (BSSRDF) into a BSDF, which allows us to retain physically-based absorption and scattering parameters obtained from the measurements. We also introduce a method for computing the amount of light available for subsurface scattering due to transmission through a rough dielectric surface. Our final model accounts for multiple scattering, single scattering, and surface reflection and is capable of rendering paper with varying levels of roughness and glossiness on both sides.

19.
We propose a stable and efficient particle-based method for simulating highly viscous fluids that can generate coiling and buckling phenomena and handle variable viscosity. In contrast to previous methods that use explicit integration, our method uses an implicit formulation to improve the robustness of viscosity integration, thereby enabling the use of larger time steps and higher viscosities. We use Smoothed Particle Hydrodynamics to solve the full form of viscosity, constructing a sparse linear system with a symmetric positive definite matrix, while exploiting the variational principle that automatically enforces the boundary condition on free surfaces. We also propose a new method for extracting the matrix coefficients contributed by second-ring neighbor particles, to efficiently solve the linear system using a conjugate gradient solver. Several examples demonstrate the robustness and efficiency of our implicit formulation over previous methods and illustrate the versatility of our method.
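The linear solve relies on a standard conjugate gradient iteration for a symmetric positive definite system; a generic, matrix-free Python/NumPy sketch follows. The apply_A operator is an assumed stand-in for the viscosity matrix built from particle neighborhoods, and the paper's second-ring coefficient extraction is not reproduced.

import numpy as np

def conjugate_gradient(apply_A, b, x0=None, tol=1e-6, max_iter=200):
    """Solve A x = b for a symmetric positive definite A, given only the
    matrix-vector product apply_A(x), so A can remain a sparse, matrix-free
    operator assembled from particle neighborhoods."""
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b) if x0 is None else np.array(x0, dtype=float)
    r = b - apply_A(x)                   # initial residual
    p = r.copy()                         # initial search direction
    rs_old = float(r @ r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / float(p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = float(r @ r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p    # next conjugate search direction
        rs_old = rs_new
    return x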

20.
Distribution effects such as diffuse global illumination, soft shadows and depth of field are most accurately rendered using Monte Carlo ray or path tracing. However, physically accurate algorithms can take hours to converge to a noise-free image. A recent body of work has begun to bridge this gap, showing that both individual and multiple effects can be achieved accurately and efficiently. These methods use sparse sampling, GPU ray tracers, and adaptive filtering for reconstruction. They are based on a Fourier analysis, which models distribution effects as a wedge in the frequency domain. The wedge can be approximated either as a single large axis-aligned filter, which is fast but covers a large area outside the wedge and therefore requires a higher sampling rate, or as a tighter sheared filter, which is slow to compute. The state-of-the-art fast sheared filtering method combines a low sampling rate with efficient filtering, but has been demonstrated for individual distribution effects only and is limited by high-dimensional data storage and processing. We present a novel filter for efficient rendering of combined effects, involving soft shadows and depth of field, with global (diffuse indirect) illumination. We approximate the wedge spectrum with multiple axis-aligned filters, marrying the speed of axis-aligned filtering with an even more accurate (compact and tighter) representation than sheared filtering. We demonstrate rendering of single effects at sampling and frame rates comparable to fast sheared filtering. Our main practical contribution is in rendering multiple distribution effects, which have not even been demonstrated accurately with sheared filtering. For this case, we present an average speedup of 6× compared with previous axis-aligned filtering methods.
