Similar Documents
 20 similar documents found (search time: 15 ms)
1.
At the foundation of many rendering algorithms lies the symmetry between the path traversed by light and its adjoint path starting from the camera. However, several effects, including polarization and fluorescence, break that symmetry and are defined only along the direction of light propagation. This reduces the applicability of bidirectional methods, which exploit this symmetry to simulate light transport efficiently. In this work, we focus on how to include these non-symmetric effects within a bidirectional rendering algorithm. We generalize the path integral to support the constraints imposed by non-symmetric light transport. Based on this theoretical framework, we propose modifications to two bidirectional methods, namely bidirectional path tracing and photon mapping, extending them to support polarization and fluorescence in both steady and transient state.
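For intuition: with polarization, radiance becomes a Stokes vector and each interaction a Mueller matrix, and since matrix products do not commute, even a camera-traced path must compose its matrices in the order of light propagation. A minimal sketch of that ordering constraint, with hypothetical helper names (this is not the paper's path-integral formulation):

```python
import numpy as np

def path_throughput_polarized(mueller_matrices, source_stokes):
    """Compose Mueller matrices strictly in the direction of light propagation
    (source -> camera), regardless of how the path was sampled.

    mueller_matrices: list of 4x4 Mueller matrices ordered from the light
        source towards the camera.
    source_stokes: 4-vector Stokes emission of the light source."""
    s = np.asarray(source_stokes, dtype=float)
    for M in mueller_matrices:          # order matters: these matrices do not commute
        s = np.asarray(M) @ s
    return s

def path_throughput_from_camera(mueller_matrices_camera_order, source_stokes):
    """A camera-traced (adjoint) path collects its vertices in reverse order;
    reversing before composing keeps the product in light-propagation order."""
    return path_throughput_polarized(
        list(reversed(mueller_matrices_camera_order)), source_stokes)
```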

2.
Rendering animations of scenes with deformable objects, camera motion, and complex illumination, including indirect lighting and arbitrary shading, is a long-standing challenge. Prior work has shown that complex lighting can be accurately approximated by a large collection of point lights. In this formulation, rendering an animation sequence becomes the problem of efficiently shading many surface samples from many lights across several frames. This paper presents a tensor formulation of the animated many-light problem, where each element of the tensor expresses the contribution of one light to one pixel in one frame. We sparsely sample rows and columns of the tensor and introduce a clustering algorithm to select a small number of representative lights that efficiently approximate the animation. Our algorithm achieves efficiency by reusing representatives across frames while minimizing temporal flicker. We demonstrate our algorithm in a variety of scenes that include deformable objects, complex illumination, and arbitrary shading, and show that a surprisingly small number of representative lights is sufficient for high-quality rendering. We believe our algorithm will find practical use in applications that require fast previews of complex animation.
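To make the column-clustering step concrete, here is a toy sketch under assumed names (k-means stands in for the paper's clustering, and the cross-frame reuse and flicker control are omitted): lights are clustered by their sampled contribution vectors and one energy-preserving representative is picked per cluster.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_lights(sampled_columns, num_clusters, rng=np.random.default_rng(0)):
    """sampled_columns: (num_samples, num_lights) array; entry [i, j] is the
    contribution of light j to the i-th sampled (pixel, frame) pair.
    Returns (light_index, weight) pairs: one representative per cluster,
    weighted so the cluster's total energy is preserved in expectation."""
    # Cluster lights by the similarity of their sampled contribution vectors.
    labels = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(sampled_columns.T)
    representatives = []
    for c in range(num_clusters):
        members = np.flatnonzero(labels == c)
        totals = sampled_columns[:, members].sum(axis=0)
        cluster_total = totals.sum()
        if cluster_total <= 0.0:
            continue
        # Pick one member with probability proportional to its contribution.
        probs = totals / cluster_total
        pick = rng.choice(members, p=probs)
        representatives.append((int(pick), cluster_total / totals[members == pick][0]))
    return representatives
```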

3.
Many-light rendering, which converts complex global-illumination computations into a simple sum of the illumination from virtual point lights (VPLs), has become increasingly popular for predictive rendering. A huge number of VPLs is usually required for predictive rendering, at the cost of extensive computation time. While previous methods achieve significant speedups by clustering VPLs, none of them can estimate the total error introduced by clustering. This drawback forces users into tedious trial-and-error to obtain rendered images of reliable accuracy. In this paper, we propose an error-estimation framework for many-light rendering. Our method casts VPL clustering as stratified sampling combined with confidence intervals, which enables the user to estimate the clustering error without the costly computation required to sum the illumination from all VPLs. Our estimation framework handles arbitrary BRDFs and is accelerated with visibility caching, both of which make the method more practical. The experimental results demonstrate that our method estimates the error much more accurately than the previous clustering method.
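A minimal sketch of the stratified-sampling-plus-confidence-interval idea, assuming each VPL cluster is treated as a stratum and using a normal approximation for the interval (illustrative names; the paper's framework, including visibility caching, is more refined):

```python
import math
import numpy as np

Z95 = 1.96  # normal quantile for an approximate 95% confidence interval

def estimate_with_confidence(clusters, contribution, samples_per_cluster=8,
                             rng=np.random.default_rng(0)):
    """clusters: list of lists of VPL indices (the strata).
    contribution(vpl_index) -> scalar contribution of that VPL at a shading point.
    Returns (estimate, half_width): a stratified estimate of the summed VPL
    contribution and the half-width of an approximate 95% confidence interval."""
    total, variance = 0.0, 0.0
    for vpls in clusters:
        n = min(samples_per_cluster, len(vpls))
        picks = rng.choice(vpls, size=n, replace=False)
        values = np.array([contribution(i) for i in picks])
        mean = values.mean()
        var = values.var(ddof=1) if n > 1 else 0.0
        total += len(vpls) * mean                  # scale the sample mean to the stratum size
        variance += (len(vpls) ** 2) * var / n     # variance of this stratum's estimate
    return total, Z95 * math.sqrt(variance)
```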

4.
Area lights add tremendous realism, but rendering them interactively proves challenging. Integrating visibility is costly, even with current shadowing techniques, and existing methods frequently ignore illumination variations at unoccluded points due to changing radiance over the light's surface. We extend recent image-space work that reduces costs by gathering illumination in a multiresolution fashion, rendering varying frequencies at corresponding resolutions. To compute visibility, we eschew shadow maps and instead rely on a coarse screen-space voxelization, which effectively provides a cheap layered depth image for binary visibility queries via ray marching. Our technique requires no precomputation and runs at interactive rates, allowing scenes with large area lights, including dynamic content such as video screens.
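A generic sketch of binary visibility via ray marching an occupancy grid (a uniform boolean grid stands in for the paper's layered screen-space voxelization; names and the fixed-step march are illustrative):

```python
import numpy as np

def voxel_visible(occupancy, p0, p1, num_steps=64):
    """Binary visibility query by marching a coarse occupancy grid.

    occupancy: boolean array of shape (nx, ny, nz); True = occupied voxel.
    p0, p1: segment endpoints in voxel-grid coordinates (floats).
    Returns True if the segment from p0 to p1 is unoccluded."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    for t in np.linspace(0.0, 1.0, num_steps)[1:-1]:   # skip the endpoints themselves
        v = np.floor(p0 + t * (p1 - p0)).astype(int)
        if (v < 0).any() or (v >= np.array(occupancy.shape)).any():
            continue                                    # outside the grid: assume empty
        if occupancy[tuple(v)]:
            return False
    return True
```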

5.
Many-light methods approximate the light transport in a scene by computing the direct illumination from many virtual point lights (VPLs), and render low-noise images covering a wide range of performance and quality goals. However, they are very inefficient at representing glossy light transport, because a VPL on a glossy surface illuminates only a small fraction of the scene, and a tremendous number of VPLs may be necessary to render acceptable images. In this paper, we introduce Rich-VPLs which, in contrast to standard VPLs, represent a multitude of light paths and thus have a more widespread emission profile on glossy surfaces and in scenes with multiple primary light sources. As a result, a single Rich-VPL contributes to larger portions of a scene at negligible additional shading cost. Our second contribution is a placement strategy for (Rich-)VPLs proportional to sensor importance times radiance. Although both Rich-VPLs and the improved placement can be used individually, they complement each other ideally and share interim computation. Furthermore, both complement existing many-light methods, e.g. Lightcuts or the Virtual Spherical Lights method, and can improve their efficiency as well as their applicability to scenes with glossy materials and many primary light sources.
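A hedged sketch of placing VPLs proportionally to importance times radiance, phrased here as a resampled-importance-sampling step over candidate vertices (hypothetical names; the paper's placement strategy is more involved):

```python
import numpy as np

def place_vpls(candidates, radiance, importance, num_vpls,
               rng=np.random.default_rng(0)):
    """Resample candidate light-path vertices with probability proportional
    to (sensor importance * radiance), so VPLs end up where they matter for
    the image.

    candidates: list of candidate vertices (any payload).
    radiance, importance: per-candidate scalar estimates.
    Returns the selected candidates and the weights that keep the estimator unbiased."""
    score = np.asarray(radiance, float) * np.asarray(importance, float)
    if score.sum() <= 0.0:
        return [], np.array([])
    p = score / score.sum()
    idx = rng.choice(len(candidates), size=num_vpls, p=p)
    # Each selected VPL is reweighted by 1 / (num_vpls * p) so the expected
    # total energy is preserved.
    weights = 1.0 / (num_vpls * p[idx])
    return [candidates[i] for i in idx], weights
```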

6.
A rendering system for interior scenes is proposed in this paper. Light usually reaches an interior scene through small regions such as windows or lampshades (abat-jours), which we call portals. To render interior scenes with portals, we extend traditional precomputed radiance transfer approaches. In our approach, a bounding sphere of the interior, which we call a shell, is created centered at each portal, and the light transferred from the shell towards the interior through the portal is precomputed. Each shell acts as an environment light source, and its intensity distribution is determined by rendering images of the scene viewed from the center of the shell. By updating the intensity distribution of each shell at every frame, we can handle dynamic objects outside the shells. The material of the portals can also be modified at run time (e.g. changing from transparent glass to frosted glass). Several applications are shown, including a cathedral lit by skylight at different times of the day and a car driving through a town, rendered at interactive frame rates with a dynamic viewpoint.

7.
In this paper we present a novel method for high-quality rendering of scenes with participating media. Our technique is based on instant radiosity, which is used to approximate indirect illumination between surfaces by gathering light from a set of virtual point lights (VPLs). It has been shown that this principle can be applied to participating media as well, so that the combined single scattering contribution of VPLs within the medium yields full multiple scattering. As in the surface case, VPL methods for participating media are prone to singularities, which appear as bright "splotches" in the image. These artifacts are usually countered by clamping the VPLs' contribution, but this leads to energy loss within the short-distance light transport. Bias compensation recovers the missing energy, but previous approaches are prohibitively costly. We investigate VPL-based methods for rendering scenes with participating media, and propose a novel and efficient approximate bias compensation technique. We evaluate our technique using various test scenes, showing it to be visually indistinguishable from ground truth.
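The clamping that causes the energy loss can be written down in a few lines. A hedged sketch for one VPL and one medium point (illustrative parameter names; the paper's bias compensation then estimates and restores the clamped-away energy):

```python
def vpl_contribution_clamped(flux, distance, phase, transmittance, clamp_bound):
    """Single-scattering contribution of one VPL at a point in the medium,
    with the geometry term clamped to suppress the 1/d^2 singularity.

    flux:          power carried by the VPL
    distance:      distance between the VPL and the shading point
    phase:         phase-function value for the connection direction
    transmittance: medium transmittance along the connection segment
    clamp_bound:   upper bound on the geometry term (larger bound = less bias)"""
    geometry = 1.0 / max(distance * distance, 1e-8)
    clamped = min(geometry, clamp_bound)
    contribution = flux * phase * transmittance * clamped
    # The energy removed by clamping, flux * phase * transmittance *
    # (geometry - clamped), is exactly what bias compensation has to put back.
    return contribution
```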

8.
Signed distance functions (SDFs) to explicit or implicit surface representations are used intensively in various computer graphics and visualization algorithms. Among other uses, they optimize collision detection, help reconstruct data fields or surfaces, and are an obligatory ingredient of most level-set methods, which are common in scientific visualization for extracting surfaces from scalar or vector fields. Common approaches for constructing an SDF to a surface are based either on iteratively solving a special partial differential equation or on marching algorithms that involve a polygonization of the surface. We propose a novel method for a non-iterative approximation of an SDF and its derivatives in a vicinity of a manifold. We use a second-order algebraic fitting scheme to ensure high accuracy of the approximation. The manifold is defined (explicitly or implicitly) as an isosurface of a given volumetric scalar field, which may be given at a set of irregular and unstructured samples. Stability and reliability of the SDF generation are achieved by a proper scaling of the weights for the Moving Least Squares approximation, an accurate choice of neighbors, and appropriate handling of degenerate cases. We obtain the solution in an explicit form, so no iterative solving is necessary, which makes our approach fast.
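A heavily simplified sketch of the idea, assuming a Gaussian-weighted second-order (quadric) MLS fit around the query point followed by a first-order distance estimate; the paper's weight scaling, neighbor selection and degenerate-case handling are omitted, and all names are illustrative:

```python
import numpy as np

def mls_signed_distance(p, sample_pts, sample_vals, iso=0.0, h=1.0):
    """First-order signed-distance estimate to the isosurface {f = iso} of a
    scalar field given at scattered samples, via a weighted quadric MLS fit.

    p: query point (3,), sample_pts: (n, 3), sample_vals: (n,), n >= 10
    reasonably distributed samples are needed for a well-posed fit."""
    d = np.asarray(sample_pts, float) - np.asarray(p, float)
    w = np.exp(-(d ** 2).sum(axis=1) / (h * h))            # MLS weights
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    # Quadratic basis in local coordinates centered at p.
    A = np.column_stack([np.ones_like(x), x, y, z,
                         x * x, y * y, z * z, x * y, x * z, y * z])
    sw = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(sw[:, None] * A, sw * np.asarray(sample_vals, float),
                                 rcond=None)
    f = coeffs[0] - iso                                     # fitted field value at p
    grad = coeffs[1:4]                                      # fitted gradient at p
    return f / max(np.linalg.norm(grad), 1e-12)             # first-order distance estimate
```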

9.
We present a method to accelerate the visualization of large crowds of animated characters. Linear-blend skinning remains the dominant approach for animating a crowd, but its efficiency can be improved by exploiting the temporal and intra-crowd coherence inherent in a populated scene. Our work adopts a caching system that enables a skinned key-pose to be re-used by multi-pass rendering, between multiple agents and across multiple frames. We investigate two different methods: an intermittent caching scheme, whereby each member of a crowd is animated using only its nearest key-pose, and an interpolative approach that supports key-pose blending. For the latter case, we show that finding the optimal set of key-poses to store is an NP-hard problem and present a greedy algorithm suitable for real-time applications. Both variants deliver a worthwhile performance improvement compared to using linear-blend skinning alone.
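A hedged sketch of one standard greedy strategy for this kind of selection problem: farthest-point selection, the classic 2-approximation for k-center. It is not necessarily the paper's exact algorithm, and the pose distance is left abstract as a Euclidean norm on flattened poses.

```python
import numpy as np

def greedy_key_poses(poses, k):
    """Greedily pick k key-poses so that every pose is close to some key-pose.

    poses: (n, d) array, one flattened pose per row.
    Returns the indices of the chosen key-poses."""
    n = poses.shape[0]
    chosen = [0]                                              # start from an arbitrary pose
    dist = np.linalg.norm(poses - poses[0], axis=1)           # distance to nearest key-pose so far
    for _ in range(1, min(k, n)):
        nxt = int(dist.argmax())                              # the pose served worst so far
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(poses - poses[nxt], axis=1))
    return chosen
```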

10.
At present, stochastic progressive photon mapping (SPPM) is one of the most comprehensive methods for consistent global-illumination computation. Even though the number of photons is unlimited thanks to the progressive nature of the method, the scene size is still bound by the available main memory. In this paper, we present the first consistent out-of-core SPPM algorithm. To cope with large scenes, we automatically subdivide the geometry and trace photons and eye rays in parallel in a portal-based system distributed across multiple machines in a commodity cluster. Moreover, we introduce modifications to the original SPPM method that keep the utilization of the tracer machines high and the network traffic low. Compared to a portal-based single-machine setup, our distributed approach therefore achieves a significant speedup. We compare a GPU-based with a CPU-based implementation and demonstrate our system on multiple large test scenes of up to 90 million triangles.
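For context, the per-measurement-point statistics update that (stochastic) progressive photon mapping performs after each photon pass looks roughly like the standard PPM/SPPM update below; it is background for the method, not part of this paper's distributed contribution, and the names are illustrative.

```python
def sppm_update(radius, n_photons, tau, m_new, phi_new, alpha=0.7):
    """Progressive photon mapping statistics update after one photon pass.

    radius:    current gather radius at the measurement point
    n_photons: accumulated (effective) photon count
    tau:       accumulated, unnormalized flux
    m_new:     photons gathered in this pass
    phi_new:   flux gathered in this pass
    alpha:     fraction of new photons kept (controls how fast the radius shrinks)"""
    if m_new == 0:
        return radius, n_photons, tau
    n_next = n_photons + alpha * m_new
    radius_next = radius * (n_next / (n_photons + m_new)) ** 0.5
    tau_next = (tau + phi_new) * (radius_next * radius_next) / (radius * radius)
    return radius_next, n_next, tau_next
```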

11.
Existing synthesis methods for closely interacting virtual characters rely on user-specified constraints such as reaching positions and the distance between body parts. In this paper, we present a novel method for synthesizing new interacting motion by composing two existing interacting motion samples, without the need to specify such constraints manually. Our method automatically detects the type of interactions contained in the inputs and determines a suitable timing for the composition by analyzing the spacetime relationships of the input characters. To preserve the features of the inputs in the synthesized interaction, the two inputs are aligned and normalized according to the relative distance and orientation of their characters. Using linear optimization, the output is the solution that best preserves both the close interaction of the two characters and the local details of each character's behavior. The output animations demonstrate that our method can create interactions of new styles that combine the characteristics of the original inputs.

12.
In this paper, we present an inexpensive approach to create highly detailed reconstructions of the landscape surrounding a road. Our method is based on a space-efficient semi-procedural representation of the terrain and vegetation supporting high-quality real-time rendering not only for aerial views but also at road level. We can integrate photographs along selected road stretches. We merge the point clouds extracted from these photographs with a low-resolution digital terrain model through a novel algorithm which is robust against noise and missing data. We pre-compute plausible locations for trees through an algorithm which takes into account perceptual cues. At runtime we render the reconstructed terrain along with plants generated procedurally according to pre-computed parameters. Our rendering algorithm ensures visual consistency with aerial imagery and thus it can be integrated seamlessly with current virtual globes.

13.
We present a reflectance model for dielectric cylinders with rough surfaces such as human hair fibers. Our model is energy conserving and can evaluate arbitrarily many orders of internal reflection. Accounting for compression and contraction of specular cones produces a new longitudinal scattering function which is non-Gaussian and includes an off-specular peak. Accounting for roughness in the azimuthal direction leads to an integral across the hair fiber which is efficiently evaluated using a Gaussian quadrature. Solving cubic equations is avoided, caustics are included in the model in a consistent fashion, and more accurate colors are predicted by considering many internal pathways.
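The azimuthal integral across the fiber can be approximated with a handful of Gauss-Legendre nodes. A minimal sketch with a placeholder lobe (the scattering function shown is not the paper's model; names are illustrative):

```python
import numpy as np

def azimuthal_integral(scattering, num_nodes=8):
    """Integrate an azimuthal scattering function across the hair fiber using
    Gauss-Legendre quadrature over the normalized offset h in [-1, 1] at which
    a ray strikes the fiber cross-section.

    scattering(h) -> scalar azimuthal scattering value for offset h."""
    nodes, weights = np.polynomial.legendre.leggauss(num_nodes)
    return float(sum(w * scattering(h) for h, w in zip(nodes, weights)))

# Example with a placeholder smooth lobe:
value = azimuthal_integral(lambda h: np.exp(-4.0 * h * h))
```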

14.
The human shoulder complex is perhaps the most complicated joint in the human body, comprising three bones together with muscles, tendons, and ligaments. Despite this anatomical complexity, computer graphics models for motion capture most often represent this joint as a simple ball and socket. In this paper, we present a method to determine a shoulder skeletal model that, when combined with standard skinning algorithms, generates a more visually pleasing animation that more closely approximates the actual skin deformations of the human body. We use a data-driven approach and collect ground-truth skin-deformation data with an optical motion-capture system using a large number of markers (200 markers on the shoulder complex alone). We cluster these markers during movement sequences and find that adding one extra joint around the shoulder improves the resulting animation qualitatively and quantitatively, yielding a marker set of approximately 70 markers for the complete skeleton. We demonstrate the effectiveness of our skeletal model by comparing it with ground-truth data as well as with recorded video, and show its practicality by integrating it with the conventional rendering/animation pipeline.

15.
The visual simulation of natural phenomena has been widely studied. Although several methods have been proposed to simulate melting, they do not take into account the flow of meltwater drops on object surfaces. In this paper, we propose a particle-based method for simulating the melting and freezing of ice objects and the interactions between ice and fluids. To simulate the flow of meltwater on ice and the formation of water droplets, we propose a simple interfacial tension that can easily be incorporated into common particle-based simulation methods such as Smoothed Particle Hydrodynamics. The computations of heat transfer, the phase transition between ice and water, the interactions between ice and fluids, and the separation of ice due to melting are further accelerated by implementing our method in CUDA. We demonstrate our simulation and rendering method for depicting melting ice at interactive frame rates.
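One very simple form such an interfacial tension could take is a pairwise cohesion force between nearby water particles. A toy O(n²) sketch under that assumption (not the paper's exact formulation; kernel, names, and constants are illustrative):

```python
import numpy as np

def cohesion_forces(positions, support_radius, strength):
    """Toy pairwise cohesion ('interfacial tension') force for SPH particles:
    nearby particles attract each other with a weight that falls off towards
    the support radius. Quadratic in particle count, for clarity only.

    positions: (n, 3) particle positions. Returns an (n, 3) force array."""
    n = positions.shape[0]
    forces = np.zeros_like(positions)
    for i in range(n):
        d = positions - positions[i]                       # vectors from particle i to all others
        r = np.linalg.norm(d, axis=1)
        mask = (r > 1e-9) & (r < support_radius)
        w = 1.0 - r[mask] / support_radius                  # simple linear falloff kernel
        forces[i] = strength * (d[mask] * (w / r[mask])[:, None]).sum(axis=0)
    return forces
```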

16.
This paper presents an efficient technique for synthesizing motion by stitching, or splicing, an upper-body motion retrieved from a motion space on top of the lower-body locomotion of another motion. Compared to the standard motion-splicing problem, motion-space splicing poses new challenges, as neither the upper- nor the lower-body motion may be known in advance. Our technique is the first motion(-space) splicing technique that propagates temporal and spatial properties of the lower-body locomotion to the newly generated upper-body motion and vice versa. Whereas existing techniques only adapt the upper-body motion to fit the lower-body motion, our technique also adapts the lower-body locomotion based on the upper-body task, yielding a more coherent full-body motion. We show that our decoupled approach is able to generate high-fidelity full-body motion for interactive applications such as games.

17.
In this paper we review traversal algorithms for kd-trees in ray tracing. Ordinary traversal algorithms, such as sequential and recursive traversal and variants with neighbour-links, have different limitations, which has led to several new developments within the last decade. We describe algorithms exploiting ray coherence and algorithms designed with specific hardware-architecture limitations, such as memory latency and consumption, in mind. We also discuss the robustness of traversal algorithms, an issue that has been neglected in previous research.
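For reference, the classic recursive traversal that the surveyed variants build on can be sketched in a few lines (illustrative node layout and helper names; real implementations avoid recursion and handle numerical corner cases more carefully):

```python
def traverse(node, origin, direction, t_min, t_max, intersect_leaf):
    """Recursive kd-tree traversal: visit the child containing the ray origin
    first, and descend into the far child only if the segment crosses the
    splitting plane.

    node: ('leaf', primitives) or ('interior', axis, split, left, right)
    intersect_leaf(primitives, origin, direction, t_min, t_max) -> hit or None."""
    if node[0] == 'leaf':
        return intersect_leaf(node[1], origin, direction, t_min, t_max)

    _, axis, split, left, right = node
    o, d = origin[axis], direction[axis]
    near, far = (left, right) if o < split or (o == split and d <= 0) else (right, left)

    if abs(d) < 1e-12:                     # ray parallel to the splitting plane
        return traverse(near, origin, direction, t_min, t_max, intersect_leaf)

    t_split = (split - o) / d
    if t_split <= t_min:                   # segment lies entirely in the far child
        return traverse(far, origin, direction, t_min, t_max, intersect_leaf)
    if t_split >= t_max:                   # segment lies entirely in the near child
        return traverse(near, origin, direction, t_min, t_max, intersect_leaf)

    hit = traverse(near, origin, direction, t_min, t_split, intersect_leaf)
    if hit is not None:
        return hit
    return traverse(far, origin, direction, t_split, t_max, intersect_leaf)
```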

18.
Caricature is an art form that expresses exaggerated views of people and things through drawing. Face caricature is popular and widely used in different applications; to create one, the unique or specialized features of a person's face must be extracted properly. A person's facial features depend not only on his or her natural appearance but also on the associated expression style, so we extract the neutral facial features and the personal expression style separately. In this paper, we represent the 3D neutral face models of the BU-3DFE database by sparse signal decomposition in the training phase. With this decomposition, the sparse training data can be used for robust linear subspace modeling of public faces. For an input 3D face model, we fit the model and decompose its geometry into a neutral face and an expression deformation. The neutral geometry is further decomposed into a public face and individualized facial features. We exaggerate the facial features and the expression by estimating their probability on the corresponding manifold. The public face, the exaggerated facial features, and the exaggerated expression are combined to synthesize a 3D caricature for the input 3D face model. The proposed algorithm is automatic and effectively extracts the individualized facial features from an input 3D face model to create a 3D face caricature.
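Stripped of the sparse decomposition and the manifold-based probability estimation, the underlying decompose-and-amplify idea can be sketched as follows (hypothetical function and argument names; fixed scale factors replace the paper's probability-driven exaggeration):

```python
import numpy as np

def exaggerate_face(face_vertices, public_face, neutral_face,
                    scale_identity=1.5, scale_expression=1.5):
    """Toy caricature exaggeration: split a 3D face into a public (mean) face,
    an individual identity offset, and an expression offset, then amplify the
    two offsets.

    face_vertices, public_face, neutral_face: (n, 3) vertex arrays of the input
    face, the public/mean face, and the fitted neutral face of the same person."""
    identity_offset = neutral_face - public_face        # what makes this person unique
    expression_offset = face_vertices - neutral_face     # what the current expression adds
    return (public_face
            + scale_identity * identity_offset
            + scale_expression * expression_offset)
```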

19.
Simulation of light transport through lens systems plays an important role in graphics. While basic imaging properties can be conveniently derived from linear models (like ABCD matrices), these approximations fail to describe nonlinear effects and aberrations that arise in real optics. Such effects can be computed by proper ray tracing, for which, however, finding suitable sampling and filtering strategies is often not a trivial task. Inspired by aberration theory, which describes the deviation from the linear ray transfer in terms of wavefront distortions, we propose a ray-space formulation for nonlinear effects. In particular, we approximate the analytical solution to the ray tracing problem by means of a Taylor expansion in the ray parameters. This representation enables a construction-kit approach to complex optical systems in the spirit of matrix optics. It is also very simple to evaluate, which allows for efficient execution on CPU and GPU alike, including the computation of mixed derivatives of any order. We evaluate fidelity and performance of our polynomial model, and show applications in high-quality offline rendering and at interactive frame rates.
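A minimal sketch of the core idea of replacing the linear (ABCD) transfer with a truncated polynomial in the ray parameters, shown for a 1D ray with position x and slope u (illustrative names; far simpler than the paper's full Taylor machinery and composition rules):

```python
def linear_transfer(x, u, A, B, C, D):
    """Paraxial (ABCD) ray transfer: exact only to first order in x and u."""
    return A * x + B * u, C * x + D * u

def polynomial_transfer(x, u, coeffs_x, coeffs_u):
    """Truncated Taylor-expanded ray transfer: each output coordinate is a
    polynomial in the input ray parameters. coeffs map (i, j) -> coefficient
    of the monomial x**i * u**j; the (1, 0) and (0, 1) entries reproduce the
    linear ABCD behaviour, higher orders model aberrations."""
    def poly(coeffs):
        return sum(c * (x ** i) * (u ** j) for (i, j), c in coeffs.items())
    return poly(coeffs_x), poly(coeffs_u)

# Example: free-space propagation over d followed by a thin lens of focal
# length f, written as a degenerate (purely linear) polynomial system; adding
# terms with i + j >= 3 would model aberrations.
d, f = 0.1, 0.05
coeffs_x = {(1, 0): 1.0, (0, 1): d}
coeffs_u = {(1, 0): -1.0 / f, (0, 1): 1.0 - d / f}
print(polynomial_transfer(0.01, 0.0, coeffs_x, coeffs_u))
```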

20.
Progressive light transport simulations aspire to physically based, consistent rendering that produces visually appealing illumination effects, depth, and realism. Handling large scenes is a difficult problem in this setting, because typical scene-subdivision approaches require frequent synchronization during parallel processing as light bounces throughout the scene. In practice, however, only a few object parts contribute noticeably to the radiance observable in the image, whereas large areas play only a minor role; a mesh simplification of the latter can go unnoticed by the human eye. This varying importance to the visible radiance calls for an output-sensitive mesh reduction that makes it possible to render originally out-of-core scenes on a single machine without memory swapping. In this paper, we therefore present a preprocessing step that reduces the scene size under the constraint of radiance preservation, with a focus on high-frequency effects such as caustics. To this end, we perform a small number of preliminary light transport simulation iterations, identify the mesh parts that contribute significantly to the visible radiance in the scene, and preserve them during mesh reduction.
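A hedged sketch of the selection step that could follow such preliminary iterations, assuming a per-triangle contribution has been accumulated (names and the energy-threshold semantics are illustrative, not the paper's exact criterion):

```python
import numpy as np

def classify_triangles(contribution_per_triangle, energy_threshold=0.95):
    """Keep the smallest set of triangles that together account for
    `energy_threshold` of the accumulated visible-radiance contribution;
    mark the rest for aggressive simplification.

    Returns a boolean array: True = preserve, False = simplify."""
    c = np.asarray(contribution_per_triangle, dtype=float)
    preserve = np.zeros(c.shape, dtype=bool)
    if c.sum() <= 0.0:
        return preserve
    order = np.argsort(c)[::-1]                        # triangles by contribution, descending
    cumulative = np.cumsum(c[order]) / c.sum()
    num_keep = int(np.searchsorted(cumulative, energy_threshold)) + 1
    preserve[order[:num_keep]] = True
    return preserve
```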
