20 similar documents found (search time: 22 ms)
1.
Many man-made objects, in particular building facades, exhibit dominant structural relations such as symmetry and regularity. When editing these shapes, a common objective is to preserve these relations. However, there are often numerous plausible editing results that all preserve the desired structural relations of the input, creating ambiguity. We propose an interactive facade editing framework that explores this structural ambiguity. We first analyze the input in a semi-automatic manner to detect different groupings of the facade elements and the relations among them. We then provide an incremental editing process where, at each step, a set of variations is generated that preserves the detected relations in a particular grouping. Starting from one input example, our system can quickly generate various facade configurations.
2.
Xuaner Zhang, Joon-Young Lee, Kalyan Sunkavalli, Zhaowen Wang. Computer Graphics Forum, 2017, 36(7): 105-113
Videos captured by consumer cameras often exhibit temporal variations in color and tone that are caused by camera auto-adjustments like white balance and exposure. When such videos are sub-sampled to play fast-forward, as in the increasingly popular timelapse and hyperlapse formats, these temporal variations are exacerbated and appear as visually disturbing high-frequency flickering. Previous techniques for photometrically stabilizing videos typically rely on computing dense correspondences between video frames and use these correspondences to remove all color changes in the video sequence. However, this approach is limited in fast-forward videos, which often have large content changes and may also exhibit changes in scene illumination that should be preserved. In this work, we propose a novel photometric stabilization algorithm for fast-forward videos that is robust to large content variation across frames. We compute pairwise color and tone transformations between neighboring frames and smooth these pairwise transformations while taking into account the possibility of scene/content variations. This allows us to eliminate high-frequency fluctuations while still adapting to real variations in scene characteristics. We evaluate our technique on a new dataset consisting of controlled synthetic and real videos, and demonstrate that our technique outperforms the state of the art.
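As a rough illustration of the pairwise-transform idea above, the sketch below (not the authors' algorithm) estimates a per-channel gain/offset between neighboring frames by least squares, chains the transforms, and low-pass filters the chained trajectory so that only high-frequency color fluctuations are removed. The affine color model, Gaussian smoothing, and frame format are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def pairwise_gain_offset(prev, curr):
    """Least-squares per-channel gain/offset mapping frame `curr` onto `prev`.
    Assumes float RGB frames in [0, 1] that are roughly aligned spatially."""
    gains, offsets = np.ones(3), np.zeros(3)
    for c in range(3):
        x, y = curr[..., c].ravel(), prev[..., c].ravel()
        A = np.stack([x, np.ones_like(x)], axis=1)
        sol, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
        gains[c], offsets[c] = sol
    return gains, offsets

def stabilize(frames, sigma=5.0):
    """Remove high-frequency color flicker while keeping slow illumination changes."""
    n = len(frames)
    # Chain pairwise affine color transforms: frame i -> frame 0 color state.
    cum_g, cum_o = np.ones((n, 3)), np.zeros((n, 3))
    for i in range(1, n):
        g, o = pairwise_gain_offset(frames[i - 1], frames[i])
        cum_g[i] = cum_g[i - 1] * g
        cum_o[i] = cum_g[i - 1] * o + cum_o[i - 1]
    # Low-pass the color trajectory; the residual is treated as flicker.
    smooth_g = gaussian_filter1d(cum_g, sigma, axis=0)
    smooth_o = gaussian_filter1d(cum_o, sigma, axis=0)
    out = []
    for i, f in enumerate(frames):
        corrected = (cum_g[i] * f + cum_o[i] - smooth_o[i]) / smooth_g[i]
        out.append(np.clip(corrected, 0.0, 1.0))
    return out
```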
3.
Shao-Chi Chen, Hsin-Yi Chen, Yi-Ling Chen, Hsin-Mu Tsai, Bing-Yu Chen. Computer Graphics Forum, 2014, 33(7): 289-297
Visual obstruction caused by a preceding vehicle is one of the key factors threatening driving safety. One possible solution is to share the first-person view of the preceding vehicle to unveil the blocked field of view of the following vehicle. However, the geometric inconsistency caused by the camera-eye discrepancy makes view sharing between different cars a very challenging task. In this paper, we present a first-person-perspective image rendering algorithm to solve this problem. First, we contour the unobstructed view as the transferred region; then, by iteratively estimating local homography transformations and performing perspective-adaptive warping with the estimated transformations, we locally adjust the shape of the unobstructed view so that its perspective and boundary match those of the occluded region. The composited view is thus seamless in both perceived perspective and photometric appearance, creating the impression that the preceding vehicle is transparent. Our system improves the driver's visibility and thus relieves the burden on the driver, which in turn increases comfort. We demonstrate the usability and stability of our system by evaluating it on several challenging datasets collected from real-world driving scenarios.
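The following is only a minimal, single-homography stand-in for the paper's iterative local estimation and perspective-adaptive warping: it matches ORB features between the two views, fits one RANSAC homography, and warps the preceding vehicle's view into the follower's frame. Function names and parameters are illustrative.

```python
import cv2
import numpy as np

def warp_front_view(front_img, rear_img):
    """Warp the preceding vehicle's view into the following vehicle's frame
    using a single RANSAC homography (the paper refines this locally)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(front_img, None)
    k2, d2 = orb.detectAndCompute(rear_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = rear_img.shape[:2]
    # The warped front view can then be composited into the occluded region.
    return cv2.warpPerspective(front_img, H, (w, h))
```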
4.
Many visualization techniques use images containing meaningful color sequences. If such images are converted to grayscale, the sequence is often distorted, compromising the information in the image. We preserve the significance of a color sequence during decolorization by mapping the colors from a source image to a grid in the CIELAB color space. We then identify the most significant hues, and thin the corresponding cells of the grid to approximate a curve in the color space, eliminating outliers using a weighted Laplacian eigenmap. This curve is then mapped to a monotonic sequence of gray levels. The saturation values of the resulting image are combined with the original intensity channels to restore details such as text. Our approach can also be used to recolor images containing color sequences, for instance for viewers with color‐deficient vision, or to interpolate between two images that use the same geometry and color sequence to present different data.
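A heavily simplified sketch of the "curve in color space mapped to monotone gray" idea: it replaces the paper's weighted Laplacian eigenmap with a straight principal axis in CIELAB, so it only behaves sensibly for roughly linear color sequences. The input format and normalization are assumptions.

```python
import numpy as np
from skimage import color

def decolorize_sequence(rgb):
    """Map a color-sequence image (float RGB in [0, 1]) to grayscale so that the
    order of colors along the sequence is preserved as a monotone gray ramp.
    Simplification: the color curve is approximated by its principal axis."""
    lab = color.rgb2lab(rgb).reshape(-1, 3)
    mean = lab.mean(axis=0)
    # Principal direction of the colors in CIELAB (stand-in for the fitted curve).
    _, _, vt = np.linalg.svd(lab - mean, full_matrices=False)
    t = (lab - mean) @ vt[0]                   # parameter along the "curve"
    t = (t - t.min()) / (np.ptp(t) + 1e-9)     # normalize to [0, 1]
    # The sign of the principal axis is arbitrary; flip `t` if the ramp is reversed.
    return t.reshape(rgb.shape[:2])
```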
5.
Jose A. Iglesias-Guitian, Bochang Moon, Charalampos Koniaris, Eric Smolikowski, Kenny Mitchell. Computer Graphics Forum, 2016, 35(7): 363-372
We propose a new real-time temporal filtering and antialiasing (AA) method for rasterization graphics pipelines. Our method is based on Pixel History Linear Models (PHLM), a new concept for modeling the history of pixel shading values over time using linear models. Based on PHLM, our method can predict per-pixel variations of the shading function between consecutive frames. This combines temporal reprojection with per-pixel shading predictions in order to provide temporally coherent shading, even in the presence of very noisy input images. Our method can address both spatial and temporal aliasing problems under a unique filtering framework that minimizes filtering error through a recursive least squares algorithm. We demonstrate our method working with a commercial deferred shading engine for rasterization and with our own OpenGL deferred shading renderer. We have implemented our method on the GPU, and it shows a significant reduction of temporal flicker in very challenging scenarios including foliage rendering, complex non-linear camera motions, dynamic lighting, reflections, shadows and fine geometric details. Our approach, based on PHLM, avoids the creation of visible ghosting artifacts and reduces the overblur characteristic of temporal deflickering methods. At the same time, the results are comparable to state-of-the-art real-time filters in terms of temporal coherence.
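To make the recursive least squares ingredient concrete, here is a textbook RLS update for one pixel's temporal linear model (shading predicted from a constant plus a time term, with a forgetting factor). This is a generic illustration, not the paper's full PHLM filter.

```python
import numpy as np

class PixelRLS:
    """Recursive least squares for one pixel: shading(t) ~ w0 + w1 * t.
    `lam` is a forgetting factor (< 1 discounts old frames)."""
    def __init__(self, lam=0.95):
        self.lam = lam
        self.w = np.zeros(2)          # model coefficients
        self.P = np.eye(2) * 1e3      # inverse covariance estimate

    def update(self, t, value):
        phi = np.array([1.0, t])
        pred = phi @ self.w           # prediction before seeing `value`
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.w += k * (value - pred)
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return pred

# Toy usage: feed a noisy, slowly varying shading signal frame by frame.
rls = PixelRLS()
for t in range(60):
    noisy = 0.5 + 0.002 * t + np.random.normal(0, 0.05)
    predicted = rls.update(t, noisy)
```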
6.
Learning regressors that map low-resolution patches to high-resolution patches has shown promising results for image super-resolution. We observe that some regressors are better at dealing with certain cases, and others with different cases. In this paper, we jointly learn a collection of regressors which collectively yield the smallest super-resolving error for all training data. After training, each training sample is associated with a label to indicate its 'best' regressor, the one yielding the smallest error. During testing, our method relies on the concept of 'adaptive selection' to select the most appropriate regressor for each input patch. We assume that similar patches can be super-resolved by the same regressor and use a fast, approximate kNN approach to transfer the labels of training patches to test patches. The method is conceptually simple and computationally efficient, yet very effective. Experiments on four datasets show that our method outperforms competing methods.
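A compact sketch of the "collection of regressors plus label transfer" idea, using scikit-learn as a stand-in: regressors are initialized on K-means clusters of low-resolution patches, each training patch is relabeled with whichever regressor reconstructs it best, and test patches inherit labels from their nearest training neighbors. The paper's joint training loop is omitted, and cluster sizes are assumed non-empty.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsClassifier

def train(lr_patches, hr_patches, k=16):
    """lr_patches, hr_patches: (N, d_lr) and (N, d_hr) flattened patch pairs."""
    clusters = KMeans(n_clusters=k, n_init=10).fit_predict(lr_patches)
    regs = [Ridge(alpha=0.1).fit(lr_patches[clusters == c], hr_patches[clusters == c])
            for c in range(k)]
    # Relabel each training patch with its best regressor (smallest error).
    errs = np.stack([((r.predict(lr_patches) - hr_patches) ** 2).sum(axis=1)
                     for r in regs], axis=1)
    labels = errs.argmin(axis=1)
    knn = KNeighborsClassifier(n_neighbors=5).fit(lr_patches, labels)
    return regs, knn

def super_resolve(lr_patches, regs, knn):
    labels = knn.predict(lr_patches)            # transfer labels to test patches
    out = np.empty((lr_patches.shape[0], regs[0].coef_.shape[0]))
    for c, r in enumerate(regs):
        sel = labels == c
        if sel.any():
            out[sel] = r.predict(lr_patches[sel])
    return out
```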
7.
Many useful algorithms for processing images and geometry fall under the general framework of high-dimensional Gaussian filtering. This family of algorithms includes bilateral filtering and non-local means. We propose a new way to perform such filters using the permutohedral lattice, which tessellates high-dimensional space with uniform simplices. Our algorithm is the first implementation of a high-dimensional Gaussian filter that is both linear in input size and polynomial in dimensionality. Furthermore, it is parameter-free, apart from the filter size, and achieves a consistently high accuracy relative to ground truth (> 45 dB). We use this to demonstrate a number of interactive-rate applications of filters in up to eight dimensions.
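For orientation, this is the brute-force O(n^2) definition of the high-dimensional Gaussian filter that the permutohedral lattice accelerates (bilateral filtering is the special case where the position vector is pixel coordinates plus color). It is usable only for small inputs and is not the lattice algorithm itself.

```python
import numpy as np

def gaussian_filter_nd(positions, values, sigma=1.0):
    """Brute-force high-dimensional Gaussian filter.
    positions: (n, d) feature vectors (e.g. x, y, r, g, b for bilateral filtering)
    values:    (n, k) values to be filtered"""
    # Append a homogeneous coordinate so the weights normalize correctly.
    homog = np.concatenate([values, np.ones((values.shape[0], 1))], axis=1)
    d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))            # (n, n) Gaussian weights
    blurred = w @ homog
    return blurred[:, :-1] / blurred[:, -1:]        # divide out the weights
```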
8.
Deqing Sun, Oliver Wang, Kalyan Sunkavalli, Sylvain Paris, Hanspeter Pfister. Computer Graphics Forum, 2017, 36(2): 397-407
Visual formats have advanced beyond single-view images and videos: 3D movies are commonplace, researchers have developed multi-view navigation systems, and VR is helping to push light field cameras to the mass market. However, editing tools for these media are still nascent, and even simple filtering operations like color correction or stylization are problematic: naively applying image filters per frame or per view rarely produces satisfying results due to temporal and spatial inconsistencies. Our method preserves and stabilizes filter effects while being agnostic to the inner workings of the filter. It captures filter effects in the gradient domain, then uses input frame gradients as a reference to impose temporal and spatial consistency. Our least-squares formulation adds minimal overhead compared to naive data processing. Further, when the filter cost is high, we introduce a filter transfer strategy that reduces the number of per-frame filtering computations by an order of magnitude, with only a small reduction in visual quality. We demonstrate our algorithm on several camera array formats including stereo videos, light fields, and wide baselines.
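A minimal 1D analogue of the least-squares idea: for one pixel's value over time, keep the output close to the per-frame filtered values while forcing its temporal gradients to follow the input video's gradients, which suppresses flicker introduced by filtering frames independently. The paper's full spatiotemporal formulation and filter-transfer strategy are not reproduced.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def temporally_consistent(filtered, reference, lam=10.0):
    """filtered, reference: (T,) per-frame values for one pixel.
    Solve  min_o ||o - filtered||^2 + lam * ||D o - D reference||^2."""
    T = len(filtered)
    # Forward-difference operator D of shape (T-1, T).
    D = sparse.diags([-np.ones(T - 1), np.ones(T - 1)], [0, 1], shape=(T - 1, T))
    A = sparse.eye(T) + lam * (D.T @ D)
    b = filtered + lam * (D.T @ (D @ reference))
    return spsolve(A.tocsc(), b)
```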
9.
Marcel Campen, Moritz Ibing, Hans-Christian Ebke, Denis Zorin, Leif Kobbelt. Computer Graphics Forum, 2016, 35(5): 1-10
Various applications of global surface parametrization benefit from the alignment of parametrization isolines with principal curvature directions. This is particularly true for recent parametrization-based meshing approaches, where this alignment directly translates into a shape-aware edge flow, better approximation quality, and reduced meshing artifacts. Existing methods for influencing a parametrization based on principal curvature directions either suffer from scale-dependence, which implies the necessity of parameter variation, or try to capture complex directional shape features using simple 1D curves. Especially for non-sharp features, such as chamfers, fillets, blends, and even more so for organic variants thereof, these abstractions can be a poor fit. We present a novel approach which respects and exploits the 2D nature of such directional feature regions, detects them based on coherence and homogeneity properties, and controls the parametrization process accordingly. This approach enables us to provide an intuitive, scale-invariant control parameter to the user. It also allows us to consider non-local aspects like the topology of a feature, enabling further improvements. We demonstrate that, compared to previous approaches, global parametrizations of higher quality can be generated without user intervention.
10.
Typical high dynamic range (HDR) imaging approaches based on multiple images have difficulties handling moving objects and camera shake, suffering from ghosting and a loss of sharpness in the output HDR image. While there exist a variety of solutions for resolving such limitations, most existing algorithms are susceptible to complex motions, saturation, and occlusions. In this paper, we propose an HDR imaging approach using a coded electronic shutter, which can capture a scene with row-wise varying exposures in a single image. Our approach enables a direct extension of the dynamic range of the captured image without using multiple images, by photometrically calibrating rows with different exposures. Due to the concurrent capture of multiple exposures, misalignments of moving objects are naturally avoided, with a significant reduction of the ghosting effect. To handle the issues of under-/over-exposure, noise, and blur, we present a coherent HDR imaging process in which these problems are resolved one by one at each step. Experimental results with real photographs, captured using a coded electronic shutter, demonstrate that our method produces high-quality HDR images without ghosting and blur artifacts.
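A simplified sketch of the row-wise calibration step, assuming the coded shutter alternates two known exposure times on even and odd rows: each row is divided by its exposure to obtain relative radiance, badly exposed pixels are masked, and holes are filled from vertically adjacent rows. The exposure pattern, thresholds, and fill strategy are assumptions for illustration.

```python
import numpy as np

def merge_rowwise_exposures(img, t_even, t_odd, lo=0.02, hi=0.98):
    """img: single coded-shutter frame, float in [0, 1], rows alternate exposures.
    Returns a relative-radiance image (linear sensor response assumed)."""
    exposure = np.where(np.arange(img.shape[0]) % 2 == 0, t_even, t_odd)
    radiance = img / exposure[:, None, None]
    # Mask under-/over-exposed pixels, then fill them from neighboring rows,
    # which carry the other exposure and were captured at the same instant.
    bad = (img < lo) | (img > hi)
    filled = radiance.copy()
    filled[bad] = np.nan
    up = np.roll(filled, 1, axis=0)
    down = np.roll(filled, -1, axis=0)
    neighbor = np.where(np.isnan(up), down, up)
    # Pixels whose neighbors are also badly exposed stay NaN in this toy version.
    return np.where(np.isnan(filled), neighbor, filled)
```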
11.
We present GEMSe, an interactive tool for exploring and analyzing the parameter space of multi-channel segmentation algorithms. Our target users are domain experts who are not necessarily segmentation specialists. GEMSe allows the exploration of the space of possible parameter combinations for a segmentation framework and its ensemble of results. Users start by sampling the parameter space and computing the corresponding segmentations. A hierarchically clustered image tree provides an overview of variations in the resulting space of label images. Details are provided through exemplary images from the selected cluster and through histograms visualizing the parameters and the derived output in the selected cluster. The correlation between parameters and derived output, as well as the effect of parameter changes, can be explored through interactive filtering and scatter plots. We evaluate the usefulness of GEMSe through expert reviews and case studies based on three different kinds of datasets: a synthetic dataset emulating the combination of 3D X-ray computed tomography with data from K-edge spectroscopy, a three-channel scan of a rock crystal acquired by a Talbot-Lau grating interferometer X-ray computed tomography device, and a hyperspectral image.
12.
We present an Aortic Vortex Classification (AVOCLA) that allows vortices in the human aorta to be classified semi-automatically. Current medical studies assume a strong relation between cardiovascular diseases and blood flow patterns such as vortices. Such vortices are extracted and manually classified according to specific, unstandardized properties. We employ agglomerative hierarchical clustering to group vortex-representing path lines as the basis for the subsequent classification. Classes are based on a vortex's size, orientation, and shape, its temporal occurrence relative to the cardiac cycle, and its spatial position relative to the vessel course. The classification results are presented with 2D and 3D visualization techniques. To confirm the usefulness of both approaches, we report on the results of a user study. Moreover, AVOCLA was applied to 15 datasets of healthy volunteers and patients with different cardiovascular diseases. The results of the semi-automatic classification were qualitatively compared to a ground truth generated manually by two domain experts, considering the vortex number and five specific properties.
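To illustrate the grouping stage only: each vortex-representing path line is resampled to a fixed number of points, flattened into a feature vector, and grouped by average-linkage agglomerative clustering with SciPy. The subsequent property-based classification is not shown, and the resampling/feature choice is an assumption.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def resample(line, n=32):
    """Resample a (m, 3) path line to n points by arc length."""
    seg = np.linalg.norm(np.diff(line, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n)
    return np.stack([np.interp(t, s, line[:, k]) for k in range(3)], axis=1)

def cluster_path_lines(path_lines, n_clusters=5):
    """path_lines: list of (m_i, 3) arrays of vortex-representing path lines."""
    feats = np.stack([resample(l).ravel() for l in path_lines])
    Z = linkage(feats, method='average')      # agglomerative hierarchical clustering
    return fcluster(Z, t=n_clusters, criterion='maxclust')
```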
13.
Ł. Dąbała, M. Ziegler, P. Didyk, F. Zilly, J. Keinert, K. Myszkowski, H.-P. Seidel, P. Rokita, T. Ritschel. Computer Graphics Forum, 2016, 35(7): 401-410
Light field videos express the entire visual information of an animated scene, but their sheer size typically makes capture, processing and display an off-line process, i.e., the time between initial capture and final display is far from real-time. In this paper we propose a solution for one of the key bottlenecks in such a processing pipeline: reliable depth reconstruction, possibly for many views. This is enabled by a novel correspondence algorithm that converts the video streams from a sparse array of off-the-shelf cameras into an array of animated depth maps. The algorithm is based on a generalization of the classic multi-resolution Lucas-Kanade correspondence algorithm from a pair of images to an entire array. A special inter-image confidence consolidation allows recovery from unreliable matching in some locations and some views. It can be implemented efficiently in massively parallel hardware, allowing for interactive computations. The resulting depth quality as well as the computational performance compares favorably to other state-of-the-art light-field-to-depth approaches, as well as stereo matching techniques. Another outcome of this work is a dataset of light field videos captured with multiple variants of sparse camera arrays.
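As a reminder of the classic building block the paper generalizes, here is a single-scale, window-based Lucas-Kanade update for horizontal disparity between two rectified views; the paper's extension to a full camera array, multi-resolution processing, and confidence consolidation is not shown.

```python
import numpy as np
from scipy.ndimage import uniform_filter, map_coordinates

def lk_disparity(left, right, radius=4, iters=10):
    """Estimate per-pixel horizontal disparity d such that right(x) ~ left(x + d).
    left, right: float grayscale images of equal shape (rectified pair assumed)."""
    h, w = left.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    d = np.zeros_like(left, dtype=np.float64)
    size = 2 * radius + 1
    for _ in range(iters):
        # Warp the left image by the current disparity estimate.
        warped = map_coordinates(left, [yy, xx + d], order=1, mode='nearest')
        ix = np.gradient(warped, axis=1)
        err = right - warped
        # Window-aggregated normal equation for a 1D (horizontal) update.
        num = uniform_filter(ix * err, size)
        den = uniform_filter(ix * ix, size) + 1e-6
        d += num / den
    return d
```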
14.
Attention-based Level-of-Detail (LOD) managers downgrade the quality of areas that are expected to go unnoticed by an observer in order to economize on computational resources. The perceptibility of lowered visual fidelity is determined by the accuracy of the attention model that assigns quality levels. Most previous attention-based LOD managers do not take into account saliency provoked by context, failing to provide consistently accurate attention predictions. In this work, we extend a recent high-level saliency model with four additional components that yield more accurate predictions: an object-intrinsic factor accounting for the canonical form of objects, an object-context factor for the contextual isolation of objects, a feature-uniqueness term that accounts for the number of salient features in an image, and a temporal-context term that generates recurring fixations for objects inconsistent with the context. We conduct a perceptual experiment to acquire the weighting factors that initialize our model. We design C-LOD, a LOD manager that maintains a constant frame rate on mobile devices by dynamically re-adjusting material quality on secondary visual features of non-attended objects. In a proof-of-concept study we establish that by incorporating C-LOD, complex effects such as parallax occlusion mapping, usually omitted on mobile devices, can now be employed without overloading the GPU while, at the same time, conserving battery power.
15.
Real‐time Texture Synthesis and Concurrent Random‐access Rendering for Low‐cost GPU Chip Design
Numerous algorithms have been researched in the area of texture synthesis. However, it remains difficult to design a low-cost synthesis scheme capable of generating high-quality results while simultaneously achieving real-time performance. Additional challenges include making a scheme parallel and being able to partially render/synthesize high-resolution textures. Furthermore, it would be beneficial for a synthesis scheme to incorporate texture compression and minimize bandwidth usage, especially on mobile devices. In this paper, we propose a practical method which has low computational complexity and produces textures with small storage requirements. Through the use of an index table, random access to the texture is another essential advantage, making parallel rendering feasible, including the generation of mip-map sequences. Integrating the index table with existing compression algorithms, for example ETC or PVRTC, further reduces bandwidth and avoids the need for a separate, computationally expensive pass to compress the synthesized output. It should be noted that our texture synthesis achieves real-time performance and low power consumption even on mobile devices, for which texture synthesis has traditionally been considered too expensive.
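The random-access property can be illustrated with a toy texel fetch: synthesized texture coordinates are mapped through a precomputed index table into exemplar coordinates, so any texel (or mip level) can be generated independently and in parallel. The table layout below is hypothetical.

```python
import numpy as np

def fetch_texel(exemplar, index_table, x, y):
    """Random access into a synthesized texture of unbounded extent.
    exemplar:    (He, We, 3) source texture
    index_table: (Ht, Wt, 2) per-entry (u, v) coordinates into the exemplar"""
    ht, wt, _ = index_table.shape
    u, v = index_table[y % ht, x % wt]        # table is tiled over the plane
    return exemplar[v, u]

# Because each texel depends only on (x, y), a whole tile or mip level can be
# generated in parallel without synthesizing the rest of the texture first.
```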
16.
Recent photography techniques such as sculpting with light show great potential for compositing beautiful images from fixed-viewpoint photos taken under multiple illuminations. The process relies heavily on the artist's experience and skill with the available tools. A clear trend in recent work is to simplify this interaction, making it less time-consuming and accessible not only to experts but also to novices. We propose a method that automatically creates enhanced light montages comparable to those produced by artists. It detects and emphasizes cues that are important for perception by introducing a technique to extract depth and shape edges from an unconstrained light stack. Studies show that these cues are associated with the silhouettes and suggestive contours that artists use to sketch and construct the layout of paintings. Textures, due to perspective distortion, offer essential cues that depict shape and surface slant. We balance the emphasis between depth edges and reflectance textures to enhance the sense of both shape and reflectance properties. Our light montage technique works with anywhere from a few to hundreds of illuminations per scene. Experiments show strong results for static scenes, making the method practical for small objects, interiors, and small-scale outdoor scenes. Dynamic scenes may be captured using spatially distributed light setups such as light domes. The approach could also be applied to time-lapse photos, with the sun as the main light source.
17.
We present the first visualization tool that enables a comparative depiction of structural stress tensor data for vessel walls of cerebral aneurysms. Such aneurysms bear the risk of rupture, while their treatment also carries considerable risks for the patient. Medical researchers emphasize the importance of analyzing the interaction of morphological and hemodynamic information for patient-specific rupture risk evaluation and treatment analysis. Tensor data such as the stress inside the aneurysm walls characterizes the interplay between morphology and blood flow and appears to be an important criterion for rupture risk. We use different glyph-based techniques to depict local stress tensors simultaneously and compare their applicability to cerebral aneurysms in a user study. We thus offer medical researchers an effective visual exploration tool for assessing aneurysm rupture risk. We developed a GPU-based implementation of our techniques with a flexible interactive data exploration mechanism. Our depictions were designed in collaboration with domain experts, and we provide details about the evaluation.
18.
Jing Liao, Rodolfo S. Lima, Diego Nehab, Hugues Hoppe, Pedro V. Sander. Computer Graphics Forum, 2014, 33(4): 51-60
We explore creating smooth transitions between videos of different scenes. As in traditional image morphing, good spatial correspondence is crucial to prevent ghosting, especially at silhouettes. Video morphing presents added challenges. Because motions are often unsynchronized, temporal alignment is also necessary. Applying morphing to individual frames leads to discontinuities, so temporal coherence must be considered. Our approach is to optimize a full spatiotemporal mapping between the two videos. We reduce tedious interactions by letting the optimization derive the fine‐scale map given only sparse user‐specified constraints. For robustness, the optimization objective examines structural similarity of the video content. We demonstrate the approach on a variety of videos, obtaining results using few explicit correspondences.
19.
Hsin-I Chen, Tse-Ju Lin, Xiao-Feng Jian, I-Chao Shen, Bing-Yu Chen. Computer Graphics Forum, 2015, 34(7): 235-244
A person's handwriting appears differently within a typical range of variations, and the shapes of handwritten characters also show complex interactions with their nearby neighbors. This makes automatic synthesis of handwritten characters and paragraphs very challenging. In this paper, we propose a method for synthesizing handwritten text according to a writer's handwriting style. The synthesis algorithm is composed of two phases. First, we create multidimensional morphable models for different characters based on one writer's data. Then, we compute a cursive probability to decide whether each pair of neighboring characters is conjoined. By jointly modeling the handwriting style and the conjoined property through a novel trajectory optimization, final handwritten words can be synthesized from a set of collected samples. Furthermore, paragraph layouts are also automatically generated and adjusted according to the writer's style obtained from the same dataset. We demonstrate that our method can successfully synthesize an entire paragraph that mimics a writer's handwriting using his or her collected handwriting samples.
20.
We present an example-based approach for radiometrically linearizing photographs that takes as input a radiometrically linear exemplar image and a regular, uncalibrated target image of the same scene, possibly from a different viewpoint and/or under different lighting. The output of our method is a radiometrically linearized version of the target image. Modeling the change in appearance of a small image patch seen from a different viewpoint and/or under different lighting as a linear 1D subspace allows us to recast radiometric transfer in a form similar to classic radiometric calibration from exposure stacks. The resulting radiometric transfer method is lightweight and easy to implement. We demonstrate the accuracy and validity of our method on a variety of scenes.
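A much-reduced illustration of the transfer idea: given corresponding intensity samples from the linear exemplar and the uncalibrated target, a per-channel polynomial response curve is fit and applied to linearize the target. The paper's 1D-subspace appearance model and correspondence handling are omitted, and the polynomial degree is an assumption.

```python
import numpy as np

def fit_linearization(target_vals, exemplar_vals, degree=5):
    """Fit per-channel polynomials that map uncalibrated target intensities to the
    radiometrically linear values observed in the exemplar at corresponding patches.
    target_vals, exemplar_vals: (N, 3) matched samples in [0, 1]."""
    return [np.polyfit(target_vals[:, c], exemplar_vals[:, c], degree)
            for c in range(3)]

def linearize(target_img, coeffs):
    """Apply the fitted response curves to the whole target image."""
    out = np.empty_like(target_img, dtype=np.float64)
    for c in range(3):
        out[..., c] = np.polyval(coeffs[c], target_img[..., c])
    return np.clip(out, 0.0, 1.0)
```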