Similar Literature
1.
We present an Aortic Vortex Classification (AVOCLA) that semi-automatically classifies vortices in the human aorta. Current medical studies assume a strong relation between cardiovascular diseases and blood flow patterns such as vortices. Such vortices are extracted and manually classified according to specific, unstandardized properties. We employ agglomerative hierarchical clustering to group vortex-representing path lines as the basis for the subsequent classification. Classes are based on the vortex's size, orientation, and shape, its temporal occurrence relative to the cardiac cycle, and its spatial position relative to the vessel course. The classification results are presented with both a 2D and a 3D visualization technique, and we report on a user study confirming the usefulness of both approaches. Moreover, AVOCLA was applied to 15 datasets of healthy volunteers and patients with different cardiovascular diseases. The results of the semi-automatic classification were qualitatively compared to a ground truth generated manually by two domain experts, considering the vortex count and five specific properties.
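The grouping step described above can be illustrated with a minimal single-linkage agglomerative scheme. This is a generic sketch, not the AVOCLA implementation: the 2D points, the Euclidean metric, and the merge threshold all stand in for the paper's actual path-line similarity measure.

```python
import numpy as np

def agglomerative_single_linkage(points, threshold):
    """Greedy single-linkage clustering: repeatedly merge the two
    closest clusters until the nearest pair is farther than threshold."""
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):
        # Single linkage: distance between closest members.
        return min(np.linalg.norm(points[i] - points[j]) for i in a for j in b)

    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = dist(clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best[0] > threshold:
            break
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Two well-separated groups of toy 2D "path-line descriptors".
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
groups = agglomerative_single_linkage(pts, threshold=1.0)
print(sorted(sorted(g) for g in groups))  # [[0, 1], [2, 3]]
```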

2.
We present the first visualization tool that enables a comparative depiction of structural stress tensor data for the vessel walls of cerebral aneurysms. Such aneurysms bear the risk of rupture, while their treatment also carries considerable risks for the patient. Medical researchers emphasize the importance of analyzing the interaction of morphological and hemodynamic information for patient-specific rupture risk evaluation and treatment analysis. Tensor data such as the stress inside the aneurysm wall characterizes the interplay between morphology and blood flow and appears to be an important rupture-risk criterion. We use different glyph-based techniques to depict local stress tensors simultaneously and compare their applicability to cerebral aneurysms in a user study, offering medical researchers an effective visual exploration tool for assessing aneurysm rupture risk. We developed a GPU-based implementation of our techniques with a flexible, interactive data exploration mechanism. Our depictions were designed in collaboration with domain experts, and we provide details about the evaluation.
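Glyph-based depiction of a symmetric stress tensor typically starts from its principal stresses: the eigenvalues scale the glyph's semi-axes and the eigenvectors orient it. A minimal sketch of that step (the tensor values here are invented for illustration, not patient data):

```python
import numpy as np

def glyph_axes(stress):
    """Eigen-decompose a symmetric 3x3 stress tensor; eigenvalues give
    the glyph's semi-axis lengths, eigenvectors its orientation."""
    vals, vecs = np.linalg.eigh(stress)      # returns ascending eigenvalues
    order = np.argsort(vals)[::-1]           # largest principal stress first
    return vals[order], vecs[:, order]

# Diagonal tensor: the principal stresses are just the diagonal entries.
S = np.diag([3.0, 1.0, 2.0])
vals, vecs = glyph_axes(S)
print(vals.tolist())  # [3.0, 2.0, 1.0]
```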

3.
We explore creating smooth transitions between videos of different scenes. As in traditional image morphing, good spatial correspondence is crucial to prevent ghosting, especially at silhouettes. Video morphing presents added challenges: because motions are often unsynchronized, temporal alignment is also necessary, and applying morphing to individual frames leads to discontinuities, so temporal coherence must be considered. Our approach is to optimize a full spatiotemporal mapping between the two videos. We reduce tedious interaction by letting the optimization derive the fine-scale map given only sparse user-specified constraints. For robustness, the optimization objective examines structural similarity of the video content. We demonstrate the approach on a variety of videos, obtaining good results from only a few explicit correspondences.
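Temporal alignment of unsynchronized footage is commonly formulated as dynamic time warping; the paper optimizes a full spatiotemporal mapping instead, but a 1D DTW over per-frame descriptors conveys the alignment idea:

```python
import math

def dtw_align(a, b):
    """Dynamic time warping: cost of the best monotone alignment between
    two 1D feature sequences (stand-ins for per-frame descriptors)."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # A frame may advance in either video or in both.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# b is a time-stretched copy of a, so a perfect alignment costs zero.
a = [0.0, 1.0, 2.0, 3.0]
b = [0.0, 1.0, 1.0, 2.0, 3.0]
print(dtw_align(a, b))  # 0.0
```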

4.
The coded aperture snapshot spectral imaging (CASSI) architecture has been widely employed for capturing hyperspectral video. Although CASSI allows concurrent capture of hyperspectral video, its spatial modulation significantly sacrifices image resolution, since the spectral projection must be reconstructed via sparse sampling. Several multiview alternatives have been proposed to address this low spatial resolution and improve measurement accuracy, for instance by adding a translation stage for the coded aperture or replacing the static coded aperture with a digital micromirror device for dynamic modulation. These state-of-the-art solutions enhance spatial resolution significantly but give up CASSI's ability to capture video. In this paper, we present a novel compressive coded aperture imaging design that increases spatial resolution while capturing 4D hyperspectral video of dynamic scenes. We revise the traditional CASSI design to allow multiple samplings of the random spatial modulation within a single frame. We demonstrate that our compressive video spectroscopy approach yields enhanced spatial resolution and consistent measurements compared with the traditional CASSI design.
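CASSI's forward model (coded mask, spectral shear, snapshot sum) can be sketched in a few lines. The one-pixel-per-band shear and the toy cube sizes are illustrative assumptions, not the actual optical calibration of any CASSI system:

```python
import numpy as np

def cassi_measure(cube, mask):
    """Toy CASSI forward model: each spectral band is modulated by the
    coded-aperture mask, sheared by one pixel per band, and summed
    onto a single 2D snapshot."""
    h, w, bands = cube.shape
    snapshot = np.zeros((h, w + bands - 1))
    for k in range(bands):
        snapshot[:, k:k + w] += cube[:, :, k] * mask
    return snapshot

cube = np.ones((2, 3, 4))   # 2x3 scene with 4 spectral bands
mask = np.ones((2, 3))      # fully open aperture, for clarity
y = cassi_measure(cube, mask)
print(y.shape)  # (2, 6)
```

Reconstruction then inverts this ill-posed mapping under a sparsity prior, which is where the resolution loss the abstract describes originates.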

5.
In virtual reality (VR) applications, content is usually generated by creating a 360° video panorama of a real-world scene. Although many capture devices are being released, obtaining high-resolution panoramas and displaying a virtual world in real time remain challenging due to the computationally demanding nature of the problem. In this paper, we propose a real-time 360° video foveated stitching framework that renders the scene at different levels of detail, aiming to create a high-resolution panoramic video in real time that can be streamed directly to the client. Our foveated stitching algorithm takes videos from multiple cameras as input and, guided by measurements of human visual attention (an acuity map and a saliency map), greatly reduces the number of pixels to be processed. We further parallelize the algorithm on the GPU to achieve a responsive interface and validate our results via a user study. Our system accelerates graphics computation by a factor of 6 on a Google Cardboard display.
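The acuity-map idea (spend resolution near the gaze point, less in the periphery) can be sketched as a per-pixel level-of-detail assignment. The linear eccentricity falloff and the four LOD levels here are arbitrary illustrative choices, not the paper's calibrated acuity model:

```python
import numpy as np

def acuity_lod(h, w, gaze, max_lod=3):
    """Assign a level of detail per pixel from its distance to the gaze
    point: LOD 0 (full resolution) at the fovea, coarser levels in the
    periphery."""
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(ys - gaze[0], xs - gaze[1])
    ecc = ecc / ecc.max()                              # normalize to [0, 1]
    return np.minimum((ecc * (max_lod + 1)).astype(int), max_lod)

lod = acuity_lod(64, 64, gaze=(32, 32))
print(lod[32, 32], lod[0, 0])  # 0 3
```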

6.
We present a user-assisted video stabilization algorithm that is able to stabilize challenging videos when state-of-the-art automatic algorithms fail to generate a satisfactory result. Current methods do not give the user any control over the look of the final result: users either have to accept the stabilized result as is, or discard it should the stabilization fail to generate a smooth output. Our system introduces two new modes of interaction that allow the user to improve an unsatisfactory stabilized video. First, we cluster tracks and visualize them on the warped video; the user ensures that appropriate tracks are selected by clicking on track clusters to include or exclude them. Second, the user can directly specify how regions in the output video should look by drawing quadrilaterals to select and deform parts of the frame. These user-provided deformations reduce undesirable distortions in the video. Our algorithm then computes a stabilized video using the user-selected tracks while respecting the user-modified regions. The process of interactively removing user-identified artifacts can sometimes introduce new ones, though in most cases there is a net improvement. We demonstrate the effectiveness of our system on a variety of challenging handheld videos.
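At the core of most stabilizers is smoothing an estimated camera path and warping each frame by the difference between the raw and smoothed paths. A 1D sketch, with a box filter standing in for the paper's track-based optimization:

```python
import numpy as np

def smooth_path(path, radius=2):
    """Box-filter a 1D camera trajectory; the per-frame difference
    between raw and smoothed path gives the stabilizing warp offset."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(path, radius, mode="edge")   # replicate endpoints
    return np.convolve(padded, kernel, mode="valid")

shaky = np.array([0.0, 2.0, 0.0, 2.0, 0.0, 2.0, 0.0])
smooth = smooth_path(shaky, radius=1)
# The smoothed path oscillates less than the raw one.
print(smooth.max() - smooth.min() < shaky.max() - shaky.min())  # True
```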

7.
We describe a painting machine and its associated algorithms. Our modified industrial robot works with visual feedback, applying acrylic paint from a repository to a canvas until the painting resembles a given input image or scene. The color differences between canvas and input are used to direct the application of new strokes. We present two optimization-based algorithms that place such strokes in relation to existing ones. Using these methods, we can create different painting styles: one that tries to match the input colors with almost-transparent strokes, and another that creates dithering patterns of opaque strokes approximating the input color. The machine produces paintings that mimic those created by human painters and allows us to study the painting process as well as the creation of artworks.
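The visual-feedback loop (compare canvas to target, place a stroke where the error is largest) can be sketched with single-pixel "strokes"; real brush models, stroke shapes, and paint mixing are of course far richer than this toy:

```python
import numpy as np

def paint(target, steps=100):
    """Greedy feedback loop: repeatedly find the pixel with the largest
    canvas/target difference and paint it toward the target color with
    a semi-transparent 'stroke'."""
    canvas = np.zeros_like(target)
    for _ in range(steps):
        err = np.abs(target - canvas)
        if err.max() < 1e-6:
            break
        i = np.unravel_index(np.argmax(err), err.shape)
        canvas[i] += 0.5 * (target[i] - canvas[i])   # 50% opacity stroke
    return canvas

target = np.array([[0.0, 1.0], [0.5, 0.25]])
result = paint(target, steps=200)
print(np.abs(target - result).max() < 1e-3)  # True
```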

8.
This paper presents a novel method to enhance the performance of structure-preserving image and texture filtering. With conventional edge-aware filters, it is often challenging to handle images of high complexity where features of multiple scales coexist. In particular, it is not always easy to find the right balance between removing unimportant details and protecting important features when they come in multiple sizes, shapes, and contrasts. Unlike previous approaches, we address this issue from the perspective of adaptive kernel scales. Relying on patch-based statistics, our method distinguishes texture from structure and finds an optimal per-pixel smoothing scale. We show that the proposed mechanism enhances image and texture filtering performance in terms of protecting prominent geometric structures in the image, such as edges and corners, and keeping them sharp even after significant smoothing of the original signal.
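Choosing a per-pixel smoothing scale from patch statistics can be sketched as follows. Mapping local variance linearly to a scale range is an illustrative assumption, not the paper's actual texture/structure measure:

```python
import numpy as np

def per_pixel_scale(img, radius=1, lo=1.0, hi=4.0):
    """Choose a per-pixel smoothing scale from local patch variance:
    flat (low-variance) regions get the large scale, structured
    (high-variance) regions get the small one."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    var = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            var[y, x] = pad[y:y + 2 * radius + 1,
                            x:x + 2 * radius + 1].var()
    t = var / (var.max() + 1e-12)      # 0 = flat, 1 = most structured
    return hi - t * (hi - lo)

img = np.zeros((5, 5))
img[:, 2:] = 1.0                       # a single vertical edge
scales = per_pixel_scale(img)
print(scales[2, 0] > scales[2, 2])  # True: flat region smooths more
```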

9.
Image vectorization is an important yet challenging problem, especially when the input image has rich content. In this paper, we develop a novel method for automatically vectorizing natural images with feature-aligned quad-dominant meshes. Inspired by quadrangulation methods in 3D geometry processing, we propose a new directional field optimization technique that encodes the color gradients, sidestepping the explicit computation of salient image features. We further compute the anisotropic scales of the directional field by accommodating the distances among image features. Our method is fully automatic and efficient, taking only a few seconds for a 400×400 image on an ordinary laptop. We demonstrate the effectiveness of the proposed method on various image editing applications.
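Deriving a directional field from color gradients typically starts from the structure tensor of the image gradients; its dominant eigenvector gives the local gradient direction, and the orthogonal direction follows edges. A whole-image sketch (real methods use per-pixel windows and smoothing):

```python
import numpy as np

def gradient_orientation(img):
    """Dominant local orientation from the averaged structure tensor of
    the image gradients (averaged over the whole image for brevity)."""
    gy, gx = np.gradient(img)
    J = np.array([[(gx * gx).mean(), (gx * gy).mean()],
                  [(gx * gy).mean(), (gy * gy).mean()]])
    vals, vecs = np.linalg.eigh(J)
    return vecs[:, np.argmax(vals)]    # (x, y) gradient-dominant direction

# Horizontal ramp: the gradient points along x.
img = np.tile(np.arange(8.0), (8, 1))
d = gradient_orientation(img)
print(abs(d[0]) > 0.99)  # True: unit vector along the x axis
```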

10.
We propose a system that restricts the manipulation of shape and appearance in an image to a valid subspace learned from a collection of exemplar images. To this end, we automatically co-align the collection and learn a subspace model of shape and appearance using principal components. As finding perfect correspondences for general images is not feasible, we build an approximate partial alignment and improve bad alignments by leveraging other, more successful alignments. Our system allows the user to change appearance and shape in real time, and the result is "projected" onto the subspace of meaningful changes. The change in appearance and shape can either be locked or performed independently. Additional applications include suggesting alternative shapes or appearances.
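Learning a subspace with principal components and projecting user edits back onto it can be sketched with a plain SVD; the toy 3D "images" and the single-component subspace are illustrative stand-ins for aligned image feature vectors:

```python
import numpy as np

def fit_subspace(samples, k):
    """PCA via SVD: mean plus the top-k principal components."""
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:k]

def project(x, mean, basis):
    """Snap an arbitrary edit back onto the learned valid subspace."""
    return mean + basis.T @ (basis @ (x - mean))

# Exemplars vary only along the first axis; other directions are invalid.
samples = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0], [3.0, 0, 0]])
mean, basis = fit_subspace(samples, k=1)
edited = np.array([2.0, 5.0, -1.0])    # off-manifold user edit
print(np.allclose(project(edited, mean, basis), [2.0, 0.0, 0.0]))  # True
```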

11.
This paper introduces a novel facial editing tool, called the edge-aware mask, to achieve multiple photo-realistic rendering effects in a unified framework. Edge-aware masks facilitate three basic operations for adaptive facial editing: region selection, edit setting, and region blending. Inspired by state-of-the-art edit propagation and partial differential equation (PDE) learning methods, we propose an adaptive PDE model with facial priors for mask generation through edge-aware diffusion. The edge-aware masks automatically fit complex region boundaries with great accuracy and produce smooth transitions between regions, which significantly improves the visual consistency of face editing and reduces human intervention. We then construct a unified and flexible facial editing framework consisting of layer decomposition, edge-aware mask generation, and layer/mask composition. Combinations of multiple facial layers and edge-aware masks can achieve various facial effects simultaneously, including face enhancement, relighting, makeup, and face blending. Qualitative and quantitative evaluations were performed using different datasets for different facial editing tasks. Experiments demonstrate the effectiveness and flexibility of our methods, and comparisons with previous methods indicate that improved results are obtained using the combination of multiple edge-aware masks.
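Edge-aware diffusion of a mask guided by an image can be sketched in Perona-Malik style: the conduction coefficient collapses across strong guide-image gradients, so a seeded mask fills its region but stalls at edges. The exponential conductance and all constants here are illustrative, not the paper's adaptive PDE with facial priors:

```python
import numpy as np

def edge_aware_diffuse(mask, img, iters=200, kappa=0.1, step=0.2):
    """Diffuse a mask with conduction derived from the guide image's
    gradients; flux across strong edges is suppressed."""
    m = mask.astype(float).copy()
    for _ in range(iters):
        for axis in (0, 1):
            grad_i = np.diff(img, axis=axis)
            cond = np.exp(-(grad_i / kappa) ** 2)   # ~0 across edges
            flux = cond * np.diff(m, axis=axis)
            upd = np.zeros_like(m)
            lo = [slice(None)] * 2
            hi = [slice(None)] * 2
            lo[axis] = slice(0, -1)
            hi[axis] = slice(1, None)
            upd[tuple(lo)] += flux                  # mass-conserving flux
            upd[tuple(hi)] -= flux
            m += step * upd
    return m

img = np.zeros((1, 6))
img[0, 3:] = 1.0                  # hard edge between columns 2 and 3
mask = np.zeros((1, 6))
mask[0, 0] = 1.0                  # seed on the left side
out = edge_aware_diffuse(mask, img)
print(out[0, 2] > 0.1 and out[0, 4] < 0.01)  # True: mask stops at the edge
```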

12.
Light field videos express the entire visual information of an animated scene, but their sheer size typically makes capture, processing, and display an off-line process, i.e., the time between initial capture and final display is far from real time. In this paper, we propose a solution for one of the key bottlenecks in such a processing pipeline: reliable depth reconstruction, potentially for many views. This is enabled by a novel correspondence algorithm that converts the video streams from a sparse array of off-the-shelf cameras into an array of animated depth maps. The algorithm is based on a generalization of the classic multi-resolution Lucas-Kanade correspondence algorithm from a pair of images to an entire array. A special inter-image confidence consolidation allows recovery from unreliable matching in some locations and views. The algorithm can be implemented efficiently in massively parallel hardware, allowing for interactive computation. The resulting depth quality as well as the computational performance compares favorably to other state-of-the-art light-field-to-depth approaches, as well as stereo matching techniques. Another outcome of this work is a dataset of light field videos captured with multiple variants of sparse camera arrays.
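The Lucas-Kanade building block (a least-squares translation estimate from the brightness-constancy equation) fits in a few lines. This is the classic single-level, single-step 1D version; the paper generalizes the multi-resolution variant from image pairs to whole camera arrays:

```python
import numpy as np

def lucas_kanade_shift(a, b):
    """Single-step 1D Lucas-Kanade: least-squares shift estimate from
    the normal equation d = -sum(It*Ix) / sum(Ix^2)."""
    ix = np.gradient(a)        # spatial derivative
    it = b - a                 # temporal derivative
    return -(it * ix).sum() / (ix * ix).sum()

# The same Gaussian blob, moved right by a 0.2-pixel subpixel shift.
x = np.arange(0.0, 11.0)
a = np.exp(-((x - 5.0) ** 2) / 4.0)
b = np.exp(-((x - 5.2) ** 2) / 4.0)
d = lucas_kanade_shift(a, b)
print(abs(d - 0.2) < 0.05)  # True: recovers the subpixel shift
```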

13.
The visual analysis of multivariate projections is a challenging task because complex visual structures occur, causing fatigue or misinterpretations that distort the analysis; indeed, the same projection can lead to different analysis results. We provide visual guidance pictograms to improve the objectivity of the visual search. A visual guidance pictogram is an iconic visual density map encoding the visual structure of certain data properties. By using pictograms to guide the analysis, structures in the projection can be better understood and mentally linked to properties in the data. We introduce a systematic scheme for designing such pictograms and provide a set of pictograms for standard visual tasks, such as correlation and distribution analysis, for standard projections such as scatterplots, RadViz, and Star Coordinates. We conduct a study that compares the visual analysis of real data with and without the support of guidance pictograms. Our tests show that supporting the user's visual search with guidance pictograms can decrease the training effort and reduce the analysis bias.

14.
Combining high-resolution level set surface tracking with lower-resolution physics is an inexpensive method for achieving highly detailed liquid animations. Unfortunately, the inherent resolution mismatch introduces several types of disturbing visual artifacts. We identify the primary sources of these artifacts and present simple, efficient, and practical solutions to address them. First, we propose an unconditionally stable filtering method that selectively removes sub-grid surface artifacts not seen by the fluid physics, while preserving fine detail in dynamic splashing regions; it provides results comparable to recent error-correction techniques at lower cost, without substepping, and with better scaling behavior. Second, we show how a modified narrow-band scheme can ensure accurate free-surface boundary conditions in the presence of large resolution mismatches; our scheme preserves the efficiency of the narrow-band methodology while eliminating the objectionable stairstep artifacts observed in prior work. Third, we demonstrate that linear interpolation of velocity during advection of the high-resolution level set surface is responsible for visible grid-aligned kinks; we therefore advocate higher-order velocity interpolation and show that it dramatically reduces this artifact. While these three contributions are orthogonal, our results demonstrate that, taken together, they efficiently address the dominant sources of visual artifacts arising with high-resolution embedded liquid surfaces; the proposed approach offers improved visual quality, a straightforward implementation, and substantially greater scalability than competing methods.

15.
Shape correspondence is a fundamental problem in computer graphics and vision, with applications in animation, texture mapping, robotic vision, medical imaging, archaeology, and many other areas. In settings where the shapes may undergo non-rigid deformations and only partial views are available, the problem becomes very challenging. To this end, we present a non-rigid multi-part shape matching algorithm. We assume we are given a reference shape and multiple parts of it undergoing non-rigid deformation. Each query part can additionally be contaminated by clutter, may overlap with other parts, and there may be missing or redundant parts. Our method simultaneously solves for the segmentation of the reference model and for a dense correspondence to (subsets of) the parts. Experimental results on synthetic as well as real scans demonstrate the effectiveness of our method in this challenging matching scenario.

16.
Visual formats have advanced beyond single-view images and videos: 3D movies are commonplace, researchers have developed multi-view navigation systems, and VR is helping to push light field cameras to the mass market. However, editing tools for these media are still nascent, and even simple filtering operations like color correction or stylization are problematic: naively applying image filters per frame or per view rarely produces satisfying results due to temporal and spatial inconsistencies. Our method preserves and stabilizes filter effects while remaining agnostic to the inner workings of the filter. It captures filter effects in the gradient domain, then uses the input frame gradients as a reference to impose temporal and spatial consistency. Our least-squares formulation adds minimal overhead compared to naive per-frame processing. Further, when the filter cost is high, we introduce a filter transfer strategy that reduces the number of per-frame filtering computations by an order of magnitude, with only a small reduction in visual quality. We demonstrate our algorithm on several camera array formats, including stereo videos, light fields, and wide baselines.
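The least-squares idea (stay close to the per-frame filter output while forcing the output's temporal gradients toward the input's) can be sketched in 1D for a single pixel over time. The weight and the flickering toy signal are illustrative, not the paper's formulation:

```python
import numpy as np

def consistent_filter(filtered, input_grad, w=5.0):
    """Least-squares blend: a data term keeps the per-frame filter
    output, a consistency term (weight w) matches the input's
    temporal gradients."""
    n = len(filtered)
    D = (np.eye(n, k=1) - np.eye(n))[:-1]     # forward differences
    A = np.vstack([np.eye(n), w * D])
    rhs = np.concatenate([filtered, w * input_grad])
    out, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return out

# Per-frame filtering flickers, but the input is static (zero temporal
# gradient), so the solve removes the flicker.
flicker = np.array([1.0, 2.0, 1.0, 2.0, 1.0, 2.0])
out = consistent_filter(flicker, np.zeros(5), w=10.0)
print(out.max() - out.min() < 0.2)  # True: near-constant output
```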

17.
Visual obstruction caused by a preceding vehicle is one of the key factors threatening driving safety. One possible solution is to share the first-person view of the preceding vehicle to unveil the blocked field of view of the following vehicle. However, the geometric inconsistency caused by the camera-eye discrepancy makes view sharing between different cars a very challenging task. In this paper, we present a first-person-perspective image rendering algorithm to solve this problem. First, we contour the unobstructed view as the region to transfer; then, by iteratively estimating local homography transformations and performing perspective-adaptive warping with the estimated transformations, we locally adjust the shape of the unobstructed view so that its perspective and boundary match those of the occluded region. The composited view is thus seamless in both perceived perspective and photometric appearance, creating the impression that the preceding vehicle is transparent. Our system improves the driver's visibility and thus relieves the burden on the driver, which in turn increases comfort. We demonstrate the usability and stability of our system by evaluating it on several challenging datasets collected from real-world driving scenarios.
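Applying an estimated homography is the basic warping primitive in such perspective-adaptive schemes; a sketch with a translation-only homography (how the homographies are estimated per region is the hard part the paper addresses):

```python
import numpy as np

def apply_homography(H, pts):
    """Warp 2D points with a 3x3 homography (homogeneous divide)."""
    ones = np.ones((len(pts), 1))
    homog = np.hstack([pts, ones]) @ H.T
    return homog[:, :2] / homog[:, 2:3]

# A pure-translation homography: shift every point by (2, 3).
H = np.array([[1.0, 0, 2],
              [0, 1, 3],
              [0, 0, 1]])
pts = np.array([[0.0, 0.0], [1.0, 1.0]])
print(np.allclose(apply_homography(H, pts), [[2.0, 3.0], [3.0, 4.0]]))  # True
```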

18.
We present a novel algorithm to reconstruct high-quality images from sampled pixels and gradients in gradient-domain rendering. Our approach extends screened Poisson reconstruction with additional regularization constraints. Our key idea is to exploit local patches in feature images, which contain per-pixel normals, textures, positions, etc., to formulate these constraints. We describe a GPU implementation of our approach that runs in seconds on megapixel images. We demonstrate a significant improvement in image quality over screened Poisson reconstruction under the L1 norm. Because we adapt the regularization constraints to the noise level in the input, our algorithm is consistent and converges to the ground truth.
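The screened Poisson objective this work extends can be written, in 1D, as a small linear solve: minimize ||Du - g||² + λ||u - p||² over the image u, given sampled pixels p and gradients g. This sketch shows only that base reconstruction, without the paper's patch-based feature regularization:

```python
import numpy as np

def screened_poisson_1d(pixels, grads, lam=0.1):
    """Solve min_u ||D u - g||^2 + lam * ||u - p||^2 via the normal
    equations: (D^T D + lam I) u = D^T g + lam p."""
    n = len(pixels)
    D = (np.eye(n, k=1) - np.eye(n))[:-1]     # forward differences
    A = D.T @ D + lam * np.eye(n)
    b = D.T @ grads + lam * pixels
    return np.linalg.solve(A, b)

# Noisy pixel samples of a ramp, but accurate gradients: the gradient
# term dominates and the solve recovers the ramp.
truth = np.arange(5.0)
noisy = truth + np.array([0.3, -0.3, 0.3, -0.3, 0.3])
u = screened_poisson_1d(noisy, np.diff(truth), lam=0.01)
print(np.abs(u - truth).max() < 0.25)  # True
```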

19.
Mobile phones and tablets are rapidly gaining significance as omnipresent image and video capture devices. In this context, we present an algorithm that allows such devices to capture high dynamic range (HDR) video. The design of the algorithm was informed by a perceptual study assessing the relative importance of motion and dynamic range. We found that ghosting artefacts are more visually disturbing than a reduction in dynamic range, even if a comparable number of pixels is affected by each. We incorporated these findings into a real-time, adaptive metering algorithm that seamlessly adjusts its settings to take exposures that lead to minimal visual artefacts after recombination into an HDR sequence, making it uniquely suitable for real-time selection of exposure settings. Finally, we present an off-line HDR reconstruction algorithm that is matched to the adaptive nature of our real-time metering approach.
