Similar Documents
20 similar documents found (search time: 62 ms)
1.
2.
Displaying a large number of lines within a limited amount of screen space is a task that is common to many different classes of visualization techniques such as time‐series visualizations, parallel coordinates, link‐node diagrams, and phase‐space diagrams. This paper addresses the challenging problems of cluttering and overdraw inherent to such visualizations. We generate a 2×2 tensor field during line rasterization that encodes the distribution of line orientations through each image pixel. Anisotropic diffusion of a noise texture is then used to generate a dense, coherent visualization of line orientation. In order to represent features of different scales, we employ a multi‐resolution representation of the tensor field. The resulting technique can easily be applied to a wide variety of line‐based visualizations. We demonstrate this for parallel coordinates, a time‐series visualization, and a phase‐space diagram. Furthermore, we demonstrate how to integrate a focus+context approach by incorporating a second tensor field. Our approach achieves interactive rendering performance for large data sets containing millions of data items, due to its image‐based nature and ease of implementation on GPUs. Simulation results from computational fluid dynamics are used to evaluate the performance and usefulness of the proposed method.
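To make the per‐pixel tensor construction concrete, the following is a minimal CPU sketch in Python/NumPy (not the paper's GPU rasterizer): each line segment is sampled and the outer product of its unit direction is accumulated into a per‐pixel 2×2 tensor, from which an anisotropy measure can be derived. The function names and sampling density are assumptions for illustration.

```python
# Illustrative CPU sketch (not the paper's GPU implementation): accumulate a
# 2x2 orientation tensor per pixel while sampling line segments.
import numpy as np

def accumulate_orientation_tensor(segments, width, height, samples_per_pixel=2.0):
    """segments: iterable of ((x0, y0), (x1, y1)) in pixel coordinates."""
    tensor = np.zeros((height, width, 2, 2), dtype=np.float64)
    for (x0, y0), (x1, y1) in segments:
        d = np.array([x1 - x0, y1 - y0], dtype=np.float64)
        length = np.hypot(*d)
        if length == 0.0:
            continue
        d /= length                              # unit direction of the segment
        n = max(2, int(length * samples_per_pixel))
        for t in np.linspace(0.0, 1.0, n):       # sample points along the segment
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            if 0 <= x < width and 0 <= y < height:
                tensor[y, x] += np.outer(d, d)   # accumulate d * d^T
    return tensor

def anisotropy(tensor):
    """Simple per-pixel anisotropy measure from the tensor eigenvalues."""
    eigvals = np.linalg.eigvalsh(tensor)         # ascending eigenvalues, shape (H, W, 2)
    lo, hi = eigvals[..., 0], eigvals[..., 1]
    return np.where(hi > 0, (hi - lo) / (hi + lo + 1e-12), 0.0)
```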

3.
Texture bombing is a texture synthesis approach that saves memory by stopping short of assembling the output texture from the arrangement of input texture patches; instead, the arrangement is used directly at run time to texture surfaces. However, several problems remain in need of better solutions. One problem is improving texture diversification. A second problem is that mipmapping cannot be used because texel data is not stored explicitly. The lack of an appropriate level‐of‐detail (LoD) scheme results in severe minification artefacts. We present a just‐in‐time texturing method that addresses these two problems. Texture diversification is achieved by modelling a texture patch as an umbrella, a versatile hybrid 3‐D geometry and texture structure with parameterized appearance. The LoD is adapted continuously with a hierarchical algorithm that acts directly on the arrangement map. Results show that our method can model and render the diversity present in nature with only small texture memory requirements.

4.
This paper proposes a scale‐adaptive filtering method to improve the performance of structure‐preserving texture filtering for image smoothing. With classical texture filters, it is usually challenging to smooth texture at multiple scales while preserving salient structures in an image. We address this issue through adaptive bilateral filtering, where the scales of the Gaussian range kernels are allowed to vary from pixel to pixel. Based on direction‐wise statistics, our method distinguishes texture from structure effectively, identifies an appropriate scope around each pixel to be smoothed, and thus infers an optimal smoothing scale for it. By filtering the image with varying‐scale kernels, the image is smoothed adaptively according to the distribution of texture. Our experimental results show that, while requiring fewer iterations, the proposed scheme boosts texture filtering performance in terms of preserving geometric structures at multiple scales even after aggressive smoothing of the original image.
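As a rough illustration of the adaptive bilateral filtering idea (a Python/NumPy sketch, not the authors' implementation), the filter below lets the range‐kernel sigma vary per pixel via a user‐supplied sigma map; the wrap‐around border handling and the constant placeholder sigma map in the usage lines are assumptions.

```python
import numpy as np

def adaptive_bilateral(img, sigma_r_map, sigma_s=3.0, radius=6):
    """img: 2-D float array; sigma_r_map: per-pixel range-kernel sigma (same shape)."""
    out = np.zeros_like(img)
    weight_sum = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)   # wrap-around borders
            spatial = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))
            rng = np.exp(-((shifted - img) ** 2) / (2.0 * sigma_r_map ** 2))
            w = spatial * rng
            out += w * shifted
            weight_sum += w
    return out / weight_sum

# Placeholder usage: a constant sigma map degenerates to a standard bilateral filter;
# the paper's contribution lies in how a per-pixel map like this is estimated.
img = np.random.default_rng(0).random((128, 128))
smoothed = adaptive_bilateral(img, sigma_r_map=np.full(img.shape, 0.1))
```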

5.
In this paper, an application of Quadrature Mirror Filter (QMF) bank-based subband decomposition to texture analysis is presented. A two-dimensional 4-band QMF structure is used, and the QMF features are introduced such that the low-low band extracts the information of spatial dependence while the low-high, high-low, and high-high bands extract the structural information. This approach has the twin advantages of efficient information extraction and parallel implementation. The classification abilities of the QMF features are compared to those of the Haralick features. The experiments demonstrate that the QMF features have better performance than the Haralick features.
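Below is a hedged Python/NumPy sketch of the general 4-band subband decomposition scheme: rows and columns are filtered with a low-pass/high-pass pair and downsampled by two, and the mean energy of each band serves as a simple texture descriptor. The Haar-like filter pair is an assumption for illustration, not necessarily the QMF pair used in the paper.

```python
import numpy as np

LO = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass analysis filter (Haar-like)
HI = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass analysis filter

def filter_downsample(x, h, axis):
    """Filter a 2-D array along one axis, then keep every second sample."""
    y = np.apply_along_axis(lambda v: np.convolve(v, h, mode="full")[: len(v)], axis, x)
    return np.take(y, np.arange(0, y.shape[axis], 2), axis=axis)

def subbands(img):
    """Four subbands from separable low/high-pass filtering of a 2-D image."""
    lo_rows = filter_downsample(img, LO, axis=1)
    hi_rows = filter_downsample(img, HI, axis=1)
    return {
        "LL": filter_downsample(lo_rows, LO, axis=0),
        "LH": filter_downsample(lo_rows, HI, axis=0),
        "HL": filter_downsample(hi_rows, LO, axis=0),
        "HH": filter_downsample(hi_rows, HI, axis=0),
    }

def texture_features(img):
    # mean energy of each subband as a simple 4-D texture descriptor
    return np.array([np.mean(b ** 2) for b in subbands(img).values()])
```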

6.
The most popular second-order statistical texture features are derived from the co-occurrence matrix proposed by Haralick. However, computing the matrix and extracting the texture features are both very time-consuming. In order to improve the performance of co-occurrence matrix computation and texture feature extraction, we propose an architecture on an FPGA platform. In the proposed architecture, the co-occurrence matrix is computed first, and then all thirteen texture features are calculated in parallel from the computed co-occurrence matrix. We have implemented the proposed architecture on a Virtex 5 fx130T-3 FPGA device. Our experimental results show a speedup of 421× over a software implementation on an Intel Core i7 2.0 GHz processor. To improve performance further, we reduced the computation from 13 texture features to 3 by ranking Haralick's features, which increases the speedup to 484×.
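For reference, here is a plain software sketch in Python/NumPy (not the proposed FPGA architecture) of a grey-level co-occurrence matrix and a small subset of commonly used features derived from it; the displacement vector, the number of grey levels, and the choice of three features are assumptions for this example.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0, symmetric=True):
    """img: 2-D uint8 array; returns a normalized levels x levels co-occurrence matrix."""
    q = (img.astype(np.float64) * levels / 256.0).astype(np.int64)   # quantize grey levels
    q = np.clip(q, 0, levels - 1)
    h, w = q.shape
    p = np.zeros((levels, levels), dtype=np.float64)
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            p[q[y, x], q[y + dy, x + dx]] += 1   # count co-occurring grey-level pairs
    if symmetric:
        p += p.T
    return p / p.sum()

def cooccurrence_features(p):
    """Contrast, energy (angular second moment), and homogeneity from a GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity
```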

7.
In real‐time rendering, the appearance of scenes is greatly affected by the quality and resolution of the textures used for image synthesis. At the same time, the size of textures determines the performance and the memory requirements of rendering. As a result, finding the optimal texture resolution is critical, but also a non‐trivial task since the visibility of texture imperfections depends on underlying geometry, illumination, interactions between several texture maps, and viewing positions. Ideally, we would like to automate the task with a visibility metric, which could predict the optimal texture resolution. To maximize the performance of such a metric, it should be trained on a given task. This, however, requires sufficient user data which is often difficult to obtain. To address this problem, we develop a procedure for training an image visibility metric for a specific task while reducing the effort required to collect new data. The procedure involves generating a large dataset using an existing visibility metric followed by refining that dataset with the help of an efficient perceptual experiment. Then, such a refined dataset is used to retune the metric. This way, we augment sparse perceptual data to a large number of per‐pixel annotated visibility maps which serve as the training data for application‐specific visibility metrics. While our approach is general and can be potentially applied for different image distortions, we demonstrate an application in a game engine where we optimize the resolution of various textures, such as albedo and normal maps.

8.
This paper presents a novel method to enhance the performance of structure‐preserving image and texture filtering. With conventional edge‐aware filters, it is often challenging to handle images of high complexity where features of multiple scales coexist. In particular, it is not always easy to find the right balance between removing unimportant details and protecting important features when they come in multiple sizes, shapes, and contrasts. Unlike previous approaches, we address this issue from the perspective of adaptive kernel scales. Relying on patch‐based statistics, our method identifies texture from structure and also finds an optimal per‐pixel smoothing scale. We show that the proposed mechanism helps achieve enhanced image/texture filtering performance in terms of protecting the prominent geometric structures in the image, such as edges and corners, and keeping them sharp even after significant smoothing of the original signal.
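As a hedged illustration of choosing a per-pixel kernel scale from local patch statistics (a NumPy/SciPy sketch, not the authors' estimator): try a few patch sizes and keep the largest one whose local variance stays below a threshold, i.e. smooth more aggressively where the neighbourhood is homogeneous. The scale set and threshold are assumptions for this example.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def per_pixel_scale(img, scales=(3, 5, 9, 13), contrast_thresh=0.02):
    """img: 2-D float array in [0, 1]; returns an integer smoothing scale per pixel."""
    chosen = np.full(img.shape, scales[0], dtype=np.int64)
    for s in scales:
        mean = uniform_filter(img, size=s)
        var = uniform_filter(img ** 2, size=s) - mean ** 2    # local variance in an s x s patch
        chosen = np.where(var < contrast_thresh, s, chosen)   # allow a larger kernel where flat
    return chosen
```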

9.
Smoothness is a quality that feels aesthetic and pleasing to the human eye. We present an algorithm for finding “as‐smooth‐as‐possible” sequences in image collections. In contrast to previous work, our method does not assume that the images show a common 3D scene, but instead may depict different object instances with varying deformations, and significant variation in lighting, texture, and color appearance. Our algorithm does not rely on a notion of camera pose, view direction, or 3D representation of an underlying scene, but instead directly optimizes the smoothness of the apparent motion of local point matches among the collection images. We increase the smoothness of our sequences by performing a global similarity transform alignment, as well as localized geometric wobble reduction and appearance stabilization. Our technique gives rise to a new kind of image morphing algorithm, in which the in‐between motion is derived in a data‐driven manner from a smooth sequence of real images without any user intervention. This new type of morph can go far beyond the ability of traditional techniques. We also demonstrate that our smooth sequences allow exploring large image collections in a stable manner.

10.
This paper proposes a line‐time optimization (LTO) technology for ultra‐large and high‐resolution liquid crystal display (LCD) televisions. Line‐time optimization enables a single‐bank data driver configuration without severe image degradation. When the proposed method is applied to an ultra‐high‐definition (UHD) LCD with a single‐bank data driver scheme, LCD performance comparable to that of a dual‐bank data driver method can be obtained. The implementation of the proposed method helps in achieving desirable goals such as a reduction in the number of drivers and a much more flexible design of UHD LCDs.

11.
This paper systematically advocates an interactive volumetric image manipulation framework, which enables the rapid deployment and instant utility of patient‐specific medical images in virtual surgery simulation while requiring little user involvement. We seamlessly integrate multiple technical elements to synchronously accommodate physics‐plausible simulation and high‐fidelity visualization of anatomical structures. Given a volumetric image, in a user‐transparent way, we build a proxy to represent the geometrical structure and encode its physical state without the need for explicit 3‐D reconstruction. On the basis of the dynamic update of the proxy, we simulate large‐scale deformation, arbitrary cutting, and the accompanying collision response driven by a non‐linear finite element method. By upsampling the sparse displacement field resulting from the non‐linear finite element simulation, the cut/deformed volumetric image can evolve naturally and serves as a time‐varying 3‐D texture to expedite direct volume rendering. Moreover, our entire framework is built upon CUDA and thus achieves interactive performance even on a commodity laptop. The implementation details, timing statistics, and physical behavior measurements demonstrate its practicality, efficiency, and robustness.

12.
The local Fourier transform (LFT) is an important method for texture feature extraction. This paper first analyses the texture discrimination capability of the moments of the LFT coefficients at each order; it then uses the Fisher criterion between clusters formed by different textures in feature space as the evaluation index to study the discrimination capability of the texture features, and verifies the conclusions through texture segmentation experiments to make them more reliable. The analysis shows that the LFT coefficients of texture images generally do not follow a normal distribution; their even-order moments discriminate textures well, whereas the odd-order moments discriminate poorly. Since using the 2nd-, 4th-, and 6th-order moments of the LFT coefficients as texture features yields better discrimination and segmentation results than the texture features proposed by Yu Hui and Haralick, it is recommended to use the even-order moments of the LFT coefficients as texture features.
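As a hedged Python/NumPy sketch of the idea: one common construction of the LFT applies an 8-point DFT to the 8 neighbours of each pixel taken in circular order, and the 2nd-, 4th-, and 6th-order central moments of the coefficient magnitudes then form the texture features. This particular construction and the use of central moments of magnitudes are assumptions for illustration, not necessarily the formulation analysed in the paper.

```python
import numpy as np

def lft_coefficients(img):
    """img: 2-D float array; returns magnitudes of 8 local Fourier coefficients per interior pixel."""
    # gather the 8 neighbours of every interior pixel in circular order
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    stack = np.stack(
        [img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx] for dy, dx in offsets], axis=-1
    )
    coeffs = np.fft.fft(stack, axis=-1)          # 8-point DFT along the neighbour axis
    return np.abs(coeffs)                        # shape (h-2, w-2, 8)

def even_order_moments(coeff_maps, orders=(2, 4, 6)):
    """Central moments of each coefficient magnitude map, even orders only."""
    feats = []
    for k in range(coeff_maps.shape[-1]):
        m = coeff_maps[..., k]
        mu = m.mean()
        feats.extend(((m - mu) ** p).mean() for p in orders)
    return np.array(feats)                       # 8 coefficients x 3 moments = 24 features
```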

13.
To design a bas‐relief from a 3D scene is an inherently interactive task in many scenarios. The user normally needs instant feedback to select a proper viewpoint. However, current methods are too slow to facilitate this interaction. This paper proposes a two‐scale bas‐relief modeling method, which is computationally efficient and makes it easy to produce different styles of bas‐reliefs. The input 3D scene is first rendered into two textures, one recording the depth information and the other recording the normal information. The depth map is then compressed to produce a base surface with level‐of‐depth, and the normal map is used to extract local details with two different schemes. One scheme provides certain freedom to design bas‐reliefs with different visual appearances, and the other provides control over the level of detail. Finally, the local feature details are added into the base surface to produce the final result. Our approach allows for real‐time computation due to its implementation on graphics hardware. Experiments with a wide range of 3D models and scenes show that our approach can effectively generate digital bas‐reliefs in real time.
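A very rough two-scale sketch in the spirit of the pipeline above (Python with NumPy/SciPy, not the authors' GPU implementation): the depth map is compressed into a base surface and a high-frequency detail layer is re-injected. Here the detail layer is extracted from the depth map itself rather than from the normal map, and the logarithmic compression and the weights are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bas_relief(depth, compression=50.0, detail_sigma=2.0, detail_weight=3.0, height=0.1):
    """depth: 2-D float array (larger = closer); returns a relief height field."""
    d = depth - depth.min()
    d = d / (d.max() + 1e-12)                                   # normalize depth to [0, 1]
    base = np.log1p(compression * d) / np.log1p(compression)    # compress the depth range
    detail = d - gaussian_filter(d, sigma=detail_sigma)         # high-pass detail layer
    return height * (base + detail_weight * detail)
```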

14.
Super‐PVA (S‐PVA) technology developed by Samsung has demonstrated excellent viewing‐angle performance. However, S‐PVA panels can place extra demands on charging time due to the time‐multiplexed driving scheme required to separately address two subpixels. Specifically, a 2G‐1D pixel structure theoretically requires subpixel charging in one‐half of the time available for a conventional panel. In this paper, a new LCD driving scheme, super impulsive technology (SIT), is proposed to improve motion‐blur reduction by driving an S‐PVA LCD panel at 120 Hz. The proposed scheme allows a 120‐Hz 2G‐1D panel to be driven with an adequate charging‐time margin while providing an impulsive driving effect for motion‐blur reduction. Considering that the cost of a 2G‐1D S‐PVA panel is comparable to that of a conventional 60‐Hz panel, this method achieves good performance at a reasonable price. The detailed algorithm and implementation method are explored and the performance improvements are verified.

15.
16.
Consistent segmentation is central to many applications based on dynamic geometric data. Directly segmenting a raw 3D point cloud sequence is a challenging task due to the low data quality and large inter‐frame variation across the whole sequence. We propose a local‐to‐global approach to co‐segment point cloud sequences of articulated objects into near‐rigid moving parts. Our method starts from a per‐frame point clustering, derived from a robust voting‐based trajectory analysis. The local segments are then progressively propagated to the neighboring frames with a cut propagation operation, and further merged through all frames using a novel space‐time segment grouping technique, leading to a globally consistent and compact segmentation of the entire articulated point cloud sequence. Such progressive propagating and merging, in both the space and time dimensions, makes our co‐segmentation algorithm especially robust in handling the noise, occlusions and pose/view variations that are usually associated with raw scan data.

17.
One of the most common tasks in image and video editing is the local adjustment of various properties (e.g., saturation or brightness) of regions within an image or video. Edge‐aware interpolation of user‐drawn scribbles offers a less effort‐intensive approach to this problem than traditional region selection and matting. However, the technique suffers from a number of limitations, such as reduced performance in the presence of texture contrast, and the inability to handle fragmented appearances. We significantly improve the performance of edge‐aware interpolation for this problem by adding a boosting‐based classification step that learns to discriminate between the appearance of scribbled pixels. We show that this novel data term in combination with an existing edge‐aware optimization technique achieves substantially better results for the local image and video adjustment problem than edge‐aware interpolation techniques without classification, or related methods such as matting techniques or graph cut segmentation.

18.
Texture Similarity Measurement and Texture-Feature-Based Image Retrieval (cited 4 times: 0 self-citations, 4 by others)
杨波 (Yang Bo), 徐光祐 (Xu Guangyou). 《自动化学报》 (Acta Automatica Sinica), 2004, 30(6): 991-998
The study of texture similarity is an important part of research on texture synthesis and content-based retrieval. In similarity judgments, using texture features that correspond to human visual perception, rather than features with no clear perceptual meaning, provides more valuable guidance for further improving such systems. Building on the texture features proposed by Tamura, Amadasun, and Haralick, this paper analyses 19 texture features that correspond fairly clearly to human visual characteristics; the similarity between different textures is determined by the weighted Euclidean distance between the normalized feature vectors composed of these 19 features. Similarity measurements on a large number of texture images show that the selected texture features have strong descriptive power. Principal component analysis is used to compress the dimensionality of the feature vectors, and the results show that 6 principal components already give a good texture similarity measure.
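To make the similarity measure concrete, here is a minimal Python/NumPy sketch: normalized feature vectors, a weighted Euclidean distance, and a PCA reduction to 6 components. The random feature matrix and the uniform weights are placeholders; the paper's 19 perceptual features and their weighting are not reproduced here.

```python
import numpy as np

def weighted_euclidean(a, b, w):
    """Weighted Euclidean distance between two feature vectors."""
    return np.sqrt(np.sum(w * (a - b) ** 2))

def normalize(features):
    """Zero-mean, unit-variance normalization of an (n_samples, n_features) matrix."""
    return (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-12)

def pca_reduce(features, n_components=6):
    """Project features onto their top principal components."""
    x = features - features.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return x @ top

# toy usage: 100 textures, 19 features each (placeholder data)
feats = normalize(np.random.default_rng(0).normal(size=(100, 19)))
weights = np.ones(19) / 19.0
d01 = weighted_euclidean(feats[0], feats[1], weights)
reduced = pca_reduce(feats, n_components=6)       # (100, 6) principal components
```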

19.
We present an efficient algorithm for object‐space proximity queries between multiple deformable triangular meshes. Our approach uses the rasterization capabilities of the GPU to produce an image‐space representation of the vertices. Using this image‐space representation, inter‐object vertex‐triangle distances and closest points lying under a user‐defined threshold are computed in parallel by conservative rasterization of bounding primitives and sorted using atomic operations. We additionally introduce a similar technique to detect penetrating vertices. We show how mechanisms of modern GPUs such as mipmapping, Early‐Z and Early‐Stencil culling can optimize the performance of our method. Our algorithm is able to compute dense proximity information for complex scenes made of more than a hundred thousand triangles in real time, outperforming a CPU implementation based on bounding volume hierarchies by more than an order of magnitude.

20.
An autostereoscopic 3‐D display suitable for the mobile environment is prototyped and evaluated. First, the required conditions for a 3‐D display in a mobile environment are considered, and three major requirements are identified: small size, viewing‐position flexibility, and application support. The applications of a mobile 3‐D display should differ from those of a large‐sized 3‐D display, because a mobile display cannot create a feeling of immersion, whereas large‐sized 3‐D displays can do so easily. Based on this observation, it is considered important to give the user the feeling of directly handling the 3‐D image. Three types of 3‐D displays are developed to satisfy these requirements, and they are evaluated subjectively to confirm their attractiveness. The results show that intuitive interaction can increase the perceived reality of the 3‐D image through this sense of unity, and can also improve the impression of solidity and depth.
