Similar Documents
20 similar documents found.
1.
Out-of-core streamline visualization on large unstructured meshes
This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored in disk files for fast retrieval. A memory management policy tailored to the streamline calculations ensures that, during streamline construction, only a very small amount of data is brought into main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from disk is significantly reduced and good memory performance is achieved. This out-of-core algorithm makes interactive streamline visualization of large unstructured-grid data sets possible on a single mid-range workstation with relatively low main-memory capacity (5-15 megabytes). We also demonstrate that this approach is much more efficient than relying on virtual memory and the operating system's paging algorithms.
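To make the memory-management idea concrete, here is a minimal Python sketch of demand-paged streamline tracing: an LRU-bounded cache of octree blocks backs a simple Euler integrator, so only the blocks a streamline actually visits are resident. The names `block_of`, `velocity_at`, and the loader are hypothetical placeholders for the paper's octree lookup, interpolation, and on-disk layout.

```python
import collections
import numpy as np

class BlockCache:
    """LRU cache of octree leaf blocks, kept within a small memory budget."""
    def __init__(self, loader, max_blocks=64):
        self.loader = loader                    # callable: block_id -> ndarray
        self.max_blocks = max_blocks
        self.cache = collections.OrderedDict()

    def get(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)    # mark as recently used
        else:
            if len(self.cache) >= self.max_blocks:
                self.cache.popitem(last=False)  # evict least recently used
            self.cache[block_id] = self.loader(block_id)
        return self.cache[block_id]

def trace_streamline(seed, block_of, velocity_at, cache, step=0.1, n_steps=1000):
    """Euler integration; each step touches only the resident block."""
    p = np.asarray(seed, dtype=float)
    path = [p.copy()]
    for _ in range(n_steps):
        block = cache.get(block_of(p))          # demand-load the enclosing block
        v = velocity_at(block, p)
        speed = np.linalg.norm(v)
        if speed < 1e-12:
            break                               # stagnation point: stop tracing
        p = p + step * v / speed
        path.append(p.copy())
    return np.array(path)
```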

2.
In this paper we present an out-of-core editing system for point clouds, which allows arbitrary parts of a huge point cloud to be selected and modified interactively. The selections can be used to segment the point cloud, to delete points, or to render a preview of the model without the selected points. Furthermore, points can be inserted into an already existing point cloud. All operations are conducted on a rendering-optimized data structure that uses the raw point cloud from a laser scanner; no additionally created points are needed for an efficient level-of-detail (LOD) representation using this data structure. We also propose an algorithm that alleviates rendering artifacts in point clouds with large density variations across different areas by estimating point sizes heuristically. These estimated point sizes can be used to mimic a closed surface on the raw point cloud, even when the point cloud is composed of several raw laser scans.
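Heuristic point-size estimation is naturally expressed as a k-nearest-neighbor query. The following sketch (not necessarily the authors' exact heuristic) sizes each splat by its mean distance to its k nearest neighbors, so sparse regions get larger splats and dense regions smaller ones.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_point_sizes(points, k=8, scale=1.0):
    """Heuristic splat radii: each point's size follows its local sample
    spacing, estimated as the mean distance to its k nearest neighbors."""
    tree = cKDTree(points)
    # query k+1 neighbors: the first column is the point itself (distance 0)
    dists, _ = tree.query(points, k=k + 1)
    return scale * dists[:, 1:].mean(axis=1)

# Example: a cloud with varying density yields varying radii.
pts = np.random.rand(1000, 3)
radii = estimate_point_sizes(pts)
```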

3.
A Processing Framework and Fast Volume Rendering Algorithms for Massive Medical Data
薛健, 田捷, 戴亚康, 陈健. 《软件学报》, 2008, 19(12): 3237-3248
We design and implement an algorithm framework for processing and analyzing massive data, and integrate it into MITK (Medical Imaging Toolkit), the medical image algorithm research platform previously developed in our laboratory, thereby establishing a genuine processing platform for massive medical image data. On this basis, we study fast volume rendering algorithms for massive data based on ray casting and 3D textures, and propose a semi-adaptive bricking method for partitioning the raw data that yields better partitions without significantly slowing down the partitioning itself; graphics hardware is used to further accelerate the rendering pipeline. Experimental results demonstrate the effectiveness of the platform and the algorithms for processing and visualizing massive medical data.
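The abstract does not spell out the semi-adaptive bricking criterion, so the sketch below substitutes a plausible one: recursively halve the longest axis of a brick until the brick is both small enough and homogeneous in value range. All thresholds and the homogeneity test are illustrative assumptions.

```python
import numpy as np

def adaptive_bricks(vol, origin=(0, 0, 0), max_size=64, min_size=16, max_range=32):
    """Recursively split a volume into bricks. A brick is kept when it is small
    enough and its value range is narrow (homogeneous); otherwise the longest
    axis is halved, down to a minimum brick size. Yields (origin, shape)."""
    shape = vol.shape
    homogeneous = (int(vol.max()) - int(vol.min())) <= max_range
    small_enough = max(shape) <= max_size
    if (small_enough and homogeneous) or max(shape) <= min_size:
        yield origin, shape
        return
    axis = int(np.argmax(shape))                 # split the longest axis in half
    half = shape[axis] // 2
    lo = [slice(None)] * 3
    hi = [slice(None)] * 3
    lo[axis] = slice(0, half)
    hi[axis] = slice(half, None)
    o_hi = list(origin)
    o_hi[axis] += half
    yield from adaptive_bricks(vol[tuple(lo)], origin, max_size, min_size, max_range)
    yield from adaptive_bricks(vol[tuple(hi)], tuple(o_hi), max_size, min_size, max_range)

vol = np.random.randint(0, 255, (128, 128, 128), dtype=np.uint16)
bricks = list(adaptive_bricks(vol))
```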

4.
Tomographic imaging and computer simulations are increasingly yielding massive datasets. Interactive and exploratory visualizations have rapidly become indispensable tools for studying large volumetric imaging and simulation data. Our scalable isosurface visualization framework on commodity off-the-shelf clusters is an end-to-end parallel and progressive platform, from initial data access to final display. Interactive browsing of extracted isosurfaces is made possible by parallel isosurface extraction and rendering, in conjunction with a new specialized piece of image compositing hardware called Metabuffer. In this paper, we focus on back-end scalability by introducing a fully parallel and out-of-core isosurface extraction algorithm. It achieves scalability through both parallel out-of-core processing and parallel disks: it statically partitions the volume data across parallel disks with a balanced workload spectrum, and builds I/O-optimal external interval trees to minimize the number of I/O operations needed to load large data from disk. We also describe an isosurface compression scheme that is efficient for progressive extraction, transmission, and storage of isosurfaces.
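An external, I/O-optimal interval tree is beyond a short example, but the query it answers is easy to show: given per-block (min, max) value ranges, report the blocks "stabbed" by an isovalue. This in-memory stand-in uses a sorted-minimum prefix plus a linear filter; the paper's disk-resident structure answers the same query with far fewer I/O operations.

```python
import bisect

class IntervalIndex:
    """In-memory stand-in for an external interval tree: given per-block
    (min, max) ranges, report blocks whose range contains the isovalue."""
    def __init__(self, ranges):
        # order block ids by their minimum value; keep a parallel key list
        self.blocks = sorted(range(len(ranges)), key=lambda i: ranges[i][0])
        self.mins = [ranges[i][0] for i in self.blocks]
        self.ranges = ranges

    def stab(self, isovalue):
        # candidates: every block whose min <= isovalue; then filter by max
        end = bisect.bisect_right(self.mins, isovalue)
        return [b for b in self.blocks[:end] if self.ranges[b][1] >= isovalue]

idx = IntervalIndex([(0.0, 0.5), (0.4, 0.9), (0.8, 1.0)])
active = idx.stab(0.45)   # -> [0, 1]: only these blocks hold active cells
```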

5.
We present a novel approach for interactive rendering of massive 3D models. Our approach integrates adaptive sampling-based simplification, visibility culling, out-of-core data management, and level-of-detail, using a unified scene graph representation for all acceleration techniques. In preprocessing, we subdivide large objects and build a BVH clustering hierarchy. A novel adaptive sampling method, AdaptiveVoxels, generates the LOD models; it reduces the preprocessing cost, and our out-of-core rendering algorithm improves rendering efficiency. We have implemented our algorithm on a desktop PC and can interactively render massive CAD and isosurface models consisting of hundreds of millions of triangles with little loss in image quality.

6.
A Survey of View-Dependent Real-Time Rendering Techniques for Large-Scale 3D Terrain
刘贤梅, 张婷, 汤磊. 《计算机仿真》, 2007, 24(6): 194-198
Real-time rendering of large-scale 3D terrain surfaces has long been a research focus both in China and abroad. This survey classifies real-time terrain rendering algorithms into three broad families: regular-grid versus irregular-mesh methods, in-core versus out-of-core methods, and CPU-based versus GPU-based methods. It reviews the state of the art and the key techniques: out-of-core data prefetching strategies enable interactive rendering of models too large to fit in memory, while view-dependent multi-resolution techniques reduce scene complexity and the amount of data to be rendered. The strengths and weaknesses of each approach are discussed, and open problems in real-time large-scale terrain rendering are summarized.
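The view-dependent refinement test that such multi-resolution terrain methods share can be sketched in a few lines: project a tile's geometric error into screen space and refine while it exceeds a pixel tolerance. The function and its parameters are generic illustrations, not taken from any one surveyed paper.

```python
import numpy as np

def select_lod(tile_error, tile_center, eye, fov_y, viewport_h, tau=1.0):
    """View-dependent refinement test: project a tile's geometric error
    (world units) to screen space; refine while it exceeds tau pixels."""
    d = max(np.linalg.norm(np.asarray(tile_center) - np.asarray(eye)), 1e-6)
    # perspective scaling: pixels per world unit at distance d
    k = viewport_h / (2.0 * d * np.tan(fov_y / 2.0))
    return tile_error * k > tau          # True -> split to a finer level

# Example: a tile with 2 m error, 500 m away, 60-degree fov, 1080 px viewport
refine = select_lod(2.0, (500.0, 0.0, 0.0), (0.0, 0.0, 0.0),
                    np.radians(60.0), 1080)   # -> True, refine this tile
```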

7.
This paper presents a procedure for virtual autopsies based on interactive 3D visualizations of large-scale, high-resolution data from CT scans of human cadavers. The procedure is described using examples from forensic medicine, and the added value and future potential of virtual autopsies are shown from a medical and forensic perspective. Based on the technical demands of the procedure, state-of-the-art volume rendering techniques are applied and refined to enable real-time, full-body virtual autopsies involving gigabyte-sized data on standard GPUs. The techniques applied include transfer-function-based data reduction using level-of-detail selection and multi-resolution rendering. The paper also describes a data management component for large, out-of-core data sets and an extension to the GPU-based raycaster for efficient dual transfer function (TF) rendering. Detailed benchmarks of the pipeline are presented using data sets from forensic cases.

8.
Set-oriented data mining in relational databases
Data mining is an important real-life application for businesses. It is critical to find efficient ways of mining large data sets. In order to benefit from the experience with relational databases, a set-oriented approach to mining data is needed. In such an approach, the data mining operations are expressed in terms of relational or set-oriented operations. Query optimization technology can then be used for efficient processing.

In this paper, we describe set-oriented algorithms for mining association rules. Such algorithms imply performing multiple joins and thus may appear to be inherently less efficient than special-purpose algorithms. We develop new algorithms that can be expressed as SQL queries, and discuss optimization of these algorithms. After analytical evaluation, an algorithm named SETM emerges as the algorithm of choice. Algorithm SETM uses only simple database primitives, viz., sorting and merge-scan join. Algorithm SETM is simple, fast, and stable over the range of parameter values. It is easily parallelized and we suggest several additional optimizations. The set-oriented nature of Algorithm SETM makes it possible to develop extensions easily and its performance makes it feasible to build interactive data mining tools for large databases.
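The flavor of SETM's set-oriented pass is easy to reproduce with any SQL engine: with transactions stored as (trans_id, item) rows, one self-join plus GROUP BY counts the support of candidate 2-itemsets, letting the database's sort and merge-scan machinery do the heavy lifting. This toy sqlite3 session illustrates the idea; it is not the full SETM algorithm.

```python
import sqlite3

# Toy transaction table in the (trans_id, item) layout set-oriented mining assumes.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (trans_id INTEGER, item TEXT)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [(1, "a"), (1, "b"), (2, "a"), (2, "b"), (2, "c"), (3, "a")])

# One set-oriented pass: a self-join builds candidate 2-itemsets per
# transaction (item1 < item2 avoids duplicates); GROUP BY counts support.
min_support = 2
rows = con.execute("""
    SELECT s1.item, s2.item, COUNT(*) AS support
    FROM sales s1 JOIN sales s2
      ON s1.trans_id = s2.trans_id AND s1.item < s2.item
    GROUP BY s1.item, s2.item
    HAVING COUNT(*) >= ?
""", (min_support,)).fetchall()
print(rows)   # [('a', 'b', 2)]
```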


9.
Image filtering is an important part of image preprocessing, but the large amount of data to be processed makes it difficult for software implementations to meet real-time requirements. Exploiting the parallel processing capability of field-programmable gate arrays (FPGAs), this paper proposes a fast FPGA-based median filtering algorithm that substantially improves on the traditional median filter by reducing the number of comparisons, thereby increasing processing speed. Co-simulation with Matlab and ModelSim shows that the algorithm achieves good filtering results.
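The comparison-count reduction referred to above is typically achieved with a fixed compare-exchange network. Assuming this is the kind of design meant: the standard 3x3 median network needs only 19 compare-exchanges instead of a full sort of the window, and the fixed data flow maps directly onto FPGA logic. A software rendition:

```python
def sort2(a, b):
    """Compare-exchange: the primitive a hardware median network is built from."""
    return (a, b) if a <= b else (b, a)

def median9(p):
    """Median of a 3x3 window with a fixed network of 19 compare-exchanges,
    far fewer operations than fully sorting the nine values."""
    p = list(p)
    p[1], p[2] = sort2(p[1], p[2]); p[4], p[5] = sort2(p[4], p[5])
    p[7], p[8] = sort2(p[7], p[8]); p[0], p[1] = sort2(p[0], p[1])
    p[3], p[4] = sort2(p[3], p[4]); p[6], p[7] = sort2(p[6], p[7])
    p[1], p[2] = sort2(p[1], p[2]); p[4], p[5] = sort2(p[4], p[5])
    p[7], p[8] = sort2(p[7], p[8]); p[0], p[3] = sort2(p[0], p[3])
    p[5], p[8] = sort2(p[5], p[8]); p[4], p[7] = sort2(p[4], p[7])
    p[3], p[6] = sort2(p[3], p[6]); p[1], p[4] = sort2(p[1], p[4])
    p[2], p[5] = sort2(p[2], p[5]); p[4], p[7] = sort2(p[4], p[7])
    p[4], p[2] = sort2(p[4], p[2]); p[6], p[4] = sort2(p[6], p[4])
    p[4], p[2] = sort2(p[4], p[2])
    return p[4]

assert median9([7, 1, 5, 3, 9, 2, 8, 4, 6]) == 5
```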

10.
The real-time display of huge geometry and imagery databases involves view-dependent approximations, typically through precomputed hierarchies that are selectively refined at runtime. A classic motivating problem is terrain visualization, in which planetary databases involving billions of elevation and color values are displayed on PC graphics hardware at high frame rates. This paper introduces a new diamond data structure for the basic selective-refinement processing, a streamlined way of representing the well-known hierarchies of right triangles that have enjoyed much success in real-time, view-dependent terrain display. Regular-grid tiles are proposed as the per-diamond payload for both geometry and texture. The use of 4-8 grid refinement and coarsening schemes allows level-of-detail transitions twice as gradual as those of traditional quadtree-based hierarchies, as well as very high-quality low-pass filtering compared with subsampling-based hierarchies. An out-of-core storage organization is introduced based on Sierpinski indices per diamond, along with a tile preprocessing framework based on fine-to-coarse, same-level, and coarse-to-fine gathering operations. To attain optimal frame-to-frame coherence and processing-order priorities, dual split and merge queues are developed, similar to the real-time optimally adapting meshes (ROAM) algorithm, together with an adaptation of the ROAM frustum-culling technique. Example applications of lake detection and procedural terrain generation demonstrate the flexibility of the tile processing framework.

11.
For large time-varying data sets, memory and disk limitations can lower the performance of visualization applications; algorithms and data structures must be explicitly designed to handle such data sets in order to achieve interactive rates. The Temporal Branch-on-Need Octree (T-BON) extends the three-dimensional branch-on-need octree to time-varying isosurface extraction. This data structure minimizes the impact of the I/O bottleneck by reading from disk only those portions of the search structure and data necessary to construct the current isosurface. By performing a minimum of I/O and exploiting the hierarchical memory found in modern CPUs, the T-BON algorithm achieves high-performance isosurface extraction in time-varying fields. The paper extends earlier work on the T-BON data structure with techniques for better memory utilization, out-of-core isosurface extraction, and support for nonrectilinear grids. Results from testing the T-BON algorithm on large data sets show that its performance is similar to that of the three-dimensional branch-on-need octree for static data sets while providing substantial advantages for time-varying fields.
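The pruning rule at the heart of a branch-on-need traversal fits in a few lines: each node stores its subtree's value range, and a data brick is read from disk only when the isovalue falls inside that range. The node layout and loader below are simplified stand-ins for T-BON's per-time-step brick files.

```python
import numpy as np

class BonNode:
    """Branch-on-need octree node storing the value range of its subtree.
    Bricks are loaded only when the isovalue lies in [vmin, vmax], which
    keeps I/O proportional to the current isosurface, not the whole field."""
    def __init__(self, vmin, vmax, children=None, brick_id=None):
        self.vmin, self.vmax = vmin, vmax
        self.children = children or []   # empty list -> leaf node
        self.brick_id = brick_id

def active_bricks(node, isovalue, load):
    """Yield demand-loaded bricks intersected by the isosurface."""
    if not (node.vmin <= isovalue <= node.vmax):
        return                              # prune: no active cells below here
    if not node.children:
        yield load(node.brick_id)           # I/O happens only for active leaves
        return
    for child in node.children:
        yield from active_bricks(child, isovalue, load)

# Hypothetical loader; in T-BON this would read one brick file per time step.
bricks = {0: np.zeros((8, 8, 8)), 1: np.ones((8, 8, 8))}
root = BonNode(0.0, 1.0, [BonNode(0.0, 0.4, brick_id=0),
                          BonNode(0.6, 1.0, brick_id=1)])
loaded = list(active_bricks(root, 0.8, bricks.__getitem__))   # loads brick 1 only
```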

12.
Ray-directed volume-rendering algorithms are well suited to parallel implementation in a distributed cluster environment. For distributed ray casting, the scene must be partitioned between nodes for good load balancing, and a strict view-dependent priority order is required for image composition. In this paper, we define the load-balanced network distribution (LBND) problem and map it to the NP-complete precedence-constrained job-shop scheduling problem. We introduce a kd-tree solution and a dynamic programming solution. To process a massive data set, either a parallel or an out-of-core approach is required. Parallel preprocessing is performed by render nodes on data allocated using a static data structure. Volumetric data sets often contain a large portion of voxels that will never be rendered, i.e., empty space, which parallel preprocessing fails to exploit. Our slab-projection slice, introduced in this paper, tracks empty space across consecutive slices of data to reduce the amount of data distributed and rendered; it is used to facilitate out-of-core bricking and kd-tree partitioning. Load balancing with each of our approaches is compared against traditional methods on several segmented regions of the Visible Korean data set.
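A cost-balanced kd-tree split of the volume, one ingredient of the approach described above, can be sketched as follows. The per-voxel cost field and the median-cost cut are illustrative simplifications of the paper's scheduling formulation, not its actual algorithm.

```python
import numpy as np

def kd_partition(costs, extents, n_nodes):
    """Split a volume between n_nodes render nodes with a kd-tree so that an
    estimated rendering cost (e.g., counts of non-empty voxels) is balanced.
    costs: 3D per-voxel cost array; extents: ((x0,x1),(y0,y1),(z0,z1))."""
    if n_nodes == 1:
        return [extents]
    (x0, x1), (y0, y1), (z0, z1) = extents
    sub = costs[x0:x1, y0:y1, z0:z1]
    axis = int(np.argmax([x1 - x0, y1 - y0, z1 - z0]))   # split the longest axis
    other = tuple(i for i in range(3) if i != axis)
    profile = np.cumsum(sub.sum(axis=other))             # cost prefix sums
    target = profile[-1] * (n_nodes // 2) / n_nodes      # left child's share
    half = int(np.searchsorted(profile, target))
    half = min(max(half, 1), profile.size - 1)           # keep both halves non-empty
    cut = (x0, y0, z0)[axis] + half
    left = [list(e) for e in extents]
    right = [list(e) for e in extents]
    left[axis][1] = cut
    right[axis][0] = cut
    return (kd_partition(costs, tuple(tuple(e) for e in left), n_nodes // 2) +
            kd_partition(costs, tuple(tuple(e) for e in right), n_nodes - n_nodes // 2))

costs = np.random.rand(64, 64, 64)     # stand-in for a real cost estimate
regions = kd_partition(costs, ((0, 64), (0, 64), (0, 64)), 4)
```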

13.
We consider the problem of isosurface extraction and rendering for large-scale time-varying data. Such data sets have been appearing at an increasing rate, especially from physics-based simulations, and can range in size from hundreds of gigabytes to tens of terabytes. Isosurface extraction and rendering is one of the most widely used visualization techniques for exploring and analyzing such data sets. A common strategy for isosurface extraction involves determining the so-called active cells, followed by a triangulation of these cells based on linear interpolation, and ending with a rendering of the triangular mesh. We develop a new, simple indexing scheme for out-of-core processing of large-scale data sets, which enables the identification of active cells extremely quickly, using a more compact indexing structure and more effective bulk data movement than previous schemes. Moreover, our scheme leads to an efficient and scalable implementation on multiprocessor environments in which each processor has access to its own local disk. In particular, our parallel algorithm provably achieves load balancing across the processors independent of the isovalue, with almost no overhead in the total amount of work relative to the sequential algorithm. We conduct a large number of experimental tests on the University of Maryland Visualization Cluster using the Richtmyer–Meshkov instability data set, and obtain results that consistently validate the efficiency and scalability of our algorithm.

14.
For large-volume visualization, an image-based quality metric is difficult to incorporate into level-of-detail selection and rendering without sacrificing interactivity, because updating view-dependent information and adjusting to transfer function changes are usually time-consuming. In this paper, we introduce an image-based level-of-detail selection algorithm for interactive visualization of large volumetric data. Our quality metric is designed around an efficient way to evaluate the contribution of multiresolution data blocks to the final image. To ensure real-time updates of the quality metric and interactive level-of-detail decisions, we propose a summary table scheme that responds to runtime transfer function changes and a GPU-based solution for visibility estimation. Experimental results on large scientific and medical data sets demonstrate the effectiveness and efficiency of our algorithm.
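One plausible reading of a summary table scheme is a per-block value histogram: built once in preprocessing, it lets a transfer function edit be folded into each block's potential image contribution with a single dot product. The metric below is an assumption for illustration, not the paper's exact formula.

```python
import numpy as np

def build_summary(blocks, n_bins=256, vmax=255):
    """Precompute one value histogram per data block (the 'summary table').
    Built once; transfer function edits never touch the raw data again."""
    return np.stack([np.histogram(b, bins=n_bins, range=(0, vmax))[0]
                     for b in blocks])

def block_contributions(summary, opacity_tf):
    """On a transfer function change, each block's potential contribution is
    one dot product: histogram . opacity lookup table."""
    return summary @ opacity_tf                # shape: (n_blocks,)

blocks = [np.random.randint(0, 256, (32, 32, 32)) for _ in range(8)]
summary = build_summary(blocks)
tf = np.zeros(256)
tf[100:150] = 0.8                              # opacity over one value band
scores = block_contributions(summary, tf)      # rank blocks for LOD selection
```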

15.
Previous mesh compression techniques provide desirable properties such as high compression ratios, progressive decoding, and out-of-core processing. However, only a few of them support random accessibility in decoding, which makes the details of any specific part available without decoding other parts. This paper proposes an effective framework for randomly accessible mesh compression. The key component of the framework is a wire-net mesh constructed from a chartification of the given mesh: charts are compressed separately for random access to mesh parts, and the wire-net mesh provides an indexing and stitching structure for the compressed charts. Experimental results show that random accessibility can be achieved at a competitive compression ratio, only slightly worse than single-rate coding and comparable to progressive encoding. To demonstrate the merits of the framework, we apply it to processing huge meshes in an out-of-core manner, such as out-of-core rendering and out-of-core editing.
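The random-access mechanism can be illustrated with an offset directory over independently compressed charts: any chart can be decoded without touching the others. Here zlib stands in for the paper's mesh coder, and the wire-net stitching structure is omitted.

```python
import io
import zlib

def pack_charts(charts):
    """Compress each chart independently and record (offset, size) so any
    single chart can be decoded later without reading the others."""
    blob, directory = io.BytesIO(), []
    for data in charts:
        comp = zlib.compress(data)
        directory.append((blob.tell(), len(comp)))
        blob.write(comp)
    return blob.getvalue(), directory

def decode_chart(blob, directory, i):
    """Random access: slice out the chart's bytes and decompress only them."""
    off, size = directory[i]
    return zlib.decompress(blob[off:off + size])

charts = [b"chart-0 vertex data", b"chart-1 vertex data", b"chart-2 vertex data"]
blob, directory = pack_charts(charts)
assert decode_chart(blob, directory, 1) == charts[1]
```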

16.
We present a new method for preprocessing and organizing discrete scalar volume data of any dimension on external storage, and describe our implementation of a visual navigation system using it. The techniques have important applications for out-of-core visualization of volume data sets and for image understanding, including extracting isosurfaces in a manner that reduces both I/O and disk seek time, a priori topologically correct isosurface simplification (prior to extraction), and producing a visual atlas of all topologically distinct objects in the data set. The preprocessing algorithm computes regions of space that we call topological zone components, such that any isosurface component (contour) is completely contained in a zone component and all contours contained in a zone component are topologically equivalent. The algorithm also constructs a criticality tree, which is related to the recently studied contour tree; unlike the contour tree, however, the zones and the criticality tree hierarchically organize the data set. We demonstrate that the techniques work on both irregularly and regularly gridded data and, by the mathematical analysis we call Digital Morse Theory (DMT), can be extended to data sets with nonunique values, so that perturbation of the data set is not required. We present the results of our initial experiments with three-dimensional volume data (CT) and describe future extensions of our DMT organizing technology.

17.
To date, work in microarrays, sequenced genomes, and bioinformatics has focused largely on algorithmic methods for processing and manipulating vast biological data sets. Future improvements will likely provide users with guidance in selecting the most appropriate algorithms and metrics for identifying meaningful clusters: interesting patterns in large data sets, such as groups of genes with similar profiles. Hierarchical clustering has been shown to be effective in microarray data analysis for identifying genes with similar profiles, and thus possibly with similar functions. Users also need an efficient visualization tool, however, to facilitate pattern extraction from microarray data sets. The Hierarchical Clustering Explorer integrates four interactive features to provide information visualization techniques that allow users to control the processes and interact with the results. Hybrid approaches that combine powerful algorithms with interactive visualization tools thus join the strengths of fast processors with the detailed understanding of domain experts.
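As a minimal illustration of the clustering step (using SciPy rather than the Hierarchical Clustering Explorer itself), the following groups genes by the similarity of their expression profiles and recovers the row order used for a clustered heat-map display.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list, fcluster

# Toy expression matrix: rows are genes, columns are experimental conditions.
rng = np.random.default_rng(0)
x = np.linspace(0.0, np.pi, 6)
rising = np.sin(x)                     # one shared expression profile
falling = np.cos(x)                    # a second, distinct profile
genes = np.vstack([rising + 0.1 * rng.normal(size=(20, 6)),
                   falling + 0.1 * rng.normal(size=(20, 6))])

# Average-linkage clustering on correlation distance groups genes whose
# profiles have the same shape regardless of absolute expression level.
Z = linkage(genes, method="average", metric="correlation")
order = leaves_list(Z)                           # row order for a heat map
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the dendrogram in two
```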

18.
In this paper, we describe a new algorithm for detecting structural redundancy in geometric data sets. Our algorithm computes rigid symmetries, i.e., subsets of a surface model that recur several times within the model, differing only by translation, rotation, or mirroring. It is based on matching locally coherent constellations of feature lines on the object surfaces. In comparison to previous work, the new algorithm detects a large number of symmetric parts without being restricted to regular patterns or nested hierarchies. In addition, working only on relevant features strongly reduces memory and processing costs, so that very large data sets can be handled. We apply the algorithm to a number of real-world 3D scanner data sets, demonstrating high recognition rates for general patterns of symmetry.
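Verifying that one feature constellation is a rigid reoccurrence of another reduces to a least-squares rigid fit; the Kabsch algorithm below recovers the rotation and translation, and a small residual confirms the match. This shows the generic verification step, not the authors' full matching pipeline.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch): find R, t with dst ~ src @ R.T + t.
    Fitting two feature constellations and checking the residual is the core
    verification step in constellation-based symmetry detection."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # keep a proper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Two copies of a feature constellation related by a known rigid motion.
src = np.random.rand(10, 3)
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_fit(src, dst)
residual = np.linalg.norm(src @ R.T + t - dst)   # ~0 -> a rigid reoccurrence
```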

19.
We present an approach to visualizing particle-based simulation data using interactive ray tracing, and describe an algorithmic enhancement that exploits the properties of these data sets to provide highly interactive performance and reduced storage requirements. This algorithm for fast packet-based ray tracing of multilevel grids enables interactive visualization of large time-varying data sets with millions of particles and incorporates advanced features such as soft shadows. We compare the performance of our approach with two recent particle visualization systems, one based on an optimized single-ray grid traversal algorithm and the other on programmable graphics hardware; the comparison demonstrates that the new algorithm offers an attractive alternative for interactive particle visualization.
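The single-ray grid traversal that packet techniques accelerate is the classic Amanatides-Woo 3D DDA. The sketch below enumerates the cells a ray pierces, the inner loop that packet-based methods amortize across ray bundles; it illustrates the single-ray baseline, not the paper's multilevel packet algorithm.

```python
import numpy as np

def grid_traverse(origin, direction, cell_size, grid_dims, t_stop=1e9):
    """Amanatides-Woo 3D DDA: enumerate the grid cells pierced by a ray."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = np.where(d == 0.0, 1e-12, d)             # avoid division by zero
    cell = np.floor(o / cell_size).astype(int)
    step = np.where(d > 0, 1, -1)
    # parametric distance to the next cell boundary on each axis
    next_bound = (cell + (step > 0)) * cell_size
    t_max = (next_bound - o) / d
    t_delta = np.abs(cell_size / d)
    while np.all(cell >= 0) and np.all(cell < grid_dims):
        yield tuple(cell)
        axis = int(np.argmin(t_max))             # cross the nearest boundary
        if t_max[axis] > t_stop:
            return
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]

cells = list(grid_traverse((0.5, 0.5, 0.5), (1.0, 0.7, 0.2), 1.0, (8, 8, 8)))
```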

20.
Preprocessing large meshes to provide and optimize interactive visualization implies a complete reorganization that often introduces significant data growth. This is detrimental to storage and network transmission, and in the near future it could also affect the efficiency of the visualization process itself, given the increasing gap between computing times and external access times. In this article, we attempt to reconcile lossless compression and visualization by proposing a data structure that radically reduces the size of the object while supporting fast interactive navigation based on a viewing-distance criterion. In addition to this double capability, the method works out-of-core and can handle meshes containing several hundred million vertices. Furthermore, it can deal with any n-dimensional simplicial complex, including triangle soups and volumetric meshes, and provides a significant rate-distortion improvement. The performance attained is near state-of-the-art in terms of both compression ratio and visualization frame rates, a combination that can be useful in numerous applications.
