Similar Documents
 20 similar documents found (search time: 187 ms)
1.
吕广宪  潘懋  王占刚  丛威青 《计算机应用》2006,26(12):2856-2859
To address the shortcomings of conventional pointer-based and linear octrees, this paper proposes a virtual octree model based on a multi-level Z-order curve that stores neither pointers nor locational codes. In terms of time, the regularly partitioned node data blocks and their simple, efficient indexing structure give the new model very high memory-access efficiency; in terms of space, because nodes carry no pointer or locational-code information and a new merging and compaction rule based on regular node blocks is used, the model achieves good storage efficiency. Test results show that the virtual octree combines the time efficiency of pointer octrees with the space efficiency of linear octrees, making it an efficient organization model for 3D volume data with significant research and application value in volume-graphics-related fields.
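As a rough illustration of pointerless octree addressing of the kind described above (a minimal sketch of the general idea only; the heap-style layout and names below are illustrative and are not the paper's multi-level Z-order block scheme):

```cpp
#include <cstddef>

// Minimal sketch: a complete octree of fixed depth stored in a flat array and
// addressed purely by arithmetic, without pointers or stored locational codes.
// Children of node i live at indices 8*i + 1 ... 8*i + 8 (heap-style layout).
struct FlatOctree {
    // Total node count of a complete octree with the given number of levels below the root.
    static std::size_t nodeCount(std::size_t depth) {
        std::size_t n = 0, levelNodes = 1;
        for (std::size_t l = 0; l <= depth; ++l) { n += levelNodes; levelNodes *= 8; }
        return n;
    }

    // Index of the k-th child (0..7) of the node stored at index i.
    static std::size_t childIndex(std::size_t i, unsigned k) { return 8 * i + 1 + k; }

    // Index of the parent of the node stored at index i (i > 0).
    static std::size_t parentIndex(std::size_t i) { return (i - 1) / 8; }
};
```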

2.
By analyzing the matching error in block-based IFS image compression, this paper introduces the concept of generalized confidence and, based on it, an adaptive matching threshold (AT) algorithm for quadtree partitioning of the input image. The adaptive-threshold formula is then revised according to the relative complexity of each range block, yielding an improved adaptive-threshold (RAT) IFS image coding algorithm. During quadtree block coding, the method determines the matching threshold from the statistical features of the current range block, so the block-coding process adapts automatically to the input image. Experimental results show that the new method encodes input images adaptively, achieves a relatively high compression ratio, and has practical value.
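A minimal sketch of making a quadtree split threshold depend on a range block's statistics (the variance-based rule and names below are illustrative and are not the paper's exact AT/RAT formulas):

```cpp
#include <vector>
#include <cmath>

// Illustrative only: scale the allowed matching error by the range block's
// standard deviation, so flat blocks demand tighter matches than busy ones.
double adaptiveThreshold(const std::vector<double>& block,
                         double baseTolerance, double complexityWeight) {
    if (block.empty()) return baseTolerance;
    double mean = 0.0;
    for (double p : block) mean += p;
    mean /= block.size();
    double var = 0.0;
    for (double p : block) var += (p - mean) * (p - mean);
    var /= block.size();
    return baseTolerance * (1.0 + complexityWeight * std::sqrt(var));
}

// Usage idea: if the best domain-block match error exceeds adaptiveThreshold(...),
// split the range block into four quadrants and encode them recursively.
```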

3.
A linear-octree encoding and compression algorithm for 3D point-cloud data   (Cited: 1; self-citations: 1; by others: 1)
The octree is the most widely studied and applied raster data structure for 3D data modeling. Because 3D-scanned point clouds sample only object surfaces, their spatial dispersion is far greater than that of solid volume data, so conventional linear-octree encoding and compression methods, which assume solid data, cannot be applied to point clouds directly. The improved linear-octree address code (Morton code) proposed here greatly improves code continuity, effectively reduces octree depth, and raises the compression ratio; the improved Morton codes can also be compressed further with a variety of coding algorithms.
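For reference, the conventional Morton (Z-order) address code that linear octrees build on interleaves the bits of the quantized cell coordinates; a standard bit-interleaving sketch (not the paper's improved code) is:

```cpp
#include <cstdint>

// Standard 3D Morton (Z-order) address code: interleave the bits of the
// quantized x, y, z cell coordinates (up to 21 bits each -> a 63-bit code).
uint64_t spreadBits(uint64_t v) {
    v &= 0x1FFFFF;                                  // keep 21 bits
    v = (v | (v << 32)) & 0x1F00000000FFFFULL;
    v = (v | (v << 16)) & 0x1F0000FF0000FFULL;
    v = (v | (v << 8))  & 0x100F00F00F00F00FULL;
    v = (v | (v << 4))  & 0x10C30C30C30C30C3ULL;
    v = (v | (v << 2))  & 0x1249249249249249ULL;
    return v;
}

uint64_t mortonEncode(uint32_t x, uint32_t y, uint32_t z) {
    return spreadBits(x) | (spreadBits(y) << 1) | (spreadBits(z) << 2);
}
```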

4.
This paper studies 3D model and point-cloud surface reconstruction and, based on the application characteristics, proposes an approach combining octree space partitioning with NURBS surface reconstruction. The fast-convergence property of the octree is used to partition and thin the point-cloud data of a 3D solid, and NURBS is used to reconstruct local mesh surfaces; a hybrid octree/quadtree data structure supports progressive mesh-surface reconstruction. The storage structure is an extended octree, encoded with octal prefix codes. An experimental prototype system built with OpenGL verifies the feasibility and effectiveness of the algorithm.

5.
Optimizing data reading and memory allocation in out-of-core model simplification   (Cited: 1; self-citations: 0; by others: 1)
Because out-of-core models are extremely large, their simplification must read the model data in batches and process it locally, and the data-reading and memory-allocation operations strongly affect simplification efficiency. This paper proposes a dynamic optimization method: while simplifying a small initial portion of the data, it measures how the read-block size and the memory-allocation mode affect the simplification operations, so that computers with different configurations can adaptively obtain suitable read-block sizes and allocation modes for different out-of-core models, accelerating the subsequent simplification of the bulk of the data. Experimental results show that the method effectively improves the efficiency of out-of-core model simplification.

6.
Research on a hybrid data-structure model for 3D GIS based on the block model   (Cited: 1; self-citations: 0; by others: 1)
To represent 3D GIS spatial entities effectively, a hybrid data-structure model combining an octree with a tetrahedral network (the block octree tetrahedron, or BOT, model) is proposed on the basis of the geological block model. A BOT generation algorithm re-partitions the block model: the octree gives a global description, the tetrahedral network gives a locally precise description, and different gray values represent different cell-block attributes. To save storage, a linear BOT encoding technique is also proposed. Experimental results show that the BOT model exploits the strengths of both the octree and the tetrahedral network and represents 3D targets more efficiently and accurately without increasing storage.

7.
The octree is the most widely studied and applied raster data structure for 3D data modeling. Because 3D-scanned point clouds sample only object surfaces, their spatial dispersion is far greater than that of solid volume data, so conventional linear-octree encoding and compression methods, which assume solid data, cannot be applied to point clouds directly. The improved linear-octree address code (Morton code) proposed here greatly improves code continuity, effectively reduces octree depth, and raises the compression ratio; the improved Morton codes can also be compressed further with a variety of coding algorithms.

8.
Color quantization (CQ) is the process of reducing the number of colors in an image and is widely used in image compression. Octree-based color quantization (OCQ) is regarded as one of the most popular CQ algorithms because of its high coding efficiency, low memory usage, and good palette selection. A key challenge in applying OCQ, however, is how to handle important local colors effectively. This paper proposes an adaptive block-based octree color quantization (AB-OCQ) algorithm. Experimental results show that, by adding proper handling of local colors, AB-OCQ significantly improves image quality compared with conventional OCQ, and its overall performance in terms of compression ratio is also better than OCQ's. Moreover, compared with mainstream image file formats, AB-OCQ retains random access to pixel data while the image stays compressed, which lets an application hold more image data in the same amount of memory and offers a way to improve application efficiency.
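A minimal sketch of the insertion step of a classic octree color quantizer, for orientation (illustrative only; AB-OCQ's block-based adaptive handling of local colors is not shown):

```cpp
#include <cstdint>
#include <memory>

// Each node accumulates the colors routed through it; leaves become palette entries.
struct OctreeNode {
    uint64_t rSum = 0, gSum = 0, bSum = 0, count = 0;
    std::unique_ptr<OctreeNode> child[8];
};

// At tree level 'level' (0 = root), the child slot is formed from bit (7 - level)
// of each of R, G, and B, so colors sharing high-order bits share a subtree.
int childSlot(uint8_t r, uint8_t g, uint8_t b, int level) {
    int shift = 7 - level;
    return (((r >> shift) & 1) << 2) | (((g >> shift) & 1) << 1) | ((b >> shift) & 1);
}

void insertColor(OctreeNode& root, uint8_t r, uint8_t g, uint8_t b, int maxDepth = 8) {
    OctreeNode* node = &root;
    for (int level = 0; level < maxDepth; ++level) {
        int slot = childSlot(r, g, b, level);
        if (!node->child[slot]) node->child[slot] = std::make_unique<OctreeNode>();
        node = node->child[slot].get();
    }
    node->rSum += r; node->gSum += g; node->bSum += b; ++node->count;
    // Palette colors are later taken as (rSum/count, gSum/count, bSum/count) of the
    // surviving leaves after the least-populated subtrees are merged upward.
}
```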

9.
To address the strong randomness, weak correlation, and high acquisition rate of remote-sensing image data, this paper proposes adaptive block-wise quantization coding that treats blocks differently according to the complexity of their local texture features, and builds a multi-mode adaptive quantization compression model on this basis. Computer simulations show that, at a compression ratio of 4, the model clearly outperforms the international still-image compression standard JPEG in both compression speed and reconstructed-image accuracy.

10.
The Face Fixer method is used to compress the topology of 3D models built from general polygonal meshes, with third-order adaptive arithmetic coding raising the compression ratio further. Geometry is compressed by transforming vertex coordinates into a local coordinate system and combining quantization, parallelogram vertex-coordinate prediction, and arithmetic coding. Good compression performance is obtained with essentially no loss in geometric model quality.
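The parallelogram predictor mentioned above is simple to state; a minimal sketch of the standard rule (independent of the Face Fixer topology coder) is:

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

// Parallelogram prediction: when encoding vertex v of a triangle that shares an
// edge (a, b) with an already-decoded triangle (a, b, c), predict
//   v_pred = a + b - c
// and store only the quantized residual v - v_pred.
Vec3 parallelogramPredict(const Vec3& a, const Vec3& b, const Vec3& c) {
    return { a[0] + b[0] - c[0], a[1] + b[1] - c[1], a[2] + b[2] - c[2] };
}
```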

11.
To render large-scale point clouds quickly, this paper proposes a balanced octree storage structure built on a partial memory-access mechanism, in which a node stops subdividing and becomes a leaf once its point count does not exceed a preset upper bound. An in-core/out-of-core scheduling and rendering pipeline is designed, covering node visibility testing, data scheduling between memory and disk, and point rendering. To make visibility testing more efficient, a node visual-radius constraint is added to the constraints on viewpoint-to-node distance and angle. Experiments on measured large-scale point clouds show that, under limited memory resources, the technique renders point clouds of hundreds of millions of points smoothly from overview to detail with low memory consumption.
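A minimal sketch of the leaf-formation rule, where a node stops subdividing once its point count is within the cap (illustrative only; the paper's balancing and out-of-core scheduling are not reproduced here):

```cpp
#include <array>
#include <memory>
#include <vector>

using Point = std::array<float, 3>;

struct OctNode {
    std::array<float, 3> center;
    float halfSize;
    std::vector<Point> points;                 // only leaves keep points
    std::unique_ptr<OctNode> children[8];
};

// Split a node whose point count exceeds the cap; otherwise it stays a leaf.
void build(OctNode& node, std::size_t maxPointsPerLeaf) {
    if (node.points.size() <= maxPointsPerLeaf) return;   // leaf condition met
    for (const Point& p : node.points) {
        int oct = (p[0] > node.center[0] ? 1 : 0)
                | (p[1] > node.center[1] ? 2 : 0)
                | (p[2] > node.center[2] ? 4 : 0);
        if (!node.children[oct]) {
            auto child = std::make_unique<OctNode>();
            child->halfSize = node.halfSize * 0.5f;
            for (int axis = 0; axis < 3; ++axis) {
                float sign = ((oct >> axis) & 1) ? 1.0f : -1.0f;
                child->center[axis] = node.center[axis] + sign * child->halfSize;
            }
            node.children[oct] = std::move(child);
        }
        node.children[oct]->points.push_back(p);
    }
    node.points.clear();                        // interior node stores no points
    for (auto& c : node.children)
        if (c) build(*c, maxPointsPerLeaf);
}
```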

12.
It is widely acknowledged that improving parallel I/O performance is critical for widespread adoption of high performance computing. In this paper, we show that communication in out-of-core distributed memory problems may require both interprocessor communication and file I/O. Thus, in order to improve I/O performance, it is necessary to minimize the I/O costs associated with a communication step. We present three methods for performing communication in out-of-core distributed memory problems. The first method, called the generalized collective communication method, follows a loosely synchronous model; computation and communication phases are clearly separated, and communication requires permutation of data in files. The second method, called the receiver-driven in-core communication, communicates only the in-core data. The third method, called the owner-driven in-core communication, goes even one step further and tries to identify the potential future use of data (by the recipients) while it is in the sender's memory. We provide performance results for two out-of-core applications: the two-dimensional FFT code, and the two-dimensional elliptic Jacobi solver.

13.
Etree: a database-oriented method for generating large octree meshes   (Cited: 1; self-citations: 0; by others: 1)
This paper presents the design, implementation, and evaluation of the etree, a database-oriented method for large out-of-core octree mesh generation. The main idea is to map an octree to a database structure and perform all octree operations by querying and updating the database. We apply two standard database techniques, the linear octree and the B-tree, to index and store the octants on disk. Then we introduce two new techniques, auto-navigation and local balancing, to address the special needs of mesh generation. Preliminary evaluation suggests that the etree method is an effective way of generating very large octree meshes on desktop machines.
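A minimal sketch of a linear-octree locational code usable as a sortable database key, in the spirit described above (illustrative; the actual etree key layout may differ):

```cpp
#include <cstdint>

// Build a sortable key for an octant: left-justify its Morton code to the maximum
// tree depth and append the octant's level, so keys order octants in a spatially
// coherent (Z-order) sequence suitable for a B-tree index.
// Assumes 3*maxLevel + 5 <= 64, i.e. maxLevel <= 19.
uint64_t locationalKey(uint64_t mortonAtOwnLevel, unsigned level, unsigned maxLevel) {
    uint64_t leftJustified = mortonAtOwnLevel << (3 * (maxLevel - level));
    return (leftJustified << 5) | level;        // 5 low bits hold the level
}
```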

14.
We present an adaptive out-of-core technique for rendering massive scalar volumes employing single-pass GPU ray casting. The method is based on the decomposition of a volumetric dataset into small cubical bricks, which are then organized into an octree structure maintained out-of-core. The octree contains the original data at the leaves, and a filtered representation of children at inner nodes. At runtime an adaptive loader, executing on the CPU, updates a view and transfer function-dependent working set of bricks maintained on GPU memory by asynchronously fetching data from the out-of-core octree representation. At each frame, a compact indexing structure, which spatially organizes the current working set into an octree hierarchy, is encoded in a small texture. This data structure is then exploited by an efficient stackless ray casting algorithm, which computes the volume rendering integral by visiting non-empty bricks in front-to-back order and adapting sampling density to brick resolution. Block visibility information is fed back to the loader to avoid refinement and data loading of occluded zones. The resulting method is able to interactively explore multi-gigavoxel datasets on a desktop PC.

15.
The Array Management System (AMS) is an integrated set of array management tools designed to increase the productivity of technical programmers engaged in intensive matrix computational applications. These include analog circuit simulation, statistical analysis, dense or sparse equation solving, simulation, and in particular, finite element program development. AMS is composed of a set of easy-to-use in-core and out-of-core data management subroutines written in FORTRAN 77. The in-core array management subroutines of AMS allow dynamic storage allocation to be accomplished with integer, real, and complex data with a minimum of programming effort. The out-of-core array management subroutines of AMS support simple operations to allow array transfer between in-core and out-of-core systems and allow different programs to access the same data. The out-of-core data management provides a direct-access database file to speed up the input/output operations. Multiple databases are allowed to be accessed by a program; this provides an easy way to share data and restart. This integrated database environment is suitable to be the kernel of a software project with several programmers and data communications among them.

16.
Previous mesh compression techniques provide desirable properties such as high compression ratio, progressive decoding, and out-of-core processing. However, only a few of them support random accessibility in decoding, which makes the details of any specific part available without decoding other parts. This paper proposes an effective framework for random accessibility in mesh compression. The key component of the framework is a wire-net mesh constructed from a chartification of the given mesh. Charts are compressed separately for random access to mesh parts, and the wire-net mesh provides an indexing and stitching structure for the compressed charts. Experimental results show that random accessibility can be achieved with a competitive compression ratio, only a little worse than single-rate coding and comparable to progressive encoding. To demonstrate the merits of the framework, we apply it to process huge meshes in an out-of-core manner, including out-of-core rendering and out-of-core editing.

17.
Combining the 3D wavelet transform with an octree algorithm, this paper proposes a new compression algorithm for 3D volume data. The algorithm uses the multiresolution analysis capability of wavelets to compress the volume data, and exploits octree properties to encode, store, and reconstruct it. Practical applications show that the algorithm compresses volume data well and supports fast reconstruction and random access to voxel data.
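A minimal sketch of one 3D Haar analysis step on a 2x2x2 sample block, the kind of per-octant transform that pairs naturally with an octree (illustrative; not necessarily the paper's exact filter):

```cpp
#include <array>

// Samples are indexed as idx = x + 2*y + 4*z. After the three 1D passes, out[0]
// holds the block average (passed up to the parent octree level) and out[1..7]
// hold the detail coefficients that get quantized and coded.
std::array<double, 8> haarStep3D(std::array<double, 8> s) {
    auto pass = [&](int bit) {
        std::array<double, 8> t = s;            // snapshot of the previous pass
        for (int idx = 0; idx < 8; ++idx) {
            if (idx & bit) continue;            // idx is the "low" element of the pair
            double a = t[idx], b = t[idx | bit];
            s[idx]       = 0.5 * (a + b);       // average along this axis
            s[idx | bit] = 0.5 * (a - b);       // difference along this axis
        }
    };
    pass(1);   // x axis
    pass(2);   // y axis
    pass(4);   // z axis
    return s;
}
```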

18.
Out-of-core multiresolution construction and interactive rendering of large mesh models   (Cited: 3; self-citations: 1; by others: 2)
Combining multiresolution, mesh layout, and view-dependent rendering techniques, this paper proposes an out-of-core multiresolution construction and rendering algorithm. An adaptive octree partitions the model's bounding box and the multiresolution hierarchy is built top-down, which preserves the detail distribution of the original model well. The triangle fragments contained in each node of the multiresolution structure are reordered to reduce the average cache-miss rate. During real-time rendering, a view-dependent level-of-detail selection strategy refines the model, and a data-prefetching mechanism hides disk I/O latency to further improve rendering performance. Experimental results show that the algorithm outperforms comparable MRMM algorithms in both rendering speed and detail preservation.

19.
Quick-VDR: out-of-core view-dependent rendering of gigantic models   (Cited: 10; self-citations: 0; by others: 0)
We present a novel approach for interactive view-dependent rendering of massive models. Our algorithm combines view-dependent simplification, occlusion culling, and out-of-core rendering. We represent the model as a clustered hierarchy of progressive meshes (CHPM). We use the cluster hierarchy for coarse-grained selective refinement and progressive meshes for fine-grained local refinement. We present an out-of-core algorithm for computation of a CHPM that includes cluster decomposition, hierarchy generation, and simplification. We introduce novel cluster dependencies in the preprocess to generate crack-free, drastic simplifications at runtime. The clusters are used for LOD selection, occlusion culling, and out-of-core rendering. We add a frame of latency to the rendering pipeline to fetch newly visible clusters from the disk and avoid stalls. The CHPM reduces the refinement cost of view-dependent rendering by more than an order of magnitude as compared to a vertex hierarchy. We have implemented our algorithm on a desktop PC. We can render massive CAD, isosurface, and scanned models, consisting of tens or a few hundred million triangles, at 15-35 frames per second with little loss in image quality.

20.
We recently introduced an efficient multiresolution structure for distributing and rendering very large point sampled models on consumer graphics platforms [1]. The structure is based on a hierarchy of precomputed object-space point clouds, that are combined coarse-to-fine at rendering time to locally adapt sample densities according to the projected size in the image. The progressive block based refinement nature of the rendering traversal exploits on-board caching and object based rendering APIs, hides out-of-core data access latency through speculative prefetching, and lends itself well to incorporate backface, view frustum, and occlusion culling, as well as compression and view-dependent progressive transmission. The resulting system allows rendering of complex out-of-core models at high frame rates (over 60 M rendered points/second), supports network streaming, and is fundamentally simple to implement. We demonstrate the efficiency of the approach on a number of very large models, stored on local disks or accessed through a consumer level broadband network, including a massive 234 M samples isosurface generated by a compressible turbulence simulation and a 167 M samples model of Michelangelo's St. Matthew. Many of the details of our framework were presented in a previous study. We here provide a more thorough exposition, but also significant new material, including the presentation of a higher quality bottom-up construction method and additional qualitative and quantitative results.
