Similar Documents
20 similar documents found.
1.
Multiresolution analysis on irregular surface meshes   (cited by 2; 0 self-citations, 2 by others)
Wavelet-based methods have proven their efficiency for visualization at different levels of detail, progressive transmission, and compression of large data sets. The required core of all wavelet-based methods is a hierarchy of meshes that satisfies subdivision-connectivity. This hierarchy has to be the result of a subdivision process starting from a base mesh. Examples include quadtree uniform 2D meshes, octree uniform 3D meshes, or 4-to-1 split triangular meshes. In particular, the necessity of subdivision-connectivity prevents the application of wavelet-based methods to irregular triangular meshes. In this paper, a “wavelet-like” decomposition is introduced that works on piecewise constant data sets over irregular triangular surface meshes. The decomposition/reconstruction algorithms are based on an extension of wavelet theory that allows hierarchical meshes without the subdivision-connectivity property. Among others, this approach has the following features: it allows exact reconstruction of the data set, even for non-regular triangulations, and it extends previous results on Haar wavelets over 4-to-1 split triangulations.
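As a rough illustration of the underlying idea rather than the paper's exact construction, the sketch below (with hypothetical function names) performs one Haar-like analysis/synthesis step for piecewise-constant data on two cells of unequal area: the coarse value is the area-weighted mean, and a single detail coefficient suffices for exact reconstruction, mirroring the exact-reconstruction property claimed above.

```python
# Haar-like analysis/synthesis for piecewise-constant data on two cells of
# unequal area (illustrative sketch; the paper's construction generalizes
# this to whole irregular triangulations).

def haar_analyze(c1, c2, a1, a2):
    """Merge two constant cell values c1, c2 with areas a1, a2.

    Returns the coarse (area-weighted mean) value and a detail coefficient
    that allows exact reconstruction."""
    coarse = (a1 * c1 + a2 * c2) / (a1 + a2)
    detail = c2 - c1                        # enough to invert the merge
    return coarse, detail

def haar_synthesize(coarse, detail, a1, a2):
    """Exactly recover the two fine values from (coarse, detail)."""
    c1 = coarse - a2 * detail / (a1 + a2)
    c2 = c1 + detail
    return c1, c2

if __name__ == "__main__":
    c1, c2, a1, a2 = 3.0, 7.0, 1.5, 0.5
    coarse, detail = haar_analyze(c1, c2, a1, a2)
    assert haar_synthesize(coarse, detail, a1, a2) == (c1, c2)
```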

2.
A dynamic mesh generation algorithm with adaptive segmentation   (cited by 2; 0 self-citations, 2 by others)
A dynamic mesh generation algorithm with adaptive segmentation, built on a half-edge data structure, is proposed. The mesh model is segmented adaptively according to sharp surface features; to meet the simplification requirements at the segment boundaries, an independent boundary-processing algorithm based on the half-edge data structure is introduced; and an adaptive boundary weighting function is used to preserve the boundary features of the model. Application examples show that the algorithm is efficient and reliable, preserving the model's detail features while reducing simplification error.
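As a hedged aside, one common way to detect the kind of sharp surface features such a segmentation relies on is to threshold the dihedral angle between adjacent faces; the sketch below illustrates only that generic step and is not the paper's half-edge-based algorithm (the 40-degree threshold and all function names are placeholder assumptions).

```python
# Generic sharp-feature detection: flag an edge as "sharp" when the angle
# between the normals of its two adjacent faces exceeds a threshold.
import numpy as np

def face_normals(verts, faces):
    v = verts[faces]                                  # (F, 3, 3)
    n = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def sharp_edges(verts, faces, angle_deg=40.0):
    """Return the set of (undirected) edges whose dihedral angle exceeds the threshold."""
    normals = face_normals(np.asarray(verts, float), np.asarray(faces))
    edge_to_faces = {}
    for fi, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_faces.setdefault(tuple(sorted(e)), []).append(fi)
    sharp = set()
    cos_thresh = np.cos(np.radians(angle_deg))
    for edge, fs in edge_to_faces.items():
        if len(fs) == 2 and np.dot(normals[fs[0]], normals[fs[1]]) < cos_thresh:
            sharp.add(edge)
    return sharp

if __name__ == "__main__":
    # Two triangles folded by 90 degrees along the shared edge (0, 1).
    verts = [(0, 0, 0), (1, 0, 0), (0.5, 1, 0), (0.5, 0, 1)]
    faces = [(0, 1, 2), (1, 0, 3)]
    print(sharp_edges(verts, faces))                  # {(0, 1)}
```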

3.
In this paper, we present a framework that integrates three-dimensional (3D) mesh streaming and compression techniques and algorithms into our EVE-II networked virtual environments (NVEs) platform, in order to offer support for large-scale environments as well as highly complex world geometry. This framework allows the partial and progressive transmission of 3D worlds as well as of separate meshes, achieving reduced waiting times for the end-user and improved network utilization. We also present a 3D mesh compression method focused on network communication, which is designed to support progressive mesh transmission, offering a fast and effective means of reducing the storage and transmission needs for geometrical data. This method is integrated into the above framework and utilizes prediction to achieve efficient lossy compression of 3D geometry. Copyright © 2006 John Wiley & Sons, Ltd.

4.
Wavelet-based progressive compression scheme for triangle meshes: wavemesh   (cited by 7; 0 self-citations, 7 by others)
We propose a new lossy-to-lossless progressive compression scheme for triangular meshes, based on a wavelet multiresolution theory for irregular 3D meshes. Although remeshing techniques obtain better compression ratios for geometric compression, this approach can be very effective when one wants to keep the connectivity and geometry of the processed mesh completely unchanged. The simplification is based on solving an inverse problem. Optimization of both the connectivity and geometry of the processed mesh improves the approximation quality and the compression ratio of the scheme at each resolution level. We show why this algorithm provides an efficient means of compression for both the connectivity and geometry of 3D meshes, and illustrate this with experimental results on various sets of reference meshes, where our algorithm performs better than previously published approaches for both lossless and progressive compression.

5.
A reverse-subdivision compression algorithm for triangle meshes on mobile terminals   (cited by 2; 0 self-citations, 2 by others)
马建平  罗笑南  陈渤  李峥 《软件学报》2009,20(9):3607-2615
To meet the real-time display needs of mobile users, a triangle mesh compression algorithm based on reverse subdivision is proposed. By improving the reverse Butterfly simplification algorithm and adopting a modified reverse Loop scheme, a dense triangle mesh is simplified into a progressive mesh consisting of a sparse base mesh and a series of offsets; then, by designing a wavelet tree over the offsets, the progressive mesh is compressed with embedded zerotree coding. Experimental results show that, compared with previous methods, the algorithm achieves a higher compression ratio while running faster. It is suitable for progressive network transmission of geometric models and real-time 3D rendering on mobile terminals.

6.
Limited bandwidth is a strong constraint when efficient transmission of 3D data to Web clients and mobile applications is needed. In this paper we present a novel multi-resolution WebGL-based rendering algorithm which combines progressive loading, view-dependent resolution, and mesh compression, providing high frame rates and a decoding speed of millions of triangles per second in JavaScript. The method is parallelizable and scalable to very large models. The algorithm builds on the local multi-resolution approaches provided by the community, but ad-hoc solutions had to be studied and implemented to provide adequate performance. In particular, a compression mechanism was implemented that reaches a very high compression rate without impacting rendering performance. Moreover, the data-partition strategy was modified so that different types of data (e.g., point clouds) can be loaded and to better adapt to the potentials and limitations of web-based rendering.

7.
Nowadays, both meaningful mesh segmentation (also called shape decomposition) and progressive compression are fundamentally important problems, and some compression algorithms have been developed with the help of patch-type segmentation. However, little attention has been paid to effectively combining mesh compression with meaningful segmentation. In this paper, to achieve both adaptive selective accessibility and a reasonable compression ratio, we break the original mesh down into meaningful parts and encode each part with an efficient compression algorithm. In our method, the segmentation of a model is obtained by a new feature-based decomposition algorithm, which makes use of salient feature contours to parse the object. Moreover, the progressive compression is an improved degree-driven method, which adopts a multi-granularity quantization method in geometry encoding to obtain a higher compression ratio. We provide evidence that the proposed combination can be beneficial in many applications, such as view-dependent rendering and streaming of large meshes in compressed form.
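The following sketch illustrates the general idea behind multi-granularity geometry quantization, namely giving salient segments a finer quantization step than the rest; the bit depths, the saliency flags, and the function names are assumptions for illustration and not the paper's actual parameters.

```python
# Multi-granularity quantization sketch: salient vertices get more bits.
import numpy as np

def quantize(verts, is_salient, coarse_bits=10, fine_bits=14):
    verts = np.asarray(verts, float)
    lo, hi = verts.min(axis=0), verts.max(axis=0)
    scale = np.where(hi > lo, hi - lo, 1.0)
    bits = np.where(is_salient, fine_bits, coarse_bits)   # per-vertex bit depth
    levels = (1 << bits) - 1                               # per-vertex step count
    q = np.round((verts - lo) / scale * levels[:, None]).astype(np.int64)
    return q, bits, lo, scale

def dequantize(q, bits, lo, scale):
    levels = (1 << bits) - 1
    return q / levels[:, None] * scale + lo

if __name__ == "__main__":
    verts = np.random.rand(6, 3)
    salient = np.array([True, True, False, False, False, True])
    q, bits, lo, scale = quantize(verts, salient)
    err = np.abs(dequantize(q, bits, lo, scale) - verts).max()
    print("max reconstruction error:", err)   # salient vertices are reproduced more finely
```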

8.
We present a new approach to dynamic mesh compression, which combines compression with simplification to achieve improved compression results, natural support for incremental transmission, and level of detail. The algorithm allows fast progressive transmission of dynamic 3D content. Our scheme exploits both the temporal and spatial coherency of the input data, and is especially efficient for highly detailed dynamic meshes. The algorithm can be seen as an ultimate extension of the clustering and local coordinate frame (LCF)-based approaches, where each vertex is expressed within its own specific coordinate system. The presented results show that we have achieved better compression efficiency compared to state-of-the-art methods. Copyright © 2008 John Wiley & Sons, Ltd.
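To make the LCF idea concrete, here is a hedged sketch (not the paper's exact frame construction) of expressing a vertex position in an orthonormal frame built from three already-decoded neighbours, so that coherent motion yields small, easily compressible local coordinates.

```python
# Local-coordinate-frame (LCF) sketch: a generic frame from three neighbours.
import numpy as np

def local_frame(p0, p1, p2):
    """Orthonormal frame (origin, 3x3 axes) spanned by three neighbour points."""
    x = p1 - p0
    x /= np.linalg.norm(x)
    n = np.cross(p1 - p0, p2 - p0)
    n /= np.linalg.norm(n)
    y = np.cross(n, x)
    return p0, np.stack([x, y, n])            # rows are the axes

def to_local(p, origin, axes):
    return axes @ (p - origin)

def to_global(local, origin, axes):
    return axes.T @ local + origin

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p0, p1, p2 = rng.random((3, 3))
    vertex = rng.random(3)
    origin, axes = local_frame(p0, p1, p2)
    local = to_local(vertex, origin, axes)
    assert np.allclose(to_global(local, origin, axes), vertex)
```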

9.
A framework for streaming geometry in VRML   (cited by 10; 0 self-citations, 10 by others)
We introduce a framework for streaming geometry in VRML that eliminates the need to perform complete downloads of geometric models before starting to display them. This framework for the progressive transmission of geometry has three main parts: 1) a process to generate multiple levels of detail (LODs); 2) a transmission process (preferably in compressed form); and 3) a data structure for receiving and exploiting the LODs generated in the first part and transmitted in the second. The processes in parts 1 and 2 have already received considerable attention; we concentrate on a solution for part 3. Our basic contribution is a flexible LOD storage scheme, which we refer to as a progressive multilevel mesh. This scheme, primarily intended as an in-memory data structure, has a low memory footprint and provides easy access to the various LODs (and is thus suitable for efficient rendering). The representation is not tied to a particular automated polygon-reduction tool; in fact, we can use the output of any polygon-reduction algorithm based on vertex clustering (including the edge-collapse operations used in several algorithms). The progressive multilevel mesh complements compression techniques such as those developed by M. Deering (1995), H. Hoppe (1996), or G. Taubin et al. (1998). We discuss the integration of some of these compression techniques; however, for the sake of simplicity, we use a simple file format to describe the algorithm.
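A minimal sketch of what such a multilevel structure can look like, assuming vertex-clustering output in the form of per-level parent maps (the class name and layout are illustrative, not the paper's format):

```python
# Multilevel mesh from vertex clustering: each level stores only a "parent"
# map to the coarser level; any LOD's triangle list is obtained by composing
# the maps and dropping triangles that collapse.

class MultilevelMesh:
    def __init__(self, fine_triangles, parent_maps):
        # parent_maps[k][v] = representative of vertex v on level k+1 (coarser)
        self.fine_triangles = fine_triangles
        self.parent_maps = parent_maps

    def triangles_at(self, level):
        """Triangle list at the requested LOD (0 = finest)."""
        def remap(v):
            for k in range(level):
                v = self.parent_maps[k][v]
            return v
        tris = []
        for a, b, c in self.fine_triangles:
            a, b, c = remap(a), remap(b), remap(c)
            if a != b and b != c and a != c:     # drop collapsed triangles
                tris.append((a, b, c))
        return tris

if __name__ == "__main__":
    fine = [(0, 1, 2), (1, 3, 2), (2, 3, 4)]
    # level 0 -> 1: vertices 3 and 4 are clustered into vertex 3
    mesh = MultilevelMesh(fine, parent_maps=[{0: 0, 1: 1, 2: 2, 3: 3, 4: 3}])
    print(mesh.triangles_at(0))   # all three triangles
    print(mesh.triangles_at(1))   # [(0, 1, 2), (1, 3, 2)] -- the third one collapsed
```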

10.
While progressive compression techniques were proposed a long time ago, fast and efficient streaming of detailed 3D models over lossy networks still remains a challenge. A primary reason is that packet loss in unreliable networks is highly unpredictable, leading to connectivity inconsistencies and distortions in the decompressed meshes. Although prior research has proposed various methods to handle errors caused by transmission loss, they always come with additional costs such as redundant transmission data, bandwidth overhead, and distortion of the results. In this paper, we address this problem from the receiver's point of view and propose a novel receiver-based loss-tolerance scheme which is capable of recovering the lost data when streaming 3D progressive meshes over lossy networks. Specifically, we apply some constraints during the model compression procedure on the server side, and propose a prediction method to handle the loss of structural and geometric data on the client/receiver side. Our algorithm works without any data retransmission or unnecessary protection bits. We stream mesh refinement data over reliable and unreliable networks separately so as to reduce the transmission delay and obtain a satisfactory decompression result. The experimental results indicate that the decompression procedure can be completed quickly, suggesting that this is an efficient and practical solution. It is also shown that the proposed prediction technique achieves a very good approximation of the original mesh with low distortion, while error propagation is also well controlled.
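As a hedged stand-in for the receiver-side prediction step (the paper's predictor is more elaborate), the sketch below estimates a lost vertex as the centroid of its already-received one-ring neighbours, illustrating the recover-instead-of-retransmit idea.

```python
# Receiver-side recovery sketch: fill in lost vertex positions from the
# neighbours that did arrive, instead of requesting retransmission.
import numpy as np

def predict_lost_vertices(positions, received, adjacency):
    """positions: (N, 3) array with garbage where received[v] is False.
    adjacency: dict vertex -> list of neighbouring vertices."""
    positions = np.array(positions, float)
    for v in range(len(positions)):
        if received[v]:
            continue
        known = [positions[n] for n in adjacency[v] if received[n]]
        if known:                                 # otherwise leave as-is
            positions[v] = np.mean(known, axis=0)
    return positions

if __name__ == "__main__":
    pos = np.array([[0, 0, 0], [2, 0, 0], [1, 2, 0], [9, 9, 9]], float)
    received = [True, True, True, False]          # vertex 3 was lost in transit
    adjacency = {3: [0, 1, 2]}
    print(predict_lost_vertices(pos, received, adjacency)[3])  # ~[1. 0.67 0.]
```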

11.
The paper presents a method for generating and displaying wireframe approximations to surfaces of constant value (iso-surfaces). Input to the method is a data grid: a volume decomposition in which a scalar value is associated with each vertex. During a preprocessing phase, the method constructs a threshold-independent data structure based on the given data grid. The data structure relates the edges of an iso-surface wireframe to the edges of the data grid, for all possible threshold values. During the subsequent rendering phase, the data structure supports efficient generation and display of the iso-surface wireframe corresponding to any selected threshold value. The technique is efficient enough to form the basis of an interactive software system for visualizing iso-surfaces.
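A minimal sketch of a threshold-independent structure in this spirit, assuming each grid edge is stored with the value interval of its endpoints (the class and the sorting strategy are illustrative; the paper's structure may differ):

```python
# Every grid edge is stored with the interval of its endpoint values, so the
# edges crossed by ANY later iso-value (and the interpolated crossing points)
# can be found without revisiting the raw volume.
import bisect

class IsoEdgeTable:
    def __init__(self, edges, values):
        # edges: list of (i, j) vertex index pairs; values: scalar per vertex
        self.items = []
        for i, j in edges:
            lo, hi = sorted((values[i], values[j]))
            self.items.append((lo, hi, i, j))
        self.items.sort()                            # sorted by lower bound
        self.values = values

    def crossings(self, threshold):
        """Edges crossed by the iso-value, with the interpolation parameter t."""
        out = []
        # only edges whose lower bound is <= threshold can be crossed
        stop = bisect.bisect_right(self.items, (threshold, float("inf"), -1, -1))
        for lo, hi, i, j in self.items[:stop]:
            if lo <= threshold <= hi and lo != hi:
                t = (threshold - self.values[i]) / (self.values[j] - self.values[i])
                out.append((i, j, t))
        return out

if __name__ == "__main__":
    values = [0.0, 1.0, 2.0]
    table = IsoEdgeTable(edges=[(0, 1), (1, 2), (0, 2)], values=values)
    print(table.crossings(0.5))   # edges (0,1) and (0,2) are crossed
```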

12.
In 3D reconstruction of medical images, the classic iso-surface reconstruction algorithm Marching Cubes is widely used. It can extract the iso-surface of an arbitrary 3D scalar field for a given threshold, but the large amount of data and the number of triangles it must process make it slow. This paper proposes a theory of iso-surface reconstruction at different scales and implements a multi-scale Marching Cubes algorithm; experimental comparisons show that it is more efficient and faster than the original Marching Cubes algorithm.
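One simple way to trade accuracy for speed in this setting is to run Marching Cubes on a strided (downsampled) copy of the volume; the sketch below shows only that stand-in for a multi-scale scheme and relies on scikit-image's marching_cubes for the actual extraction.

```python
# Coarse-scale iso-surface extraction by striding the volume before running
# Marching Cubes (a simple stand-in for a true multi-scale scheme).
import numpy as np
from skimage import measure

def multiscale_isosurface(volume, level, stride=2):
    """Extract an iso-surface from a coarser version of `volume`."""
    coarse = volume[::stride, ::stride, ::stride]
    verts, faces, normals, _ = measure.marching_cubes(coarse, level)
    return verts * stride, faces, normals        # scale back to original units

if __name__ == "__main__":
    # Synthetic test volume: distance field of a sphere, iso-value 20.
    z, y, x = np.mgrid[0:64, 0:64, 0:64]
    volume = np.sqrt((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2)
    verts, faces, _ = multiscale_isosurface(volume, level=20.0, stride=2)
    print(len(verts), "vertices,", len(faces), "triangles")
```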

13.
This paper proposes a novel and efficient algorithm for single-rate compression of triangle meshes. The input mesh is traversed along its greedy Hamiltonian cycle in O(n) time. Based on the Hamiltonian cycle, the mesh connectivity can be encoded by a face label sequence with low entropy containing only four kinds of labels (HETS), and the transmission delay at the decoding end that frequently occurs in conventional single-rate approaches is significantly reduced. The mesh geometry is compressed with a global coordinate-concentration strategy and a novel local parallelogram error prediction scheme. Experiments on realistic 3D models demonstrate the effectiveness of our approach in terms of compression rates and run-time performance compared to leading single-rate and progressive mesh compression methods.
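For reference, the classic parallelogram predictor that such schemes build on predicts the vertex opposite an already-decoded triangle (a, b, c) across edge (a, b) as a + b - c, so only a small residual needs to be coded; the sketch below shows that baseline, not the paper's refined error-prediction scheme.

```python
# Plain parallelogram prediction: encode only the residual between the actual
# vertex and the position that completes the parallelogram.
import numpy as np

def parallelogram_predict(a, b, c):
    """Predict the vertex across edge (a, b), opposite apex c."""
    return a + b - c

def encode_vertex(actual, a, b, c):
    return actual - parallelogram_predict(a, b, c)     # residual to compress

def decode_vertex(residual, a, b, c):
    return parallelogram_predict(a, b, c) + residual

if __name__ == "__main__":
    a, b, c = np.array([0., 0, 0]), np.array([2., 0, 0]), np.array([1., 1, 0])
    actual = np.array([1.1, -0.9, 0.0])                 # nearly planar mesh
    res = encode_vertex(actual, a, b, c)
    print("residual:", res)                             # small values
    assert np.allclose(decode_vertex(res, a, b, c), actual)
```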

14.
A rate-distortion (R-D) optimized progressive coding algorithm for three-dimensional (3D) meshes is proposed in this work. We propose prioritized gate selection and curvature prediction to improve the connectivity and geometry compression performance, respectively. Furthermore, based on bit-plane coding, we develop a progressive transmission method which improves the quality of the intermediate meshes as well as that of the fully reconstructed mesh, and extend it to a view-dependent transmission method. Experiments on various 3D mesh models show that the proposed algorithm provides significantly better compression performance than conventional algorithms, while supporting progressive reconstruction efficiently.
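A minimal sketch of plain bit-plane transmission, most-significant plane first, so that any prefix of the stream yields a progressively better reconstruction; the entropy coding and R-D ordering that constitute the paper's actual contribution are omitted.

```python
# Bit-plane coding sketch: send quantized coefficients one plane at a time,
# MSB first; the decoder reconstructs from however many planes it received.

def encode_bitplanes(coeffs, nbits):
    """Yield one bit-plane (list of 0/1) per pass, MSB first."""
    for plane in range(nbits - 1, -1, -1):
        yield [(c >> plane) & 1 for c in coeffs]

def decode_bitplanes(planes, nbits):
    """Reconstruct from however many planes were received."""
    coeffs = None
    received = 0
    for bits in planes:
        if coeffs is None:
            coeffs = [0] * len(bits)
        plane = nbits - 1 - received
        coeffs = [c | (b << plane) for c, b in zip(coeffs, bits)]
        received += 1
    return coeffs

if __name__ == "__main__":
    coeffs, nbits = [5, 12, 0, 9], 4
    planes = list(encode_bitplanes(coeffs, nbits))
    print(decode_bitplanes(planes[:2], nbits))   # coarse: [4, 12, 0, 8]
    print(decode_bitplanes(planes, nbits))       # exact:  [5, 12, 0, 9]
```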

15.
With the rapid development of virtual reality, augmented reality, and related fields, progressive transmission has come to provide a good user experience. For fast transmission and display of triangle meshes on mobile terminals, a progressive transmission algorithm based on dihedral-angle reverse interpolatory Loop subdivision (DRILS) is proposed. The original triangle mesh is preprocessed with dihedral-angle interpolatory Loop subdivision (DILS) and interpolatory Loop subdivision (ILS) to obtain a fine mesh with subdivision connectivity while accurately preserving local features. During progressive transmission, three steps are applied iteratively to this fine mesh: partitioning vertices into odd and even sets, predicting offsets, and updating the triangle mesh. Because DILS and ILS are combined to obtain the fine mesh, accurate local features are maintained during progressive transmission and the transmission itself is accelerated. Experimental comparisons show that the algorithm is accurate and efficient, and well suited to display, transmission, and storage on mobile terminal devices.
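As a hedged illustration of the "predict offsets" step, the sketch below uses the standard Loop edge-vertex stencil (3/8, 3/8, 1/8, 1/8) to predict an odd vertex and form the offset to transmit; the paper's DILS/DRILS rules are modified interpolatory variants, so this shows only the generic mechanism.

```python
# Predicting odd (edge) vertices from the coarse level turns fine positions
# into small offsets that compress and stream well.
import numpy as np

def loop_edge_prediction(a, b, c, d):
    """Predict the odd (edge) vertex for interior edge (a, b) with
    opposite vertices c and d in the two adjacent triangles."""
    return 0.375 * (a + b) + 0.125 * (c + d)

def offset(fine_vertex, a, b, c, d):
    """Offset to transmit so the client can reconstruct exactly."""
    return fine_vertex - loop_edge_prediction(a, b, c, d)

if __name__ == "__main__":
    a, b = np.array([0., 0, 0]), np.array([1., 0, 0])
    c, d = np.array([0.5, 1, 0]), np.array([0.5, -1, 0])
    fine = np.array([0.5, 0.02, 0.0])           # actual fine-mesh edge vertex
    print(loop_edge_prediction(a, b, c, d))     # [0.5 0.  0. ]
    print(offset(fine, a, b, c, d))             # small -> compresses well
```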

16.
To build solid organ models for virtual surgery simulation systems from medical volume data, a Delaunay tetrahedralization algorithm based on local feature size is proposed. First, the Marching Cubes algorithm and out-of-core model simplification are used to obtain a simplified iso-surface model of the organ from the volume data, and a barycentric-ray method is proposed to remove redundant interior mesh elements and obtain the organ's polyhedral surface; then, protection balls are constructed for the surface vertices based on local feature size and combined with Delaunay refinement to generate an initial tetrahedral mesh with a consistent boundary; finally, a space-decomposition method based on random perturbation is proposed to generate interior nodes quickly, which are inserted one by one into the tetrahedral mesh to improve element quality. The algorithm overcomes the inability of Delaunay refinement to handle inputs with sharp angles and, in theory, …

17.
Multiresolution analysis based on subdivision wavelets is an important method in 3D graphics processing. Many applications of this method have been studied and developed, including denoising, compression, progressive transmission, multiresolution editing, and so on. Recently, Charina and Stöckler first gave an explicit construction of the wavelet tight frame transform for subdivision surfaces with irregular vertices, which made its practical application to 3D graphics a subject worthy of investigation. Based on the work of Charina and Stöckler, we present in detail the wavelet tight frame decomposition and reconstruction formulas for the Loop subdivision scheme. We further implement the algorithm and apply it to the denoising, compression, and progressive transmission of 3D graphics. Numerical comparisons with the biorthogonal Loop-subdivision wavelets of Bertram illustrate the good performance of the algorithm. Since multiresolution analysis based on subdivision wavelets or subdivision wavelet tight frames requires the input mesh to be semi-regular, we also propose a simple remeshing algorithm for constructing meshes which not only have subdivision connectivity but also approximate the input mesh.
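For background (standard material, not specific to the paper), the defining property of a tight frame that such constructions rely on can be stated as follows:

```latex
% A family $\{\psi_i\}$ in a Hilbert space $H$ is a tight frame with bound $A$ if
\sum_i \bigl|\langle f, \psi_i \rangle\bigr|^2 = A\,\|f\|^2 \qquad \text{for all } f \in H .
% For $A = 1$ (a Parseval frame) this yields perfect reconstruction,
f = \sum_i \langle f, \psi_i \rangle\, \psi_i ,
% without requiring the $\psi_i$ to be linearly independent, which is what
% makes tight frames convenient around irregular vertices.
```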

18.
Normal meshes are a new multiresolution surface representation in which every level can be expressed as a normal offset from the preceding, coarser level. This paper proposes a multiresolution mesh approximation algorithm for implicit surfaces based on the normal mesh representation. First, a coarse approximating mesh of the implicit surface is obtained by a polygonization algorithm based on spatial subdivision, and the coarse mesh is optimized by a mesh equalization method to eliminate thin, elongated triangles; then the triangles of the mesh are refined iteratively using normal subdivision rules, and the implicit surface is approached along the normal direction using interval arithmetic. The resulting piecewise-linear approximating mesh of the implicit surface is a normal mesh. This approximation provides a multiresolution representation of the implicit surface; the mesh has subdivision connectivity, and its data volume is substantially reduced compared with meshes generated by conventional polygonization algorithms. The algorithm can be used for multilevel rendering, progressive transmission, and related digital geometry processing of implicit surfaces.
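A hedged sketch of the "offset along the normal" step: starting from a base point, the zero crossing of the implicit function is searched along the normal direction; plain bisection is shown here for simplicity, whereas the paper uses interval arithmetic, and the bracketing interval and iteration count are placeholder assumptions.

```python
# Find the scalar offset t along the normal so that f(base + t*n) = 0; that
# single scalar is what a normal mesh stores per new vertex.
import numpy as np

def normal_offset(f, base, normal, t_max=1.0, iters=40):
    """Return t such that f(base + t * normal) ~= 0, or None if not bracketed."""
    normal = normal / np.linalg.norm(normal)
    lo, hi = -t_max, t_max
    f_lo = f(base + lo * normal)
    f_hi = f(base + hi * normal)
    if f_lo * f_hi > 0:
        return None                               # surface not bracketed
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f_lo * f(base + mid * normal) <= 0:
            hi = mid
        else:
            lo, f_lo = mid, f(base + mid * normal)
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    sphere = lambda p: np.dot(p, p) - 1.0         # unit sphere as implicit surface
    base = np.array([0.0, 0.0, 0.9])              # point just inside the sphere
    t = normal_offset(sphere, base, normal=np.array([0.0, 0.0, 1.0]))
    print(t)                                      # ~0.1: moves the point onto the sphere
```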

19.
A feature-oriented generic progressive lossless mesh coder (FOLProM) is proposed to encode triangular meshes with arbitrarily complex geometry and topology. In this work, a sequence of levels of detail (LODs) is generated through iterative vertex-set split and bounding-volume subdivision. The incremental geometry and connectivity updates associated with each vertex-set split and/or bounding-volume subdivision are entropy coded. Owing to the visual importance of sharp geometric features, the whole geometry coding process is optimized for a better presentation of geometric features, especially at low coding bitrates. Feature-oriented optimization in FOLProM is performed in hierarchy control and adaptive quantization. Efficient coordinate representation and prediction schemes are employed to significantly reduce the entropy of the data. Furthermore, a simple yet efficient connectivity coding scheme is proposed. It is shown that FOLProM offers a significant rate-distortion (R-D) gain over the prior art, which is especially obvious at low bitrates.

20.
Modern supercomputers enable increasingly large N-body simulations using unstructured point data. The structures implied by these points can be reconstructed implicitly. Direct volume rendering of radial basis function (RBF) kernels in domain space offers flexible classification and robust feature reconstruction, but achieving performant RBF volume rendering remains a challenge for existing methods on both CPUs and accelerators. In this paper, we present a fast CPU method for direct volume rendering of particle data with RBF kernels. We propose a novel two-pass algorithm: first sampling the RBF field using coherent bounding-hierarchy traversal, then integrating the samples along ray segments. Our approach performs interactively for a range of data sets from molecular dynamics and astrophysics of up to 82 million particles. It does not rely on levels of detail or subsampling, and it offers better reconstruction quality than structured volume rendering of the same data, exhibiting comparable performance and requiring no additional preprocessing or memory footprint other than the BVH. Lastly, our technique enables multi-field, multi-material classification of particle data, providing better insight and analysis.
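A hedged sketch of the two-pass structure: pass 1 samples a Gaussian RBF density field at points along a ray, and pass 2 integrates the samples with a simple emission/absorption model; the paper accelerates pass 1 with coherent BVH traversal, whereas the sketch below is brute force over all particles, and the kernel radius and sampling counts are arbitrary placeholders.

```python
# Two-pass RBF volume rendering sketch: (1) sample the field, (2) composite.
import numpy as np

def rbf_field(sample_points, particles, radius):
    """Gaussian RBF density at each sample point (pass 1)."""
    d2 = ((sample_points[:, None, :] - particles[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * radius ** 2)).sum(axis=1)

def integrate_ray(origin, direction, particles, radius, t_far=10.0, n=256):
    """Composite the sampled densities along one ray (pass 2)."""
    ts = np.linspace(0.0, t_far, n)
    dt = ts[1] - ts[0]
    samples = rbf_field(origin + ts[:, None] * direction, particles, radius)
    transmittance, result = 1.0, 0.0
    for rho in samples:
        alpha = 1.0 - np.exp(-rho * dt)           # absorption over the segment
        result += transmittance * alpha           # white emission for simplicity
        transmittance *= 1.0 - alpha
    return result

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    particles = rng.normal(loc=[0, 0, 5], scale=0.5, size=(200, 3))
    value = integrate_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), particles, radius=0.3)
    print("pixel intensity:", value)
```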
