1.
This paper describes a new out-of-core multi-resolution data structure for real-time visualization, interactive editing, and externally efficient processing of large point clouds. We describe an editing system that uses the novel data structure to provide interactive editing and preprocessing tools for large scanner data sets. Using the new data structure, we provide a complete tool chain for 3D scanner data processing, from data preprocessing and filtering to manual touch-up and real-time visualization. In particular, we describe an out-of-core outlier-removal and bilateral geometry-filtering algorithm; a toolset for interactive selection, painting, transformation, and filtering of huge out-of-core point-cloud data sets; and a real-time rendering algorithm, all of which use the same data structure as their storage backend. The interactive tools work in real time for small model modifications. For large-scale editing operations, we employ a two-resolution approach in which editing is planned in real time and executed afterwards in an externally efficient offline computation. We evaluate our implementation on example data sets of sizes up to 63 GB, demonstrating that the proposed technique can be used effectively in real-world applications.
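To make the two-resolution idea concrete, here is a minimal, hypothetical C++ sketch (the Point, EditOp, and ChunkedCloud names are ours, not the paper's API): an edit is planned as an operation on an in-core preview and then replayed chunk by chunk over the out-of-core data.

```cpp
#include <cstdio>
#include <functional>
#include <vector>

struct Point { float x, y, z; };
using EditOp = std::function<void(Point&)>;   // a planned edit operation

// Stand-in for the out-of-core store: in practice each chunk lives on disk
// and is loaded, edited, and written back one at a time.
struct ChunkedCloud {
    std::vector<std::vector<Point>> chunks;
};

int main() {
    ChunkedCloud cloud{{{{0,0,0},{1,1,1}}, {{2,2,2},{3,3,3}}}};

    // 1) Plan interactively on a coarse in-core preview: the user's action is
    //    captured as an operation, e.g. "lift the selection by 10 units".
    EditOp planned = [](Point& p) { p.z += 10.0f; };

    // 2) Execute offline: stream every chunk through the recorded operation.
    for (auto& chunk : cloud.chunks)
        for (auto& p : chunk)
            planned(p);

    std::printf("first point after edit: z = %.1f\n", cloud.chunks[0][0].z);
}
```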
2.
We present a Fortran library which can be used to solve large-scale dense linear systems, Ax=b. The library is based on the LU decomposition included in the parallel linear algebra library PLAPACK and on its out-of-core extension POOCLAPACK. The library is complemented with a code which calculates the self-polarization charges and self-energy potential of axially symmetric nanostructures, following an induced charge computation method. Illustrative calculations are provided for hybrid semiconductor–quasi-metal zero-dimensional nanostructures. In these systems, the numerical integration of the self-polarization equations requires using a very fine mesh. This translates into very large and dense linear systems, which we solve for ranks up to 3×10⁵. It is shown that the self-energy potential on the semiconductor–metal interface has important effects on the electronic wavefunction.

Program summary

Program title: HDSS (Huge Dense System Solver)
Catalogue identifier: AEHU_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 98 889
No. of bytes in distributed program, including test data, etc.: 1 009 622
Distribution format: tar.gz
Programming language: Fortran 90, C
Computer: Parallel architectures: multiprocessors, computer clusters
Operating system: Linux/Unix
Has the code been vectorized or parallelized?: Yes. 4 processors used in the sample tests; tested from 1 to 288 processors
RAM: 2 GB for the sample tests; tested for up to 80 GB
Classification: 7.3
External routines: MPI, BLAS, PLAPACK, POOCLAPACK. PLAPACK and POOCLAPACK are included in the distribution file.
Nature of problem: Huge-scale dense systems of linear equations, Ax = B, beyond standard LAPACK capabilities. Application to calculations of the self-energy potential in dielectrically mismatched semiconductor quantum dots.
Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary-storage algorithms when the available main memory is insufficient. The self-energy solver relies on an induced charge computation method. The differential equation is discretized to yield linear systems of equations, which we then solve by calling the HDSS library.
Restrictions: Single precision. For the self-energy solver, axially symmetric systems must be considered.
Running time: About 32 minutes to solve a system with approximately 100 000 equations and more than 6000 right-hand side vectors using a four-node commodity cluster with a total of 32 Intel cores.
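As a concrete illustration of the factorization underlying the library (not HDSS's actual API, whose entry points are not shown here), the following minimal C++ sketch solves a small dense system Ax = b by LU decomposition with partial pivoting; PLAPACK/POOCLAPACK apply the same factorization in blocked, parallel, out-of-core form.

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    int n = 3;
    std::vector<double> A = {2,1,1,  4,3,3,  8,7,9};  // row-major 3x3 matrix
    std::vector<double> b = {4, 10, 24};

    // LU factorization with partial pivoting, applied to b in place.
    for (int k = 0; k < n; ++k) {
        int p = k;                                    // find the pivot row
        for (int i = k + 1; i < n; ++i)
            if (std::fabs(A[i*n+k]) > std::fabs(A[p*n+k])) p = i;
        for (int j = 0; j < n; ++j) std::swap(A[k*n+j], A[p*n+j]);
        std::swap(b[k], b[p]);
        for (int i = k + 1; i < n; ++i) {             // eliminate below pivot
            double m = A[i*n+k] / A[k*n+k];
            for (int j = k; j < n; ++j) A[i*n+j] -= m * A[k*n+j];
            b[i] -= m * b[k];
        }
    }
    for (int i = n - 1; i >= 0; --i) {                // back substitution
        for (int j = i + 1; j < n; ++j) b[i] -= A[i*n+j] * b[j];
        b[i] /= A[i*n+i];
    }
    std::printf("x = (%g, %g, %g)\n", b[0], b[1], b[2]);  // expect (1, 1, 1)
}
```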
3.
Research on techniques for processing massive raster data in GIS*   Cited: 4 (self-citations: 1, citations by others: 3)
This paper surveys the strategies and techniques developed in recent years for processing massive GIS data, and points out their respective advantages and disadvantages. These techniques have been applied successfully in the development of the GeoScene software.
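One widely used strategy of the kind surveyed is fixed-size tiling, sketched below in C++ so that memory use is bounded by the tile rather than the raster; the raster dimensions, tile size, and processing step are illustrative placeholders, not GeoScene internals.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const int W = 10000, H = 10000;   // full raster dimensions, in cells
    const int T = 1024;               // tile edge length

    long long tiles = 0;
    for (int ty = 0; ty < H; ty += T) {
        for (int tx = 0; tx < W; tx += T) {
            int tw = std::min(T, W - tx), th = std::min(T, H - ty);
            // In a real system this buffer would be read from disk, processed
            // (resampled, classified, ...) and written back before moving on.
            std::vector<float> tile(static_cast<std::size_t>(tw) * th, 0.0f);
            (void)tile;               // processing step omitted in this sketch
            ++tiles;
        }
    }
    std::printf("processed %lld tiles of at most %d x %d cells\n", tiles, T, T);
}
```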
4.
Ray tracing a volume scene graph composed of multiple point-based volume objects (PBVOs) can produce high-quality images with effects such as shadows and constructive operations. A naive approach, however, would demand an overwhelming amount of memory to accommodate all point datasets and their associated control structures, such as octrees. This paper describes an out-of-core approach for rendering such a scene graph in a scalable manner. To address the difficulty of pre-determining the order of data caching, we introduce a technique based on a dynamic, in-core working set. We present a ray-driven algorithm for predicting the working set automatically. This allows both the data and the control structures required for ray tracing to be dynamically prefetched according to access patterns derived from captured knowledge of ray-data intersections. We have conducted a series of experiments on the scalability of the technique using working sets and datasets of different sizes. With the aid of both qualitative and quantitative analysis, we demonstrate that this approach allows multiple large PBVOs in a volume scene graph to be rendered on desktop computers.
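The working-set idea can be illustrated with a small, hypothetical C++ sketch: each ray-data intersection "touches" a block, and a fixed-capacity least-recently-used set decides which blocks stay in core. The paper additionally predicts and prefetches blocks, which is omitted here; all names are illustrative.

```cpp
#include <cstdio>
#include <list>
#include <unordered_map>

class WorkingSet {
    size_t capacity_;
    std::list<int> lru_;                                   // front = most recent
    std::unordered_map<int, std::list<int>::iterator> pos_;
public:
    explicit WorkingSet(size_t capacity) : capacity_(capacity) {}
    void touch(int block) {                                // called per intersection
        auto it = pos_.find(block);
        if (it != pos_.end()) lru_.erase(it->second);      // already resident
        else if (lru_.size() == capacity_) {               // evict least recent
            pos_.erase(lru_.back());
            lru_.pop_back();
        }                                                  // else: load from disk
        lru_.push_front(block);
        pos_[block] = lru_.begin();
    }
    size_t resident() const { return lru_.size(); }
};

int main() {
    WorkingSet ws(3);
    for (int block : {1, 2, 3, 1, 4}) ws.touch(block);     // block 2 gets evicted
    std::printf("%zu blocks resident\n", ws.resident());
}
```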
5.
In out-of-core computation, disk I/O operations have a high startup overhead, so file access accounts for a large share of the total running time. Reducing the number of file-read operations can therefore greatly improve efficiency, and data reuse is an effective technique for doing so. This paper partitions the data into several files and keeps the file whose Cholesky factorization has just completed in the in-memory buffer; when the next file is factorized, the just-factorized file still in memory is used to update its data. This reduces the number of read I/O operations and thus improves factorization efficiency.
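A minimal in-core C++ sketch of the buffered update, assuming a 4×4 SPD matrix split into two column panels that stand in for the files: panel 0 is factorized first and kept in the buffer, then reused to update panel 1 before panel 1 is factorized, saving one round of file reads. All names and the matrix are illustrative.

```cpp
#include <cmath>
#include <cstdio>

const int B = 2;                           // panel ("file") width

void chol2(double a[B][B], double l[B][B]) {   // Cholesky of one 2x2 block
    l[0][0] = std::sqrt(a[0][0]);
    l[1][0] = a[1][0] / l[0][0];
    l[0][1] = 0.0;
    l[1][1] = std::sqrt(a[1][1] - l[1][0] * l[1][0]);
}

int main() {
    // A = L*L^T with L the all-ones lower triangle, so A[i][j] = min(i,j)+1.
    double A11[B][B] = {{1,1},{1,2}}, A21[B][B] = {{1,2},{1,2}},
           A22[B][B] = {{3,3},{3,4}};
    double L11[B][B], L21[B][B], L22[B][B];

    chol2(A11, L11);                       // factor "file" 0; keep L11 buffered

    // Update step reusing the buffered panel: solve L21 * L11^T = A21 by
    // forward substitution, one row of L21 at a time.
    for (int r = 0; r < B; ++r) {
        L21[r][0] = A21[r][0] / L11[0][0];
        L21[r][1] = (A21[r][1] - L11[1][0] * L21[r][0]) / L11[1][1];
    }
    double S[B][B];                        // trailing update: A22 - L21*L21^T
    for (int i = 0; i < B; ++i)
        for (int j = 0; j < B; ++j)
            S[i][j] = A22[i][j] - L21[i][0]*L21[j][0] - L21[i][1]*L21[j][1];

    chol2(S, L22);                         // factor "file" 1
    std::printf("L22 = [[%g,0],[%g,%g]]\n", L22[0][0], L22[1][0], L22[1][1]);
}
```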
6.
《Parallel Computing》2014,40(10):754-767
The processing of massive amounts of data on clusters with a finite amount of memory has become an important problem facing the parallel/distributed computing community. While MapReduce-style technologies provide an effective means for addressing various problems that fit within the MapReduce paradigm, there are many classes of problems for which this paradigm is ill-suited. In this paper we present a runtime system for traditional MPI programs that enables the efficient and transparent out-of-core execution of distributed-memory parallel programs. This system, called BDMPI, leverages the semantics of MPI's API to orchestrate the execution of a large number of MPI processes on far fewer compute nodes, so that the running processes maximize the amount of computation they perform with the data fetched from the disk. BDMPI enables the development of efficient out-of-core parallel distributed-memory codes without the high engineering and algorithmic complexities associated with multiple levels of blocking. BDMPI achieves significantly better performance than existing technologies on a single node as well as on a small cluster, and performs within 30% of optimized out-of-core implementations.
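Since BDMPI's point is that unmodified MPI programs run out-of-core, the sketch below is an ordinary MPI reduction of the kind such a runtime would multiplex across fewer nodes than ranks; it uses only the standard MPI API, nothing BDMPI-specific, and the "local work" is a stand-in value.

```cpp
#include <cstdio>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each rank "owns" a slice of a large problem; under an out-of-core
    // runtime only the currently scheduled ranks have their slice in memory.
    long long local = rank + 1;                  // stand-in for local work
    long long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("sum over %d ranks = %lld\n", size, total);
    MPI_Finalize();
}
```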
7.
Multi-resolution techniques are required for rendering large volumetric datasets that exceed the size of the graphics card's memory, or even of main memory. The cut through the multi-resolution volume representation is defined by selection criteria based on error metrics. For GPU-based volume rendering, this cut has to fit into the graphics card's memory and needs to be continuously updated due to interaction with the volume, such as changes to the area of interest, the transfer function, or the viewpoint. We introduce a greedy cut-update algorithm based on split-and-collapse operations for updating the cut on a frame-to-frame basis. This approach is guided by a global metric based on the distortion of classified voxel data, and it respects a limited download budget for transferring data from main memory into the graphics card in order to avoid large frame-rate variations. Our out-of-core support for handling very large volumes also makes use of split-and-collapse operations to generate an extended cut in main memory. Finally, we introduce an optimal polynomial-time cut-update algorithm, which maximizes the error reduction between consecutive frames. This algorithm is used to verify how close to the optimum our greedy split-and-collapse algorithm performs.
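The greedy, budget-limited refinement step can be sketched as follows in C++; the candidate nodes, benefit values, and byte costs are invented for illustration, and collapse operations and the actual distortion metric are omitted.

```cpp
#include <cstdio>
#include <queue>
#include <vector>

struct Split {
    int node;
    double errorReduction;   // benefit of refining this node
    int bytes;               // data that must be downloaded to the GPU
    bool operator<(const Split& o) const {      // max-heap on benefit
        return errorReduction < o.errorReduction;
    }
};

int main() {
    std::priority_queue<Split> candidates;
    for (Split s : {Split{1, 9.0, 400}, Split{2, 5.0, 300},
                    Split{3, 4.0, 800}, Split{4, 1.0, 100}})
        candidates.push(s);

    int budget = 900;                            // bytes allowed this frame
    std::vector<int> applied;
    while (!candidates.empty()) {
        Split s = candidates.top();
        candidates.pop();
        if (s.bytes > budget) continue;          // too expensive this frame
        budget -= s.bytes;
        applied.push_back(s.node);               // refine the cut at this node
    }
    std::printf("applied %zu splits, %d bytes of budget left\n",
                applied.size(), budget);
}
```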
8.
The preprocessing of large meshes to provide and optimize interactive visualization implies a complete reorganization that often introduces significant data growth. This is detrimental to storage and network transmission, but in the near future could also affect the efficiency of the visualization process itself, because of the increasing gap between computing times and external access times. In this article, we attempt to reconcile lossless compression and visualization by proposing a data structure that radically reduces the size of the object while supporting fast interactive navigation based on a viewing-distance criterion. In addition to this double capability, the method works out-of-core and can handle meshes containing several hundred million vertices. Furthermore, it has the advantage of dealing with any n-dimensional simplicial complex, including triangle soups and volumetric meshes, and provides a significant rate-distortion improvement. The performance attained is close to the state of the art in terms of both compression ratio and visualization frame rates, offering a unique combination that can be useful in numerous applications.
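A viewing-distance criterion of the kind described can be sketched as a simple distance-to-level mapping in C++; the thresholds, level count, and halving-per-level assumption below are illustrative, not the paper's parameters.

```cpp
#include <cmath>
#include <cstdio>

// Map a viewing distance to a resolution level: level 0 is the full mesh,
// each further level roughly halves the decoded detail.
int levelForDistance(double distance, double fullDetailDist, int maxLevel) {
    if (distance <= fullDetailDist) return 0;
    int level = static_cast<int>(std::log2(distance / fullDetailDist)) + 1;
    return level > maxLevel ? maxLevel : level;
}

int main() {
    for (double d : {0.5, 2.0, 8.0, 64.0})
        std::printf("distance %5.1f -> decode level %d\n",
                    d, levelForDistance(d, 1.0, 5));
}
```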
9.
Convenient use of legacy software in Java with Janet package   Cited: 2 (self-citations: 0, citations by others: 2)
This paper describes the Janet package, a highly expressive Java language extension that enables the convenient creation of powerful native methods and efficient Java-to-native code interfaces. The Java Native Interface (JNI) is a low-level API that is rather inconvenient to use directly. Janet, as a higher-level tool, therefore combines the flexibility of JNI with Java's ease of use. Performance results for a Janet-generated interface to the lip library are shown, comparing Java code that uses lip with a native C implementation.
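The kind of low-level JNI code that a tool like Janet generates (or spares the programmer from writing by hand) looks like the following C++ native implementation of a hypothetical Java method native double dot(double[] a, double[] b) in a hypothetical class demo.Vec; Janet's own source syntax is not reproduced here.

```cpp
#include <jni.h>

extern "C" JNIEXPORT jdouble JNICALL
Java_demo_Vec_dot(JNIEnv* env, jobject, jdoubleArray a, jdoubleArray b) {
    jsize n = env->GetArrayLength(a);
    // Pin (or copy) the Java arrays so native code can read them directly.
    jdouble* pa = env->GetDoubleArrayElements(a, nullptr);
    jdouble* pb = env->GetDoubleArrayElements(b, nullptr);
    jdouble sum = 0.0;
    for (jsize i = 0; i < n; ++i) sum += pa[i] * pb[i];
    // Release without copying changes back (JNI_ABORT): we only read.
    env->ReleaseDoubleArrayElements(a, pa, JNI_ABORT);
    env->ReleaseDoubleArrayElements(b, pb, JNI_ABORT);
    return sum;
}
```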
10.
We recently introduced an efficient multiresolution structure for distributing and rendering very large point-sampled models on consumer graphics platforms [1]. The structure is based on a hierarchy of precomputed object-space point clouds, which are combined coarse-to-fine at rendering time to locally adapt sample densities according to the projected size in the image. The progressive, block-based refinement of the rendering traversal exploits on-board caching and object-based rendering APIs, hides out-of-core data access latency through speculative prefetching, and lends itself well to incorporating backface, view-frustum, and occlusion culling, as well as compression and view-dependent progressive transmission. The resulting system allows rendering of complex out-of-core models at high frame rates (over 60 M rendered points per second), supports network streaming, and is fundamentally simple to implement. We demonstrate the efficiency of the approach on a number of very large models, stored on local disks or accessed through a consumer-level broadband network, including a massive 234 M-sample isosurface generated by a compressible turbulence simulation and a 167 M-sample model of Michelangelo's St. Matthew. Many of the details of our framework were presented in a previous study. Here we provide a more thorough exposition, together with significant new material, including the presentation of a higher-quality bottom-up construction method and additional qualitative and quantitative results.
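The coarse-to-fine traversal can be sketched as follows in C++; the projected-size model (radius over distance) and all structure names are illustrative simplifications of the paper's screen-space criterion.

```cpp
#include <cstdio>
#include <vector>

struct Node {
    double radius;                 // bounding-sphere radius of the point block
    double distance;               // distance from the viewpoint
    std::vector<Node> children;
};

void traverse(const Node& n, double threshold) {
    double projected = n.radius / n.distance;       // crude screen-size proxy
    if (projected <= threshold || n.children.empty()) {
        std::printf("render block (radius %.2f)\n", n.radius);
        return;                                     // coarse level suffices
    }
    for (const Node& c : n.children)                // refine: descend a level
        traverse(c, threshold);
}

int main() {
    Node root{8.0, 10.0, {{4.0, 10.0, {}}, {4.0, 40.0, {}}}};
    traverse(root, 0.5);   // root too coarse: both child blocks are rendered
}
```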