On Demand Solid Texture Synthesis Using Deep 3D Networks
Authors: J. Gutierrez, J. Rabin, B. Galerne, T. Hurtut
Affiliation: 1. Polytechnique Montréal, Canada; 2. Normandie Univ., UniCaen, ENSICAEN, CNRS, GREYC, France; 3. Institut Denis Poisson, Université d'Orléans, Université de Tours, CNRS, France
Abstract: This paper describes a novel approach for on-demand volumetric texture synthesis based on a deep learning framework that allows the generation of high-quality three-dimensional (3D) data at interactive rates. Given a few example images of textures, a generative network is trained to synthesize coherent portions of solid textures of arbitrary sizes that reproduce the visual characteristics of the examples along some directions. To cope with the memory limitations and computational complexity inherent to both high-resolution and 3D processing on the GPU, only 2D textures referred to as ‘slices’ are generated during the training stage. These synthetic textures are compared to the exemplar images via a perceptual loss function based on a pre-trained deep network. The proposed network is very light (fewer than 100k parameters); it therefore requires only modest training time (a few hours) and is capable of very fast generation (around one second for 256³ voxels) on a single GPU. Integrated with a spatially seeded pseudo-random number generator (PRNG), the proposed generator network directly returns a color value for a given set of 3D coordinates. The synthesized volumes show good visual results, at least equivalent to those of state-of-the-art patch-based approaches. They are naturally seamlessly tileable and can be generated fully in parallel.
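The on-demand property described above can be illustrated with a minimal sketch: each voxel's noise input is derived deterministically from its integer 3D coordinates, so any voxel can be evaluated independently and in parallel. The coordinate-hashing constants and the `generator_stub` below are hypothetical stand-ins (the paper's actual model is a small convolutional generator trained against a perceptual loss), not the authors' implementation.

```python
import numpy as np

def seeded_noise(x, y, z, channels=8):
    # Spatially seeded PRNG (assumed scheme): hash the voxel coordinates
    # into a deterministic seed so the same (x, y, z) always yields the
    # same noise vector, independently of evaluation order.
    seed = ((x * 73856093) ^ (y * 19349663) ^ (z * 83492791)) & 0xFFFFFFFF
    rng = np.random.default_rng(seed)
    return rng.standard_normal(channels)

def generator_stub(noise):
    # Stand-in for the trained generator network: a fixed linear map
    # squashed to an RGB triple in [0, 1]. The real model is a light
    # convolutional network (fewer than 100k parameters).
    w = np.linspace(-1.0, 1.0, noise.size * 3).reshape(3, noise.size)
    return 1.0 / (1.0 + np.exp(-w @ noise))  # sigmoid -> [0, 1]

def voxel_color(x, y, z):
    # On-demand evaluation: a color for a single voxel, with no need to
    # synthesize (or store) the full volume first.
    return generator_stub(seeded_noise(x, y, z))
```

Because the seed depends only on the coordinates, tiles of the volume can be generated on separate workers and will agree on shared voxels, which is what makes fully parallel, seamless generation possible.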
Keywords: solid texture; on-demand texture synthesis; generative networks; deep learning; • Computing methodologies → Texturing; Appearance and texture representations