Similar Documents
20 similar documents found (search time: 15 ms)
1.
Several applications in shape modeling and exploration require identification and extraction of a 3D shape part matching a 2D sketch. We present CustomCut, an on-demand part extraction algorithm. Given a sketched query, CustomCut automatically retrieves partially matching shapes from a database, identifies the region optimally matching the query in each shape, and extracts this region to produce a customized part that can be used in various modeling applications. In contrast to earlier work on sketch-based retrieval of predefined parts, our approach can extract arbitrary parts from input shapes and does not rely on a prior segmentation into semantic components. The method is based on a novel data structure for fast retrieval of partial matches: the randomized compound k-NN graph built on multi-view shape projections. We also employ a coarse-to-fine strategy to progressively refine part boundaries down to the level of individual faces. Experimental results indicate that our approach provides an intuitive and easy means to extract customized parts from a shape database, and significantly expands the design space for the user. We demonstrate several applications of our method to shape design and exploration.
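The retrieval step lends itself to a compact illustration. Below is a minimal Python sketch of nearest-neighbor retrieval over multi-view projection descriptors; `render_views` and `describe` are hypothetical callables, and the paper's randomized compound k-NN graph replaces the brute-force search shown here.

```python
import numpy as np

def view_descriptors(shapes, render_views, describe, n_views=12):
    """Stack one descriptor per rendered view for every database shape."""
    descs, owners = [], []
    for i, shape in enumerate(shapes):
        for img in render_views(shape, n_views):   # hypothetical renderer
            descs.append(describe(img))            # hypothetical 2D descriptor
            owners.append(i)
    return np.asarray(descs), np.asarray(owners)

def retrieve(sketch_desc, descs, owners, k=10):
    """Rank database shapes by their best-matching projected view."""
    d = np.linalg.norm(descs - sketch_desc, axis=1)
    seen, hits = set(), []
    for i in np.argsort(d):                        # best views first
        if owners[i] not in seen:
            seen.add(owners[i])
            hits.append(owners[i])
        if len(hits) == k:
            break
    return hits
```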

2.
We present a data-driven method for synthesizing 3D indoor scenes by inserting objects progressively into an initial, possibly empty, scene. Instead of relying on a few hundred hand-crafted 3D scenes, we take advantage of existing large-scale annotated RGB-D datasets, in particular the SUN RGB-D database consisting of 10,000+ depth images of real scenes, to form the prior knowledge for our synthesis task. Our object insertion scheme follows a co-occurrence model and an arrangement model, both learned from the SUN dataset. The former selects a highly probable combination of object categories along with the number of instances per category, while a plausible placement is defined by the latter model. Compared to previous works on probabilistic learning for object placement, we make two contributions. First, we learn various classes of higher-order object-object relations, including symmetry, distinct orientation, and proximity, from the database. These relations effectively enable considering objects in semantically formed groups rather than as individuals. Second, while our algorithm inserts objects one at a time, it attains holistic plausibility of the whole current scene while offering controllability through progressive synthesis. We conducted several user studies to compare our scene synthesis performance to results obtained by manual synthesis, state-of-the-art object placement schemes, and variations of parameter settings for the arrangement model.
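As an illustration of the co-occurrence model, the sketch below samples the next object category given the categories already placed in the scene. The `cooccur` table (a dict mapping each category to an array of compatibility scores over all candidates) is a hypothetical stand-in for the statistics learned from SUN RGB-D; the paper's model additionally handles instance counts and higher-order relations.

```python
import numpy as np

def next_category(scene_cats, cooccur, categories, rng=None):
    """Sample the next category proportional to its co-occurrence
    with every object category already placed in the scene."""
    rng = rng or np.random.default_rng()
    scores = np.ones(len(categories))
    for c in scene_cats:
        scores *= cooccur[c]          # compatibility of each candidate with c
    p = scores / scores.sum()
    return categories[rng.choice(len(categories), p=p)]
```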

3.
We propose a transductive shape segmentation algorithm that can transfer prior segmentation results from a database to new shapes without explicit specification of prior category information. Our method first partitions an input shape into a set of candidate segmentations as a data preparation step, and then uses a linear integer programming algorithm to select segments from them to form the final optimal segmentation. The key idea is to maximize the similarity between the segments in the input shape and the segments in the database, where segment similarity is computed through sparse reconstruction error. This segment-level similarity makes it possible to handle a large number of shapes with significant topology or shape variations using only a small set of segmented example shapes. Experimental results show that our algorithm generates high-quality segmentation and semantic labeling results on the Princeton segmentation benchmark.
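The segment-selection step can be phrased as a small 0/1 integer program: choose candidate segments so that every face is covered exactly once while maximizing total similarity. The following sketch uses SciPy's MILP solver with an assumed face-segment incidence matrix and precomputed similarity scores; it illustrates the formulation, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def select_segments(cover, sim):
    """cover: (n_faces, n_segments) 0/1 incidence matrix;
    sim: (n_segments,) similarity of each candidate to the database."""
    res = milp(c=-sim,                                     # maximize similarity
               constraints=LinearConstraint(cover, 1, 1),  # each face covered once
               integrality=np.ones_like(sim),              # binary variables
               bounds=Bounds(0, 1))
    if not res.success:
        raise RuntimeError("no consistent segment cover found")
    return np.flatnonzero(res.x > 0.5)
```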

4.
Since indoor scenes change frequently in daily life, for example when furniture is rearranged, their 3D reconstructions should be flexible and easy to update. We present an automatic 3D scene update algorithm for indoor scenes that captures scene variation with RGBD cameras. We assume an initial scene has been reconstructed in advance, manually or in some semi-automatic way, before the change, and we automatically update the reconstruction according to newly captured RGBD images of the real scene. The method starts with an automatic segmentation process that requires no manual interaction and benefits from accurate labeling learned from the initial 3D scene. After segmentation, objects captured by the RGBD camera are extracted to form a local updated scene. We formulate an optimization problem that compares this local scene to the initial scene to locate moved objects. The moved objects are then integrated with the static objects in the initial scene to generate a new 3D scene. We demonstrate the efficiency and robustness of our approach by updating the 3D reconstructions of several real-world scenes.

5.
Automatic Modeling of Urban Facades from Raw LiDAR Point Data
Modeling of urban facades from raw LiDAR point data remains an active research topic due to its challenging nature. In this paper, we propose an automatic yet robust 3D modeling approach for urban facades from raw LiDAR point clouds. The key observation is that building facades often exhibit repetitions and regularities. We therefore formulate repetition detection as an energy optimization problem with a global energy function balancing geometric error, regularity, and complexity of facade structures. As a result, repetitive structures are extracted robustly even in the presence of noise and missing data. By registering repetitive structures, missing regions are completed and the associated point data of the structures are well consolidated. Subsequently, we detect potential design intents (i.e., geometric constraints) within structures and perform constrained fitting to obtain precise structure models. Furthermore, we apply structure alignment optimization to enforce position regularities and employ repetitions to infer missing structures. We demonstrate how the quality of raw LiDAR data can be improved by exploiting data redundancy and discovering high-level structural information (regularity and symmetry). We evaluate our modeling method on a variety of raw LiDAR scans to verify its robustness and effectiveness.
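The described balance of geometric error, regularity, and complexity suggests an energy of roughly the following form; this is a plausible reading with illustrative weights, not the authors' exact formulation.

```latex
% One plausible form of the global energy over a set S of candidate
% repetitive structures fit to the point set P (weights illustrative):
E(S) = \underbrace{\sum_{p \in P} \operatorname{dist}^2(p, S)}_{\text{geometric error}}
     + \lambda_r \, E_{\mathrm{reg}}(S)   % deviation from detected regularities
     + \lambda_c \, |S|                   % structural complexity
```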

6.
Terrains are a crucial component of three-dimensional scenes and are present in many Computer Graphics applications. Terrain modeling methods focus on capturing landforms in all their intricate detail, including eroded valleys arising from the interplay of varied phenomena, dendritic mountain ranges, and complex river networks. Set against this visual complexity is the need for user control over terrain features, without which designers are unable to adequately express their artistic intent. This article provides an overview of current terrain modeling and authoring techniques, organized according to three categories: procedural modeling, physically-based simulation of erosion and land formation processes, and example-based methods driven by scanned terrain data. We compare and contrast these techniques according to several criteria, specifically: the variety of achievable landforms; realism from both a perceptual and geomorphological perspective; issues of scale in terms of terrain extent and sampling precision; the different interaction metaphors and attendant forms of user control; and computation and memory performance. We conclude with an in-depth discussion of possible research directions and outstanding technical and scientific challenges.

7.
In this paper, we introduce an interactive method suitable for retargeting both 3D objects and scenes. Initially, the input object or scene is decomposed into a collection of constituent components enclosed by corresponding control bounding volumes, which capture the intra-structure of the object or the semantic grouping of objects in the 3D scene. The overall retargeting is accomplished through a constrained optimization that manipulates the control bounding volumes. Without inferring the intricate dependencies between the components, we define a minimal set of constraints that maintain the spatial arrangement and connectivity between components in order to regularize the valid retargeting results. The default retargeting behavior can then be easily altered by additional semantic constraints imposed by users. This strategy makes the proposed method highly flexible, allowing it to process a wide variety of 3D objects and scenes under a unified framework. In addition, the proposed method achieves more general structure-preserving pattern synthesis at both the object and scene levels. We demonstrate the effectiveness of our method by applying it to several complicated 3D objects and scenes.

8.
Fused Filament Fabrication is an additive manufacturing process by which a 3D object is created from plastic filament. The filament is pushed through a hot nozzle where it melts, and the nozzle deposits plastic layer after layer to create the final object. This process has been popularized by the RepRap community. Several printers feature multiple extruders, allowing objects to be formed from multiple materials or colors; the extruders are mounted side by side on the printer carriage. However, print quality suffers when objects with color patterns are printed, a disappointment for designers interested in 3D printing their colored digital models. The most severe issue is the oozing of plastic from the idle extruders: plastics of different colors bleed onto each other, giving the surface a smudged appearance; excess strings oozing from the extruder deposit on the surface; and holes appear due to the missing plastic. Fixing this issue is difficult: increasing the printing speed reduces oozing but also degrades surface quality, and on large prints the required speed becomes impractical. Adding a physical mechanism increases cost and print time, as extruders must travel to a cleaning station. Instead, we rely on software and exploit degrees of freedom of the printing process. We introduce three techniques that complement each other in significantly improving print quality. First, we reduce the impact of oozing plastic by choosing a better azimuth angle for the printed part. Second, we build a disposable rampart in close proximity to the part, giving the extruders the opportunity to wipe oozing strings and refill with hot plastic. Finally, we introduce a toolpath planner that avoids and hides most of the defects due to oozing and seamlessly integrates the rampart. We demonstrate our technique on several challenging multi-color prints, and show that our toolpath planner improves the surface finish of single-color prints as well.
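The first technique, choosing a better azimuth angle, can be illustrated by a brute-force search over orientations. Here `ooze_exposure` is a hypothetical scoring function for how much visible surface would collect oozing defects at a given orientation; the paper's actual criterion is more refined than this sketch.

```python
import numpy as np

def best_azimuth(part, ooze_exposure, n_samples=360):
    """Pick the part orientation whose visible surface is least exposed
    to strings oozing from the idle extruder (hypothetical scorer)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    scores = np.array([ooze_exposure(part, a) for a in angles])
    return angles[int(np.argmin(scores))]
```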

9.
10.
We propose a fast method for 3D shape segmentation and labeling via the Extreme Learning Machine (ELM). Given a set of example shapes with labeled segmentations, we train an ELM classifier and use it to produce an initial segmentation for test shapes. Based on the initial segmentation, we compute the final smooth segmentation through a graph-cut optimization constrained by the super-face boundaries obtained from over-segmentation and the active contours computed from the ELM segmentation. Experimental results show that our method achieves results comparable to the state of the art while reducing training time by approximately two orders of magnitude, at both face level and super-face level, making it scale well to large datasets. Building on this improvement, we demonstrate the application of our method to fast online sequential learning for 3D shape segmentation at the face level, as well as real-time sequential learning at the super-face level.
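For readers unfamiliar with ELMs, the core of the classifier is a random hidden layer whose output weights are solved in closed form, which is what makes training so fast. A minimal NumPy sketch follows; per-face feature extraction and the graph-cut refinement are outside its scope.

```python
import numpy as np

class ELM:
    """Basic Extreme Learning Machine: random hidden layer,
    output weights obtained by a single least-squares solve."""
    def __init__(self, n_hidden=512, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        y = np.asarray(y)
        n_cls = y.max() + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                   # random projection
        T = np.eye(n_cls)[y]                               # one-hot targets
        self.beta = np.linalg.lstsq(H, T, rcond=None)[0]   # closed-form weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)
```

The single least-squares solve replaces iterative backpropagation, which is the source of the two-orders-of-magnitude training speedup described above.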

11.
12.
Synthesizing facial wrinkles has been tackled either through a long process of manual sculpting on 3D models, or using automatic methods that do not allow for user interaction or artistic expression. In this paper, we propose a method that accepts interactive sketchy drawings depicting wrinkle patterns and synthesizes realistic-looking wrinkles on faces. The method inherits the simplicity of sketching, making it possible for artists as well as novice users to generate realistic facial detail very efficiently, allowing fast previews for physical makeup, or aging simulations for entertainment and professional applications. All strokes are used to infer the wrinkles, retaining the expressiveness of the sketches and the realism of the final result at the same time. This is achieved by designing novel multi-scale statistics tailored to the wrinkle geometry and coupled to the sketch interpretation method. The statistics capture the cross-sectional profiles of wrinkles at different scales and parts of a face. The strokes are augmented with statistics extracted from given example face models and applied to an input face model interactively. The interface gives the user control over the shapes and scales of wrinkles via sketching, while automatically adding the extra details required for realism.

13.
Feature learning for 3D shapes is challenging due to the lack of a natural parametrization for 3D surface models. We adopt the multi-view depth image representation and propose the Multi-View Deep Extreme Learning Machine (MVD-ELM) to achieve fast, high-quality projective feature learning for 3D shapes. In contrast to existing multi-view learning approaches, our method ensures that the feature maps learned for different views are mutually dependent via shared weights, and that in each layer their unprojections together form a valid 3D reconstruction of the input shape through the use of normalized convolution kernels. This leads to more accurate 3D feature learning, as shown by encouraging results in several applications. Moreover, the 3D reconstruction property enables clear visualization of the learned features, which further demonstrates their meaningfulness.

14.
We present a general method for transferring skeletons and skinning weights between characters with distinct mesh topologies. Our pipeline takes as inputs a source character rig (consisting of a mesh, a transformation hierarchy of joints, and skinning weights) and a target character mesh. From these inputs, we compute joint locations and orientations that embed the source skeleton in the target mesh, as well as skinning weights to bind the target geometry to the new skeleton. Our method consists of two key steps. We first compute the geometric correspondence between source and target meshes using a semi-automatic method relying on a set of markers. The resulting geometric correspondence is then used to formulate attribute transfer as an energy minimization and filtering problem. We demonstrate our approach on a variety of source and target bipedal characters, varying in mesh topology and morphology. Several examples demonstrate that the target characters behave well when animated with either forward or inverse kinematics. Via these examples, we show that our method preserves subtle artistic variations; spatial relationships between geometry and joints, as well as skinning weight details, are accurately maintained. Our proposed pipeline opens up many exciting possibilities to quickly animate novel characters by reusing existing production assets.

15.
This paper presents a method that can convert a given 3D mesh into a flat-foldable model consisting of rigid panels. Prior work proposed a method to assist the manual design of a single component of such a flat-foldable model, consisting of vertically connected side panels as well as horizontal top and bottom panels. Our method semi-automatically generates a more complicated model that approximates the input mesh with multiple convex components. The user specifies the folding direction of each convex component and the fidelity of the shape approximation. Given these inputs, our method optimizes the shapes and positions of the panels of each convex component in order to make the whole model flat-foldable. The user can check a folding animation of the output model. We demonstrate the effectiveness of our method by fabricating physical paper prototypes of flat-foldable models.

16.
Crowded motions refer to multiple objects moving around and interacting, such as crowds of pedestrians. We capture crowded scenes using a depth scanner at video frame rates; thus, our input is a set of depth frames which sample the scene over time. Processing such data is challenging, as it is highly unorganized, with large spatio-temporal holes due to many occlusions. As no correspondence is given, locally tracking 3D points across frames is hard due to noise and missing regions. Furthermore, global segmentation and motion completion in the presence of large occlusions is ambiguous and hard to predict. Our algorithm utilizes the Gestalt principles of common fate and good continuity to compute motion tracking and completion, respectively. Our technique does not assume any pre-given markers or motion template priors. Our key idea is to reduce the motion completion problem to a 1D curve fitting and matching problem, which can be solved efficiently using a global optimization scheme. We demonstrate our segmentation and completion method on a variety of synthetic and real-world crowded scanned scenes.
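The reduction to 1D curve fitting can be sketched as follows: each coordinate of a partially observed trajectory is fit with a low-degree polynomial and evaluated at the missing frames. The paper's global matching and optimization across tracks is not shown; this is only the per-track completion idea under that simplifying assumption.

```python
import numpy as np

def complete_track(times, positions, all_times, degree=3):
    """times: (m,) frames where the point was observed; positions: (m, 3);
    all_times: (n,) frames to fill, including the occluded ones."""
    filled = np.empty((len(all_times), 3))
    for k in range(3):                       # fit each coordinate as a 1D curve
        coeffs = np.polyfit(times, positions[:, k], degree)
        filled[:, k] = np.polyval(coeffs, all_times)
    return filled
```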

17.
In this paper, we propose a method to maintain the temporal coherence of stylized feature lines extracted from 3D models while preserving an artistically intended stylization provided by the user. We formally define the problem of combining spatio-temporal continuity and artistic intention as a weighted energy minimization problem of competing constraints. The proposed method updates the style properties to provide real-time smooth transitions from the current to the goal stylization by ensuring first- and second-order temporal continuity, as well as spatial continuity along each stroke. The proposed weighting scheme guarantees that the stylization of strokes maintains motion coherence with respect to the apparent motion of the underlying surface in consecutive frames. It emphasizes temporal continuity for small apparent motions, where the human visual system is able to keep track of the scene, and prioritizes the artistic intention for large apparent motions, where temporal coherence is not expected. The proposed method produces temporally coherent and visually pleasing animations without the flickering artifacts of previous methods, while also maintaining the artistic intention of a goal stylization provided by the user.
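A toy version of the weighting idea: blend between the current and goal style with a weight that grows with apparent motion, so small motions favor temporal continuity and large motions favor the artistic intent. The exponential falloff and the `tau` parameter below are illustrative choices, not the paper's exact scheme.

```python
import numpy as np

def blend_style(current, goal, apparent_motion, tau=5.0):
    """Small apparent motion keeps temporal continuity (stay near the
    current style); large motion jumps toward the artistic intent."""
    w = 1.0 - np.exp(-apparent_motion / tau)   # w -> 1 as motion grows
    return (1.0 - w) * np.asarray(current) + w * np.asarray(goal)
```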

18.
Estimating 3D body shapes from dressed-human photos is an important but challenging problem in virtual fitting. We propose a novel automatic framework to efficiently estimate 3D body shapes under clothes. We construct a database of 3D naked and dressed body pairs, based on which we learn how to automatically predict the 3D positions of body landmarks (which further constrain a parametric human body model) from dressed-human silhouettes. Critical vertices are selected on 3D registered human bodies as landmarks to represent body shapes, avoiding the time-consuming vertex correspondence search required for parametric body reconstruction. Our method can estimate 3D body shapes from dressed-human silhouettes within 4 seconds, while the fastest previously reported method needs 1 minute. In addition, our estimation error is within the size tolerance of the clothing industry. We dress 6042 naked bodies with 3 sets of common clothes using a physically based cloth simulation technique. To the best of our knowledge, we are the first to construct such a database containing 3D naked and dressed body pairs, and our database may contribute to the areas of human body shape estimation and cloth simulation.

19.
We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker Sparse Inertial Poser (SIP) enables motion capture using only 6 sensors (attached to the wrists, lower legs, back and head) and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall.
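The joint optimization can be summarized as minimizing orientation and acceleration residuals over all frames of a sequence. In the hedged sketch below, `pred_R` and `pred_a` stand in for the statistical body model's forward predictions of sensor orientation and acceleration; the actual SIP objective also incorporates anthropometric constraints and priors.

```python
import numpy as np

def sip_objective(poses, meas_R, meas_a, pred_R, pred_a, w_acc=0.1):
    """poses: per-frame body model parameters; meas_R / meas_a: measured
    sensor orientations and accelerations; pred_R / pred_a: hypothetical
    forward predictions from the statistical body model."""
    e_ori = sum(np.linalg.norm(pred_R(p) - R) ** 2
                for p, R in zip(poses, meas_R))        # orientation residuals
    e_acc = sum(np.linalg.norm(pred_a(poses, t) - a) ** 2
                for t, a in enumerate(meas_a))         # acceleration residuals
    return e_ori + w_acc * e_acc
```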

20.
3D garment capture is an important component of various applications such as free-viewpoint video, virtual avatars, online shopping, and virtual cloth fitting. Due to the complexity of the deformations, capturing 3D garment shapes requires controlled and specialized setups; a viable alternative is image-based garment capture. Capturing 3D garment shapes from a single image, however, is a challenging problem, and current solutions come with assumptions on the lighting, camera calibration, complexity of the human or mannequin poses considered, and, more importantly, a stable physical state for the garment and the underlying human body. In addition, most existing works require manual interaction and exhibit high run-times. We propose a new technique that overcomes these limitations, making garment shape estimation from an image a practical approach for dynamic garment capture. Starting from synthetic garment shape data generated through physically based simulations from various human bodies in complex poses obtained through Mocap sequences, and rendered under varying camera positions and lighting conditions, our novel method learns a mapping from rendered garment images to the underlying 3D garment model. This is achieved by training Convolutional Neural Networks (CNNs) to estimate 3D vertex displacements from a template mesh with a specialized loss function. We illustrate that this technique is able to recover the global shape of dynamic 3D garments from a single image under varying factors such as challenging human poses, self-occlusions, various camera poses, and lighting conditions, at interactive rates. We show further improvement if more than one view is integrated. Additionally, we show applications of our method to videos.
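A minimal PyTorch sketch of the regression setup: a small CNN maps an image to per-vertex displacements of a template mesh, trained here with a plain MSE loss. The architecture, sizes, and loss are illustrative; the paper uses a specialized loss function and a more elaborate network.

```python
import torch
import torch.nn as nn

class GarmentNet(nn.Module):
    """Regress per-vertex displacements of a template garment mesh
    from a single RGB image (illustrative architecture)."""
    def __init__(self, n_vertices):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.head = nn.Linear(64 * 4 * 4, n_vertices * 3)

    def forward(self, img):                       # img: (B, 3, H, W)
        disp = self.head(self.features(img))
        return disp.view(img.shape[0], -1, 3)     # (B, n_vertices, 3)

# Training step with a plain MSE loss standing in for the paper's
# specialized loss:
#   loss = nn.functional.mse_loss(net(images), gt_displacements)
```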
