Similar Documents
A total of 20 similar documents were found.
1.
Error-based segmentation of cloud data for direct rapid prototyping
This paper proposes an error-based segmentation approach for direct rapid prototyping (RP) of random cloud data. The objective is to fully integrate reverse engineering and RP for rapid product development. By constructing an intermediate point-based curve model (IPCM), a layer-based RP model is generated directly from the cloud data and serves as the input to the RP machine for fabrication. In this process, neither a surface model nor an STL file is generated. This is accomplished in three steps. First, the cloud data is adaptively subdivided into a set of regions according to a given subdivision error, and the data in each region is compressed by keeping the feature points (FPs) within the user-defined shape tolerance using a digital-image-based reduction method. Second, based on the FPs of each region, an IPCM is constructed, and RP layer contours are then directly extracted from these models. Finally, the RP layer contours are faired with a discrete-curvature-based fairing method and subsequently closed to generate the final layer-based RP model. This RP model can be submitted directly to the RP machine for prototype manufacturing. Two case studies are presented to illustrate the efficacy of the approach.

2.
3D laser scanning point clouds provide new data support for the reconstruction of cultural relic models, but the massive point clouds obtained by scanning contain a large amount of redundant data, which greatly complicates modeling. To address the problems of overly dense scan points and redundancy, a point cloud compression algorithm for cultural relics based on adaptive layering is proposed. The basic idea is as follows: first, the original point cloud is adaptively sliced into layers by an adaptive layering method based on the chamfer distance transform; then, the chord-height difference is used as the criterion for identifying feature points when removing redundant data, and an improved chord-height difference method is applied to compress the points of each layer, retaining the feature points that contribute most to the model's shape. Experimental results show that controlling the layer thickness by shape error reduces the number of layers in flat regions, improving efficiency, without losing important features in complex regions because of overly thick layers, and that the improved chord-height difference method preserves high-curvature features while avoiding holes in flat regions, thereby guaranteeing the accuracy of model reconstruction.
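A minimal Python sketch of the chord-height test described above, assuming each layer has already been extracted as an ordered 2D polyline stored in a NumPy array; the tolerance and the anchor-advancing strategy are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def chord_height_reduce(layer_pts, tol):
        """Keep points whose chord height (distance to the chord from the last
        kept feature point to the next point) exceeds tol; drop the rest."""
        kept = [0]                       # always keep the first point
        anchor = 0
        for i in range(1, len(layer_pts) - 1):
            a, b, p = layer_pts[anchor], layer_pts[i + 1], layer_pts[i]
            chord, v = b - a, p - a
            # perpendicular distance from p to the chord a-b
            h = abs(chord[0] * v[1] - chord[1] * v[0]) / (np.linalg.norm(chord) + 1e-12)
            if h > tol:                  # large chord height -> feature point
                kept.append(i)
                anchor = i               # restart the chord at the new feature
        kept.append(len(layer_pts) - 1)  # always keep the last point
        return layer_pts[kept]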

3.
Rapid prototyping (RP) provides an effective method for model verification and product development collaboration. A challenging research issue in RP is how to shorten the build time and improve the surface accuracy, especially for complex product models. In this paper, systematic adaptive algorithms and strategies have been developed to address the challenge. A slicing algorithm has first been developed for directly slicing a Computer-Aided Design (CAD) model into a number of RP layers. Closed Non-Uniform Rational B-Spline (NURBS) curves have been introduced to represent the contours of the layers to maintain the surface accuracy of the CAD model. Based on this, a mixed and adaptive tool-path generation algorithm, aimed at optimizing both surface quality and fabrication efficiency in RP, has then been developed. The algorithm generates contour tool-paths for the boundary of each sliced RP layer to reduce the surface errors of the model, and zigzag tool-paths for the internal area of the layer to speed up fabrication. In addition, based on the developed build-time analysis models, adaptive strategies have been devised to generate variable speeds for contour tool-paths according to the geometric characteristics of each layer, and to identify the best slope angle for zigzag tool-paths, further reducing build time. Finally, case studies of complex product models are used to validate and showcase the performance of the developed algorithms in terms of processing effectiveness and surface accuracy.
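The paper's slicer works on NURBS contours of the CAD model; as a rough illustration of the adaptive idea only, the sketch below uses the classic cusp-height rule to pick a layer thickness from the local surface slope. The formula and the clamping limits are standard textbook choices, not the authors' algorithm.

    import numpy as np

    def adaptive_thickness(normal_z, cusp_tol, t_min=0.05, t_max=0.3):
        """Cusp-height rule t = cusp_tol / |cos(theta)|, where theta is the angle
        between the surface normal and the build (z) direction; the result is
        clamped to the printable range [t_min, t_max] (values in mm, illustrative)."""
        cos_theta = max(abs(normal_z), 1e-6)   # avoid division by zero on vertical walls
        return float(np.clip(cusp_tol / cos_theta, t_min, t_max))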

4.
This paper introduces a new tolerance-based method to generate the optimum layer setup required to build layered manufacturing (LM) end-user parts with maximized efficiency. To achieve this, the deviation between the final polished LM part geometry and the original design model is formulated and controlled. Minimum build time is then realized by optimizing the thickness and position of each layer with respect to the design and the final polished part geometry so as to minimize the number of layers used. Current LM layer setup methods target the intermediate, or raw, layered model generated directly by an LM machine. Because they do not consider the complete LM build process or the final polished part geometry, the layer setup problem cannot be correctly formulated and solved; overly conservative layer thicknesses are then chosen, causing more layers than necessary to be used and greatly compromising efficiency. To achieve maximized efficiency, this work proposes a method based on error compensation and minimization, and applies it to solving for the optimum layer setup that allows the final polished physical part to meet the user-specified tolerance limit for the design model. Case studies have been performed, and the results validate that the presented method minimizes the number of layers for constructing an LM part while controlling the maximum error for tolerance conformance.

5.
To address the problem that traditional convolutional neural networks (CNNs) cannot process point cloud data directly and first require conversion into multi-view images or voxel grids, which complicates the pipeline and lowers point cloud recognition accuracy, a new point cloud classification and segmentation network, Linked-Spider CNN, is proposed. First, deeper point cloud features are obtained by adding more Spider convolution layers on top of Spider CNN; second, drawing on the idea of residual networks, short connections are added to each Spider convolution layer to form residual blocks; then, the output features of all residual blocks are concatenated and fused into the point cloud feature; finally, three fully connected layers classify the point cloud features, or multiple convolution layers segment them. Comparative experiments with PointNet, PointNet++, and Spider CNN on the ModelNet40 and ShapeNet Parts datasets show that the proposed network improves point cloud classification accuracy and segmentation quality, and exhibits faster convergence and stronger robustness.

6.
In recent years, 3D modeling from 2D images has developed rapidly. For human body modeling, however, the 2D body images captured by cameras contain abundant texture such as clothing and hair, whereas applications such as virtual try-on require removing textures such as clothing wrinkles from the body surface; collecting nude body data also violates user privacy. A new modeling method from 2D point cloud images to 3D human body models is therefore proposed. Unlike collecting 2D image datasets with cameras or other auxiliary devices, the input of this algorithm is a 2D point cloud rendering drawn in vertex mode from a 3D human body point cloud model. The main work is to build a dataset consisting of 2D point cloud images and the corresponding black-and-white binary body images, and to train a generative adversarial network that maps the former to the latter. This model converts a 2D point cloud image into the corresponding binary image, which is then fed into a trained convolutional neural network to evaluate the quality of the 2D-image-to-3D-human-model reconstruction. Since reconstructing a complete 3D human mesh from incomplete 3D point cloud data is challenging, damaged and incomplete 2D point clouds are simulated so that the algorithm can also handle incomplete 2D point cloud images. Extensive experiments show that the reconstructed 3D human models are visually realistic; to quantify the reconstruction accuracy, the waist circumference, a representative body feature, is used for error evaluation. To increase the diversity of body shapes in the 3D human model library, a convenient 3D human model data augmentation technique is also introduced. The experimental results show that the algorithm can quickly create the corresponding digital human model from a single 2D point cloud image.

7.
3D point cloud data are usually unordered. In point cloud processing, deep learning models typically handle this permutation invariance with symmetric operations such as max pooling. However, max pooling destroys the information structure of the point cloud, making it difficult for local and global information to interact, and it over-compresses the point cloud information, so the resulting features describe local details poorly. To address these problems, the AttentionPointNet architecture is proposed. The network uses an attention mechanism to let each point exchange features with the rest of the point cloud, integrating local and global information. To reduce the information loss caused by max pooling, a sparse convolution method is proposed to replace the pooling operation, extracting global information with large-stride sparse convolutions. On the ModelNet40 dataset, AttentionPointNet achieves an accuracy of 87.2%, and the variant implemented entirely with convolution layers and no pooling layer achieves a classification accuracy of 86.2%.
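A toy NumPy sketch of the core idea of replacing a single max-pool with attention-weighted aggregation, so every point contributes to the global descriptor. The dot-product softmax scoring used here is only an assumed stand-in, not the AttentionPointNet implementation.

    import numpy as np

    def attention_aggregate(feats, w_query):
        """feats: (N, C) per-point features; w_query: (C,) learned query vector.
        Returns a (C,) global descriptor as a softmax-weighted sum of all points
        instead of an elementwise max."""
        scores = feats @ w_query                       # (N,) relevance of each point
        scores -= scores.max()                         # numerical stability
        alpha = np.exp(scores) / np.exp(scores).sum()  # attention weights over points
        return alpha @ feats                           # weighted combination, shape (C,)

    # For comparison, the usual symmetric max pooling would be: feats.max(axis=0)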

8.
In this study, a method is given for generating sectional contour curves directly from point cloud data. The method computes contour curves for rapid prototyping model generation via adaptive slicing, data point reduction, and B-spline curve fitting. In this approach, a point cloud data set is first segmented along the component building direction into a number of layers. The points are projected to the mid-plane of each layer to form a two-dimensional (2D) band of scattered points, which is then used to construct a boundary curve. A number of points are picked along the band and a B-spline curve is fitted. Points are then selected on the B-spline curve based on its discrete curvature and used as centers of circles with a user-defined radius to capture pieces of the scattered band. The geometric center of the points lying within each circle is treated as a control point for a B-spline fit that represents the boundary contour curve. The advantages of this method are its simplicity and its insensitivity to common small inaccuracies. Two experimental results are included to demonstrate the effectiveness and applicability of the proposed method.
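A small sketch of the final fitting step using SciPy's parametric spline routines; the circle-capture averaging that produces the control points is collapsed here into a smoothing parameter, so treat this as an assumption-laden stand-in for the paper's procedure rather than a faithful implementation.

    import numpy as np
    from scipy.interpolate import splprep, splev

    def fit_contour(band_xy, smooth=0.5, n_samples=200):
        """band_xy: (N, 2) ordered 2D points projected onto the layer mid-plane.
        Fits a closed cubic B-spline and returns n_samples points on the contour."""
        x, y = band_xy[:, 0], band_xy[:, 1]
        tck, _ = splprep([x, y], s=smooth, per=True)   # per=True closes the curve
        u = np.linspace(0.0, 1.0, n_samples)
        cx, cy = splev(u, tck)
        return np.column_stack([cx, cy])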

9.
We introduce a novel weight pruning methodology for MLP classifiers that can be used for model and/or feature selection purposes. The main concept underlying the proposed method is the MAXCORE principle, which is based on the observation that relevant synaptic weights tend to generate higher correlations between error signals associated with the neurons of a given layer and the error signals propagated back to the previous layer. Nonrelevant (i.e. prunable) weights tend to generate smaller correlations. Using the MAXCORE as a guiding principle, we perform a cross-correlation analysis of the error signals at successive layers. Weights for which the cross-correlations are smaller than a user-defined error tolerance are gradually discarded. Computer simulations using synthetic and real-world data sets show that the proposed method performs consistently better than standard pruning techniques, with much lower computational costs.
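A heavily simplified NumPy sketch of the guiding idea only: correlate the error signal of each neuron in a layer with the error propagated back to the previous layer, and flag weights whose cross-correlation magnitude stays below a tolerance as prunable. The array shapes and the normalization are assumptions; the actual MAXCORE procedure is defined in the paper.

    import numpy as np

    def prunable_mask(delta_next, delta_prev, tol=0.05):
        """delta_next: (T, n_next) error signals of layer l over T samples.
        delta_prev: (T, n_prev) error signals propagated back to layer l-1.
        Returns a boolean (n_next, n_prev) mask marking low-correlation weights."""
        dn = delta_next - delta_next.mean(axis=0)
        dp = delta_prev - delta_prev.mean(axis=0)
        cov = dn.T @ dp / len(dn)                                    # (n_next, n_prev)
        corr = cov / (dn.std(axis=0)[:, None] * dp.std(axis=0)[None, :] + 1e-12)
        return np.abs(corr) < tol          # True -> candidate for gradual pruning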

10.
杨飞  王欢  金忠 《机器人》2018,40(6):803-816
To combine multi-scale image features with the spatial structure of point clouds in road detection, so that the detection algorithm can effectively exclude disturbances such as shadows and lighting in road scenes, this paper proposes a road segmentation model that fuses images and point clouds through a fused hierarchical conditional random field (CRF). First, the Meanshift algorithm generates superpixel segmentations at multiple scales, and a multi-scale hierarchical CRF is built on the image; the point cloud is projected onto the image plane, and a multi-scale hierarchical CRF is built on the point cloud; connections between the pixel layers and the point cloud layers of the CRFs then form the multi-scale fusion model. Next, for each image layer and each point cloud layer of the fusion model, image or point cloud features at the corresponding scale are extracted; a classifier is trained for each layer with the gradient boosting tree algorithm on the extracted features, and each layer's classifier provides that layer's data-term cost. Finally, the fusion model is jointly optimized with the α-expansion algorithm. Experimental results on the KITTI Road dataset show that the method achieves good road detection performance.

11.
Objective: When pointwise convolution networks are used for point cloud classification and segmentation, the pointwise convolution operator can process point cloud data directly and extract local feature vectors point by point, avoiding the dimension explosion and information loss caused by structuring the point cloud. However, to keep the point cloud data structure consistent, neither the pointwise convolution operator nor the network itself has a structure that describes the global features of the point cloud. The pointwise convolution network is therefore extended, and the global features of the extended model are an important basis for accurate classification and segmentation. Method: A center-point radiation model is constructed to describe the geometric constraint of each point relative to the whole cloud, and it is introduced into the feature concatenation stage of the pointwise convolution network to extend the feature vector, giving the network a complete local-global feature description for point cloud classification and segmentation. First, the point cloud is viewed as a set of points radiated from a center point onto the object surface in certain directions and at certain distances; the magnitude of the radiation vector from the center to each point determines the surface on which the point lies and its closeness to the center, while its direction describes the enclosing direction of the point around the center and the ray on which it lies. The center point is obtained from the point coordinates, and the radiation vectors are computed point by point to build the center-point radiation model that describes the global feature. Then, the point coordinates are used to retrieve point attributes and determine the neighborhood that takes part in the convolution at a given point; the pointwise convolution operator traverses all points and outputs pointwise local features, and multiple pointwise convolution layers yield local feature descriptions at different depths. Finally, the global features of the center-point radiation model and the local features of the pointwise convolutions are concatenated to complete the feature extension and obtain the extended pointwise convolution network. The concatenated local-global features are fed into fully connected layers for class label prediction and into pointwise convolution layers for per-point label prediction. Results: Experiments on the ModelNet40 and S3DIS (Stanford large-scale 3D indoor spaces dataset) datasets verify the classification and segmentation performance of the model. In the classification experiments on ModelNet40, compared with the pointwise convolution network, the extended model improves overall accuracy and mean per-class accuracy by 1.8% and 3.5%, respectively; in the segmentation experiments on S3DIS, the extended model improves overall segmentation accuracy and mean per-class accuracy by 0.7% and 2.2%, respectively. Conclusion: The introduced center-point radiation model effectively captures the global features of point cloud data, and the extended pointwise convolution network achieves better classification and segmentation results.
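A minimal NumPy sketch of the global feature described in the Method part: take the centroid as the center point, split each radiation vector into a unit direction and a distance, and concatenate them with the existing pointwise-convolution features. The exact normalization is an assumption.

    import numpy as np

    def radiation_features(xyz, local_feats):
        """xyz: (N, 3) point coordinates; local_feats: (N, C) pointwise-conv features.
        Appends the center-point radiation vector (unit direction + distance) to each
        point's local feature, giving an (N, C + 4) local-global description."""
        center = xyz.mean(axis=0)                    # center point of the cloud
        rays = xyz - center                          # radiation vectors
        dist = np.linalg.norm(rays, axis=1, keepdims=True)
        direction = rays / (dist + 1e-12)            # enclosing direction of each point
        return np.concatenate([local_feats, direction, dist], axis=1)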

12.
Study of lunar regolith thickness using a microwave radiometer
This paper describes a method that uses the dyadic Green's function and the fluctuation-dissipation theorem to compute the radiative brightness temperature of planar layered media, and that processes simulated multi-channel radiometer measurements with the least-squares method to obtain the thicknesses of the layers. The method is applied to the inversion of lunar regolith thickness: assuming the regolith follows a planar layered structural model, inversion results for its thickness are obtained, and the sources of inversion error are analyzed.

13.
Several attempts have been made to grasp three-dimensional (3D) ground shape from a 3D point cloud generated by aerial vehicles, which helps rapid situation recognition. However, identifying objects on the ground from a 3D point cloud, which consists of 3D coordinates and color information, is not straightforward due to the gap between the low-level point information (coordinates and colors) and high-level context information (objects). In this paper, we propose a ground object recognition and segmentation method for a geo-referenced point cloud. We rely on existing tools to generate such a point cloud from aerial images, and our method tries to give semantics to each set of clustered points. First, points that correspond to the ground surface are removed using elevation data from the Geographical Survey Institute. Next, we apply inter-point distance-based clustering and color-based clustering. Then, clusters that share some regions are merged so that a single object is correctly identified as one cluster. We have evaluated our method in several experiments in real fields and confirmed that it can remove the ground surface within 20 cm error and can recognize most of the objects.
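A compact sketch of this pipeline under simplifying assumptions: the Geographical Survey Institute elevation lookup is replaced by a per-point ground elevation supplied by the caller, scikit-learn's DBSCAN stands in for both the inter-point distance-based and the color-based clustering, and the final merge of overlapping clusters is omitted.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def segment_objects(xyz, rgb, ground_z, height_tol=0.2,
                        eps_xyz=0.5, eps_rgb=12.0, min_pts=30):
        """xyz: (N, 3) geo-referenced points; rgb: (N, 3) colors;
        ground_z: (N,) ground elevation sampled at each point (e.g. from a DEM).
        Returns per-point cluster labels (-1 = ground surface or noise)."""
        above = xyz[:, 2] - ground_z > height_tol          # remove the ground surface
        labels = np.full(len(xyz), -1, dtype=int)
        spatial = DBSCAN(eps=eps_xyz, min_samples=min_pts).fit_predict(xyz[above])
        out, next_id = np.full(spatial.shape, -1, dtype=int), 0
        for cid in set(spatial) - {-1}:                    # refine each spatial cluster by color
            idx = np.where(spatial == cid)[0]
            sub = DBSCAN(eps=eps_rgb, min_samples=min_pts).fit_predict(rgb[above][idx])
            for sid in set(sub) - {-1}:
                out[idx[sub == sid]] = next_id
                next_id += 1
        labels[above] = out
        return labels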

14.
Point cloud segmentation using a minimum spanning tree
Point cloud segmentation is a key underlying algorithm for point cloud parameterization, shape recognition, and editing and modeling. A point cloud segmentation algorithm based on the minimum spanning tree is proposed, consisting of four steps: generating a band-shaped segmentation boundary, region growing, splitting the band-shaped boundary, and generating the final regions. The algorithm uses a Snake model to extract the segmentation curve and expands it to both sides to form a band-shaped boundary, uses a minimum spanning tree to perform region growing and extract the interior points of each region, and finally splits the band-shaped boundary and merges it with the existing regions to form the final regions. Experimental results show that the algorithm effectively avoids over- and under-segmentation, produces smooth segmentation boundaries, and is more efficient than Level Set segmentation.
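A brief sketch of the MST-based region-growing step, using SciPy's minimum spanning tree over a k-nearest-neighbor distance graph and cutting edges much longer than average to form regions; the Snake-based band boundary and the boundary splitting are not reproduced, and the cut threshold is an assumption.

    import numpy as np
    from sklearn.neighbors import kneighbors_graph
    from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

    def mst_regions(xyz, k=8, cut_factor=3.0):
        """Grow regions on the interior points: build an MST over the k-NN graph,
        remove edges longer than cut_factor times the mean edge length, and
        return the resulting connected components as regions."""
        knn = kneighbors_graph(xyz, n_neighbors=k, mode='distance')
        mst = minimum_spanning_tree(knn).tocsr()
        mean_edge = mst.data.mean()
        mst.data[mst.data > cut_factor * mean_edge] = 0    # cut unusually long edges
        mst.eliminate_zeros()
        n_regions, labels = connected_components(mst, directed=False)
        return n_regions, labels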

15.
A multi-spectral non-local (MSN) method is developed for advanced retrieval of boundary layer cloud properties from remote sensing data, as an alternative to the independent pixel approximation (IPA) method. The non-local method uses data at both the target pixel and neighboring pixels to retrieve cloud properties such as pixel-averaged cloud optical thickness and effective droplet radius. Radiance data to be observed from space were simulated by a three-dimensional (3D) radiation model and a stochastic boundary layer cloud model with two-dimensional (horizontal and vertical) variability in cloud liquid water and effective radius. An adiabatic assumption is used for each cloud column to model the geometrical thickness and vertical profiles of cloud liquid water content and effective droplet radius, neglecting drizzle and cloud brokenness for simplicity. The dependence of radiative smoothing and roughening on horizontal scale, optical thickness, and single scattering albedo is investigated. Retrieval methods using 250-m horizontal resolution data onboard new-generation satellites are then discussed. The regression model for the MSN method was trained on datasets from numerical simulations. The training was performed with respect to various domain averages of optical thickness and effective radius, because smoothing and roughening effects are strongly dependent on these two variables. Retrieval accuracy is assessed with datasets independent of those used in the training, to gauge the generality of the technique. It is demonstrated that retrieval accuracy of cloud optical thickness, which is often retrieved from single-spectral visible-wavelength data, is improved most by using neighboring-pixel data, next by using multi-spectral data, and ideally by both. When the IPA retrieval method is applied to optical thickness and effective radius, the root-mean-square relative errors can be 15-90%, depending on solar and view directions. In contrast, the MSN method has errors of 4-10%, smaller than IPA by a factor of 2-10. It is also suggested that the accuracy of the MSN method is insensitive to some assumptions in the inhomogeneous cloud input data used to train the regression model.

16.
A theoretical framework for the geometric analysis of point clouds is established: the geometric differential quantities of the latent curve underlying a point cloud, including the Frenet frame, curvature, and torsion, are defined and computed. On this basis, a new method for matching spatial curves in point clouds is proposed. The differential quantities are computed directly on the point cloud to obtain feature information of the corresponding curves, from which a global coarse matching scheme is constructed, and a fine matching optimization model based on spatial dynamics is further established. Numerical experiments show that the differential computation and the matching method work well on noisy point cloud data and achieve high-precision matching of spatial curves in point clouds.
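A small NumPy sketch of the discrete differential quantities named above, computed by plain central differences along an ordered point sequence; the noise-robust estimators used in the paper are not reproduced here.

    import numpy as np

    def frenet_quantities(pts):
        """pts: (N, 3) ordered samples of a spatial curve. Returns per-point
        curvature, torsion and the Frenet frame (tangent T, normal N, binormal B)."""
        d1 = np.gradient(pts, axis=0)                 # r'
        d2 = np.gradient(d1, axis=0)                  # r''
        d3 = np.gradient(d2, axis=0)                  # r'''
        cross = np.cross(d1, d2)
        speed = np.linalg.norm(d1, axis=1)
        cross_norm = np.linalg.norm(cross, axis=1)
        curvature = cross_norm / (speed**3 + 1e-12)               # |r' x r''| / |r'|^3
        torsion = np.einsum('ij,ij->i', cross, d3) / (cross_norm**2 + 1e-12)
        T = d1 / (speed[:, None] + 1e-12)             # unit tangent
        B = cross / (cross_norm[:, None] + 1e-12)     # unit binormal
        N = np.cross(B, T)                            # unit normal
        return curvature, torsion, T, N, B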

17.
Non-uniform deformation of an STL model satisfying error criteria
In this research, a method is presented for generating a deformed model satisfying given error criteria from an STL model in a triangular-mesh representation suitable for rapid prototyping (RP) processes. A deformed model is a non-uniformly modified shape derived from a base STL model. When developing a product family with various sizes, such as shoes, prototypes for all sizes sometimes must be made on an RP machine. Although an STL model is generated from a solid model, it is well known that creating a non-uniformly modified solid model from a base solid model is very difficult: gaps generally appear between surfaces after modification, and stitching them is hard. To solve this problem, the authors explored the possibility of generating a deformed STL model directly from a base STL model. This research includes a data structure for modifying the STL model, checking the error of a modified edge against the exact non-uniformly scaled curve, checking the error of a modified facet against the exact non-uniformly scaled surface, and splitting any facet whose error exceeds the allowable tolerance. Using the results of this research, the difficult work of creating solid models to build non-uniformly deformed STL models can be avoided.

18.
Virtual testing of autonomous vehicles has become an important means of evaluating autonomous driving and vehicle-infrastructure cooperation, and simulating 3D LiDAR data is one of the key tasks in such testing. LiDAR point clouds are currently generated mostly with geometric-model methods based on the time-of-flight principle, which have poor real-time performance. Billboards are a common way to model trees in virtual scenes, but since a billboard consists of only two rectangular patches, i.e., eight triangular facets, the 3D point cloud generated directly with the billboard method can hardly reflect the true spatial information of a tree. To address these problems, a fast tree point cloud generation method based on billboard spatial transformation is proposed. Based on the billboard texture image, the 2D planar point distribution of the tree is obtained from the texture transparency; after extracting the contour of the 2D tree points, rotation, random offset, and scale transformations are applied in combination with prior knowledge of tree structure, so that 3D tree point cloud data are obtained with fewer triangular facets and lower computational cost. A spatial-histogram similarity measure for 3D point clouds is also proposed: the point cloud space is quantized into several subspaces to obtain projected spatial histograms, the Bhattacharyya coefficient is used to compute the similarity of the projected spatial histograms, and the weighted similarity of the projected histograms is taken as the point cloud similarity score. Experimental results show that for three tree species, including spruce, the average similarity between the point clouds generated by the billboard spatial transformation method and those generated by the geometric-model method is above 90%, while the generation time is only 1% of that of the geometric-model method; the method is therefore fast and accurate and meets the performance requirements of virtual testing for autonomous vehicles.
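A minimal sketch of the similarity measure described at the end of the abstract: quantize each cloud into a shared 3D grid, normalize the occupancy histograms, and compare them with the Bhattacharyya coefficient. Using a single unweighted 3D histogram is a simplification; the paper weights several projected spatial histograms.

    import numpy as np

    def bhattacharyya_similarity(cloud_a, cloud_b, bins=16):
        """Quantize two (N, 3) point clouds into the same 3D grid and return the
        Bhattacharyya coefficient of their normalized occupancy histograms."""
        both = np.vstack([cloud_a, cloud_b])
        edges = [np.linspace(both[:, d].min(), both[:, d].max(), bins + 1)
                 for d in range(3)]
        h_a, _ = np.histogramdd(cloud_a, bins=edges)
        h_b, _ = np.histogramdd(cloud_b, bins=edges)
        p, q = h_a / h_a.sum(), h_b / h_b.sum()
        return float(np.sum(np.sqrt(p * q)))   # 1.0 means identical distributions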

19.
Objective: Current semantic segmentation methods for large-scale 3D point clouds generally cut the point cloud into blocks before processing. In practice, however, the geometric features at the cutting boundaries are easily destroyed, producing obvious boundary artifacts in the segmentation results. An efficient deep network that takes the raw point cloud as input is therefore urgently needed for point cloud semantic segmentation. Method: To solve this problem, a point cloud semantic segmentation method based on multi-feature fusion and residual optimization is proposed. The network extracts the geometric structure features and semantic features of each point with a multi-feature extraction module and obtains a feature set by weighting these features. On this basis, an attention mechanism is introduced to refine the feature set, and a feature aggregation module is built to aggregate the most discriminative features in the point cloud. Finally, residual blocks are added to the feature aggregation module to improve network training. The output of the network is the confidence of each point for every class in the dataset. Results: The proposed residual network is compared with mainstream algorithms in segmentation accuracy on two datasets, S3DIS (Stanford Large-scale 3D Indoor Spaces Dataset) and the outdoor point cloud segmentation dataset Semantic3D. On S3DIS it achieves high overall accuracy and mean accuracy of 87.2% and 81.7%, respectively. On Semantic3D it achieves high overall accuracy and mean IoU of 93.5% and 74.0%, respectively, which are 1.6% and 3.2% higher than GACNet (graph attention convolution network). Conclusion: The experimental results verify that, applied to large-scale point cloud semantic segmentation, the proposed residual-optimized network alleviates the vanishing-gradient and overfitting problems of deep feature extraction while maintaining good segmentation performance.

20.
In real scanning environments, occlusion or improper operation by technicians leaves the collected point cloud models with incomplete shapes, which seriously affects subsequent applications. A 3D point cloud shape completion GAN is therefore proposed to complete the shape of point cloud models. Its point cloud reconstruction part combines the T-Net structure used for data alignment in PointNet with a 3D point cloud AutoEncoder to predict and fill in the missing data, while the discriminator uses the Encoder part of the 3D point cloud AutoEncoder to distinguish completed 3D point clouds from real ones. The network is trained on the ShapeNet dataset, and the trained model is validated and compared qualitatively with other baseline methods. The experimental results show that the 3D point cloud shape completion GAN can complete point cloud models with missing data into full 3D point clouds. On the three ShapeNet subsets chair, table, and bed, the F1 score of the proposed method is 3.0%, 3.3%, and 3.1% higher, respectively, than the method based on the 3D point cloud AutoEncoder, and 9.9%, 5.8%, and 4.3% higher than the voxel-based 3D-EPN method.
