Similar Documents
20 similar documents found (search time: 109 ms)
1.
Correction and Optimization of Slice Contour Data from STL Models   Total citations: 8 (self: 0, others: 8)
Thanks to its simplicity and generality, the STL file format has long been the de facto standard in rapid prototyping. Its inherent defects, however, leave the contour data obtained after slicing with large amounts of redundancy and even errors. This paper presents an effective repair algorithm for unclosed slice contours and, based on an analysis of the redundant data in the contour information, proposes an algorithm for filtering it out. The algorithm is simple and efficient; it improves the efficiency of subsequent data processing, the machining quality of the formed part, and the part's forming performance.
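The redundant-point filtering can be sketched as follows — an illustrative stand-in, not the paper's algorithm — by dropping contour points that are nearly collinear with their neighbors via a cross-product test (the tolerance `angle_tol` is an invented parameter):

```python
def filter_redundant(points, angle_tol=1e-3):
    """Remove contour points that are (nearly) collinear with their
    neighbors, keeping only vertices where the contour changes direction."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        # cross product of the two edge vectors; ~0 means collinear
        ax, ay = cur[0] - prev[0], cur[1] - prev[1]
        bx, by = nxt[0] - cur[0], nxt[1] - cur[1]
        if abs(ax * by - ay * bx) > angle_tol:
            kept.append(cur)
    kept.append(points[-1])
    return kept
```

A straight run of points collapses to its endpoints, while genuine corners survive.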

2.
An Improved Fast Slicing Algorithm for STL Files   Total citations: 3 (self: 0, others: 3)
An efficient slicing algorithm for STL models is the prerequisite and foundation of rapid prototyping. Building on the directed-weighted-graph slicing algorithm, this paper proposes a fast slicing algorithm for STL models that removes the time-consuming construction of the directed weighted graph and post-processes the slice data to eliminate redundant points. Extensive experiments and data show that the new algorithm is highly efficient.
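The core of any STL slicer is intersecting individual facets with the cutting plane. A minimal sketch, assuming a horizontal slicing plane z = const (this illustrates the basic operation, not the paper's graph-free algorithm itself):

```python
def slice_triangle(tri, z):
    """Intersect one triangle (three (x, y, z) vertices) with the plane
    z = const; return the intersection segment's endpoints, or None."""
    pts = []
    for (x1, y1, z1), (x2, y2, z2) in ((tri[0], tri[1]),
                                       (tri[1], tri[2]),
                                       (tri[2], tri[0])):
        if (z1 - z) * (z2 - z) < 0:           # edge crosses the plane
            t = (z - z1) / (z2 - z1)          # linear interpolation factor
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(pts) if len(pts) == 2 else None
```

Collecting such segments over all facets and chaining them end to end yields the layer contour.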

3.
段黎明  宋军  聂璇 《计算机科学》2008,35(1):263-265
Starting from raw industrial CT slice data, vectorization software is used to extract the workpiece's closed contour points, which are then processed through contour registration, data reduction, triangular meshing, and end-face treatment, achieving an RP-oriented format conversion of industrial CT slice data. To obtain a better-optimized triangular mesh for the STL file, the paper proposes a data-reduction algorithm that combines average point-spacing reduction with Delaunay-triangulation-based mesh simplification; examples verify the correctness of the method.
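The average point-spacing reduction is not spelled out in the abstract; as a hypothetical minimal version, one can thin an ordered contour so that consecutive kept points respect a spacing threshold:

```python
import math

def reduce_by_spacing(points, d_min):
    """Thin an ordered contour so consecutive kept points are at least
    d_min apart — a minimal stand-in for average point-spacing reduction."""
    kept = [points[0]]
    for p in points[1:]:
        if math.dist(p, kept[-1]) >= d_min:
            kept.append(p)
    return kept
```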

4.
Direct Slicing of 3D CAD Models for Laser Direct Manufacturing and Remanufacturing   Total citations: 2 (self: 0, others: 2)
Based on an analysis of the internal representation of CAD model data on the SolidWorks platform and of methods for extracting its topological and geometric information, direct slicing of 3D CAD models is studied. Through secondary development of SolidWorks, the surface-surface intersection function of the SolidWorks API is called to intersect the model's surfaces with the slicing plane; the resulting intersection curves are joined end to end to form contour paths. A raster fill-scanning algorithm and its implementation are also studied. To make the slice data general-purpose, a file format for recording slice data is designed that describes layer contours with lines, arcs, and circles. The direct slicing approach was not only simulated in software but also applied to direct manufacturing. Comparison of the fabricated specimens against STL-based indirect slicing shows that direct slicing yields better accuracy and surface quality.
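Raster fill scanning of a layer contour can be illustrated with a standard even-odd scanline test (a generic sketch, not the paper's implementation):

```python
def scanline_fill(polygon, y):
    """Return the x-intervals where the horizontal scan line at height y
    lies inside the closed polygon (even-odd rule)."""
    xs = []
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 <= y < y2) or (y2 <= y < y1):   # edge spans the scan line
            xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
    xs.sort()
    return list(zip(xs[0::2], xs[1::2]))       # pair into inside intervals
```

Sweeping y over the layer at the raster pitch produces the fill segments for that layer.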

5.
To improve the accuracy of subsequent cross-section contour reconstruction, a series of algorithms for processing sectional slice data is proposed. First, a point-cloud thinning algorithm refines the slice data using a tracking method similar to moving least squares; the whole process applies no local coordinate transformation to the measured data, and the iteration step is controlled by point-cloud density. After the section slice data are thinned, a doubly-linked-list sorting algorithm orders the thinned points. For feature-point extraction from the measured section data, the advantages of the angle-deviation and chord-height-difference methods are combined, the main factors affecting the extraction results are studied, and a method for removing redundant data and extracting feature points is proposed. The resulting point-cloud data can be grouped well and fitted into appropriate contour feature units.
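The chord-height-difference criterion can be sketched as follows (an illustrative version only; the paper's combined angle-deviation test and its tolerance handling are more involved, and `tol` here is an invented parameter):

```python
import math

def chord_height_features(points, tol):
    """Mark points whose distance to the chord joining their two
    neighbors exceeds tol — a minimal chord-height-difference test."""
    feats = []
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        # perpendicular distance from the middle point to chord P0-P2
        num = abs((x2 - x0) * (y0 - y1) - (x0 - x1) * (y2 - y0))
        den = math.hypot(x2 - x0, y2 - y0)
        if den and num / den > tol:
            feats.append(points[i])
    return feats
```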

6.
A Triangulation Algorithm for the Cut Cross-Sections of STL Models   Total citations: 4 (self: 0, others: 4)
To address the need to triangulate the cut cross-section when splitting an STL model, a Delaunay triangulation algorithm for the cut sections of STL models is proposed. The region enclosed by the section contours is divided into one or more region cells, each of which is Delaunay-triangulated, and triangles are picked according to the STL model standard. The algorithm requires neither complex convex decomposition of the section contours nor conversion of multiple contours into a single one, improving the efficiency of triangulating STL cut sections; it is especially suitable for sections of STL models with complex cavities. Application examples show that the algorithm is correct, effective, and of practical value.
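As a much simpler stand-in for the paper's Delaunay triangulation of section regions, the following ear-clipping sketch shows how a single simple counter-clockwise contour can be triangulated (ear clipping is a deliberately different, simpler technique than Delaunay):

```python
def ear_clip(poly):
    """Triangulate a simple CCW polygon by ear clipping — an illustrative
    stand-in for Delaunay triangulation of a section region."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def inside(p, a, b, c):  # p strictly inside CCW triangle abc
        return cross(a, b, p) > 0 and cross(b, c, p) > 0 and cross(c, a, p) > 0

    verts, tris = list(poly), []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            a, b, c = verts[i - 1], verts[i], verts[(i + 1) % n]
            if cross(a, b, c) <= 0:              # reflex vertex, not an ear
                continue
            if any(inside(p, a, b, c) for p in verts if p not in (a, b, c)):
                continue
            tris.append((a, b, c))               # clip the ear at b
            del verts[i]
            break
    tris.append(tuple(verts))
    return tris
```

A simple n-gon always yields n - 2 triangles.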

7.
An Adaptive Slicing Algorithm for STL Models Based on Feature Facets   Total citations: 1 (self: 1, others: 0)
To obtain the toolpaths required by an asynchronous rapid-prototyping machine for expanded polystyrene (EPS) foam, an adaptive slicing algorithm for stereolithography (STL) models based on feature facets is proposed. The algorithm determines the slicing positions and cutting planes from the model's feature facets along the slicing direction and the minimum machinable thickness; without any coordinate transformation, it can directly obtain the cross-section contours at both ends of a slice along an arbitrary direction. A new method is used to split boundary facets quickly, and sorting the contour information yields point contours, open-loop contours, and closed-loop contours; after triangulation, the closed-loop contours seal the slice end faces. The algorithm was implemented in Visual C++ 6.0 and shown experimentally to run stably and effectively.

8.
To address the errors introduced when converting a CAD model to an STL model, and the difficulty of balancing machining time against surface quality under uniform slicing, an adaptive direct slicing algorithm is proposed. The algorithm calls the slicing function of commercial software to slice the model directly, choosing the layer thickness adaptively: a reference curve describing how the model's contour varies in the vertical direction is first computed, and during slicing the thickness at each position is determined from the tangent of the reference curve at that point. The algorithm avoids the error of approximating the CAD model with triangular facets and, because the layer thickness follows the tangent of the reference curve, requires no trial cuts, improving forming efficiency while preserving the model's surface accuracy.
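The thickness rule can be illustrated as: keep the horizontal stair-step deviation (layer thickness times the reference curve's slope) within a cusp tolerance, clamped to the machine's thickness range. All numeric values below are invented for illustration:

```python
def adaptive_thickness(slope, delta=0.02, t_min=0.05, t_max=0.3):
    """Pick a layer thickness from the reference curve's tangent: keep
    the horizontal deviation t * |slope| within tolerance delta, clamped
    to the machine's min/max layer thickness (illustrative values)."""
    if slope == 0:                 # vertical wall: thickest layer allowed
        return t_max
    return min(t_max, max(t_min, delta / abs(slope)))
```

Steep contour changes thus get thin layers; vertical walls get the maximum thickness.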

9.
杨晟院  陈瑶  易飞  刘新 《软件学报》2017,28(12):3358-3366
As the de facto standard for 3D scan data and rapid prototyping, STL (stereolithography) is widely used in entertainment, manufacturing, the Internet, and other fields. But as 3D models grow more complex and data volumes larger, the difficulty of quickly recovering complete topology from an STL file, together with its large amount of redundant information, constrains further optimization and application of STL mesh models; the STL mesh therefore needs to be reconstructed. For 2-manifold STL triangular surface meshes, this paper proposes a fast mesh reconstruction method. It deletes vertices that become saturated during reconstruction, reducing the number of vertices that must be compared, and exploits the locality of STL file data to make vertex search and comparison more efficient. For non-closed surface meshes, the algorithm improves reconstruction efficiency while also effectively extracting the boundary of the mesh model. In addition, the reconstructed mesh data file requires far less storage and effectively removes redundant data. Experimental results demonstrate the algorithm's efficiency and robustness.
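The heart of STL mesh reconstruction is deduplicating the vertices that raw facets repeat, so that shared-vertex topology becomes explicit. A minimal hash-map sketch (the paper's saturated-vertex deletion and data-locality exploitation are optimizations layered on this basic idea):

```python
def index_mesh(facets):
    """Rebuild an indexed mesh from STL-style facets (each facet is three
    (x, y, z) tuples): deduplicate shared vertices with a hash map so
    shared-vertex adjacency can be recovered."""
    vertex_id, vertices, triangles = {}, [], []
    for facet in facets:
        tri = []
        for v in facet:
            if v not in vertex_id:           # first time this vertex appears
                vertex_id[v] = len(vertices)
                vertices.append(v)
            tri.append(vertex_id[v])
        triangles.append(tuple(tri))
    return vertices, triangles
```

Two facets sharing an edge end up referencing the same vertex indices instead of duplicating coordinates.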

10.
The STL model used in RP software systems suffers from loss of accuracy and large data redundancy, whereas the VRML model offers high modeling accuracy, low redundancy, and suitability for network transmission; enabling RP software to accept and process VRML models is therefore one way to overcome the shortcomings of STL. This paper systematically studies the visualization of VRML models in RP software. It discusses the parsing of VRML files and scene rendering, designs a data structure that accommodates both VRML's referencing mechanism and the requirements of subsequent RP data processing, and presents the handling of nested group nodes together with an OpenGL implementation of the parameterized Transform node. Manual selection of the slicing direction is realized via the OpenGL model-view matrix. Finally, visualization of VRML models is developed and implemented with VC++ and OpenGL.

11.
Research on document image segmentation is of great significance for printing, faxing, and similar data-processing tasks. A new document image segmentation algorithm is proposed. Its segmentation features are based on the differing histogram characteristics of the various image types found in document images; a key element of the algorithm is that a wavelet image is used to enhance the features of the original image, thereby improving accuracy.

12.
Document image classification is an important step in Office Automation, Digital Libraries, and other document image analysis applications. There is great diversity in document image classifiers: they differ in the problems they solve, in the use of training data to construct class models, and in the choice of document features and classification algorithms. We survey this diverse literature using three components: the problem statement, the classifier architecture, and performance evaluation. This brings to light important issues in designing a document classifier, including the definition of document classes, the choice of document features and feature representation, and the choice of classification algorithm and learning mechanism. We emphasize techniques that classify single-page typeset document images without using OCR results. Developing a general, adaptable, high-performance classifier is challenging due to the great variety of documents, the diverse criteria used to define document classes, and the ambiguity that arises due to ill-defined or fuzzy document classes.

13.
A Survey of the Theory and Applications of the k-Nearest-Neighbor Algorithm   Total citations: 2 (self: 0, others: 2)
The k-nearest-neighbor algorithm (kNN) is a very simple classification algorithm consisting of two steps: (1) under a given distance metric, find the k training samples in the search training set closest to the query; (2) assign the query to the class held by the majority of those k neighbors. Its non-parametric nature makes kNN very easy to implement, and its classification error is bounded by twice the Bayes error, so kNN remains one of the most popular choices for pattern classification. Surveying a number of papers based on kNN, this article details the improvement proposed in each and analyzes its experimental results, and reviews the good classification performance kNN has achieved in applications such as face recognition, character recognition, and medical image processing, which points to a promising outlook for the algorithm.
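The two steps can be sketched directly (a textbook kNN, not any specific improved variant from the surveyed papers):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    samples; `train` is a list of (point, label) pairs."""
    # step 1: the k training samples nearest to the query
    neighbors = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    # step 2: majority vote over the neighbors' labels
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```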

14.
Learning middle-level image representations is very important for the computer vision community, especially for scene classification tasks. Middle-level image representations currently available are not sparse enough to keep training and testing times compatible with the growing number of classes that users want to recognize. In this work, we propose a middle-level image representation based on patterns that are heavily shared among different classes, which reduces both training and test time. The proposed learning algorithm first finds some class-specific patterns and then uses lasso regularization to select the most discriminative patterns shared among different classes. Experimental results on widely used scene classification benchmarks (15 Scenes, MIT-Indoor 67, SUN 397) show that only a few patterns are needed to achieve remarkable performance with reduced computation time.

15.
Several methods for segmentation of document images (maps, drawings, etc.) are explored. The segmentation operation is posed as a statistical classification task with two pattern classes: print and background. A number of classification strategies are available. All require some prior information about the distribution of gray levels for the two classes. Training (either supervised or unsupervised) is employed to form these initial density estimates. Automatic updating of the class-conditional densities is performed within subregions in the image to adapt these global density estimates to the local image area. After local class-conditional densities have been obtained, each pixel is classified within the window using several techniques: a noncontextual Bayes classifier, Besag's classifier, relaxation, Owen and Switzer's classifier, and Haslett's classifier. Four test images were processed. In two of these, the relaxation method performed best, and in the other two, the noncontextual method performed best. Automatic updating improved the results for both classifiers.
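The noncontextual Bayes classifier amounts to picking the class with the larger posterior under the estimated class-conditional densities. A minimal sketch assuming Gaussian densities, with invented priors, means, and standard deviations:

```python
import math

def bayes_classify(gray, params):
    """Label one pixel by the noncontextual Bayes rule under Gaussian
    class-conditional densities; `params` maps each class name to a
    (prior, mean, std) tuple. Illustrative, not the paper's exact setup."""
    def log_post(prior, mean, std):
        # log posterior up to a constant: log prior + log Gaussian density
        return math.log(prior) - math.log(std) - (gray - mean) ** 2 / (2 * std ** 2)
    return max(params, key=lambda c: log_post(*params[c]))
```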

16.
In this paper, a novel approach to face recognition is proposed that uses vector projection length to formulate the pattern recognition problem. Face images within a single object class are more similar than those across different classes, and the projection length of a test image vector on the direction of a training image vector measures the similarity of the two images. But the decision cannot rest solely on the training image most similar to the test image; the mean image vector of each class also contributes to the final classification. The decision of the proposed vector projection classification (VPC) algorithm is therefore ruled in favor of the maximum combined projection length. To address the partial occlusion problem in face recognition, we further propose a local vector projection classification (LVPC) algorithm. The experimental results show that the proposed VPC and LVPC approaches are efficient and outperform some existing approaches.
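The projection-length similarity and the combined decision rule can be sketched as follows; the weighting `alpha` between the best single-sample projection and the class-mean projection is an assumption for illustration, not the paper's exact combination:

```python
import math

def proj_len(u, v):
    """Projection length of vector u on the direction of vector v."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / math.sqrt(sum(b * b for b in v))

def vpc(classes, test, alpha=0.5):
    """Score each class by combining the best single-sample projection
    with the projection on the class mean (weights are illustrative)."""
    best = None
    for label, samples in classes.items():
        mean = [sum(c) / len(samples) for c in zip(*samples)]
        score = (alpha * max(proj_len(test, s) for s in samples)
                 + (1 - alpha) * proj_len(test, mean))
        if best is None or score > best[1]:
            best = (label, score)
    return best[0]
```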

17.
Text categorization presents unique challenges to traditional classification methods due to the large number of features inherent in datasets from real-world text categorization applications, and the great number of training samples. In high-dimensional document data, the classes are typically characterized only by subsets of features, which typically differ across classes of different topics. This paper presents a simple but effective classifier for text categorization using a class-dependent projection based method. By projecting onto a set of individual subspaces, the samples belonging to different document classes are separated such that they are easy to classify. This is achieved by developing a new supervised feature weighting algorithm to learn the optimized subspaces for all the document classes. Experiments carried out on common benchmark corpora showed that the proposed method achieved both higher classification accuracy and lower computational costs than some distinguished classifiers in text categorization, especially for datasets including document categories with overlapping topics.

18.
The major issue in pattern classification is the extraction of features in the training phase. This work focuses on combining the ability of wavelet networks with deep learning techniques to propose a new supervised feature extraction method for pattern classification. The approach classifies all classes of the dataset through the reconstruction of a deep stacked wavelet auto-encoder, a network obtained from a series of wavelet auto-encoders followed by a softmax classifier at the last layer. Finally, fine-tuning with a back-propagation algorithm is applied to improve the result. The approach is tested on several image datasets (COIL-100, APTI, and ImageNet) and on two audio corpora containing Arabic words and French words. The experiments demonstrate the efficiency of the network for image and audio classification compared with other methods.

19.
20.
Text is an important feature in many computer vision applications, and the text in images often carries rich information; extracting and recognizing the characters in text images is therefore of great significance for image content analysis and understanding, information retrieval, and related tasks. Text-image recognition comprises preprocessing, character segmentation, thinning, feature selection and extraction, and finally recognition of the candidate characters. For character segmentation, an improved projection algorithm is proposed that substantially improves segmentation accuracy; characters are thinned with a mathematical-morphology-based algorithm, and a multi-level classification algorithm is adopted for feature selection.
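The projection idea behind character segmentation can be illustrated with a plain vertical-projection splitter (the paper's improved projection algorithm is not detailed in the abstract, so this is only the baseline technique):

```python
def segment_columns(image):
    """Split a binary text-line image (rows of 0/1, 1 = ink) into
    character spans via vertical projection: zero-ink columns
    separate characters."""
    proj = [sum(col) for col in zip(*image)]   # ink count per column
    spans, start = [], None
    for x, ink in enumerate(proj):
        if ink and start is None:
            start = x                          # a character begins
        elif not ink and start is not None:
            spans.append((start, x))           # a character ends
            start = None
    if start is not None:
        spans.append((start, len(proj)))
    return spans
```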


Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23

京公网安备 11010802026262号