Similar Documents
20 similar documents found (search time: 843 ms)
1.
Objective: Existing vessel segmentation methods still lack accuracy, especially for vessels broken up by noise. Based on the analyticity of Stein-Weiss functions, a new 3D vessel segmentation algorithm is proposed that extracts finer and clearer vessels. Method: First, preprocessing with image enhancement and window width/level adjustment increases the contrast between vessel voxels and the background. Then, Stein-Weiss functions are combined with a gradient operator: each voxel of the CT volume is represented as a Stein-Weiss function, with the gray values of its 6-neighborhood serving as the coefficients of the function's components. The gradient of the Stein-Weiss function is computed along the x, y and z directions, and a voxel whose gradient exceeds a threshold is regarded as a point on a vessel edge. Finally, the 3D vessels are reconstructed from the 2D CT slices with the extracted vessel edges. Results: Experiments on liver vessel segmentation and 3D reconstruction with the hepatic vein angiography dataset S70 show that the sensitivity and specificity of the proposed algorithm are higher than those of region growing and octonion-analytic segmentation. It is particularly advantageous in suppressing noise, and therefore segments clearer and finer vessels quickly and effectively. Conclusion: A new vessel segmentation algorithm is proposed that uses the analyticity of Stein-Weiss functions to extract vessel edges; experiments show that it removes vessel noise effectively and yields finer segmentation results. Since Stein-Weiss analyticity holds in any dimension, the same property can be used for edge detection in 2D or higher-dimensional images.
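As a rough illustration of the gradient-thresholding step described above (the Stein-Weiss analytic machinery itself is not reproduced), the following NumPy sketch marks voxels whose 6-neighbourhood gradient magnitude exceeds a threshold; the function name, the synthetic test volume and the threshold value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def vessel_edge_mask(volume, threshold):
    """Flag voxels whose 6-neighbourhood gradient magnitude exceeds a threshold.

    volume    : 3D numpy array of CT intensities (already windowed/enhanced)
    threshold : scalar gradient-magnitude cutoff for vessel-edge voxels
    """
    v = volume.astype(np.float64)
    # Central differences along x, y, z use only the 6 face neighbours of each voxel.
    gx = np.zeros_like(v); gy = np.zeros_like(v); gz = np.zeros_like(v)
    gx[1:-1, :, :] = 0.5 * (v[2:, :, :] - v[:-2, :, :])
    gy[:, 1:-1, :] = 0.5 * (v[:, 2:, :] - v[:, :-2, :])
    gz[:, :, 1:-1] = 0.5 * (v[:, :, 2:] - v[:, :, :-2])
    magnitude = np.sqrt(gx**2 + gy**2 + gz**2)
    return magnitude > threshold

# Example: a synthetic 64^3 volume with a bright tube along the z axis.
vol = np.zeros((64, 64, 64))
vol[30:34, 30:34, :] = 200.0
edges = vessel_edge_mask(vol, threshold=40.0)
print(edges.sum(), "candidate vessel-edge voxels")
```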

2.
Modern 3D printing technologies and the upcoming mass-customization paradigm call for efficient methods to produce and distribute arbitrarily shaped 3D objects. This paper introduces an original algorithm to split a 3D model into parts that can be efficiently packed within a box, with the objective of reassembling them after delivery. The first step consists in creating a hierarchy of possible parts that can be tightly packed within their minimum bounding boxes. In a second step, the hierarchy is exploited to extract the (single) segmentation whose parts can be most tightly packed. The fact that shape packing is an NP-complete problem justifies the use of heuristics and approximated solutions whose efficacy and efficiency must be assessed. Extensive experimentation demonstrates that our algorithm produces satisfactory results for arbitrarily shaped objects while being comparable to ad hoc methods when specific shapes are considered.
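The "tightly packed within minimum bounding boxes" criterion can be made concrete with a toy proxy: compare the summed axis-aligned bounding-box volumes of a candidate split against the box around the unsplit object. This is only a simplified stand-in for the paper's hierarchy construction and packing heuristics; all names and the L-shaped example are hypothetical.

```python
import numpy as np

def aabb_volume(points):
    """Volume of the axis-aligned bounding box of an (N, 3) vertex array."""
    extent = points.max(axis=0) - points.min(axis=0)
    return float(np.prod(extent))

def packing_score(parts):
    """Toy tightness score: how much smaller the summed per-part boxes are than
    the single box around the unsplit object (larger ratio = tighter packing)."""
    whole = np.vstack(parts)
    per_part = sum(aabb_volume(p) for p in parts)
    return aabb_volume(whole) / per_part

# Example: splitting an L-shaped point set into its two bars packs far tighter
# than boxing the whole shape at once.
bar_a = np.random.rand(500, 3) * np.array([10.0, 1.0, 1.0])
bar_b = np.random.rand(500, 3) * np.array([1.0, 10.0, 1.0]) + np.array([9.0, 1.0, 0.0])
print("score unsplit :", packing_score([np.vstack([bar_a, bar_b])]))
print("score split   :", packing_score([bar_a, bar_b]))
```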

3.
A segmentation refinement algorithm for high-resolution remote sensing images that incorporates boundary information
Objective: Region boundaries produced by current region-based segmentation algorithms often disagree with the true boundaries of ground objects. Exploiting the homogeneity within ground objects and the pronounced edge information between them in high-resolution remote sensing images, a segmentation refinement algorithm that fuses boundary information is proposed. Method: First, the Canny algorithm extracts edges from the image and an edge-linking step produces closed boundaries. The boundaries are then fused with the initial segmentation to obtain a new segmentation. Finally, under the constraint of the closed boundaries, regions of the new segmentation are merged according to a gray-level similarity criterion, yielding the refined result. Results: Applying the proposed refinement to segmentations produced by the Mean Shift algorithm and by the eCognition software improved the correct segmentation rate (RR) by 4% on average over the initial results, confirming its effectiveness. Conclusion: The refinement algorithm is widely applicable and can improve region-based, boundary-based and clustering-based segmentation methods; it preserves both the regional integrity of high-resolution remote sensing image segmentation and the edge details of ground objects, improving segmentation accuracy.
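A minimal sketch of the Canny-plus-merge idea, assuming OpenCV and an initial label image from any segmenter (e.g. Mean Shift): edges are closed morphologically as a stand-in for edge linking, and two adjacent regions are merged only if their mean gray levels are similar and the closed edge map does not run along their shared border. The merge criterion and all parameter values are simplifications, not the paper's exact procedure.

```python
import cv2
import numpy as np

def closed_edges(gray, lo=50, hi=150):
    """Canny edges on a uint8 image, followed by morphological closing as a
    simple edge-linking step; returns a boolean edge map."""
    edges = cv2.Canny(gray, lo, hi)
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel) > 0

def should_merge(gray, labels, a, b, edges, gray_tol=10.0, edge_cover_max=0.3):
    """Merge two adjacent regions of an initial segmentation only if their mean
    intensities are similar AND the closed edge map does not separate them."""
    mask_a, mask_b = labels == a, labels == b
    # Pixels of region a that touch region b (4-connectivity) form the shared border.
    touch = np.zeros_like(mask_a)
    touch[1:, :] |= mask_a[1:, :] & mask_b[:-1, :]
    touch[:-1, :] |= mask_a[:-1, :] & mask_b[1:, :]
    touch[:, 1:] |= mask_a[:, 1:] & mask_b[:, :-1]
    touch[:, :-1] |= mask_a[:, :-1] & mask_b[:, 1:]
    if not touch.any():
        return False
    edge_cover = edges[touch].mean()          # fraction of the border covered by edges
    similar = abs(gray[mask_a].mean() - gray[mask_b].mean()) < gray_tol
    return similar and edge_cover < edge_cover_max
```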

4.
Determining the robust stability of interval quasipolynomials leads to an NP problem: an enormous number of edge polynomials must be tested. This paper develops an efficient approach to reducing the number of edge polynomials to test. It solves the stability-test problem of interval quasipolynomials by transforming them into two-dimensional (2-D) interval polynomials. It is shown that the robust stability of an interval 2-D polynomial ensures the stability of the quasipolynomial, and an algebraic test algorithm for 2-D s-z interval polynomials is provided. The stability of 2-D s-z vertex polynomials and 2-D s-z edge polynomials is tested using a Schur table of complex polynomials.

5.
Despite the success of quad-based 2D surface parameterization methods, effective parameterization algorithms for 3D volumes with cubes, i.e. hexahedral elements, are still missing. Cube Cover is a first approach for generating a hexahedral tessellation of a given volume with boundary-aligned cubes which are guided by a frame field. The input of Cube Cover is a tetrahedral volume mesh. First, a frame field is designed with manual input from the designer. It guides the interior and boundary layout of the parameterization. Then, the parameterization and the hexahedral mesh are computed so as to align with the given frame field. Cube Cover has similarities to the Quad Cover algorithm and extends it from 2D surfaces to 3D volumes. The paper also provides theoretical results for 3D hexahedral parameterizations and analyses topological properties of the appropriate function space.

6.
In this article, an improved iterative scheme for the symmetric successive over-relaxation preconditioned biconjugate-gradient algorithm (ISSOR-PBCG) is utilized to solve the 3D edge FEM equations derived from time-harmonic electromagnetic-field boundary value problems. Several typical structures have been analyzed, and the computation time is compared with that of other algorithms such as the biconjugate-gradient (BCG) algorithm and the conventional symmetric successive over-relaxation preconditioned biconjugate-gradient algorithm (SSOR-PBCG). The CPU time saved by the ISSOR-PBCG algorithm is nearly 27% and 65.5% compared with the SSOR-PBCG and BCG algorithms, respectively. The ISSOR-PBCG algorithm is thus efficient for edge-FEM equation sets derived from large-scale time-harmonic electromagnetic-field problems.
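For readers unfamiliar with SSOR preconditioning, the sketch below assembles a generic SSOR preconditioner as a SciPy LinearOperator and passes it to the library BiCG solver; it uses a real 1D Laplacian as a stand-in for an edge-FEM system and does not reproduce the paper's improved (ISSOR) iteration or complex-valued FEM assembly.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, bicg, spsolve_triangular

def ssor_preconditioner(A, omega=1.2):
    """Build an M^{-1} operator for the SSOR splitting of a sparse matrix A = L + D + U."""
    D = sp.diags(A.diagonal())
    L = sp.tril(A, k=-1, format="csr")
    U = sp.triu(A, k=1, format="csr")
    lower = (D / omega + L).tocsr()     # (D/w + L)
    upper = (D / omega + U).tocsr()     # (D/w + U)
    d_over_w = A.diagonal() / omega

    def apply(r):
        # M^{-1} r = w(2-w) (D/w + U)^{-1} (D/w) (D/w + L)^{-1} r
        y = spsolve_triangular(lower, r, lower=True)
        z = spsolve_triangular(upper, d_over_w * y, lower=False)
        return omega * (2.0 - omega) * z

    return LinearOperator(A.shape, matvec=apply)

# Example on a small symmetric system (a 1D Laplacian stands in for the FEM matrix).
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, info = bicg(A, b, M=ssor_preconditioner(A))
print("converged" if info == 0 else f"bicg returned {info}")
```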

7.
Luan Wanna, Liu Chengming. 《图学学报》 (Journal of Graphics), 2020, 41(6): 980-986
Abstract: 3D mesh simplification reduces the number of vertices and faces of a detailed 3D model as far as possible while preserving the geometric shape of the object; it is essential for faster storage, network transmission, editing and rendering of 3D mesh data. Since most simplification algorithms ignore mesh topology and visual quality during simplification, a semi-regular mesh simplification algorithm based on inverse Loop subdivision is proposed. Feature points are first detected from the offset of each vertex to the centroid of its neighborhood; a seed triangle is then selected at random, a regular region is grown by edge expansion, and inverse Loop subdivision is applied to simplify it. Finally, the region edges are stitched by inward splitting to obtain the simplified model. Experiments on public datasets against classical algorithms show that the method preserves mesh features during simplification, keeps a regular topology consistent with the original mesh as far as possible, and outperforms edge collapse and clustering-based simplification in visual quality.
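The feature-point detection step (offset of a vertex from the centroid of its 1-ring neighbourhood) can be sketched as follows; the subsequent seed-triangle selection, edge expansion and inverse Loop subdivision are not shown, and the `ratio` threshold is an illustrative assumption.

```python
import numpy as np

def feature_vertices(vertices, faces, ratio=2.0):
    """Flag vertices whose offset from the centroid of their 1-ring is unusually large.

    vertices : (V, 3) float array; faces : (F, 3) int array of vertex indices.
    A vertex is a feature candidate when its centroid offset exceeds `ratio`
    times the mesh-wide mean offset (the threshold choice is illustrative).
    """
    V = len(vertices)
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    edges = np.unique(np.sort(edges, axis=1), axis=0)   # undirected 1-ring adjacency
    nbr_sum = np.zeros((V, 3))
    nbr_cnt = np.zeros(V)
    for i, j in ((0, 1), (1, 0)):
        np.add.at(nbr_sum, edges[:, i], vertices[edges[:, j]])
        np.add.at(nbr_cnt, edges[:, i], 1)
    centroid = nbr_sum / np.maximum(nbr_cnt, 1)[:, None]
    offset = np.linalg.norm(vertices - centroid, axis=1)
    return offset > ratio * offset.mean()
```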

8.
This paper surveys mesh segmentation techniques and algorithms, with a focus on part-based segmentation, that is, segmentation that divides a mesh (representing a 3D object) into meaningful parts. Part-based segmentation applies to a single object and also to a family of objects (i.e. co-segmentation). However, we shall not address chart-based segmentation here, though some mesh co-segmentation methods employ chart-based segmentation in the initial step of their pipeline. Finally, the taxonomy proposed in this paper is new in the sense that it classifies each segmentation algorithm according to the dimension (i.e. 1D, 2D or 3D) of the representation of object parts. The leading idea behind this survey is to identify the properties and limitations of the state-of-the-art algorithms to shed light on the challenges for future work.

9.
Consistent segmentation is central to many applications based on dynamic geometric data. Directly segmenting a raw 3D point cloud sequence is a challenging task due to the low data quality and large inter-frame variation across the whole sequence. We propose a local-to-global approach to co-segment point cloud sequences of articulated objects into near-rigid moving parts. Our method starts from a per-frame point clustering, derived from a robust voting-based trajectory analysis. The local segments are then progressively propagated to the neighboring frames with a cut propagation operation, and further merged through all frames using a novel space-time segment grouping technique, leading to a globally consistent and compact segmentation of the entire articulated point cloud sequence. Such progressive propagation and merging, in both the space and time dimensions, makes our co-segmentation algorithm especially robust in handling noise, occlusions and pose/view variations that are usually associated with raw scan data.
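As a loose stand-in for the per-frame clustering stage, the sketch below groups points by position plus a motion cue with DBSCAN. It assumes consecutive frames are in point-to-point correspondence, which the paper does not require (it uses a voting-based trajectory analysis instead); the function name and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def per_frame_clusters(frame, next_frame, eps=0.1, motion_weight=5.0):
    """Cluster one frame of an (N, 3) point cloud into near-rigid candidate parts.

    Points are grouped by position plus (weighted) displacement to the next frame,
    so points that move together tend to fall into the same cluster. This assumes
    the two frames are in point-to-point correspondence, a simplification of the
    trajectory analysis used in the paper.
    """
    displacement = next_frame - frame
    features = np.hstack([frame, motion_weight * displacement])
    return DBSCAN(eps=eps, min_samples=10).fit_predict(features)
```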

10.
An improved interactive image segmentation algorithm based on the GAC model
An improved interactive image segmentation algorithm is proposed. The image is preprocessed with a total-variation denoising model, which removes noise while better preserving edges. An edge detection method that weights the gradient magnitude by curvature is proposed and used to obtain the set of edge points; edge points with larger curvature are recommended to the user as candidate boundary points. The user then selects suitable "initial boundary points" among the candidates by visual judgement, after which the algorithm segments the target with the GAC model. Experiments show that the improved algorithm raises the degree of automation of interactive segmentation and effectively reduces the amount of manual interaction.
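A minimal sketch of the preprocessing and curvature-weighted edge map, assuming scikit-image for the total-variation denoising; the curvature used here is that of the image level sets, kappa = div(grad I / |grad I|), and the GAC evolution itself is not shown. The function name and weights are illustrative.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def curvature_weighted_edges(image, tv_weight=0.1, eps=1e-8):
    """Total-variation denoising followed by a curvature-weighted gradient map.

    Returns |grad I| * |kappa|, where kappa = div(grad I / |grad I|) is the
    level-set curvature; large values indicate strong, highly curved edges.
    """
    smooth = denoise_tv_chambolle(image.astype(float), weight=tv_weight)
    gy, gx = np.gradient(smooth)                  # derivatives along rows, columns
    mag = np.sqrt(gx**2 + gy**2) + eps
    ny, nx = gy / mag, gx / mag                   # unit gradient field
    # Divergence of the unit gradient field = curvature of the level sets.
    kappa = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
    return mag * np.abs(kappa)
```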

11.
Traversing voxels along a three-dimensional (3D) line is one of the most fundamental algorithms for voxel-based applications. This paper presents a new 6-connectivity integer algorithm for this task. The proposed algorithm accepts voxels having different sizes in the x, y and z directions. To explain the idea of the proposed approach, a 2D algorithm is first considered and then extended to 3D. The algorithm is multi-step, as up to three voxels may be added in one iteration. It accepts both integer and floating-point input. The new algorithm was compared to other popular voxel traversal algorithms. Counting the number of arithmetic operations showed that the proposed algorithm requires the fewest operations per traversed voxel. A comparison of CPU time spent using either integer or floating-point arithmetic confirms that the proposed algorithm is the most efficient. The algorithm is simple and compact, which also makes it attractive for hardware implementation.
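For reference, a standard 6-connectivity voxel traversal (an Amanatides-Woo style DDA) that also supports unequal voxel sizes along x, y and z is sketched below; it is not the paper's multi-step integer algorithm, only a baseline illustrating the task.

```python
import math

def traverse_voxels(p0, p1, voxel_size=(1.0, 1.0, 1.0)):
    """Yield the integer (i, j, k) indices of voxels pierced by the segment p0 -> p1,
    stepping one face-adjacent (6-connected) voxel at a time.
    Voxels may have different sizes along x, y and z."""
    pos = [p0[a] / voxel_size[a] for a in range(3)]
    end = [p1[a] / voxel_size[a] for a in range(3)]
    cell = [int(math.floor(c)) for c in pos]
    last = [int(math.floor(c)) for c in end]
    step, t_max, t_delta = [0] * 3, [math.inf] * 3, [math.inf] * 3
    for a in range(3):
        d = end[a] - pos[a]
        if d > 0:
            step[a] = 1
            t_delta[a] = 1.0 / d
            t_max[a] = (cell[a] + 1 - pos[a]) / d     # t of the first +boundary crossing
        elif d < 0:
            step[a] = -1
            t_delta[a] = -1.0 / d
            t_max[a] = (cell[a] - pos[a]) / d         # t of the first -boundary crossing
    yield tuple(cell)
    while cell != last:
        a = t_max.index(min(t_max))                   # axis with the nearest voxel boundary
        cell[a] += step[a]
        t_max[a] += t_delta[a]
        yield tuple(cell)

# Example: voxels crossed by a short segment in a grid of 0.5 x 1 x 2 voxels.
print(list(traverse_voxels((0.2, 0.2, 0.2), (2.3, 1.7, 3.9), voxel_size=(0.5, 1.0, 2.0))))
```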

12.
We present a simple and effective method for the interactive segmentation of feature regions in a triangular mesh. From the user-specified radius and click position, the candidate region that contains the desired feature region is defined as a geodesic disc on the triangle mesh. A concavity-aware harmonic field is then computed on the candidate region using appropriate boundary constraints. An initial isoline is chosen by evaluating uniformly sampled isolines of the harmonic field based on the gradient magnitude. A set of feature points on the initial isoline is selected, and the anisotropic geodesics passing through them are then determined as the final segmentation boundary, which is smooth and locally shortest. The experimental results show several segmentations of various 3D models, demonstrating the effectiveness of the proposed method.

13.
We propose a parameter-free method to recover manifold connectivity in unstructured 2D point clouds with high noise in terms of the local feature size. This enables us to capture the features which emerge out of the noise. To achieve this, we extend the reconstruction algorithm HNN-Crust, which connects samples to two (noise-free) neighbours and has been proven to output a manifold under a relaxed sampling condition. Applying this condition to noisy samples by projecting their k-nearest neighbourhoods onto local circular fits leads to multiple candidate neighbour pairs and thus makes connecting them consistently an NP-hard problem. To solve this efficiently, we design an algorithm that searches that solution space iteratively on different scales of k. It achieves linear time complexity in the point count plus quadratic time in the size of noise clusters. Our algorithm FitConnect extends HNN-Crust seamlessly to connect samples both with and without noise, operates as locally as the recovered features and can output multiple open or closed piecewise curves. Incidentally, our method simplifies the output geometry by eliminating all but a representative point from noisy clusters. Since local neighbourhood fits overlap consistently, the resulting connectivity represents an ordering of the samples along a manifold. This permits us to simply blend the local fits for denoising with the locally estimated noise extent. Aside from applications like reconstructing silhouettes of noisy sensed data, this lays important groundwork to improve surface reconstruction in 3D. Our open-source algorithm is available online.
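The local circular fits that the method projects k-nearest neighbourhoods onto can be illustrated with an algebraic (Kåsa) least-squares circle fit; the consistent pairing of candidate neighbours, which is the heart of FitConnect, is not reproduced, and all names and parameters below are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_circle(points):
    """Algebraic (Kåsa) least-squares circle fit to a (k, 2) array; returns centre, radius."""
    A = np.column_stack([points, np.ones(len(points))])
    d = -(points**2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, d, rcond=None)
    centre = np.array([-a / 2.0, -b / 2.0])
    radius = np.sqrt(max(centre @ centre - c, 0.0))
    return centre, radius

def local_circle_fits(points, k=8):
    """Fit a circle to each sample's k-nearest neighbourhood (the sample included)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    return [fit_circle(points[i]) for i in idx]

# Example: noisy samples of a unit circle; the fitted radii should cluster around 1.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.column_stack([np.cos(t), np.sin(t)]) + 0.01 * np.random.randn(200, 2)
radii = np.array([r for _, r in local_circle_fits(pts, k=40)])
print("median fitted radius:", np.median(radii))
```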

14.
The goal of our work is to develop an algorithm for automatic and robust detection of global intrinsic symmetries in 3D surface meshes. Our approach is based on two core observations. First, symmetry invariant point sets can be detected robustly using critical points of the Average Geodesic Distance (AGD) function. Second, intrinsic symmetries are self-isometries of surfaces and as such are contained in the low dimensional group of Möbius transformations. Based on these observations, we propose an algorithm that: 1) generates a set of symmetric points by detecting critical points of the AGD function, 2) enumerates small subsets of those feature points to generate candidate Möbius transformations, and 3) selects among those candidate Möbius transformations the one(s) that best map the surface onto itself. The main advantages of this algorithm stem from the stability of the AGD in predicting potential symmetric point features and the low dimensionality of the Möbius group for enumerating potential self-mappings. During experiments with a benchmark set of meshes augmented with human-specified symmetric correspondences, we find that the algorithm is able to find intrinsic symmetries for a wide variety of object types with moderate deviations from perfect symmetry.
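A graph-geodesic approximation of the Average Geodesic Distance (AGD) on a triangle mesh can be computed with Dijkstra over edge lengths, as sketched below for small meshes; detecting its critical points and the subsequent Möbius enumeration and voting are not shown, and the function name is illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import dijkstra

def average_geodesic_distance(vertices, faces):
    """Approximate AGD per vertex: mean graph-geodesic distance to all other vertices,
    with geodesics approximated by shortest paths along mesh edges (small meshes only)."""
    V = len(vertices)
    e = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    e = np.unique(np.sort(e, axis=1), axis=0)            # unique undirected edges
    w = np.linalg.norm(vertices[e[:, 0]] - vertices[e[:, 1]], axis=1)
    graph = sp.coo_matrix((w, (e[:, 0], e[:, 1])), shape=(V, V)).tocsr()
    dist = dijkstra(graph, directed=False)               # (V, V) pairwise graph distances
    # Candidate symmetry-invariant feature points are local extrema of this function.
    return dist.mean(axis=1)
```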

15.
In this work, we propose a controlled simplification strategy for degenerated points in symmetric 2D tensor fields that is based on the topological notion of robustness. Robustness measures the structural stability of the degenerate points with respect to variation in the underlying field. We consider an entire pipeline for generating a hierarchical set of degenerate points based on their robustness values. Such a pipeline includes the following steps: the stable extraction and classification of degenerate points using an edge labeling algorithm, the computation and assignment of robustness values to the degenerate points, and the construction of a simplification hierarchy. We also discuss the challenges that arise from the discretization and interpolation of real world data.
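Degenerate points of a symmetric 2D tensor field are where the two eigenvalues coincide, i.e. where both deviator components (T11 - T22)/2 and T12 vanish. The sketch below flags grid cells in which both components change sign as candidate locations; the edge labeling, robustness computation and simplification hierarchy of the paper are not shown.

```python
import numpy as np

def degenerate_cells(T11, T22, T12):
    """Locate grid cells that likely contain a degenerate point of a symmetric
    2D tensor field sampled on a regular grid.

    A point is degenerate when the eigenvalues coincide, i.e. both deviator
    components alpha = (T11 - T22)/2 and beta = T12 vanish. A cell is flagged
    when each component changes sign (or is zero) among its four corners.
    """
    alpha = 0.5 * (T11 - T22)
    beta = T12

    def sign_change(f):
        corners = np.stack([f[:-1, :-1], f[1:, :-1], f[:-1, 1:], f[1:, 1:]])
        return (corners.min(axis=0) <= 0) & (corners.max(axis=0) >= 0)

    return sign_change(alpha) & sign_change(beta)

# Example: the field T = [[x, y], [y, -x]] has a single degenerate point at the origin.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
hits = degenerate_cells(x, -x, y)
print("cells containing candidate degenerate points:", hits.sum())
```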

16.
Captured reflectance fields tend to provide a relatively coarse sampling of the incident light directions. As a result, sharp illumination features, such as highlights or shadow boundaries, are poorly reconstructed during relighting; highlights are disconnected, and shadows show banding artefacts. In this paper, we propose a novel interpolation technique for 4D reflectance fields that reconstructs plausible images even for non-observed light directions. Given a sparsely sampled reflectance field, we can effectively synthesize images as they would have been obtained from denser sampling. The processing pipeline consists of three steps: (1) segmentation of regions where intermediate lighting cannot be obtained by blending, (2) appropriate flow algorithms for highlights and shadows, and (3) a final reconstruction technique that uses image-based priors to faithfully correct errors that might be introduced by the segmentation or flow step. The algorithm reliably reproduces scenes that contain specular highlights, interreflections, shadows or caustics.

17.
This paper introduces a variant of k Bipartite Neighbors (k-BN), called k-BN2, for use in function prediction. Like k-BN, k-BN2 selects k instances surrounding the query, i.e., the novel instance, and keeps them bipartitely. However, to improve prediction precision, k-BN2 combines local linear models and a global nonlinear model over the bipartite neighborhood to predict the value of the novel instance. Applied to two real measured datasets, k-BN2 outperforms the typical k-BN and those methods in which k-BN or a related approximate physical model alone is exploited.

18.
M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to model anatomic objects and in particular to capture prior geometric information effectively in deformable-model segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures, each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single-figure models to segment objects of relatively simple structure. A single figure is a sheet of medial atoms, which is interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps), each atom modeling a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry-to-image match, the two terms of the objective function optimized in segmentation by deformable models. The other ability of m-reps central to effective segmentation is their support for segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep-implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported.

19.
This paper proposes a novel scheme for 3D model compression based on mesh segmentation using multiple principal plane analysis. The algorithm first performs mesh segmentation, based on a fusion of the well-known k-means clustering and the proposed principal plane analysis, to separate the input 3D mesh into a set of disjoint polygonal regions. A boundary indexing scheme for the whole object is created by assembling the local regions. Finally, a triangle traversal scheme encodes the connectivity and geometry information simultaneously for every patch under the guidance of the boundary indexing scheme. Simulation results demonstrate that the proposed algorithm achieves good performance in terms of compression rate and reconstruction quality.
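The k-means half of the segmentation stage can be sketched by clustering triangles on centroid-plus-scaled-normal features, as below; the principal plane analysis, the boundary indexing and the patch-wise connectivity/geometry coding are not reproduced, and the feature weighting is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_faces(vertices, faces, n_parts=6, normal_weight=0.5):
    """Cluster mesh triangles into regions using k-means on centroid + scaled normal.

    This covers only the clustering half of the segmentation stage; the principal
    plane analysis and the compression back end are not shown.
    """
    tri = vertices[faces]                       # (F, 3, 3) corner coordinates per face
    centroids = tri.mean(axis=1)
    normals = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    scale = np.ptp(centroids, axis=0).max()     # keep position/normal terms comparable
    feats = np.hstack([centroids, normal_weight * scale * normals])
    return KMeans(n_clusters=n_parts, n_init=10, random_state=0).fit_predict(feats)
```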

20.
Xu Xiao, Gu Lei. 《计算机科学》 (Computer Science), 2016, 43(4): 313-317
To detect text against complex backgrounds, a text detection technique combining saliency detection with a center segmentation algorithm is proposed. For an input image, saliency is first computed with the background and the foreground as references in turn: for the background-based detection the four image borders serve as the reference, and for the foreground-based detection the non-background regions found in the previous step serve as the reference, yielding fairly accurate candidate text regions. A center segmentation algorithm is then used to obtain an accurate edge map. Since the candidate regions of the saliency map lack accurate edge detail, while the edge map has precise edges but cannot by itself produce candidate text regions, the two are fused to obtain the final text regions. Experiments show that the proposed method achieves good detection performance.
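A very small stand-in for the background-prior step: model the background from the four image border strips and score each pixel by its colour distance to that model. The foreground-based refinement, the centre segmentation algorithm and the fusion step of the paper are not reproduced; the function name and the `border` width are illustrative parameters.

```python
import numpy as np

def border_prior_saliency(image, border=10):
    """Crude background-prior saliency for an (H, W, 3) image: distance of every
    pixel's colour to the mean colour of the four image borders
    (larger = more likely foreground/text)."""
    img = image.astype(np.float64)
    strips = np.vstack([
        img[:border].reshape(-1, img.shape[2]),
        img[-border:].reshape(-1, img.shape[2]),
        img[:, :border].reshape(-1, img.shape[2]),
        img[:, -border:].reshape(-1, img.shape[2]),
    ])
    bg_mean = strips.mean(axis=0)
    sal = np.linalg.norm(img - bg_mean, axis=2)
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)   # normalize to [0, 1]
```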
