Similar Literature
20 similar documents found (search time: 57 ms)
1.
Automatic image processing methods are a prerequisite to efficiently analyze the large amount of image data produced by computed tomography (CT) scanners during cardiac exams. This paper introduces a model-based approach for the fully automatic segmentation of the whole heart (four chambers, myocardium, and great vessels) from 3-D CT images. Model adaptation is done by progressively increasing the degrees-of-freedom of the allowed deformations. This improves convergence as well as segmentation accuracy. The heart is first localized in the image using a 3-D implementation of the generalized Hough transform. Pose misalignment is corrected by matching the model to the image making use of a global similarity transformation. The complex initialization of the multicompartment mesh is then addressed by assigning an affine transformation to each anatomical region of the model. Finally, a deformable adaptation is performed to accurately match the boundaries of the patient's anatomy. A mean surface-to-surface error of 0.82 mm was measured in a leave-one-out quantitative validation carried out on 28 images. Moreover, the piecewise affine transformation introduced for mesh initialization and adaptation shows better interphase and interpatient shape variability characterization than commonly used principal component analysis.
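The pose-correction step above (matching the model to the image with a global similarity transformation) can be sketched with the standard Umeyama/Procrustes least-squares fit between corresponding model and image points. This is a generic solution to that sub-problem, not the authors' implementation:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src points to dst points, via the Umeyama/Procrustes method."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / sc.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Recover a known similarity transform from noiseless correspondences.
rng = np.random.default_rng(0)
model = rng.normal(size=(40, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
Q *= np.sign(np.linalg.det(Q))                  # make it a proper rotation
target = 1.7 * model @ Q.T + np.array([2.0, -1.0, 0.5])
s, R, t = fit_similarity(model, target)
print(round(s, 3))
```

In the paper's coarse-to-fine scheme this global fit would be followed by per-region affine fits and finally free deformable adaptation.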

2.
We propose a pattern classification based approach for simultaneous three-dimensional (3-D) object modeling and segmentation in image volumes. The 3-D objects are described as a set of overlapping ellipsoids. The segmentation relies on the geometrical model and graylevel statistics. The characteristic parameters of the ellipsoids and of the graylevel statistics are embedded in a radial basis function (RBF) network and they are found by means of unsupervised training. A new robust training algorithm for RBF networks based on alpha-trimmed mean statistics is employed in this study. The extension of the Hough transform algorithm in the 3-D space by employing a spherical coordinate system is used for ellipsoidal center estimation. We study the performance of the proposed algorithm and we present results when segmenting a stack of microscopy images.

3.
An improved Hough transform method for circle extraction
刘勋, 毋立芳, 林娟. 《信号处理》 (Journal of Signal Processing), 2004, 20(6): 623-627
This paper proposes an improved Hough transform method based on steerable filters. The method first uses steerable filters to detect image edges together with their orientations; then, starting from the edge points and their directions, the improved Hough transform recovers the circle centers and radii. Finally, the algorithm is applied to the segmentation of ball-shaped objects with good results.
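The key idea, voting for circle centers only along each edge point's orientation rather than over a full circle of candidates, can be sketched as follows. This is a generic direction-aware Hough voting scheme, not the paper's steerable-filter implementation:

```python
import numpy as np

def hough_circles(edges, gx, gy, radii):
    """Circle-centre accumulator: each edge pixel votes only along its
    gradient direction, for every candidate radius."""
    H, W = edges.shape
    acc = np.zeros((len(radii), H, W), dtype=np.int32)
    for y, x in zip(*np.nonzero(edges)):
        n = np.hypot(gx[y, x], gy[y, x])
        if n == 0:
            continue
        dx, dy = gx[y, x] / n, gy[y, x] / n
        for i, r in enumerate(radii):
            for sgn in (1, -1):  # centre may lie on either side of the edge
                cx = int(round(x - sgn * r * dx))
                cy = int(round(y - sgn * r * dy))
                if 0 <= cx < W and 0 <= cy < H:
                    acc[i, cy, cx] += 1
    return acc

# Synthetic check: one circle of radius 8 centred at (20, 20).
edges = np.zeros((41, 41), dtype=bool)
gx = np.zeros((41, 41)); gy = np.zeros((41, 41))
for t in np.linspace(0, 2 * np.pi, 128, endpoint=False):
    px, py = int(round(20 + 8 * np.cos(t))), int(round(20 + 8 * np.sin(t)))
    edges[py, px] = True
    gx[py, px], gy[py, px] = np.cos(t), np.sin(t)  # radial gradient
radii = list(range(5, 12))
acc = hough_circles(edges, gx, gy, radii)
i, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(radii[i], (cx, cy))
```

Using edge orientation collapses the 3-D (cx, cy, r) search to one vote per radius per edge point, which is the efficiency gain such direction-guided variants aim for.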

4.
An adaptive thresholding method based on a Gaussian distribution model is proposed, and a binary-image refinement algorithm based on morphological transformations is used to obtain the lane-marking edge map. The probabilistic Hough transform is improved to better match real road conditions, and the lane markings are then detected. Experiments show that the method detects lane markings effectively and with a large speed improvement.
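A minimal sketch of a Gaussian-model adaptive threshold: if road-surface intensities are roughly Gaussian, bright lane markings fall in the upper tail, so the threshold can be set at mu + k*sigma. The single-Gaussian assumption and the value of k are illustrative, not taken from the paper:

```python
import numpy as np

def gaussian_adaptive_threshold(gray, k=2.0):
    """Binarise by thresholding at mu + k*sigma of the image intensities,
    treating bright lane markings as Gaussian-model outliers."""
    mu, sigma = gray.mean(), gray.std()
    return (gray > mu + k * sigma).astype(np.uint8)

# Synthetic road: Gaussian background with one bright vertical stripe.
rng = np.random.default_rng(0)
img = rng.normal(90.0, 10.0, size=(60, 80))
img[:, 40:43] = 250.0
mask = gaussian_adaptive_threshold(img)
```

Because the threshold adapts to the scene's own mean and spread, it tolerates global illumination changes that break a fixed threshold.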

5.
Novel detection of conics using 2-D Hough planes
The authors present a new approach to the use of the Hough transform for the detection of ellipses in a 2-D image. In the proposed algorithm, the conventional 5-D Hough voting space is replaced by four 2-D Hough planes which require only 90 kbytes of memory for a 384×256 image. One of the main differences between the proposed transform and other techniques is the way to extract feature points from the image in question. For the accumulation process in the Hough domain, an inherent property of the suggested algorithm is its capability to effect verification. Experimental results from the authors' work on real and synthetic images show that a significant improvement of the recognition is achieved as compared to other algorithms. Furthermore, the proposed algorithm is applicable to the detection of both circular and elliptical objects concurrently.

6.
7.
赵明华, 王理, 李鹏. 《激光技术》 (Laser Technology), 2011, 35(3): 428-432
To remedy the shortcomings of fixed-threshold skin-color segmentation, and after analyzing several color spaces and skin-color models, this paper proposes skin segmentation using an improved 2-D Otsu method in the YCgCr color space. Skin-sample images are first illumination-compensated and converted from RGB to YCgCr, and a 2-D Gaussian model is built from 179,221 skin pixels in the samples. The image to be segmented is likewise illumination-compensated and converted to YCgCr, and the trained Gaussian model is used to compute a skin-similarity map. Finally, incorporating the spatial neighborhood information of each pixel, the improved 2-D Otsu method binarizes the skin-similarity map. Theoretical analysis and experimental validation show that the algorithm effectively overcomes the poor specificity and weak noise robustness of fixed-threshold segmentation, and that it is feasible.
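The color conversion and Gaussian skin-similarity steps can be sketched as below. The YCgCr coefficients follow the common de Dios and Garcia definition (an assumption, since the abstract does not list them), and the skin mean/covariance here are placeholder values, not the paper's 179,221-pixel model:

```python
import numpy as np

def rgb_to_ycgcr(rgb):
    """RGB (floats in [0, 1]) -> YCgCr, using the de Dios & Garcia
    coefficients (assumed; the paper's exact definition is not given)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 16.0  +  65.481 * r + 128.553 * g +  24.966 * b
    cg = 128.0 -  81.085 * r + 112.000 * g -  30.915 * b
    cr = 128.0 + 112.000 * r -  93.786 * g -  18.214 * b
    return np.stack([y, cg, cr], axis=-1)

def skin_likelihood(cgcr, mean, cov):
    """2-D Gaussian similarity exp(-0.5 * Mahalanobis^2) in the (Cg, Cr)
    plane; mean/cov would come from the trained skin model."""
    d = cgcr - mean
    inv = np.linalg.inv(cov)
    m2 = np.einsum('...i,ij,...j->...', d, inv, d)
    return np.exp(-0.5 * m2)

white = rgb_to_ycgcr(np.array([1.0, 1.0, 1.0]))
mean = np.array([120.0, 150.0])          # placeholder skin model
cov = np.diag([25.0, 25.0])
peak = skin_likelihood(mean, mean, cov)  # similarity is 1 at the model mean
```

The resulting similarity map is what the improved 2-D Otsu step would then binarize.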

8.
This paper presents a new approach for the segmentation of color textured images, which is based on a novel energy function. The proposed energy function, which expresses the local smoothness of an image area, is derived by exploiting an intermediate step of modal analysis that is utilized in order to describe and analyze the deformations of a 3-D deformable surface model. The external forces that attract the 3-D deformable surface model combine the intensity of the image pixels with the spatial information of local image regions. The proposed image segmentation algorithm has two steps. First, a color quantization scheme, which is based on the node displacements of the deformable surface model, is utilized in order to decrease the number of colors in the image. Then, the proposed energy function is used as a criterion for a region growing algorithm. The final segmentation of the image is derived by a region merge approach. The proposed method was applied to the Berkeley segmentation database. The obtained results show good segmentation robustness, when compared to other state-of-the-art image segmentation algorithms.

9.
A retinal vessel segmentation method based on transition-region extraction
姚畅, 陈后金, 李居朋. 《电子学报》 (Acta Electronica Sinica), 2008, 36(5): 974-978
To address the poor performance of existing retinal vessel segmentation methods on small and low-contrast vessels, a segmentation method based on transition-region extraction is proposed. The method first enhances vessels by 2-D Gaussian matched filtering; it then extracts the main vessels with a maximum-entropy method and the transition region with a method combining a distributed genetic algorithm and Otsu thresholding; finally, region-connectivity analysis of the extracted main vessels and transition region yields the final vessel network. Experiments on the Hoover fundus image database show that the method outperforms Hoover's algorithm in small-vessel extraction, connectivity, and effectiveness; moreover, the migration-based distributed genetic algorithm markedly improves the algorithm's efficiency.
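The 2-D Gaussian matched-filter preprocessing mentioned above is usually built as a bank of rotated, zero-mean kernels in the style of Chaudhuri et al.; a sketch follows, with illustrative parameter values (sigma, kernel length, number of angles are assumptions, not the paper's settings):

```python
import numpy as np

def matched_filter_bank(sigma=2.0, length=9, n_angles=12):
    """Rotated 2-D Gaussian matched-filter kernels for vessel enhancement:
    a negated Gaussian profile across the vessel, constant along it,
    made zero-mean over its support."""
    half = int(3 * sigma)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    kernels = []
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        u = xs * np.cos(theta) + ys * np.sin(theta)    # across the vessel
        v = -xs * np.sin(theta) + ys * np.cos(theta)   # along the vessel
        k = -np.exp(-u ** 2 / (2 * sigma ** 2))
        k[np.abs(v) > length / 2] = 0.0                # truncate along-vessel
        support = k < 0
        k[support] -= k[support].mean()                # zero-mean on support
        kernels.append(k)
    return kernels

bank = matched_filter_bank()
```

At each pixel the vessel response is the maximum filter output over all orientations; the zero-mean constraint suppresses flat background.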

10.
李灿标, 郑楚君. 《激光杂志》 (Laser Journal), 2020, 41(1): 185-191
Automatic retinal vessel segmentation can assist the diagnosis of certain fundus diseases and systemic vascular diseases. To improve the efficiency of automatic vessel segmentation, a line-operator-guided Gabor wavelet method is proposed. A line operator detects the optimal matching angle of the vessel direction, which is used as the rotation angle to construct Gabor wavelets at four scales; the resulting 4-D Gabor wavelet features, together with two line-strength measures and the preprocessed image gray level, form a 7-D feature vector that is classified with an SVM. Compared with other Gabor-wavelet-based methods, this method only computes the Gabor features in the direction of the optimal matching angle, greatly reducing the cost of multiscale Gabor feature extraction; in addition, the good complementarity between line-operator and Gabor features helps discriminate vessels from background. In experiments on the DRIVE fundus database, the average accuracy, sensitivity, and specificity reach 0.9361, 0.8238, and 0.9554, respectively, a respectable segmentation performance.
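The saving comes from evaluating the Gabor response only at the line operator's optimal matching angle rather than over every orientation. A standard real-valued 2-D Gabor kernel at one orientation can be built as follows (parameter values are illustrative):

```python
import numpy as np

def gabor_kernel(sigma, lam, theta, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope (aspect ratio
    gamma) modulating a cosine of wavelength lam, rotated by theta.
    theta would be the line operator's optimal matching angle."""
    half = int(np.ceil(3 * sigma))
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    env = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / lam + psi)

g = gabor_kernel(2.0, 4.0, np.pi / 4)
```

Four such kernels at different scales, all sharing the one detected angle, would produce the 4-D Gabor part of the 7-D feature vector.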

11.
Intravascular ultrasound (IVUS) is a catheter-based medical imaging technique particularly useful for studying atherosclerotic disease. It produces cross-sectional images of blood vessels that provide quantitative assessment of the vascular wall, information about the nature of atherosclerotic lesions as well as plaque shape and size. Automatic processing of large IVUS data sets represents an important challenge due to ultrasound speckle, catheter artifacts or calcification shadows. A new three-dimensional (3-D) IVUS segmentation model, that is based on the fast-marching method and uses gray level probability density functions (PDFs) of the vessel wall structures, was developed. The gray level distribution of the whole IVUS pullback was modeled with a mixture of Rayleigh PDFs. With multiple interface fast-marching segmentation, the lumen, intima plus plaque structure, and media layers of the vessel wall were computed simultaneously. The PDF-based fast-marching was applied to 9 in vivo IVUS pullbacks of superficial femoral arteries and to a simulated IVUS pullback. Accurate results were obtained on simulated data with average point to point distances between detected vessel wall borders and ground truth <0.072 mm. On in vivo IVUS, a good overall performance was obtained with average distance between segmentation results and manually traced contours <0.16 mm. Moreover, the worst point to point variation between detected and manually traced contours stayed low with Hausdorff distances <0.40 mm, indicating a good performance in regions lacking information or containing artifacts. In conclusion, segmentation results demonstrated the potential of gray level PDF and fast-marching methods in 3-D IVUS image processing.
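The Rayleigh-mixture modelling of gray levels has closed-form EM updates, since the Rayleigh maximum-likelihood estimate is itself closed-form. A generic sketch of that step (initialisation and component count are assumptions, not the paper's):

```python
import numpy as np

def rayleigh_mixture_em(x, n_comp=2, n_iter=200, seed=0):
    """EM for a mixture of Rayleigh PDFs p(x|s2) = x/s2 * exp(-x^2/(2*s2)),
    as used to model speckle gray levels; returns weights and scale
    parameters s2 = sigma^2."""
    rng = np.random.default_rng(seed)
    s2 = rng.uniform(0.5, 2.0, n_comp) * x.var()
    w = np.full(n_comp, 1.0 / n_comp)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample
        pdf = w * (x[:, None] / s2) * np.exp(-x[:, None] ** 2 / (2 * s2))
        r = pdf / pdf.sum(1, keepdims=True)
        # M-step: closed-form Rayleigh updates
        w = r.mean(0)
        s2 = (r * x[:, None] ** 2).sum(0) / (2 * r.sum(0))
    return w, s2

# Two well-separated Rayleigh populations should be recovered.
rng = np.random.default_rng(1)
x = np.concatenate([rng.rayleigh(1.0, 3000), rng.rayleigh(6.0, 3000)])
w, s2 = rayleigh_mixture_em(x)
sig = np.sort(np.sqrt(s2))
print(sig)
```

In the paper such fitted PDFs supply the speed (cost) terms that drive the multiple-interface fast-marching fronts.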

12.
One of the most important problems in the segmentation of lung nodules in CT imaging arises from possible attachments occurring between nodules and other lung structures, such as vessels or pleura. In this report, we address the problem of vessels attachments by proposing an automated correction method applied to an initial rough segmentation of the lung nodule. The method is based on a local shape analysis of the initial segmentation making use of 3-D geodesic distance map representations. The correction method has the advantage that it locally refines the nodule segmentation along recognized vessel attachments only, without modifying the nodule boundary elsewhere. The method was tested using a simple initial rough segmentation, obtained by a fixed image thresholding. The validation of the complete segmentation algorithm was carried out on small lung nodules, identified in the ITALUNG screening trial and on small nodules of the lung image database consortium (LIDC) dataset. In fully automated mode, 217/256 (84.8%) lung nodules of ITALUNG and 139/157 (88.5%) individual marks of lung nodules of LIDC were correctly outlined and an excellent reproducibility was also observed. By using an additional interactive mode, based on a controlled manual interaction, 233/256 (91.0%) lung nodules of ITALUNG and 144/157 (91.7%) individual marks of lung nodules of LIDC were overall correctly segmented. The proposed correction method could also be usefully applied to any existing nodule segmentation algorithm for improving the segmentation quality of juxta-vascular nodules.
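A geodesic distance map confines distance propagation to the segmented mask, so a vessel attachment shows up as a region whose geodesic distance from the nodule core far exceeds its Euclidean distance. The BFS sketch below is a 2-D, unit-step simplification of the paper's 3-D representation:

```python
import numpy as np
from collections import deque

def geodesic_distance(mask, seed):
    """Within-mask (geodesic) distance map from a seed pixel by breadth-first
    search over 4-connected neighbours; pixels outside the mask stay -1."""
    dist = np.full(mask.shape, -1, dtype=np.int32)
    dist[seed] = 0
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                    and mask[ny, nx] and dist[ny, nx] < 0):
                dist[ny, nx] = dist[y, x] + 1
                q.append((ny, nx))
    return dist

# L-shaped mask: the far corner is geodesically 4 steps away even though
# it is only 2*sqrt(2) pixels away in straight-line distance.
mask = np.zeros((3, 3), dtype=bool)
mask[0, :] = True
mask[:, 2] = True
dist = geodesic_distance(mask, (0, 0))
```

Thresholding the ratio of geodesic to Euclidean distance is one way such local shape analysis can flag thin attached structures for pruning.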

13.
A vessel-constrained local active contour model
梁思, 王雷, 杨晓冬. 《液晶与显示》 (Chinese Journal of Liquid Crystals and Displays), 2016, 31(7): 686-694
Active contours are an important image segmentation tool that has seen substantial theoretical and practical development in recent years. However, existing contour models suffer from high segmentation error on images with poor intensity homogeneity and are sensitive to the position of the initial contour curve. This paper therefore proposes an active contour model constrained by vessel features. The model first applies a local-phase vessel enhancement algorithm to generate vessel feature information distinct from raw image intensity, and then introduces the vessel information and image intensity, as a linear weighting, into the local binary fitting (LBF) energy functional to guide vessel segmentation. Experiments on the DRIVE retinal image data (Digital Retinal Images for Vessel Extraction) show that the model successfully extracts vessels from retinal images with inhomogeneous intensity distributions and weak boundary contours, reaching a segmentation sensitivity of 74.43% and an accuracy of 93.67%, while greatly reducing sensitivity to the initial contour position. The model thus offers high segmentation accuracy and low sensitivity to initialization.

14.
This paper presents a new hybrid color image segmentation approach, which combines two different transforms for texture representation and extraction. The 2-D discrete wavelet transform, which can express the variance in frequency and direction of textures, and the contourlet transform, which represents boundaries even more accurately, are applied in our algorithm. The whole segmentation algorithm contains three stages. First, an adaptive color quantization scheme is utilized to obtain a coarse image representation. Then, the tiny regions are combined based on color information. Third, the proposed energy transform function is used as a criterion for image segmentation. The motivation of the proposed method is to obtain the complete and significant objects in the image. Ultimately, according to our experiments on the Berkeley segmentation database, our techniques have more reasonable and robust results than two other widely adopted image segmentation algorithms, and our method with the contourlet transform has better performance than with the wavelet transform.

15.
This paper provides methodology for fully automated model-based image segmentation. All information necessary to perform image segmentation is automatically derived from a training set that is presented in a form of segmentation examples. The training set is used to construct two models of the objects: a shape model and a border appearance model. A two-step approach to image segmentation is reported. In the first step, an approximate location of the object of interest is determined. In the second step, accurate border segmentation is performed. The shape-variant Hough transform method was developed that provides robust object localization automatically. It finds objects of arbitrary shape, rotation, or scaling and can handle object variability. The border appearance model was developed to automatically design cost functions that can be used in the segmentation criteria of edge-based segmentation methods. Our method was tested in five different segmentation tasks that included 489 objects to be segmented. The final segmentation was compared to manually defined borders with good results [rms errors in pixels: 1.2 (cerebellum), 1.1 (corpus callosum), 1.5 (vertebrae), 1.4 (epicardial), and 1.6 (endocardial) borders]. Two major problems of the state-of-the-art edge-based image segmentation algorithms were addressed: strong dependency on a close-to-target initialization, and the necessity for manual redesign of segmentation criteria whenever a new segmentation problem is encountered.

16.
This paper presents an automated, knowledge-based method for segmenting chest computed tomography (CT) datasets. Anatomical knowledge including expected volume, shape, relative position, and X-ray attenuation of organs provides feature constraints that guide the segmentation process. Knowledge is represented at a high level using an explicit anatomical model. The model is stored in a frame-based semantic network and anatomical variability is incorporated using fuzzy sets. A blackboard architecture permits the data representation and processing algorithms in the model domain to be independent of those in the image domain. Knowledge-constrained segmentation routines extract contiguous three-dimensional (3-D) sets of voxels, and their feature-space representations are posted on the blackboard. An inference engine uses fuzzy logic to match image to model objects based on the feature constraints. Strict separation of model and image domains allows for systematic extension of the knowledge base. In preliminary experiments, the method has been applied to a small number of thoracic CT datasets. Based on subjective visual assessment by experienced thoracic radiologists, basic anatomic structures such as the lungs, central tracheobronchial tree, chest wall, and mediastinum were successfully segmented. To demonstrate the extensibility of the system, knowledge was added to represent the more complex anatomy of lung lesions in contact with vessels or the chest wall. Visual inspection of these segmented lesions was also favorable. These preliminary results suggest that use of expert knowledge provides an increased level of automation compared with low-level segmentation techniques. Moreover, the knowledge-based approach may better discriminate between structures of similar attenuation and anatomic contiguity. Further validation is required.

17.
For the image thresholding problem, a new method is proposed that combines Gaussian mixture models with the relative-entropy measure from information theory. The proposed method casts image thresholding as a matching problem between two probability vectors: a Gaussian mixture model is first fitted to the gray-level distribution of the image histogram, and the relative entropy between the fitted distribution and the image's original gray-level distribution is then used as the thresholding criterion function. To segment an image, the optimal threshold is obtained by minimizing this criterion function over the image's gray-level range. In segmentation experiments on NDT, SAR, and infrared images, the proposed method is compared with both classical and recent thresholding methods; the results show that it yields better segmentations than the compared methods, demonstrating that it is an effective image segmentation method.
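The criterion can be sketched as follows: for each candidate threshold, fit a Gaussian to each of the two classes and score the fitted mixture against the observed histogram with relative entropy D(p||q), keeping the threshold with the smallest divergence. This simplified version uses per-class moment fits rather than a full EM-fitted GMM:

```python
import numpy as np

def kl_threshold(hist):
    """Return the threshold whose two-class Gaussian mixture fit minimises
    the relative entropy between observed (p) and fitted (q) gray-level
    distributions. A simplified sketch of the GMM + KL criterion."""
    p = hist / hist.sum()
    levels = np.arange(len(p), dtype=float)
    best_t, best_kl = None, np.inf
    for t in range(1, len(p) - 1):
        q = np.full_like(p, 1e-12)           # small floor avoids log(0)
        for lo, hi in ((0, t), (t, len(p))):
            w = p[lo:hi].sum()
            if w < 1e-9:
                continue
            mu = (levels[lo:hi] * p[lo:hi]).sum() / w
            var = (((levels[lo:hi] - mu) ** 2) * p[lo:hi]).sum() / w + 1e-6
            q = q + w * np.exp(-(levels - mu) ** 2 / (2 * var)) \
                / np.sqrt(2 * np.pi * var)
        q /= q.sum()
        m = p > 0
        kl = float((p[m] * np.log(p[m] / q[m])).sum())
        if kl < best_kl:
            best_t, best_kl = t, kl
    return best_t

# Bimodal histogram: modes at 60 and 180; the best threshold sits between.
levels = np.arange(256, dtype=float)
hist = np.exp(-(levels - 60) ** 2 / 200) + np.exp(-(levels - 180) ** 2 / 200)
t_best = kl_threshold(hist)
print(t_best)
```

At the true valley each class is nearly a pure Gaussian, so the fitted mixture matches the histogram closely and the divergence bottoms out there.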

18.
A new automatic segmentation method for retinal vessel networks
An automatic retinal vessel segmentation method based on a pulse-coupled neural network (PCNN) and a distributed genetic algorithm (DGA) is proposed. Vessels are first enhanced by 2-D Gaussian matched filtering; the DGA then rapidly searches for the optimal PCNN parameter settings, and the PCNN segments the vessel network from the enhanced image; finally, using region-connectivity features, an area filter removes noise from the segmented network to extract the final vessel network. Experiments on the publicly available Hoover fundus image database show that the method clearly outperforms Hoover's algorithm in vessel-branch extraction and overall effectiveness, giving it considerable clinical value.
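A minimal PCNN iteration can be sketched as below: each neuron's feeding input, modulated by linking from neighbouring pulses, fires when it exceeds a decaying threshold, and the firing time itself segments the image. All parameter values here are illustrative defaults, not the DGA-optimised settings the paper searches for:

```python
import numpy as np

def pcnn_firing_map(img, beta=0.3, theta0=1.0, decay=0.8, n_iter=10):
    """Minimal pulse-coupled neural network: returns the iteration at which
    each pixel first fires (-1 if never); similar intensities fire together,
    which yields the segmentation."""
    F = img.astype(float) / img.max()        # feeding input
    Y = np.zeros_like(F)                     # pulses from last iteration
    theta = np.full_like(F, theta0)          # dynamic threshold
    fire_iter = np.full(F.shape, -1, dtype=int)
    for it in range(n_iter):
        # L: 8-neighbourhood sum of the previous pulses (linking input)
        P = np.pad(Y, 1)
        L = sum(P[1 + dy:P.shape[0] - 1 + dy, 1 + dx:P.shape[1] - 1 + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - Y
        U = F * (1 + beta * L)               # modulated internal activity
        Y = (U > theta).astype(float)
        fire_iter[(Y > 0) & (fire_iter < 0)] = it
        theta = decay * theta + 20.0 * Y     # large boost = refractory period
    return fire_iter

# Bright square on a dark background: the square fires earlier, together.
img = np.full((20, 20), 50.0)
img[5:15, 5:15] = 200.0
fi = pcnn_firing_map(img)
```

In the paper, the DGA tunes parameters such as beta and the decay so the firing pattern isolates the vessel network in the matched-filtered image.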

19.
This paper proposes an algorithm to measure the width of retinal vessels in fundus photographs using a graph-based algorithm to segment both vessel edges simultaneously. First, the simultaneous two-boundary segmentation problem is modeled as a two-slice, 3-D surface segmentation problem, which is further converted into the problem of computing a minimum closed set in a node-weighted graph. An initial segmentation is generated from a vessel probability image. We use the REVIEW database to evaluate diameter measurement performance. The algorithm is robust and estimates the vessel width with subpixel accuracy. The method is used to explore the relationship between the average vessel width and the distance from the optic disc in 600 subjects.

20.
A statistical model is presented that represents the distributions of major tissue classes in single-channel magnetic resonance (MR) cerebral images. Using the model, cerebral images are segmented into gray matter, white matter, and cerebrospinal fluid (CSF). The model accounts for random noise, magnetic field inhomogeneities, and biological variations of the tissues. Intensity measurements are modeled by a finite Gaussian mixture. Smoothness and piecewise contiguous nature of the tissue regions are modeled by a three-dimensional (3-D) Markov random field (MRF). A segmentation algorithm, based on the statistical model, approximately finds the maximum a posteriori (MAP) estimate of the segmentation and estimates the model parameters from the image data. The proposed scheme for segmentation is based on the iterative conditional modes (ICM) algorithm in which measurement model parameters are estimated using local information at each site, and the prior model parameters are estimated using the segmentation after each cycle of iterations. Application of the algorithm to a sample of clinical MR brain scans, comparisons of the algorithm with other statistical methods, and a validation study with a phantom are presented. The algorithm constitutes a significant step toward a complete data-driven unsupervised approach to segmentation of MR images in the presence of random noise and intensity inhomogeneities.
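The core ICM update combines the Gaussian data term with an MRF smoothness prior: each site takes the label minimising negative log-likelihood plus a Potts penalty on disagreeing neighbours. The sketch below is a simplified 2-D, synchronous (Jacobi-style) version; classic ICM visits sites sequentially, and the paper's model is 3-D:

```python
import numpy as np

def icm_step(labels, img, means, variances, beta=1.5):
    """One ICM sweep: per pixel, pick the class k minimising the Gaussian
    negative log-likelihood plus beta times the number of 4-neighbours
    carrying a different label (Potts prior)."""
    K = len(means)
    H, W = img.shape
    # Unary term: -log N(img | mu_k, var_k), up to an additive constant.
    unary = np.stack([(img - means[k]) ** 2 / (2 * variances[k])
                      + 0.5 * np.log(variances[k]) for k in range(K)], axis=-1)
    cost = unary.copy()
    pad = np.pad(labels, 1, constant_values=-1)   # -1 = outside the image
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = pad[1 + dy:H + 1 + dy, 1 + dx:W + 1 + dx]
        for k in range(K):
            cost[..., k] += beta * ((nb != k) & (nb >= 0))
    return cost.argmin(-1)

# Two flat regions; one wrongly initialised pixel is corrected by the prior
# and likelihood acting together.
img = np.zeros((10, 10))
img[:, 5:] = 10.0
true = (img > 5).astype(int)
init = true.copy()
init[3, 2] = 1
new = icm_step(init, img, [0.0, 10.0], [1.0, 1.0])
```

In the full algorithm such sweeps alternate with re-estimating the Gaussian means/variances and the prior parameters from the current segmentation.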
