Similar Documents
20 similar documents found (search time: 15 ms)
1.
Standardized evaluation methodology for 2-D-3-D registration   (Total citations: 3; self-citations: 0; citations by others: 3)
In the past few years, a number of two-dimensional (2-D) to three-dimensional (3-D) (2-D-3-D) registration algorithms have been introduced. However, these methods have been developed and evaluated for specific applications, and have not been directly compared. Understanding and evaluating their performance is therefore an open and important issue. To address this challenge, we introduce a standardized evaluation methodology, which can be used for all types of 2-D-3-D registration methods and for different applications and anatomies. Our evaluation methodology uses the calibrated geometry of a 3-D rotational X-ray (3DRX) imaging system (Philips Medical Systems, Best, The Netherlands) in combination with image-based 3-D-3-D registration to obtain a highly accurate gold standard for 2-D X-ray to 3-D MR/CT/3DRX registration. Furthermore, we propose standardized starting positions and failure criteria so that future researchers can directly compare their methods. As an illustration, the proposed methodology has been used to evaluate the performance of two 2-D-3-D registration techniques, viz. a gradient-based and an intensity-based method, for images of the spine. The data and gold standard transformations are available on the internet (http://www.isi.uu.nl/Research/Databases/).
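
A gold-standard transformation allows registration error to be summarized as a mean target registration error (mTRE) over a set of target points. A minimal sketch of that computation, assuming NumPy and hypothetical 4x4 homogeneous transforms and target points (not the authors' released evaluation code):

```python
import numpy as np

def mean_target_registration_error(T_est, T_gold, targets):
    """Mean distance between target points mapped by the estimated
    and the gold-standard rigid transformations (both 4x4 homogeneous)."""
    pts = np.hstack([targets, np.ones((targets.shape[0], 1))])   # N x 4 homogeneous points
    mapped_est = (T_est @ pts.T).T[:, :3]
    mapped_gold = (T_gold @ pts.T).T[:, :3]
    return np.linalg.norm(mapped_est - mapped_gold, axis=1).mean()

# Hypothetical usage: 3-D target points (mm) inside a spine volume.
targets = np.random.rand(100, 3) * 50.0
T_gold = np.eye(4)
T_est = np.eye(4); T_est[:3, 3] = [0.5, -0.3, 0.2]   # small translation offset
print(mean_target_registration_error(T_est, T_gold, targets))
```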

2.
In image-guided therapy, high-quality preoperative images serve for planning and simulation and, intraoperatively, as the "background" onto which models of surgical instruments or radiation beams are projected. The link between a preoperative image and the intraoperative physical space of the patient is established by image-to-patient registration. In this paper, we present a novel 3-D/2-D registration method. First, a 3-D image is reconstructed from a few 2-D X-ray images; next, the preoperative 3-D image is brought into the best possible spatial correspondence with the reconstructed image by optimizing a similarity measure (SM). Because the quality of the reconstructed image is generally low, we introduce a novel SM, which is able to cope with low image quality as well as with different imaging modalities. The novel 3-D/2-D registration method has been evaluated and compared to the gradient-based method (GBM) using a standardized evaluation methodology and publicly available 3-D computed tomography (CT), 3-D rotational X-ray (3DRX), magnetic resonance (MR), and 2-D X-ray images of two spine phantoms, for which gold standard registrations were known. For each of the 3DRX, CT, or MR images and each set of X-ray images, 1600 registrations were performed from starting positions whose initial misalignment, expressed as the mean target registration error (mTRE), was randomly generated and uniformly distributed in the interval of 0-20 mm around the gold standard. The capture range was defined as the distance from the gold standard for which the final TRE was less than 2 mm in at least 95% of all cases. In terms of success rate as a function of initial misalignment, and in terms of capture range, the proposed method outperformed the GBM. TREs of the novel method and the GBM were approximately the same. For the registration of 3DRX and CT images to X-ray images, as few as 2-3 X-ray views were sufficient to obtain TREs of approximately 0.4 mm, a capture range of 7-9 mm, and 80%-90% successful registrations. To obtain similar results for MR to X-ray registration, an image reconstructed from at least 11 X-ray images was required. Reconstructions from more than 11 images had no effect on the registration results.
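
The capture range defined above (the initial misalignment up to which at least 95% of registrations end with a final TRE below 2 mm) can be estimated from the recorded (initial mTRE, final TRE) pairs. A sketch under the assumption that initial misalignments are grouped into 1-mm bins; the binning is a hypothetical choice, not taken from the paper:

```python
import numpy as np

def capture_range(initial_mtre, final_tre, tre_thresh=2.0,
                  success_frac=0.95, bin_width=1.0):
    """Largest initial-misalignment bin edge (mm) up to which at least
    `success_frac` of registrations reach a final TRE below `tre_thresh`."""
    initial_mtre = np.asarray(initial_mtre, dtype=float)
    final_tre = np.asarray(final_tre, dtype=float)
    edges = np.arange(0.0, initial_mtre.max() + bin_width, bin_width)
    reach = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (initial_mtre >= lo) & (initial_mtre < hi)
        if in_bin.sum() == 0:
            break                                   # no data beyond this misalignment
        if (final_tre[in_bin] < tre_thresh).mean() < success_frac:
            break                                   # success rate drops below 95%
        reach = hi
    return reach
```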

3.
A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image quality and to improve the performance of all the tasks needed for quantitative imaging analysis. The method proposed in this paper is based on a 3-D optimized blockwise version of the nonlocal (NL)-means filter (Buades et al., 2005). The NL-means filter uses the redundancy of information in the image under study to remove the noise. The performance of the NL-means filter has already been demonstrated for 2-D images, but reducing the computational burden is a critical aspect of extending the method to 3-D images. To overcome this problem, we propose several improvements that reduce the computational complexity. Together, these improvements drastically reduce the computation time while preserving the performance of the NL-means filter. A fully automated and optimized version of the NL-means filter is then presented. Our contributions to the NL-means filter are: 1) an automatic tuning of the smoothing parameter; 2) a selection of the most relevant voxels; 3) a blockwise implementation; and 4) a parallelized computation. Quantitative validation was carried out on synthetic datasets generated with BrainWeb (Collins et al., 1998). The results show that our optimized NL-means filter outperforms the classical implementation of the NL-means filter, as well as two other classical denoising methods, anisotropic diffusion (Perona and Malik, 1990) and total variation minimization (Rudin et al., 1992), in terms of accuracy (measured by the peak signal-to-noise ratio) with low computation time. Finally, qualitative results on real data are presented.
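
For reference, the core NL-means idea on a 3-D volume is to replace each voxel by a weighted average of voxels in a search window, with weights given by the similarity of the surrounding patches. A plain, unoptimized sketch follows; the paper's contributions (automatic tuning of h, voxel selection, blockwise processing, parallelization) are deliberately omitted and the parameter values are arbitrary:

```python
import numpy as np

def nlmeans_3d(vol, patch_r=1, search_r=2, h=0.1):
    """Plain voxelwise NL-means on a 3-D volume (illustrative, very slow)."""
    pad = patch_r + search_r
    v = np.pad(vol.astype(float), pad, mode='reflect')
    out = np.zeros(vol.shape, dtype=float)
    for z in range(vol.shape[0]):
        for y in range(vol.shape[1]):
            for x in range(vol.shape[2]):
                zc, yc, xc = z + pad, y + pad, x + pad
                ref = v[zc-patch_r:zc+patch_r+1, yc-patch_r:yc+patch_r+1, xc-patch_r:xc+patch_r+1]
                weights, values = [], []
                for dz in range(-search_r, search_r + 1):
                    for dy in range(-search_r, search_r + 1):
                        for dx in range(-search_r, search_r + 1):
                            zn, yn, xn = zc + dz, yc + dy, xc + dx
                            patch = v[zn-patch_r:zn+patch_r+1, yn-patch_r:yn+patch_r+1, xn-patch_r:xn+patch_r+1]
                            d2 = np.mean((ref - patch) ** 2)          # patch distance
                            weights.append(np.exp(-d2 / (h * h)))     # similarity weight
                            values.append(v[zn, yn, xn])
                weights = np.array(weights)
                out[z, y, x] = np.dot(weights, values) / weights.sum()
    return out
```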

4.
Interior-point methodology for 3-D PET reconstruction   (Total citations: 1; self-citations: 0; citations by others: 1)
Interior-point methods have been successfully applied to a wide variety of linear and nonlinear programming applications. This paper presents a class of algorithms, based on path-following interior-point methodology, for performing regularized maximum-likelihood (ML) reconstructions on three-dimensional (3-D) emission tomography data. The algorithms solve a sequence of subproblems that converge to the regularized maximum likelihood solution from the interior of the feasible region (the nonnegative orthant). We propose two methods, a primal method which updates only the primal image variables and a primal-dual method which simultaneously updates the primal variables and the Lagrange multipliers. A parallel implementation permits the interior-point methods to scale to very large reconstruction problems. Termination is based on well-defined convergence measures, namely, the Karush-Kuhn-Tucker first-order necessary conditions for optimality. We demonstrate the rapid convergence of the path-following interior-point methods using both data from a small animal scanner and Monte Carlo simulated data. The proposed methods can readily be applied to solve the regularized, weighted least squares reconstruction problem.
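
To illustrate the path-following idea in a much smaller setting than PET reconstruction, the sketch below applies a log-barrier method to a toy nonnegativity-constrained least-squares problem, following the central path by shrinking the barrier parameter and checking a KKT-style residual at the end. This is a generic stand-in, not the authors' primal or primal-dual ML algorithm:

```python
import numpy as np

def barrier_nnls(A, b, mu0=1.0, mu_factor=0.2, tol=1e-6, newton_iters=20):
    """Path-following log-barrier sketch for min 0.5*||Ax - b||^2 s.t. x >= 0."""
    n = A.shape[1]
    x = np.ones(n)                                   # strictly feasible interior start
    mu = mu0
    while mu > tol:
        for _ in range(newton_iters):
            grad = A.T @ (A @ x - b) - mu / x        # gradient of barrier subproblem
            hess = A.T @ A + np.diag(mu / x**2)
            step = np.linalg.solve(hess, grad)
            t = 1.0
            while np.any(x - t * step <= 0):         # damp the step to stay interior
                t *= 0.5
            x = x - t * step
        mu *= mu_factor                              # follow the central path
    # KKT-style residual: complementarity between x and the objective gradient
    kkt_residual = np.abs(np.minimum(x, A.T @ (A @ x - b))).max()
    return x, kkt_residual
```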

5.
An algorithm for the computation of the 3-D coordinates (space intersection) of marked points on a moving subject surveyed by a pair of TV cameras is presented. It has been designed to meet the requirements of routine analysis in biomechanics laboratories. The 3-D geometrical arrangement of the TV cameras (space resection) is obtained by a method based on iterative least-squares estimation that requires little time for calibration; the 3-D coordinates are then computed by means of a fast geometrical intersection algorithm. The whole algorithm has been extensively used in different laboratories, and results on its reliability and accuracy are reported.
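
Once the cameras are calibrated (space resection), the space intersection step reduces, for each marker, to a small linear problem. A minimal DLT-style triangulation sketch is given below, assuming 3x4 projection matrices P1 and P2 are already available; the paper's own geometrical intersection algorithm is not reproduced here:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) space intersection of one marker seen by two cameras.
    P1, P2: 3x4 camera projection matrices; uv1, uv2: (u, v) image coordinates."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)        # least-squares solution = last right singular vector
    X = vt[-1]
    return X[:3] / X[3]                # homogeneous -> Euclidean 3-D coordinates
```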

6.
Accurate automatic extraction of the 3-D cerebrovascular system from images obtained by time-of-flight (TOF) or phase contrast (PC) magnetic resonance angiography (MRA) is a challenging segmentation problem due to the small size of the objects of interest (blood vessels) in each 2-D MRA slice and the complex surrounding anatomical structures (e.g., fat, bones, or gray and white brain matter). We show that, due to the multimodal nature of MRA data, blood vessels can be accurately separated from the background in each slice using a voxel-wise classification based on precisely identified probability models of voxel intensities. To identify the models, an empirical marginal probability distribution of intensities is closely approximated with a linear combination of discrete Gaussians (LCDG) with alternating signs, using our previous EM-based techniques for precise linear combinations of Gaussians, adapted to deal with the LCDGs. The high accuracy of the proposed approach is experimentally validated on 85 real MRA datasets (50 TOF and 35 PC) as well as on synthetic MRA data for special 3-D geometrical phantoms of known shapes.
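
The authors' LCDG model uses a signed linear combination of discrete Gaussians identified with a modified EM algorithm. As a much simpler stand-in for the same voxel-wise idea, the sketch below fits an ordinary two-component Gaussian mixture to the intensities of one slice and labels each voxel by posterior probability (scikit-learn's GaussianMixture, hypothetical 0.5 threshold):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_vessels_by_intensity(slice_2d):
    """Voxel-wise background/vessel labelling of one MRA slice using a
    two-component Gaussian mixture (a stand-in for the LCDG model)."""
    intensities = slice_2d.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
    vessel_class = int(np.argmax(gmm.means_.ravel()))     # vessels appear bright in TOF MRA
    posteriors = gmm.predict_proba(intensities)[:, vessel_class]
    return (posteriors > 0.5).reshape(slice_2d.shape)
```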

7.
A new convenient method of showing 3-D magnetic flux lines on a plane is presented. It reduces a cross-section to an equivalent 2-D plane and thereafter treats it as one would a 2-D vector potential problem.

8.
Model-based quantitation of 3-D magnetic resonance angiographic images   (Total citations: 4; self-citations: 0; citations by others: 0)
Quantification of the degree of stenosis or of vessel dimensions is important for the diagnosis of vascular diseases and the planning of vascular interventions. Although diagnosis from three-dimensional (3-D) magnetic resonance angiograms (MRAs) is mainly performed on two-dimensional (2-D) maximum intensity projections, automated quantification of vascular segments directly from the 3-D dataset is desirable to provide accurate and objective measurements of the 3-D anatomy. A model-based method for quantitative 3-D MRA is proposed. Linear vessel segments are modeled with a central vessel axis curve coupled to a vessel wall surface. A novel image feature to guide the deformation of the central vessel axis is introduced. Subsequently, concepts of deformable models are combined with knowledge of the physics of the acquisition technique to accurately segment the vessel wall and compute the vessel diameter and other geometrical properties. The method is illustrated and validated on a carotid bifurcation phantom, with ground truth and medical experts as comparisons. Results on 3-D time-of-flight (TOF) MRA images of the carotids are also shown. The approach is a promising technique to assess several geometrical vascular parameters directly on the source 3-D images, providing an objective mechanism for stenosis grading.

9.
10.
3-D object recognition using 2-D views   (Total citations: 1; self-citations: 0; citations by others: 1)
We consider the problem of recognizing 3-D objects from 2-D images using geometric models and assuming different viewing angles and positions. Our goal is to recognize and localize instances of specific objects (i.e., model-based) in a scene. This is in contrast to category-based object recognition methods where the goal is to search for instances of objects that belong to a certain visual category (e.g., faces or cars). The key contribution of our work is improving 3-D object recognition by integrating Algebraic Functions of Views (AFoVs), a powerful framework for predicting the geometric appearance of an object due to viewpoint changes, with indexing and learning. During training, we compute the space of views that groups of object features can produce under the assumption of 3-D linear transformations, by combining a small number of reference views that contain the object features using AFoVs. Unrealistic views (e.g., due to the assumption of 3-D linear transformations) are eliminated by imposing a pair of rigidity constraints based on knowledge of the transformation between the reference views of the object. To represent the space of views that an object can produce compactly while allowing efficient hypothesis generation during recognition, we propose combining indexing with learning in two stages. In the first stage, we sample the space of views of an object sparsely and represent information about the samples using indexing. In the second stage, we build probabilistic models of shape appearance by sampling the space of views of the object densely and learning the manifold formed by the samples. Learning employs the Expectation-Maximization (EM) algorithm and takes place in a "universal," lower-dimensional space computed through Random Projection (RP). During recognition, we extract groups of point features from the scene and use indexing to retrieve the most feasible model groups that might have produced them (i.e., hypothesis generation). The likelihood of each hypothesis is then computed using the probabilistic models of shape appearance. Only hypotheses ranked high enough are considered for further verification, with the most likely hypotheses verified first. The proposed approach has been evaluated using both artificial and real data, illustrating promising performance. We also present preliminary results illustrating extensions of the AFoVs framework to predict the intensity appearance of an object. In this context, we have built a hybrid recognition framework that exploits geometric knowledge to hypothesize the location of an object in the scene and both geometric and intensity information to verify the hypotheses.

11.
In this investigation, subfilters are cascaded in the design of a 2-D narrow-transition-band FIR digital filter using double transformations: a transformation from a wide-transition-band subfilter into a 1-D narrow-transition-band filter, and a McClellan transformation from the 1-D filter into a 2-D filter. The traditional method for designing a 2-D FIR digital filter with a narrow transition band yields very high orders, and the difficulty of the design and implementation increases exponentially with the order. Numerous identical low-order subfilters are cascaded together to simplify the design of a high-order 2-D filter compared to the traditional design method. A powerful genetic algorithm (GA) is presented to determine the best coefficients of the McClellan transformation. It can be used very effectively to design contours of arbitrary shape for mapping 1-D to 2-D FIR filters. A generalized McClellan transformation is also presented, which can be used to design 2-D complex FIR filters. Various numerical design examples are presented to demonstrate the usefulness and effectiveness of the proposed approach.

12.
Three-dimensional (3-D) mesh sequences are encoded using the principal component analysis technique. First, rate and distortion models for the principal components are developed, and then a rate-distortion optimised quantisation scheme is proposed. Simulation results show that the proposed algorithm provides a much higher coding gain than conventional coders.
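
A minimal sketch of the PCA coding step alone is given below: frames are stacked as rows, the leading principal components are kept, and the coefficients are uniformly quantized. The rate and distortion models and the optimised bit allocation that constitute the paper's contribution are not reproduced; the component count and quantization step are arbitrary placeholders:

```python
import numpy as np

def pca_encode(seq, n_components=8, step=0.01):
    """PCA coding of a mesh sequence: seq has shape (frames, 3 * num_vertices)."""
    mean = seq.mean(axis=0)
    centered = seq - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                       # leading principal directions
    coeffs = centered @ basis.T                     # per-frame component coefficients
    q = np.round(coeffs / step).astype(np.int32)    # uniform quantization
    return mean, basis, q, step

def pca_decode(mean, basis, q, step):
    return mean + (q * step) @ basis

# Hypothetical usage: 30 frames of a mesh with 1000 vertices.
seq = np.random.rand(30, 3000)
mean, basis, q, step = pca_encode(seq)
rec = pca_decode(mean, basis, q, step)
print(np.abs(rec - seq).max())                      # reconstruction error
```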

13.
A shape analysis technique has been developed to quantify intracranial deformation as a means of objectively assessing treatment for brain tumor. Conventional measurements of tumor volume are prone to ambiguity and error, so instead the authors investigate the secondary space-occupying effects of the tumor, namely the deformation of structures within the brain. In order to avoid surface segmentation problems in MR images and to facilitate computation, the B-splines method has been introduced to approximate digital 3-D image surfaces. Using the mean curvature and the Gaussian curvature, the authors classify a surface into four basic types: planar, parabolic, elliptic, and hyperbolic. The deformation of a surface can be described by measuring the geometric changes in these basic types. The method is independent of size, domain (translation), and viewpoint (rotation). These invariance properties are important as they overcome problems caused by wide variations in brain size within the normal population as well as small differences in patient orientation during acquisition. Experimental results show the potential of the technique in objectively monitoring patient response to treatment.
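
The four surface types follow from the signs of the mean curvature H and the Gaussian curvature K: planar (H ≈ 0, K ≈ 0), parabolic (K ≈ 0, H ≠ 0), elliptic (K > 0), and hyperbolic (K < 0). A small sketch of this classification, given per-point curvature values (the tolerance eps is a hypothetical choice):

```python
import numpy as np

def classify_surface(H, K, eps=1e-3):
    """Label each surface point by the signs of mean (H) and Gaussian (K) curvature."""
    H = np.asarray(H)
    K = np.asarray(K)
    labels = np.empty(H.shape, dtype=object)
    labels[(np.abs(K) < eps) & (np.abs(H) < eps)] = "planar"
    labels[(np.abs(K) < eps) & (np.abs(H) >= eps)] = "parabolic"
    labels[K >= eps] = "elliptic"
    labels[K <= -eps] = "hyperbolic"
    return labels
```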

14.
A novel method for denoising functional magnetic resonance imaging temporal signals is presented in this note. The method is based on progressively enhancing the temporal signal by means of adaptive anisotropic spatial averaging. This average is based on a new metric for comparing temporal signals corresponding to active fMRI regions. Examples are presented for both simulated and real two- and three-dimensional data. The software implementing the proposed technique is publicly available to the research community.

15.
王守鹏, 王丽, 门艳彬. 《激光技术》 (Laser Technology), 2007, 31(4): 351-353
To investigate the optical parametric characteristics of the nonlinear optical crystal KBe2BO3F2, the angular tuning ranges of the KBe2BO3F2 crystal under type-I and type-II phase matching were obtained by computer numerical simulation, based on the crystal's dispersion equations and the laws of momentum and energy conservation, and were compared with those of another excellent nonlinear crystal, CsLiB6O10. The comparison shows that the KBe2BO3F2 crystal can provide shorter ultraviolet output wavelengths and wider continuous tuning than the CsLiB6O10 crystal. Theoretical calculations indicate that, with a 213 nm pump and type-I phase matching, ultraviolet output at wavelengths as short as 220 nm can be obtained.
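
For a negative uniaxial crystal such as KBe2BO3F2, type-I (o + o -> e) collinear phase matching combines energy conservation, 1/lambda_p = 1/lambda_s + 1/lambda_i, with the index condition n_e(theta, lambda_p)/lambda_p = n_o(lambda_s)/lambda_s + n_o(lambda_i)/lambda_i. A generic numerical sketch of the angle-tuning calculation follows; the Sellmeier coefficients below are placeholders, not the published KBe2BO3F2 values:

```python
import numpy as np
from scipy.optimize import brentq

# Placeholder dispersion: n^2 = 1 + B*lam^2/(lam^2 - C) - D*lam^2 (lam in micrometres).
# Coefficients are hypothetical, NOT the published KBe2BO3F2 Sellmeier values.
def n_o(lam): return np.sqrt(1.0 + 1.17 * lam**2 / (lam**2 - 0.0062) - 0.0091 * lam**2)
def n_e(lam): return np.sqrt(1.0 + 0.96 * lam**2 / (lam**2 - 0.0048) - 0.0070 * lam**2)

def n_e_theta(theta, lam):
    """Extraordinary refractive index at angle theta to the optic axis."""
    return 1.0 / np.sqrt(np.cos(theta)**2 / n_o(lam)**2 + np.sin(theta)**2 / n_e(lam)**2)

def type1_pm_angle(lam_p, lam_s):
    """Type-I (o + o -> e) collinear phase-matching angle (degrees) for pump lam_p, signal lam_s."""
    lam_i = 1.0 / (1.0 / lam_p - 1.0 / lam_s)                 # energy conservation
    mismatch = lambda th: (n_e_theta(th, lam_p) / lam_p
                           - n_o(lam_s) / lam_s - n_o(lam_i) / lam_i)
    return np.degrees(brentq(mismatch, 1e-4, np.pi / 2 - 1e-4))

print(type1_pm_angle(0.213, 0.25))   # angle for a 213 nm pump, 250 nm signal (placeholder indices)
```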

16.
The authors analyzed the noise characteristics of two-dimensional (2-D) and three-dimensional (3-D) images obtained from the GE Advance positron emission tomography (PET) scanner. Three phantoms were used: a uniform 20-cm phantom, a 3-D Hoffman brain phantom, and a chest phantom with heart and lung inserts. Using gated acquisition, the authors acquired 20 statistically equivalent scans of each phantom in 2-D and 3-D modes at several activity levels. From these data, they calculated pixel normalized standard deviations (NSDs), scaled to the phantom mean, across the replicate scans, which allowed them to characterize the radial and axial distributions of pixel noise. The authors also performed sequential measurements of the phantoms in 2-D and 3-D modes to measure noise (from interpixel standard deviations) as a function of activity. To compensate for the difference in axial slice width between 2-D and 3-D images (due to the septa and reconstruction effects), they developed a smoothing kernel to apply to the 2-D data. After matching the resolution, the ratio of image-derived NSD values, (NSD2D/NSD3D)^2, averaged throughout the uniform phantom was in good agreement with the noise equivalent count (NEC) ratio NEC3D/NEC2D. By comparing different phantoms, the authors showed that the attenuation and emission distributions influence the spatial noise distribution. The estimates of pixel noise for 2-D and 3-D images produced here can be applied in the weighting of PET kinetic data and may be useful in the design of optimal dose and scanning requirements for PET studies. The accuracy of these phantom-based noise formulas should be validated for any given imaging situation, particularly in 3-D, if there is significant activity outside the scanner field of view.
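
The pixel-noise measure used above, the normalized standard deviation across statistically equivalent replicate scans scaled to the phantom mean, is straightforward to compute; a sketch assuming the 20 replicate images are stacked along the first axis of a NumPy array:

```python
import numpy as np

def pixel_nsd(replicates):
    """Pixel-wise normalized standard deviation across replicate scans.
    `replicates` has shape (n_scans, ny, nx); values are scaled to the phantom mean."""
    std_map = replicates.std(axis=0, ddof=1)    # sample standard deviation per pixel
    phantom_mean = replicates.mean()            # overall phantom mean used for scaling
    return std_map / phantom_mean

# The 2-D vs 3-D comparison above then averages (NSD2D / NSD3D)^2 over the
# uniform phantom and compares it with the NEC ratio NEC3D / NEC2D.
```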

17.
A new algorithm for the three-dimensional reconstruction of two-dimensional crystals from projections is presented, and its applicability to biological macromolecules imaged using transmission electron microscopy (TEM) is investigated. Its main departures from the traditional approach are that it works in real space, rather than in Fourier space, and that it is iterative. This has the advantage of making it convenient to introduce additional constraints (such as the support of the function to be reconstructed, which may be known from alternative measurements) and has the potential of more accurately modeling the TEM image formation process. Phantom experiments indicate the superiority of the new approach even without the introduction of constraints in addition to the projection data.
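
As a generic illustration of an iterative, real-space reconstruction with an optional support constraint (not the authors' exact algorithm), the sketch below uses a SIRT-style update for a linear projection model p = A x:

```python
import numpy as np

def sirt_reconstruct(A, p, n_iter=100, support=None):
    """SIRT-style iterative real-space reconstruction for projections p = A x.
    `support` is an optional boolean mask restricting where the object may be nonzero."""
    m, n = A.shape
    row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
    col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
    x = np.zeros(n)
    for _ in range(n_iter):
        residual = (p - A @ x) / row_sums           # normalized projection residual
        x = x + (A.T @ residual) / col_sums         # back-project and update
        if support is not None:
            x = np.where(support, x, 0.0)           # real-space support constraint
    return x
```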

18.
Computational modeling effectively analyzes the wave propagation and associated interaction within heterogeneous reinforced concrete bridge decks, providing valuable information for sensor selection and placement. It provides a good basis for the implementation of the inverse problem in defect detection and the reconstruction of subsurface properties, which is beneficial for defect diagnosis. The objective of this study is to evaluate the effectiveness of lower order models in the evaluation of bridge-deck subsurfaces modeled as layered media. The two lower order models considered are a 2-D model and a 2.5-D model that uses the 2-D geometry with a compressed coordinate system to capture wave behavior outside the cross-sectional plane. Both the 2-D and 2.5-D models are compared to the results obtained from a full 3-D model. A filter that maps the 3-D excitation signal appropriately for 2-D and 2.5-D simulations is presented. The 2.5-D model differs from the 2-D model in that it is capable of capturing 3-D wave behavior interacting with a 2-D geometry. The 2.5-D model matches the results from the corresponding 3-D model when there is no variation in the third dimension. Computational models for air-launched ground-penetrating radar with 1-GHz central frequency and bandwidth for the detection of bridge-deck delamination are implemented in 2-D, 2.5-D, and 3-D using FDTD simulations. In all cases, the defect is identifiable in the results. Thus, it is found that in layered media (such as bridge decks) 2-D and 2.5-D models are good approximations for modeling bridge-deck deterioration, each with an order of magnitude reduction in computational time.

19.
20.
Two-dimensional (2-D) approaches to microwave imaging have dominated the research landscape primarily due to the moderate levels of measurement data, data-acquisition time, and computational costs required. Three-dimensional (3-D) approaches have been investigated in simulation, phantom, and animal experiments. While 3-D approaches are certainly important in terms of the potential to improve image quality, their associated costs are significant at this time. In addition, benchmarks are needed to evaluate these new generation systems as more 3-D methods begin to appear. In this paper, we present a systematic series of experiments which assess the capability of our 2-D system to image classical 3-D geometries. We demonstrate where current methods suffer from 3-D effects but also identify situations where they remain quite useful. Comparisons between reconstructions utilizing phantom measurements and simulated 3-D data are also shown to validate the results. These findings suggest that for certain biomedical applications, 2-D approaches remain quite attractive.
