Similar Articles
1.
A method has been developed to reconstruct three-dimensional (3-D) surfaces from two-dimensional (2-D) projection data. It is used to produce individualized boundary element models, consisting of thorax and lung surfaces, for electro- and magnetocardiographic inverse problems. Two orthogonal projections are utilized. A geometrical prior model, built using segmented magnetic resonance images, is deformed according to profiles segmented from projection images. In the authors' method, virtual X-ray images of the prior model are first constructed by simulating real X-ray imaging. The 2-D profiles of the model are segmented from the projections and elastically matched with the profiles segmented from patient data. The displacement vectors produced by the elastic 2-D matching are back-projected onto the 3-D surface of the prior model. Finally, the model is deformed using the back-projected vectors. Two different deformation methods are proposed. The accuracy of the method is validated by a simulation. The average reconstruction error of a thorax and lungs was 1.22 voxels, corresponding to about 5 mm.

2.
This paper examines the impacts of different types of circuit partitioning on reducing the computational complexity for computing the fault detection probability, which usually grows exponentially with the number of input lines in the given circuit. Partitioning a large combinational circuit into arbitrary subcircuits does not, in general, reduce the computational time complexity of the fault detection probability. In fact, partitioning a given circuit into general subcircuits is expected to increase the time complexity by the amount of time spent in the partition process itself. Nevertheless, it will be shown that decomposing a general combinational circuit into its modules (supergates) such that these modules constitute the basic elements of a tree circuit (network) considerably reduces the computational complexity of the fault detection probability problem. Toward this goal, two algorithms are developed. The first partitions a given circuit into maximal supergates whenever this is possible. Its computational complexity depends linearly on the number of edges (or lines) and nodes (or gates) of the circuit. The second computes the exact detection probabilities of single faults in the tree network, and its computational complexity grows exponentially with the largest number of input lines in any of the network's maximal supergates rather than the total number of inputs. The case of multi-output circuits is also discussed.
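The reason tree (fanout-free) structure helps can be seen directly: with statistically independent inputs, signal probabilities compose exactly, gate by gate, in time linear in the circuit size. A minimal sketch with an illustrative three-gate circuit (not the paper's supergate algorithm; gate set and circuit are invented for the example):

```python
# Signal-probability propagation in a fanout-free (tree) circuit.
# In a tree, every gate's inputs are statistically independent, so exact
# probabilities compose locally -- the property a supergate decomposition
# exploits. The circuit encoding below is purely illustrative.

def signal_prob(node, p_in):
    """Probability that `node` evaluates to 1, given input 1-probabilities."""
    kind = node[0]
    if kind == "in":                      # primary input: ("in", name)
        return p_in[node[1]]
    probs = [signal_prob(child, p_in) for child in node[1:]]
    if kind == "and":                     # all inputs must be 1
        out = 1.0
        for q in probs:
            out *= q
        return out
    if kind == "or":                      # complement of "all inputs are 0"
        out = 1.0
        for q in probs:
            out *= (1.0 - q)
        return 1.0 - out
    if kind == "not":
        return 1.0 - probs[0]
    raise ValueError(kind)

# c = (a AND b) OR (NOT d), with equiprobable inputs
circuit = ("or", ("and", ("in", "a"), ("in", "b")), ("not", ("in", "d")))
p = signal_prob(circuit, {"a": 0.5, "b": 0.5, "d": 0.5})
```

With fanout (reconvergence), the independence assumption fails, which is why arbitrary partitions do not yield the same savings.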

3.
A canonical form for AR 2-D system representations is introduced. This yields a method for computing the system trajectories by means of a line-by-line recursion, and displays some relevant information about the system structure, such as the choice of inputs and initial conditions. (Partly supported by the Calouste Gulbenkian Foundation, Portugal.)

4.
The authors discuss two techniques for solving two-dimensional (2D) inverse scattering problems by parameterizing the scattering configuration and determining the optimum value of the parameters by minimizing a cost function involving the known scattered-field data. The computation of the fields in each estimated configuration is considered as an auxiliary problem. To improve the efficiency of these computations, the CGFFT iterative scheme is combined with a special extrapolation procedure that is valid for problems with a varying physical parameter such as frequency, angle of incidence, or contrast. Further, they analyze the dynamic range and the resolution of linearized schemes. To obtain an acceptable resolution for an object with a large contrast with respect to the surrounding medium, multiple-frequency information is used. Finally, the availability of a fast forward solver was an incentive to consider nonlinear optimization. In particular, the authors use a quasi-Newton algorithm at only twice the computational cost of the distorted-wave Born iterative scheme.
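The iterative core of a CGFFT-type scheme is a conjugate-gradient solve; a minimal dense sketch, assuming a symmetric positive definite system (in an actual CGFFT implementation the product `A @ x` would be an FFT-based convolution, and the matrix here is random test data):

```python
import numpy as np

def cg(A, b, x0=None, tol=1e-10, max_iter=200):
    """Conjugate gradient for A x = b, A symmetric positive definite."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x                       # initial residual
    p = r.copy()                        # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)           # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p       # conjugate direction update
        rs = rs_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)             # symmetric positive definite test matrix
b = rng.standard_normal(8)
x = cg(A, b)
```

The extrapolation idea in the abstract corresponds to warm-starting: when the physical parameter (frequency, incidence angle, contrast) changes slightly, the previous solution is passed as `x0`, cutting the iteration count.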

5.
This paper proposes the use of a polynomial interpolator structure (based on Horner's scheme) which is efficiently realizable in hardware, for high-quality geometric transformation of two- and three-dimensional images. Polynomial-based interpolators such as cubic B-splines and optimal interpolators of shortest support are shown to be exactly implementable in the Horner structure framework. This structure suggests a hardware/software partition which can lead to efficient implementations for multidimensional interpolation.
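A sketch of Horner evaluation applied to the cubic B-spline kernel's polynomial pieces (the kernel's closed form is standard; the hardware partition itself is not modeled here):

```python
# Horner's scheme evaluates a degree-n polynomial with n multiplies and
# n adds -- the property that makes it attractive for hardware pipelines.

def horner(coeffs, x):
    """Evaluate coeffs[0]*x^n + ... + coeffs[n] via nested multiplication."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

def bspline3(t):
    """Cubic B-spline interpolation kernel, one Horner call per piece."""
    t = abs(t)
    if t < 1.0:
        return horner([0.5, -1.0, 0.0, 2.0 / 3.0], t)        # (1/2)t^3 - t^2 + 2/3
    if t < 2.0:
        return horner([-1.0 / 6.0, 1.0, -2.0, 4.0 / 3.0], t) # -(1/6)t^3 + t^2 - 2t + 4/3
    return 0.0
```

Interpolating a sample then reduces to a weighted sum of four kernel evaluations per dimension, each computed with the same fixed Horner datapath.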

6.
3-D object recognition using 2-D views
We consider the problem of recognizing 3-D objects from 2-D images using geometric models and assuming different viewing angles and positions. Our goal is to recognize and localize instances of specific objects (i.e., model-based) in a scene. This is in contrast to category-based object recognition methods where the goal is to search for instances of objects that belong to a certain visual category (e.g., faces or cars). The key contribution of our work is improving 3-D object recognition by integrating Algebraic Functions of Views (AFoVs), a powerful framework for predicting the geometric appearance of an object due to viewpoint changes, with indexing and learning. During training, we compute the space of views that groups of object features can produce under the assumption of 3-D linear transformations, by combining a small number of reference views that contain the object features using AFoVs. Unrealistic views (e.g., due to the assumption of 3-D linear transformations) are eliminated by imposing a pair of rigidity constraints based on knowledge of the transformation between the reference views of the object. To represent the space of views that an object can produce compactly while allowing efficient hypothesis generation during recognition, we propose combining indexing with learning in two stages. In the first stage, we sample the space of views of an object sparsely and represent information about the samples using indexing. In the second stage, we build probabilistic models of shape appearance by sampling the space of views of the object densely and learning the manifold formed by the samples. Learning employs the Expectation-Maximization (EM) algorithm and takes place in a "universal," lower-dimensional, space computed through Random Projection (RP). During recognition, we extract groups of point features from the scene and we use indexing to retrieve the most feasible model groups that might have produced them (i.e., hypothesis generation). 
The likelihood of each hypothesis is then computed using the probabilistic models of shape appearance. Only hypotheses ranked high enough are considered for further verification, with the most likely hypotheses verified first. The proposed approach has been evaluated using both artificial and real data, illustrating promising performance. We also present preliminary results illustrating extensions of the AFoVs framework to predict the intensity appearance of an object. In this context, we have built a hybrid recognition framework that exploits geometric knowledge to hypothesize the location of an object in the scene and both geometric and intensity information to verify the hypotheses.
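The core algebraic property behind AFoVs can be shown in a few lines: under 3-D linear transformations, every image coordinate of a point is a linear functional of its 3-D position, so three generic reference coordinates (x1, y1, x2) already span all such functionals, and a novel view's coordinates are exact linear combinations of them. The random projection matrices below are synthetic stand-ins for real views:

```python
import numpy as np

# Algebraic functions of views: a novel-view coordinate of 3-D points is a
# linear combination of coordinates measured in two reference views.

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 20))          # 20 3-D feature points

L1 = rng.standard_normal((2, 3))          # reference view 1 (linear camera)
L2 = rng.standard_normal((2, 3))          # reference view 2
L3 = rng.standard_normal((2, 3))          # novel view to predict

x1, y1 = L1 @ X                           # coordinates in reference view 1
x2, _ = L2 @ X                            # x-coordinates in reference view 2
x3, _ = L3 @ X                            # novel-view x-coordinates (ground truth)

B = np.stack([x1, y1, x2], axis=1)        # 20 x 3 basis of measured coordinates
a, *_ = np.linalg.lstsq(B, x3, rcond=None)
x3_pred = B @ a                           # AFoVs-style prediction of the novel view
```

The fit is exact (up to round-off) because the three basis functionals generically span R^3; the paper's rigidity constraints then prune coefficient combinations that no rigid motion could produce.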

7.
8.
In this paper we consider the problem of internally and externally stabilising controlled invariant and output-nulling subspaces for two-dimensional (2-D) Fornasini–Marchesini models, via static feedback. A numerically tractable procedure for computing a stabilising feedback matrix is developed via linear matrix inequality techniques. This is subsequently applied to solve, for the first time, various 2-D disturbance decoupling problems subject to a closed-loop stability constraint.
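For readers unfamiliar with the model class, a first-order Fornasini–Marchesini model updates the state over a 2-D index grid, x(i,j) = A1 x(i-1,j) + A2 x(i,j-1) + B1 u(i-1,j) + B2 u(i,j-1). A plain trajectory-computation sketch with arbitrary example matrices and zero boundary conditions (the paper's feedback design is not modeled):

```python
import numpy as np

def fm_trajectory(A1, A2, B1, B2, u):
    """Run the first-order 2-D Fornasini-Marchesini recursion on a grid,
    with zero boundary conditions x(0, j) = x(i, 0) = 0."""
    I, J = u.shape[:2]
    n = A1.shape[0]
    x = np.zeros((I, J, n))
    for i in range(1, I):
        for j in range(1, J):
            x[i, j] = (A1 @ x[i - 1, j] + A2 @ x[i, j - 1]
                       + B1 @ u[i - 1, j] + B2 @ u[i, j - 1])
    return x

A1 = np.array([[0.3, 0.1], [0.0, 0.2]])   # illustrative, small spectral radius
A2 = np.array([[0.2, 0.0], [0.1, 0.3]])
B1 = np.array([[1.0], [0.0]])
B2 = np.array([[0.0], [1.0]])
u = np.zeros((6, 6, 1))
u[0, 1, 0] = 1.0                          # impulse near the grid corner
x = fm_trajectory(A1, A2, B1, B2, u)
```

Static feedback u(i,j) = F x(i,j) reshapes the pair (A1 + B1 F, A2 + B2 F), which is where the LMI-based stabilisation in the paper enters.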

9.
We consider the error performance of an optical code-division multiple access network in which two-dimensional codes are generated in time and wavelength. We show from first principles that the optimum single-user detection scheme is the AND detector. By replacing the widely considered SUM detector with the AND detector, the channel capacity can be at least doubled for a given data rate, number of active users, and bit error rate. We have also shown that the error performance of a random code gives a tight upper bound on the performance of a deterministic code with the same weight and dimension.

10.
Image compression using the 2-D wavelet transform
The 2-D orthogonal wavelet transform decomposes images into both spatially and spectrally local coefficients. The transformed coefficients were coded hierarchically and individually quantized in accordance with the locally estimated noise sensitivity of the human visual system (HVS). The algorithm can be mapped easily onto VLSI. For the Miss America and Lena monochrome images, the technique gave high to acceptable quality reconstruction at bit rates of 0.3-0.2 and 0.64-0.43 bits per pixel (bpp), respectively.
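The decomposition step can be sketched with a one-level separable Haar transform, a stand-in for the paper's orthogonal wavelet; only perfect reconstruction is demonstrated here (a compressor would additionally round the detail subbands with HVS-weighted step sizes):

```python
import numpy as np

def haar2d(img):
    """One level of a separable orthonormal 2-D Haar analysis.
    Returns the LL (coarse), LH, HL, HH (detail) subbands of an even-sized image."""
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)   # row-wise lowpass
    d = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)   # row-wise highpass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    d[:, 0::2], d[:, 1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    img = np.empty((2 * a.shape[0], a.shape[1]))
    img[0::2, :], img[1::2, :] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return img

rng = np.random.default_rng(2)
img = rng.random((8, 8))
rec = ihaar2d(*haar2d(img))                          # round-trip reconstruction
```

Recursing `haar2d` on the LL subband produces the hierarchical (pyramid) coefficient layout the abstract refers to.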

11.
This paper reconsiders the discrete cosine transform (DCT) algorithm of Narasimha and Peterson (1978) in order to reduce the computational cost of evaluating an N-point inverse discrete cosine transform (IDCT) through an N-point FFT. A new relationship between the IDCT and the discrete Fourier transform (DFT) is established. It allows the evaluation of two simultaneous N-point IDCTs by computing a single FFT of the same dimension. This IDCT implementation technique reduces the number of operations by half.

12.
A new approach is proposed for registering a set of histological coronal two-dimensional images of rat brain sectional material with coronal sections of a three-dimensional brain atlas, an intrinsic step and a significant challenge to current efforts in brain mapping and multimodal fusion of experimental data. The alignment problem is based on matching external contours of the brain sections, and operates in the presence of the tissue distortion and tears which are routinely encountered, as well as possible scale, rotation, and shear changes (the affine and weak perspective groups). It is based on a novel set of local absolute affine invariants derived from the set of ordered inflection points on the external contour, represented by a cubic B-spline curve. The inflection points are local intrinsic geometric features, which are preserved under both the affine and the weak perspective transformations. The invariants are constructed from the sequence of area patches bounded by the contour and the line connecting two consecutive inflection points, and hence make direct use of the area (volume) invariance property associated with the affine transformation. These local absolute invariants are very well suited to handle the tissue distortion and tears (occlusion problem).
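The underlying invariance is elementary: an affine map x → Ax + b scales every area by |det A|, so ratios of areas cut off by contour chords are absolute affine invariants. A small numeric check, with polygonal areas standing in for the B-spline patches and arbitrary example shapes:

```python
import numpy as np

def polygon_area(pts):
    """Shoelace area of a closed polygon given as an (n, 2) vertex array."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Two "area patches" (triangles here, purely illustrative)
patch1 = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
patch2 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 3.0]])

# An arbitrary non-degenerate affine transformation
A = np.array([[1.3, 0.4], [-0.2, 0.9]])
b = np.array([5.0, -2.0])
t1 = patch1 @ A.T + b
t2 = patch2 @ A.T + b

ratio_before = polygon_area(patch1) / polygon_area(patch2)
ratio_after = polygon_area(t1) / polygon_area(t2)   # |det A| cancels
```

Because the chords join inflection points, which are themselves affine-invariant features, the whole descriptor sequence survives the distortions the registration must tolerate.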

13.
The aim of this paper is to present a hybrid approach to accurate quantification of vascular structures from magnetic resonance angiography (MRA) images using level set methods and deformable geometric models constructed with 3-D Delaunay triangulation. Multiple scale filtering based on the analysis of local intensity structure using the Hessian matrix is used to effectively enhance vessel structures with various diameters. The level set method is then applied to automatically segment vessels enhanced by the filtering with a speed function derived from enhanced MRA images. Since the goal of this paper is to obtain highly accurate vessel borders, suitable for use in fluid flow simulations, in a subsequent step, the vessel surface determined by the level set method is triangulated using 3-D Delaunay triangulation and the resulting surface is used as a parametric deformable model. Energy minimization is then performed within a variational setting with a first-order internal energy; the external energy is derived from 3-D image gradients. Using the proposed method, vessels are accurately segmented from MRA data.

14.
We consider the problem of estimating the harmonics of a noisy 2-D signal. The observed data is modeled as a 2-D sinusoidal signal, with either random or deterministic phases, plus additive Gaussian noise of unknown covariance. Our method utilizes recently defined higher-order statistics, referred to as mixed-cumulants, which permit a formulation that is applicable to both the random and deterministic case. In particular, we first estimate the frequencies in each dimension using an overdetermined Yule-Walker type approach. Then, the 1-D frequencies are paired using a matching criterion. To support our theory, we examine the performance of the proposed method via simulations.
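For intuition about the estimation problem itself, here is the simplest baseline: locate the peak of the 2-D periodogram of a noisy 2-D sinusoid. This is not the mixed-cumulant Yule-Walker method of the paper (which handles unknown noise covariance); all signal parameters below are synthetic:

```python
import numpy as np

# Estimate the two frequencies of a noisy 2-D sinusoid from the 2-D FFT peak.
rng = np.random.default_rng(3)
N = 64
f1, f2 = 10 / N, 22 / N                    # true frequencies, cycles/sample
n1, n2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
y = np.cos(2 * np.pi * (f1 * n1 + f2 * n2)) + 0.5 * rng.standard_normal((N, N))

P = np.abs(np.fft.fft2(y)) ** 2            # 2-D periodogram
P[0, 0] = 0.0                              # ignore any DC component
k1, k2 = np.unravel_index(np.argmax(P), P.shape)
k1, k2 = min(k1, N - k1), min(k2, N - k2)  # fold the conjugate-symmetric peak
f1_hat, f2_hat = k1 / N, k2 / N
```

A single peak couples the two dimensions automatically; the paper's approach instead estimates 1-D frequencies separately and then pairs them, which scales better with multiple harmonics.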

15.
罗罡, 杨冠玲, 母国光. 《激光杂志》 (Laser Journal), 1999, 20(3): 14-15, 17
This paper describes how mature two-dimensional PIV techniques can be used to measure the velocity field of a three-dimensional flow.
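The core 2-D PIV operation is cross-correlating two interrogation windows: the correlation peak gives the particle displacement between exposures. A minimal FFT-based sketch with a synthetic "particle image" and a known integer shift (a circular shift stands in for real flow):

```python
import numpy as np

rng = np.random.default_rng(4)
win = 32
frame1 = rng.random((win, win))                 # interrogation window, exposure 1
dy, dx = 3, 5                                   # true displacement in pixels
frame2 = np.roll(frame1, (dy, dx), axis=(0, 1)) # exposure 2: shifted pattern

# Circular cross-correlation via FFTs; its argmax is the displacement.
corr = np.fft.ifft2(np.fft.fft2(frame1).conj() * np.fft.fft2(frame2)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
```

Real PIV adds sub-pixel peak fitting and tiles the image into many windows to build the full velocity field; stereo or multi-camera arrangements then recover the third velocity component.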

16.
An elliptic approximation-based design approach is proposed for obtaining 2-D recursive fan filters. The 1-D elliptic filter is reduced to a cascade-parallel combination of all-pass sections and is then used as a prototype for fan filter synthesis, resulting in a final realization of 2-D transfer functions using allpass filters. It is shown that the synthesis procedure not only gives a filter that has far fewer coefficients but also enjoys very low computational complexity.

17.
The class of geometric deformable models, also known as level sets, has brought tremendous impact to medical imagery due to its capability of topology preservation and fast shape recovery. In an effort to facilitate a clear and full understanding of these powerful state-of-the-art applied mathematical tools, the paper is an attempt to explore these geometric methods, their implementations, and the integration of regularizers to improve the robustness of these topologically independent propagating curves/surfaces. The paper first presents the origination of level sets, followed by the taxonomy of level sets. We then derive the fundamental equation of curve/surface evolution and zero-level curves/surfaces. The paper then focuses on the first core class of level sets, known as "level sets without regularizers." This class presents five prototypes: gradient, edge, area-minimization, curvature-dependent and application driven. The next section is devoted to the second core class of level sets, known as "level sets with regularizers." In this class, we present four kinds: clustering-based, Bayesian bidirectional classifier-based, shape-based and coupled constraint-based. An entire section is dedicated to optimization and quantification techniques for shape recovery when used in the level set framework. Finally, the paper concludes with 22 general merits and four demerits on level sets and the future of level sets in medical image segmentation. We present applications of level sets to complex shapes like the human cortex acquired via MRI for neurological image analysis.
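The fundamental evolution equation the survey derives is φ_t + F|∇φ| = 0: the zero level set of φ moves with normal speed F. A minimal sketch without regularizers: a circle's signed-distance function evolved with unit outward speed (grid size, time step, and radius are arbitrary choices; reinitialization and upwind differencing are omitted):

```python
import numpy as np

# Evolve phi_t = -|grad phi| (unit outward speed) with explicit Euler steps.
n, h = 101, 0.02
x = (np.arange(n) - n // 2) * h
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.3           # zero level set: circle, radius 0.3

dt, steps = 0.002, 50                      # total time T = 0.1
for _ in range(steps):
    gx, gy = np.gradient(phi, h)           # central-difference gradient
    phi = phi - dt * np.sqrt(gx**2 + gy**2)

# After T = 0.1 the front sits near radius 0.3 + 0.1 = 0.4; the implicit
# representation would let it merge or split without any special handling.
```

Regularized variants modify F, e.g. adding a curvature term F = 1 - εκ, which is what distinguishes the survey's second core class.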

18.
Kwan Hon Keung. Electronics Letters, 1984, 20(24): 994-995
The outline of an approach for image data compression using a 2-D lattice predictor is presented. Preliminary results indicate that acceptable quality images (quantised to 15 levels) at information rates, bit rates and signal/noise ratios ranging, respectively, from 1.16 to 1.38 bpp, 1.19 to 1.40 bpp and 20.6 to 22.5 dB have been obtained for lattice stages 1 to 5.

19.
High-resolution radar imaging using 2-D linear prediction
An algorithm for radar imaging is described. The algorithm is based on two-dimensional (2-D) linear prediction of 2-D Cartesian frequency spectra. It is shown that the algorithm provides much better resolution than the ISAR image obtained using a 2-D inverse Fourier transform. The algorithm is especially useful for imaging targets using small-bandwidth RCS data over limited aspect angle regions.
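The resolution gain of linear prediction over the Fourier transform comes from fitting an autoregressive model to the measured spectrum instead of windowing it. A 1-D Yule-Walker sketch of that modeling step (the 1-D analogue of the paper's 2-D prediction; the model order and test signal are arbitrary):

```python
import numpy as np

def yule_walker(y, order):
    """AR coefficients from the autocorrelation normal equations:
    y[n] ~ a[0]*y[n-1] + a[1]*y[n-2] + ... + a[order-1]*y[n-order]."""
    N = len(y)
    r = np.array([np.dot(y[: N - k], y[k:]) / N for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])

n = np.arange(2000)
y = np.cos(2 * np.pi * 0.12 * n)          # noiseless sinusoid: exactly AR(2)
a = yule_walker(y, 2)
pred = a[0] * y[1:-1] + a[1] * y[:-2]     # one-step prediction of y[2:]
```

Because a sinusoid satisfies an exact order-2 recursion, prediction extrapolates it beyond the measured aperture, which is precisely what sharpens ISAR images formed from small-bandwidth, limited-angle data.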

20.
An advance in the simulation of a single event upset (SEU) of a static memory is achieved by combining transport and circuit effects in a single calculation. The program SIFCOD [4] is applied to the four transistors of a CMOS SRAM cell to determine its transient circuit response following a very high energy ion hit. Results unique to this type of calculation include determination of relative upset sensitivities and different upset mechanisms for specific area hits, i.e., the OFF p-channel drain, the OFF or ON n-channel drain, etc. The calculation determines the transport variables as a function of time in two-space dimensions for each of the four transistors and provides the nodal voltage and current responses for assessing memory upset conditions.
