Similar Documents
18 similar documents found
1.
A simultaneous iteration method for computing eigenvectors and their derivatives   Total citations: 2 (self-citations: 1, citations by others: 1)
Previous sensitivity-analysis methods compute the eigenpairs first and only then their derivatives. This paper modifies the matrix iteration method and the subspace iteration method for eigenpair computation so that the eigenvector derivatives are computed at the same time as the eigenpairs. With the matrix iteration method, the eigenvector derivatives are obtained directly by iteration, avoiding the solution of the singular sensitivity equations. With the subspace iteration method, the original large eigenvalue equations and sensitivity equations are reduced to much smaller ones. Numerical examples show that both algorithms are accurate, and that the subspace iteration method is especially efficient when computing the derivatives of several eigenvectors with respect to a single design variable.
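The core idea, refining an eigenvector derivative inside the same loop that refines the eigenvector, can be illustrated with a minimal sketch. This is not the paper's algorithm: it forward-differentiates a plain power iteration, assuming a symmetric matrix A(p) with a simple, positive dominant eigenvalue, with dA standing for dA/dp.

```python
import numpy as np

def power_iteration_with_derivative(A, dA, iters=500, tol=1e-12):
    """Sketch only: forward-differentiated power iteration that converges
    jointly to the dominant eigenpair (lam, v) and its derivative
    (dlam, dv) with respect to a design variable p, where dA = dA/dp.
    Assumes A symmetric with a simple, positive dominant eigenvalue."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)        # eigenvector estimate
    dv = np.zeros(n)                   # its derivative estimate
    for _ in range(iters):
        w = A @ v                      # un-normalized update
        dw = dA @ v + A @ dv           # product rule on A(p) v(p)
        nrm = np.linalg.norm(w)
        dnrm = (w @ dw) / nrm          # derivative of ||w||
        v_new = w / nrm
        dv = dw / nrm - w * dnrm / nrm**2   # quotient rule
        if np.linalg.norm(v_new - v) < tol:
            v = v_new
            break
        v = v_new
    lam = v @ (A @ v)                  # Rayleigh quotient
    dlam = v @ (dA @ v)                # eigenvalue sensitivity (unit v)
    return lam, v, dlam, dv
```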

2.
张立峰  王化祥 《计量学报》2016,37(3):271-274
A Landweber iterative image-reconstruction algorithm with sensitivity-matrix updating is studied, with the aim of improving reconstruction accuracy. The initial image for the sensitivity-matrix update is obtained by Landweber iteration; update intervals for different iteration counts are compared, and the number of sensitivity-matrix updates is analyzed. Simulation and experimental results show that the method effectively improves image-reconstruction accuracy.
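A minimal sketch of the scheme described above, not the authors' code: a plain Landweber loop whose sensitivity matrix S is refreshed every few steps from the current image. Here update_S is a hypothetical callback that relinearizes the forward model around the current estimate.

```python
import numpy as np

def landweber_with_updates(S, lam, alpha=1.0, n_iter=200,
                           update_every=20, update_S=None):
    """Sketch: Landweber iteration g <- g + alpha * S^T (lam - S g),
    with the sensitivity matrix S periodically refreshed. lam is the
    measured capacitance vector; update_S is hypothetical."""
    g = S.T @ lam                              # back-projection start image
    for k in range(n_iter):
        g = g + alpha * (S.T @ (lam - S @ g))  # Landweber step
        g = np.clip(g, 0.0, 1.0)               # keep gray levels physical
        if update_S is not None and (k + 1) % update_every == 0:
            S = update_S(g)                    # refresh sensitivity matrix
    return g
```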

3.
To address the heavy computation and long run time of the traditional Frangi tubular-structure filter, this paper applies a fast-Hessian-based filtering algorithm to coronary-artery enhancement in MSCT images. By analyzing conditions on the coefficients of the Hessian's characteristic polynomial, elements that cannot belong to vessels are identified in advance, and the Hessian eigenvalue computation is skipped for them, reducing the computational load and the filtering time. Experiments show that, while still measuring tubular-structure similarity effectively, the method reduces the average filtering time to 30.63% of that of the traditional Frangi algorithm.
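The pre-screening idea can be sketched in 2-D (the paper filters 3-D MSCT data): the Hessian trace is one of the characteristic-polynomial coefficients and is cheap to evaluate, and a bright tubular pixel requires a negative trace, so the eigenvalue computation below runs only where that necessary condition holds. The scale and normalization choices are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frangi_prescreen_2d(img, sigma=2.0):
    """Sketch: skip the Hessian eigen-decomposition wherever the cheap
    trace test already rules out a bright tubular structure."""
    img = np.asarray(img, dtype=float)
    # scale-normalized second derivatives (gamma = 2)
    Hxx = gaussian_filter(img, sigma, order=(0, 2)) * sigma**2
    Hyy = gaussian_filter(img, sigma, order=(2, 0)) * sigma**2
    Hxy = gaussian_filter(img, sigma, order=(1, 1)) * sigma**2
    trace = Hxx + Hyy              # = lam1 + lam2, a polynomial coefficient
    candidate = trace < 0          # necessary condition for bright tubes
    lam_small = np.zeros_like(img)
    m = candidate
    half_diff = np.sqrt(((Hxx[m] - Hyy[m]) * 0.5) ** 2 + Hxy[m] ** 2)
    lam_small[m] = trace[m] * 0.5 - half_diff  # the more negative eigenvalue
    return lam_small, candidate
```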

4.
In 1986 our hospital introduced a GE CT9800 scanner from the United States. After nearly ten years of use, we concluded that although it is a large CT system with clear strengths on the computing side, such as high image quality and easy operation, it originally shipped with only one image-processing system and reconstructed images slowly. Even after a second processing system was added, reconstruction times and patient waiting times remained too long for routine work. Drawing on years of accumulated experience and seeking to exploit the machine's strengths, we boldly improved the target-scan procedure for special examinations: the original protocol of one scan followed by two special reconstruction passes was replaced by a single direct target scan with a single special reconstruction, dropping the magnified-reconstruction step. This avoided the missed lesions common with magnified target scanning, preserved diagnostic image quality, shortened reconstruction time, and improved throughput.

5.
张立峰  李佳  田沛 《计量学报》2017,38(3):315-318
A Kalman filtering algorithm is applied to electrical capacitance tomography image reconstruction, improving image quality by continuously incorporating new measurement information. At filter start-up, initial values must be chosen for the reconstructed gray levels and for the estimation-error covariance matrix. To study how different initial-value combinations affect image quality, simulations were run for three flow regimes and the best combination was identified; static tests were then performed on an electrical capacitance tomography rig. The results show that, compared with the Landweber algorithm, the Kalman filtering algorithm with the best initial-value combination yields reconstructions closer to the true distribution.
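The filter loop itself is the textbook one; a minimal sketch under assumed notation is below, with the initial image gray level g0 and the error-covariance scale P0 as the free initial values whose combination the paper studies.

```python
import numpy as np

def kalman_ect(S, frames, g0=0.5, P0=1.0, r_var=1e-4):
    """Sketch: state = pixel gray levels g with a static model
    g_k = g_{k-1}; measurement model lam_k = S g_k + noise.
    g0, P0, and r_var are assumed scalar settings."""
    m, n = S.shape
    g = np.full(n, g0)                 # initial reconstructed image
    P = np.eye(n) * P0                 # initial error covariance
    R = np.eye(m) * r_var              # measurement noise covariance
    for lam in frames:                 # each new capacitance frame
        K = P @ S.T @ np.linalg.inv(S @ P @ S.T + R)  # Kalman gain
        g = g + K @ (lam - S @ g)                     # measurement update
        P = (np.eye(n) - K @ S) @ P                   # covariance update
    return g
```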

6.
魏彦 《包装工程》2017,38(13):204-207
Objective: To improve image quality in a laser 3D imaging system and effectively remove image noise, an adaptive mean-shift image-filtering algorithm is proposed. Methods: The conventional mean-shift filter is modified by taking the mean squared deviation of the pixels in the neighborhood as the control quantity that adaptively tunes the bandwidth matrix h; based on the size of h, suitable pixel values are selected to take part in the mean computation, improving its accuracy. Results: Experiments show that the improved algorithm effectively removes image noise and increases image clarity. Conclusion: The algorithm has good edge-preserving denoising properties.
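A minimal sketch of the adaptive idea, not the paper's exact formulation: the range bandwidth h at each pixel is set from the local standard deviation, so flat regions are averaged strongly while high-variance edge regions keep a tight kernel. win, k, and iters are assumed parameters.

```python
import numpy as np

def adaptive_meanshift_filter(img, win=2, k=1.5, iters=3):
    """Sketch: mean-shift-style filtering with a per-pixel bandwidth h
    driven by the neighborhood standard deviation."""
    out = np.asarray(img, dtype=float).copy()
    H, W = out.shape
    for _ in range(iters):
        pad = np.pad(out, win, mode='reflect')
        new = np.empty_like(out)
        for i in range(H):
            for j in range(W):
                patch = pad[i:i + 2 * win + 1, j:j + 2 * win + 1]
                h = k * patch.std() + 1e-6                   # adaptive bandwidth
                w = np.exp(-((patch - out[i, j]) / h) ** 2)  # range kernel
                new[i, j] = (w * patch).sum() / w.sum()      # shifted mean
        out = new
    return out
```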

7.
A weak form is given for the simultaneous solution of the equilibrium equations, the yield condition, and the plastic flow equations, together with two displacement-increment iterative algorithms: the first solves simultaneously for the strain increment and the plastic multiplier as independent basic unknowns; the second takes the strain increment alone as the independent basic unknown. To avoid the singularity of the elastoplastic stiffness matrix and to improve convergence, an implicitly damped iteration is proposed for both algorithms; the corresponding script files and computation flow were written, and a finite-element solver for elastoplastic problems was generated with the Finite Element Program Generator (FEPG). Finally, numerical examples verify the reliability and effectiveness of the proposed iterations and code.
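The damping device can be sketched separately from the FEPG-generated code: adding mu*I to the tangent matrix keeps each increment solve well conditioned even when the elastoplastic stiffness is nearly singular. residual, jacobian, and mu below are hypothetical placeholders, not the authors' interfaces.

```python
import numpy as np

def damped_newton(residual, jacobian, x0, mu=1e-2, tol=1e-8, max_iter=50):
    """Sketch of an implicitly damped Newton iteration: solve
    (J + mu I) dx = -r instead of J dx = -r."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:   # converged
            break
        J = jacobian(x)
        dx = np.linalg.solve(J + mu * np.eye(len(x)), -r)
        x = x + dx
    return x
```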

8.
贾硕  李钢  李宏男 《工程力学》2019,36(8):16-29,58
During local nonlinear analysis of a structure, only some entries of the stiffness matrix change, so the tangent stiffness matrix can be written as the initial stiffness matrix plus a low-rank correction, and the displacement response of each increment step can be computed efficiently with the Woodbury formula for fast matrix inversion. Iteration is nonetheless unavoidable in structural nonlinear analysis, so the iterative scheme also has an important influence on efficiency. Building on the Woodbury-based nonlinear method, this paper solves its nonlinear equilibrium equations with five schemes, the Newton-Raphson (N-R) method, the modified Newton method, a third-order two-point method, a fourth-order two-point method, and a three-point method, and compares their computational performance. Using algorithm time-complexity theory, time-complexity models of the five schemes are derived and their efficiency is compared quantitatively. Two numerical examples compare the schemes in terms of convergence rate, time complexity, and error, and identify the kinds of nonlinear problems each scheme suits. Finally, an overall performance index of the five schemes is computed.
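The formula at the heart of the method is generic: with the initial stiffness K0 factorized once, solving the rank-r modified system (K0 + U C V) d = f costs only triangular solves plus one small r x r solve per step. A minimal sketch of the identity, not the paper's implementation:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def woodbury_solve(K0_lu, U, C, V, f):
    """Sketch: (K0 + U C V)^-1 f via the Woodbury identity, reusing a
    one-time LU factorization K0_lu = lu_factor(K0)."""
    y = lu_solve(K0_lu, f)             # K0^-1 f
    Z = lu_solve(K0_lu, U)             # K0^-1 U  (n x r)
    core = np.linalg.inv(C) + V @ Z    # small r x r matrix
    return y - Z @ np.linalg.solve(core, V @ y)

# usage sketch: factorize once, reuse every increment step
# K0_lu = lu_factor(K0)
# d = woodbury_solve(K0_lu, U, C, V, load_increment)
```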

9.
姚军财  申静 《包装工程》2015,36(21):95-101
Objective: To improve the visual effect and quality of decompressed reconstructed images. Methods: Combining human visual characteristics with the spectral-coefficient features of the image transform domain, a compensation matrix is constructed that acts on the dequantized coefficients during decompression; it is validated on three color images using the JPEG compression algorithm. Results: At six compression ratios, adding the compensation matrix during decompression increased the quality metrics SSIM and PSNR by an average of 2.5275% and 11.8977%, respectively, relative to standard JPEG. Conclusion: The compensation matrix effectively improves decompressed-image quality and largely offsets the quality loss caused by quantization during compression.
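Where the compensation acts can be shown in one line, assuming standard JPEG notation (an 8x8 block of quantized DCT coefficients coeff_q and quantization table Q) and a hypothetical compensation matrix C built from visual-sensitivity weights:

```python
import numpy as np

def dequantize_with_compensation(coeff_q, Q, C):
    """Sketch: plain JPEG dequantization is coeff_q * Q; the compensation
    matrix C additionally rescales each frequency to offset quantization
    loss (C is hypothetical here; all operations are elementwise)."""
    return np.asarray(coeff_q) * Q * C
```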

10.
马平  司志宁 《计量学报》2018,39(4):536-540
In multiphase-flow parameter measurement based on industrial CT and ECT, the CT reconstruction has high resolution, but its small number of projection angles distorts image edges; the ECT reconstruction, because of the soft-field nature of its sensitivity field, has low resolution but high edge fidelity. Exploiting this complementarity, a CT/ECT image-fusion method is studied in which an improved logical-filter algorithm is applied as the fusion rule for the low-frequency wavelet-transform coefficients. Results show that the method improves the quality of the fused reconstruction.
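A generic wavelet-fusion skeleton is sketched below; the paper's improved logical-filter rule for the low-frequency band is replaced by a simple average, and detail bands take the larger-magnitude coefficient. Registered, same-size CT and ECT images are assumed.

```python
import numpy as np
import pywt

def fuse_ct_ect(ct_img, ect_img, wavelet='db2', level=2):
    """Sketch: fuse the two decompositions band by band, then reconstruct."""
    ca = pywt.wavedec2(ct_img, wavelet, level=level)
    ce = pywt.wavedec2(ect_img, wavelet, level=level)
    fused = [(ca[0] + ce[0]) / 2.0]     # low-frequency rule (stand-in)
    for da, de in zip(ca[1:], ce[1:]):  # (horizontal, vertical, diagonal)
        fused.append(tuple(np.where(np.abs(a) >= np.abs(e), a, e)
                           for a, e in zip(da, de)))
    return pywt.waverec2(fused, wavelet)
```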

11.
The development of fast and accurate image reconstruction algorithms is a central aspect of computed tomography. In this paper, we investigate this issue for the sparse data problem in photoacoustic tomography (PAT). We develop a direct and highly efficient reconstruction algorithm based on deep learning. In our approach, image reconstruction is performed with a deep convolutional neural network (CNN), whose weights are adjusted prior to the actual image reconstruction based on a set of training data. The proposed reconstruction approach can be interpreted as a network that uses the PAT filtered backprojection algorithm for the first layer, followed by the U-net architecture for the remaining layers. Actual image reconstruction with deep learning consists of a single evaluation of the trained CNN, which does not require time-consuming solution of the forward and adjoint problems. At the same time, our numerical results demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative approaches for PAT from sparse data.
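The two-stage structure described above can be sketched in a few lines; fbp_operator is an assumed callable mapping sinogram batches to image batches, and the small convolution stack is a placeholder for the paper's U-net.

```python
import torch
import torch.nn as nn

class FBPThenCNN(nn.Module):
    """Sketch: a fixed (non-trainable) PAT filtered backprojection as the
    first layer, followed by a trainable CNN that removes the sparse-data
    artifacts from the FBP image."""
    def __init__(self, fbp_operator, cnn=None):
        super().__init__()
        self.fbp = fbp_operator            # assumed: sinogram -> image
        self.cnn = cnn or nn.Sequential(   # placeholder for a U-net
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, sino):
        x = self.fbp(sino)                 # first "layer": FBP
        return x + self.cnn(x)             # residual CNN refinement
```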

12.
It is a significant challenge to accurately reconstruct medical computed tomography (CT) images with important details and features. Reconstructed images always suffer from noise and artifact pollution because the acquired projection data may be insufficient or undersampled. In reality, some “isolated noise points” (similar to impulse noise) always exist in low-dose CT projection measurements. Statistical iterative reconstruction (SIR) methods have shown greater potential than the conventional filtered back-projection (FBP) algorithm to significantly reduce quantum noise while maintaining reconstruction quality. Although the typical total variation-based SIR algorithms can obtain reconstructed images of relatively good quality, noticeable patchy artifacts are still unavoidable. To address impulse-noise pollution and patchy-artifact pollution, this work, for the first time, proposes a joint regularization constrained SIR algorithm for sparse-view CT image reconstruction, named “SIR-JR” for simplicity. The new joint regularization consists of two components: total generalized variation, which can process images with many directional features and yields high-order smoothness, and the neighborhood median prior, which is a powerful filtering tool for impulse noise. A new alternating iterative algorithm is then utilized to solve the objective function. Experiments on different head phantoms show that the obtained reconstruction images are of superior quality and that the presented method is feasible and effective.
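Schematically, the objective being minimized has the usual penalized weighted-least-squares shape; the exact functionals and weights below are placeholders for the paper's definitions, not a quotation of them:

$$\hat{x} \;=\; \arg\min_{x \ge 0}\; (Ax-b)^{\mathrm{T}} W (Ax-b) \;+\; \alpha\,\mathrm{TGV}(x) \;+\; \beta\,U_{\mathrm{med}}(x)$$

where A is the system matrix, b the measured projections, W the statistical weighting, TGV the total generalized variation term, and U_med the neighborhood median prior.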

13.
Positron emission tomography (PET) is becoming increasingly important in the fields of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation for image reconstruction in emission tomography place conditions on which types of images are accepted as solutions. The recently introduced median root prior (MRP) favors locally monotonic images. MRP can preserve sharp edges, but a steplike streaking effect and much noise are still observed in the reconstructed image, both of which are undesirable. An MRP tomography reconstruction combined with nonlinear anisotropic diffusion interfiltering is proposed for removing noise and preserving edges. Analysis shows that the proposed algorithm is capable of producing better reconstructed images compared with those reconstructed by conventional maximum-likelihood expectation maximization (MLEM), MAP, and MRP-based algorithms in PET image reconstruction.
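The median root prior has a compact one-step-late form; the sketch below is the standard MRP correction (the proposed diffusion interfiltering is omitted):

```python
import numpy as np
from scipy.ndimage import median_filter

def mrp_correction(x_em, x_prev, beta=0.3, size=3):
    """Sketch: one-step-late MRP, dividing the EM estimate by a penalty
    that pulls each pixel toward its local median, which favors locally
    monotonic images while leaving true edges intact."""
    med = np.maximum(median_filter(x_prev, size=size), 1e-12)
    penalty = 1.0 + beta * (x_prev - med) / med
    return x_em / np.maximum(penalty, 1e-12)
```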

14.
The iterative maximum-likelihood expectation-maximization (ML-EM) algorithm is an excellent algorithm for image reconstruction and usually provides better images than the filtered backprojection (FBP) algorithm. However, a windowed FBP algorithm can outperform ML-EM on certain occasions, when the least-squared difference from the true image, that is, the least-squared error (LSE), is used as the comparison criterion. Computer simulations were carried out for the two algorithms. For a given data set, the best reconstruction (compared to the true image) from each algorithm was first obtained, and the two reconstructions were compared. The stopping iteration number of the ML-EM algorithm and the parameters of the windowed FBP algorithm were chosen so that each produced an image as close as possible to the true image. However, to use the LSE criterion to compare algorithms, one must know the true image; how to select the optimal parameters when the true image is unknown is a practical open problem. For noisy Poisson projections, computer simulation results indicate that the ML-EM images are better than the regular FBP images, and the windowed FBP images are better than the ML-EM images. For noiseless projections, the FBP algorithms outperform the ML-EM algorithm. The computer simulations reveal that the windowed FBP algorithm can provide a reconstruction that is closer to the true image than the ML-EM algorithm. © 2012 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 22, 114–120, 2012
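For reference, the ML-EM iteration compared above is the classic multiplicative fixed point, with the stopping iteration number as its only tuning parameter; a minimal sketch:

```python
import numpy as np

def mlem(A, b, n_iter=50):
    """Sketch: ML-EM update x <- x * (A^T (b / (A x))) / (A^T 1) for a
    system matrix A and measured projections b (Poisson model)."""
    x = np.ones(A.shape[1])                    # flat positive start
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = b / np.maximum(A @ x, 1e-12)   # data / forward projection
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```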

15.
The augmented Lagrange multiplier (ALM) algorithm is an effective iterative method for matrix compressed recovery. To solve the Toeplitz-matrix compressed-recovery model efficiently, this paper proposes two median-corrected augmented Lagrange multiplier algorithms. In the new algorithms, the iterate produced at every ALM step is median-corrected in a way that preserves its Toeplitz structure. The new algorithms not only reduce the time spent on singular value decomposition and the CPU time, but also produce more accurate iterates. Detailed convergence analyses of both new algorithms are given. Finally, numerical examples verify their feasibility and effectiveness and show that they outperform the ALM algorithm in both computation time and accuracy.
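The structure-restoring step is the distinctive ingredient; below is a sketch of a median correction that projects an iterate back onto the Toeplitz set by replacing each diagonal with its median (the mean along diagonals is the more common choice):

```python
import numpy as np

def toeplitz_median_correction(X):
    """Sketch: replace every diagonal of X by its median so the result
    is exactly Toeplitz."""
    n, m = X.shape
    T = np.empty_like(X, dtype=float)
    for k in range(-n + 1, m):                     # each diagonal offset
        v = np.median(np.diagonal(X, offset=k))
        i = np.arange(max(0, -k), min(n, m - k))   # rows on this diagonal
        T[i, i + k] = v
    return T
```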

16.
Yang S  Shimomura T 《Applied optics》1996,35(35):6983-6989
An interlacing technique is proposed for the synthesis of kinoforms. The conventional iterative methods are quite powerful for optimizing kinoforms, but a large reconstruction error remains for a quantized kinoform. We suggest interlacing a number of subkinoforms together to synthesize a multikinoform for reconstructing the desired image. The idea of our interlacing technique is to increase the size of a kinoform to reduce the reconstruction error. The first subkinoform is generated from the desired image; subsequent subkinoforms are generated from the error images between the desired image and the image reconstructed from the previous subkinoforms. A theoretical analysis shows that the reconstruction error decreases as the number of subkinoforms increases. Simulation results show that our interlacing method reduces the reconstruction error more than the conventional iterative methods do and that the reconstructed image is improved.
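The interlacing loop is easy to state; the sketch below makes simplifying assumptions (a single-FFT propagation model, phase quantized to `levels` steps, amplitude handling omitted) and is not the authors' code:

```python
import numpy as np

def interlaced_kinoforms(target, n_sub=4, levels=16):
    """Sketch: subkinoform 1 encodes the desired image; each later one
    encodes the residual between the target and the image already
    reconstructed by the previous subkinoforms."""
    subs = []
    recon = np.zeros_like(target, dtype=complex)
    residual = target.astype(complex)
    for _ in range(n_sub):
        field = np.fft.ifft2(residual)               # back-propagate residual
        step = 2 * np.pi / levels
        q = np.round(np.angle(field) / step) * step  # quantized phase-only element
        subs.append(q)
        recon += np.fft.fft2(np.exp(1j * q))         # add its reconstruction
        residual = target - recon                    # next error image
    return subs, recon
```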

17.
We have developed an efficient fully three-dimensional (3D) reconstruction algorithm for diffuse optical tomography (DOT). The 3D DOT, a severely ill-posed problem, is tackled through a pseudodynamic (PD) approach wherein an ordinary differential equation representing the evolution of the solution on pseudotime is integrated that bypasses an explicit inversion of the associated, ill-conditioned system matrix. One of the most computationally expensive parts of the iterative DOT algorithm, the reevaluation of the Jacobian in each of the iterations, is avoided by using the adjoint-Broyden update formula to provide low rank updates to the Jacobian. In addition, wherever feasible, we have also made the algorithm efficient by integrating along the quadratic path provided by the perturbation equation containing the Hessian. These algorithms are then proven by reconstruction, using simulated and experimental data and verifying the PD results with those from the popular Gauss-Newton scheme. The major findings of this work are as follows: (i) the PD reconstructions are comparatively artifact free, providing superior absorption coefficient maps in terms of quantitative accuracy and contrast recovery; (ii) the scaling of computation time with the dimension of the measurement set is much less steep with the Jacobian update formula in place than without it; and (iii) an increase in the data dimension, even though it renders the reconstruction problem less ill conditioned and thus provides relatively artifact-free reconstructions, does not necessarily provide better contrast property recovery. For the latter, one should also take care to uniformly distribute the measurement points, avoiding regions close to the source so that the relative strength of the derivatives for measurements away from the source does not become insignificant.
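The rank-one Jacobian update that replaces re-evaluation has a compact secant form; a sketch of a good Broyden update is below (the paper uses an adjoint-Broyden variant, which differs in detail):

```python
import numpy as np

def broyden_update(J, dx, dr):
    """Sketch: rank-one correction enforcing the secant condition
    J_new @ dx = dr, so the Jacobian tracks the latest step without
    being recomputed from the forward model."""
    return J + np.outer(dr - J @ dx, dx) / (dx @ dx)
```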

18.
Positron emission tomography (PET) and single-photon emission computed tomography have revolutionized the field of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation eliminate noisy artifacts by utilizing available prior information in the reconstruction process but often result in a blurring effect. MAP-based algorithms fail to determine the density class in the reconstructed image and hence penalize the pixels irrespective of the density class. Reconstruction with better edge information is often difficult because prior knowledge is not taken into account. The recently introduced median-root-prior (MRP)-based algorithm preserves the edges, but a steplike streaking effect is observed in the reconstructed image, which is undesirable. A fuzzy approach is proposed for modeling the nature of interpixel interaction in order to build an artifact-free edge-preserving reconstruction. The proposed algorithm consists of two elementary steps: (1) edge detection, in which fuzzy-rule-based derivatives are used for the detection of edges in the nearest neighborhood window (which is equivalent to recognizing nearby density classes), and (2) fuzzy smoothing, in which penalization is performed only for those pixels for which no edge is detected in the nearest neighborhood. Both of these operations are carried out iteratively until the image converges. Analysis shows that the proposed fuzzy-rule-based reconstruction algorithm is capable of producing qualitatively better reconstructed images than those reconstructed by MAP and MRP algorithms. The reconstructed images are sharper, with small features being better resolved owing to the nature of the fuzzy potential function.
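The two elementary steps can be sketched with a crisp stand-in for the fuzzy rules: a pixel counts as an edge when any nearest-neighbor difference is large, and smoothing is applied only where no edge is detected. edge_thresh and beta are assumed parameters.

```python
import numpy as np

def edge_gated_smoothing(x, edge_thresh=0.2, beta=0.3):
    """Sketch: detect edges from nearest-neighbor differences, then
    penalize (pull toward the neighborhood mean) only non-edge pixels."""
    pad = np.pad(x, 1, mode='edge')
    nbrs = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                     pad[1:-1, :-2], pad[1:-1, 2:]])   # 4-neighborhood
    edges = np.abs(nbrs - x).max(axis=0) > edge_thresh
    mean = nbrs.mean(axis=0)
    return np.where(edges, x, (1 - beta) * x + beta * mean)
```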
