Cross-Modality PET Synthesis Method Based on Residual and Adversarial Networks
Citation: XIAO Chenchen, CHEN Legeng, WANG Shuqiang. Cross-Modality PET Synthesis Method Based on Residual and Adversarial Networks[J]. Computer Engineering and Applications, 2022, 58(1): 218-223. DOI: 10.3778/j.issn.1002-8331.2008-0325
Authors: XIAO Chenchen, CHEN Legeng, WANG Shuqiang
Affiliation: 1. School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China; 2. Research Center for Biomedical Information Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
Fund: National Natural Science Foundation of China (61872351); Shenzhen Key Basic Research Project (JCYJ20180507182506416)
Abstract: Existing cross-modality image synthesis methods fail to fully capture the spatial and structural information of human tissue, so the synthesized images suffer from blurred edges and a low signal-to-noise ratio. To address these problems, a cross-modality PET synthesis method combining residual modules with a generative adversarial network is proposed. The method introduces an improved residual inception module and an attention mechanism into the generator, strengthening the generator's feature-learning ability while reducing the number of parameters. A multi-scale discriminator is adopted to improve discrimination performance, and a multi-scale structural-similarity loss is added to the loss function to better preserve the contrast information of the image. Compared with mainstream methods on the ADNI dataset, the proposed model lowers the MAE of the synthesized PET images and raises their SSIM and PSNR. The experimental results show that the model preserves structural information well and improves the quality of the synthesized images both visually and in objective metrics.

Keywords: cross-modality image synthesis; generative adversarial network; residual inception module; multi-scale discriminator
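The multi-scale discriminator mentioned in the abstract scores the same image at several resolutions, so that coarse scales judge global structure and fine scales judge local texture. A minimal NumPy sketch of the idea is given below; the patch-mean "discriminator" is only a stand-in for a learned network, and all function names are illustrative, not taken from the paper:

```python
import numpy as np

def downsample(img, factor=2):
    """Halve resolution by average pooling (assumes divisible height/width)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def patch_scores(img, patch=8):
    """Toy patch 'discriminator': one realness score per patch (here, the patch mean)."""
    h, w = img.shape
    return img.reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))

def multiscale_discriminator(img, n_scales=3):
    """Average the patch scores of the same image evaluated at n_scales resolutions."""
    scores = []
    for _ in range(n_scales):
        scores.append(patch_scores(img).mean())
        img = downsample(img)
    return float(np.mean(scores))
```

In a real GAN each scale would have its own convolutional discriminator and the adversarial losses from all scales would be summed; the sketch only shows the multi-resolution evaluation pattern.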

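The multi-scale structural-similarity loss described in the abstract penalizes SSIM differences between the synthesized and real PET image at several resolutions. The following is a simplified NumPy sketch, assuming images normalized to [0, 1] and using a global (whole-image) SSIM per scale rather than the windowed SSIM a full implementation would use:

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global SSIM between two images in [0, 1] (single window over the whole image)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def downsample(img):
    """2x average pooling (assumes even height/width)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multiscale_ssim_loss(x, y, scales=3):
    """Loss = 1 - mean SSIM over several resolutions; 0 for identical images."""
    vals = []
    for _ in range(scales):
        vals.append(ssim(x, y))
        x, y = downsample(x), downsample(y)
    return 1.0 - float(np.mean(vals))
```

In training, a term like this would be weighted and added to the adversarial and pixel-wise losses of the generator; the weighting scheme is not specified in the abstract.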
This article is indexed in VIP, Wanfang Data, and other databases.