Research on Dual-adversarial MR Image Fusion Network Using Pre-trained Model for Feature Extraction
Citation: LIU Hui, LI Shan-Shan, GAO Shan-Shan, DENG Kai, XU Gang, ZHANG Cai-Ming. Research on dual-adversarial MR image fusion network using pre-trained model for feature extraction [J]. Journal of Software, 2023, 34(5): 2134-2151.
Authors: LIU Hui  LI Shan-Shan  GAO Shan-Shan  DENG Kai  XU Gang  ZHANG Cai-Ming
Affiliation: School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan 250014, China; Shandong Key Laboratory of Digital Media Technology, Jinan 250014, China; The First Affiliated Hospital of Shandong First Medical University, Jinan 250013, China; School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China; Shandong Key Laboratory of Digital Media Technology, Jinan 250014, China; School of Software, Shandong University, Jinan 250101, China
Fund Projects: National Natural Science Foundation of China (62072274, U1909210); Shandong Province Science and Technology Achievement Transfer and Transformation Project (2021LYXZ011); Key R&D Program of Zhejiang Province (2021C01108)

Abstract: With the popularization of multi-modal medical images in clinical diagnosis and treatment, fusion technology based on spatial-temporal correlation characteristics has developed rapidly. The fused medical images not only retain the unique features of the source images of each modality but also strengthen the complementary information, making it easier for doctors to read the images. At present, most methods perform feature extraction and feature fusion with manually defined constraints, which easily causes loss of useful information and unclear details in the fused image. In light of this, a dual-adversarial fusion network based on feature extraction with a pre-trained model is proposed to fuse MR-T1/MR-T2 images. The network consists of a feature extraction module, a feature fusion module, and two discriminator network modules. Because the registered multi-modal medical image dataset is too small to train the feature extraction network sufficiently, and because pre-trained models have powerful data representation ability, a pre-trained convolutional neural network model is embedded into the feature extraction module to generate feature maps. The feature fusion network then fuses the deep features and outputs the fused image. By accurately classifying source and fused images, the two discriminator networks each establish an adversarial relation with the feature fusion network, ultimately driving it to learn the optimal fusion parameters. Experimental results demonstrate the effectiveness of the pre-training technique in the proposed method; compared with six existing typical fusion methods, the results of the proposed method achieve the best performance in both visual effect and quantitative metrics.

Keywords: multi-modal medical image  image fusion  pre-trained model  dual-discriminator network  adversarial learning
Received: 2022-04-18
Revised: 2022-05-29
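The dual-adversarial setup described in the abstract (one fusion network trained against two discriminators, one per MR modality, each classifying source vs. fused images) can be sketched as a toy scalar loss computation. This is a minimal illustrative sketch, not the paper's implementation: the function names (`discriminator_loss`, `fusion_loss`), the probability inputs, and the content-term weight `lam` are all assumptions made here for illustration.

```python
import math

# Toy sketch of a dual-adversarial objective: a fusion network G is
# trained against two discriminators, D1 for MR-T1 and D2 for MR-T2.
# "Scores" are probabilities in (0, 1) that an image is a real source image.

def bce_real(p):
    # binary cross-entropy term for a sample labeled "real source image"
    return -math.log(p)

def bce_fake(p):
    # binary cross-entropy term for a sample labeled "fused (fake)"
    return -math.log(1.0 - p)

def discriminator_loss(p_source, p_fused):
    # Each discriminator learns to score its own source modality high
    # and the fused image low.
    return bce_real(p_source) + bce_fake(p_fused)

def fusion_loss(p_fused_d1, p_fused_d2, content_term, lam=1.0):
    # The fusion network tries to make BOTH discriminators accept the
    # fused image as a plausible source image, plus a content/detail term.
    adversarial = -math.log(p_fused_d1) - math.log(p_fused_d2)
    return adversarial + lam * content_term

# Toy numbers, just to show the shapes of the two objectives:
d_loss_t1 = discriminator_loss(p_source=0.9, p_fused=0.2)
g_loss = fusion_loss(p_fused_d1=0.5, p_fused_d2=0.5, content_term=0.1)
```

In this sketch the fusion network's adversarial term pulls in opposite directions from both discriminator losses, which is the adversarial relation the abstract describes: at equilibrium, neither discriminator can distinguish the fused image from its source modality.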