Funding: National Natural Science Foundation of China (62161004); Sino-French Cai Yuanpei Program (N.41400TC); Guizhou Provincial Science and Technology Program (ZK[2021] Key 002, [2018]5301)
Received: November 15, 2021
Revised: December 13, 2021

Quantitative Susceptibility Mapping and T1-weighted Image Registration Based on Residual Fusion Network
WANG Yi, TIAN Li-Li, CHENG Xin-Yu, WANG Li-Hui. Quantitative Susceptibility Mapping and T1-weighted Image Registration Based on Residual Fusion Network[J]. Computer Systems & Applications, 2022, 31(8): 46-54.
Authors:WANG Yi  TIAN Li-Li  CHENG Xin-Yu  WANG Li-Hui
Affiliation:Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
Abstract: Medical image registration plays a crucial role in medical image processing and analysis. Because quantitative susceptibility mapping (QSM) and T1-weighted images differ greatly in gray-scale and texture information, existing medical image registration algorithms struggle to register the two efficiently and accurately. Therefore, this study proposes RF-RegNet (residual fusion registration network), an unsupervised deep learning registration model based on residual fusion. RF-RegNet consists of three parts: an encoder-decoder, a resampler, and a context-similarity feature extractor. The encoder-decoder extracts features from the image pair to be registered and predicts the displacement vector field (DVF) between them; the resampler warps the moving QSM image according to the estimated DVF; and the context-similarity feature extractor extracts the context-similarity features of the reference T1-weighted image and of the resampled QSM image, respectively. The mean absolute error (MAE) between the two sets of features drives the learning of the convolutional neural network (ConvNet). Experimental results show that the proposed method significantly improves the registration accuracy between QSM and T1-weighted images and meets clinical registration demands.
Keywords:convolutional neural network (ConvNet)  medical image registration  quantitative susceptibility mapping (QSM)  residual fusion  image processing
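The core unsupervised step described in the abstract is: warp the moving QSM image with the predicted DVF, then minimize the MAE between features of the warped image and the reference T1-weighted image. The following is a minimal 2D NumPy sketch of that resampling-and-loss step only; the function names and the 2D simplification are illustrative assumptions, not the authors' implementation (which uses learned context-similarity features rather than raw intensities):

```python
import numpy as np

def warp(image, dvf):
    """Bilinearly resample `image` at positions displaced by `dvf`.

    image: (H, W) array; dvf: (2, H, W) per-pixel displacements (dy, dx).
    This plays the role of the resampler: warped(y, x) = image(y + dy, x + dx).
    """
    H, W = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Sampling grid = identity grid + predicted displacement, clipped to bounds.
    sy = np.clip(ys + dvf[0], 0, H - 1)
    sx = np.clip(xs + dvf[1], 0, W - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = sy - y0, sx - x0
    # Bilinear interpolation from the four neighboring pixels.
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def mae_loss(reference_feats, warped_feats):
    # MAE between the two feature maps is the signal that drives training.
    return np.mean(np.abs(reference_feats - warped_feats))

# Demo: a 1-pixel vertical shift is undone by a constant DVF of +1
# (interior rows only; the last row is affected by boundary clipping).
fixed = np.arange(36, dtype=float).reshape(6, 6)
moving = np.roll(fixed, 1, axis=0)
dvf = np.zeros((2, 6, 6))
dvf[0] = 1.0
warped = warp(moving, dvf)
print(mae_loss(fixed[:5], warped[:5]))  # 0.0
```

In the full model, the DVF is not given but predicted by the encoder-decoder, and the loss is computed on context-similarity features of both images rather than raw pixels, which is what makes the multimodal QSM-to-T1 comparison meaningful.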