
Two-stage deep transfer learning for human breast ultrasound computer-aided diagnosis
Citation: Gong Ronglin, Shi Jun, Zhou Weijun, Wang Cheng. Two-stage deep transfer learning for human breast ultrasound computer-aided diagnosis[J]. Journal of Image and Graphics, 2022, 27(3): 898-910.
Authors: Gong Ronglin  Shi Jun  Zhou Weijun  Wang Cheng
Affiliation: School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China; Department of Ultrasound, the First Affiliated Hospital of Anhui Medical University, Hefei 230032, China
Funding: National Natural Science Foundation of China (81830058, 81627804)
Abstract: Objective To improve the performance of computer-aided diagnosis (CAD) models for breast cancer based on single-modal B-mode ultrasound (BUS), a breast ultrasound CAD algorithm based on two-stage deep transfer learning (TSDTL) is proposed, which transfers the effective information in elastography ultrasound images into the BUS-based breast cancer CAD model to further improve its performance. Method In the first stage of deep transfer learning, dual-modal ultrasound image reconstruction is formulated as a self-supervised learning task to train a correlational multi-modal deep convolutional neural network, realizing interactive information transfer between BUS and elastography ultrasound images. In the second stage, under an implicit learning using privileged information (LUPI) paradigm, a breast tumor classification task is carried out on the dual-modal ultrasound images, and the label-guided classification further strengthens feature fusion and information interaction between the two modalities. Finally, the classification network of the corresponding channel is fine-tuned with single-modal BUS data to obtain the final BUS-based breast cancer classification model. Result The algorithm is evaluated on a dual-modal breast tumor ultrasound dataset. By transferring information from elastography ultrasound images, TSDTL achieves a mean classification accuracy of 87.84±2.08%, mean sensitivity of 88.89±3.70%, mean specificity of 86.71±2.21%, and mean Youden index of 75.60±4.07% on the BUS-based breast cancer diagnosis task, outperforming the classification model trained directly on single-modal BUS data as well as several typical transfer learning algorithms. Conclusion Through two-stage deep transfer learning, the proposed TSDTL algorithm effectively transfers the information of elastography ultrasound images into the BUS-based breast cancer CAD model, improves its diagnostic performance, and shows potential feasibility for application.
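To make the first stage concrete, the following PyTorch sketch shows one plausible form of the self-supervised dual-modal reconstruction described above: bi-channel encoders, a concatenated joint representation, bi-channel decoders, and a correlation term between the two high-level feature vectors. The module names, layer sizes, assumed 64×64 single-channel inputs, and the exact form of the correlation loss are illustrative assumptions rather than the paper's CorrMCNN implementation.

```python
# Hypothetical stage-1 sketch: self-supervised dual-modal reconstruction with a
# correlation loss between the high-level features of the BUS and EUS channels.
# Architecture details are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_encoder():
    # toy encoder: one 64x64 single-channel ultrasound image -> 128-d feature
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        nn.Linear(32 * 4 * 4, 128),
    )

def conv_decoder(joint_dim):
    # toy decoder: joint representation -> reconstructed 64x64 image
    return nn.Sequential(
        nn.Linear(joint_dim, 64 * 16 * 16), nn.ReLU(),
        nn.Unflatten(1, (64, 16, 16)),
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
    )

class DualModalAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_bus, self.enc_eus = conv_encoder(), conv_encoder()
        self.dec_bus, self.dec_eus = conv_decoder(256), conv_decoder(256)

    def forward(self, bus, eus):
        f_bus, f_eus = self.enc_bus(bus), self.enc_eus(eus)
        joint = torch.cat([f_bus, f_eus], dim=1)  # joint representation
        return self.dec_bus(joint), self.dec_eus(joint), f_bus, f_eus

def correlation_loss(f_a, f_b, eps=1e-8):
    # negative mean Pearson correlation between paired high-level features,
    # computed per feature dimension over the batch
    a = f_a - f_a.mean(dim=0, keepdim=True)
    b = f_b - f_b.mean(dim=0, keepdim=True)
    corr = (a * b).sum(dim=0) / (a.norm(dim=0) * b.norm(dim=0) + eps)
    return -corr.mean()

def stage1_loss(model, bus, eus, lam=0.1):
    # reconstruct both modalities from the joint representation and pull the
    # two channels' high-level features toward each other
    rec_bus, rec_eus, f_bus, f_eus = model(bus, eus)
    rec = F.mse_loss(rec_bus, bus) + F.mse_loss(rec_eus, eus)
    return rec + lam * correlation_loss(f_bus, f_eus)
```

Minimizing the negative correlation encourages the two channels to encode shared lesion information, which is what allows the elastography channel to be dropped later while part of its contribution is retained in the BUS channel.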

Keywords: B-mode ultrasound imaging  elastography ultrasound imaging  computer-aided diagnosis of breast cancer  learning using privileged information (LUPI)  deep transfer learning  self-supervised learning (SSL)
Received: 2021-08-13
Revised: 2021-11-18

Two-stage deep transfer learning for human breast ultrasound computer-aided diagnosis
Citation: Gong Ronglin, Shi Jun, Zhou Weijun, Wang Cheng. Two-stage deep transfer learning for human breast ultrasound computer-aided diagnosis[J]. Journal of Image and Graphics, 2022, 27(3): 898-910.
Authors: Gong Ronglin  Shi Jun  Zhou Weijun  Wang Cheng
Affiliation: School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China; Department of Ultrasound, the First Affiliated Hospital of Anhui Medical University, Hefei 230032, China
Abstract: Objective B-mode ultrasound (BUS) provides structural and morphological information about human breast lesions, while elastography ultrasound (EUS) provides additional biomechanical information. Dual-modal ultrasound imaging can therefore effectively improve the accuracy of breast cancer diagnosis; nevertheless, a computer-aided diagnosis (CAD) model based on single-modal BUS alone retains its own potential applications. Deep transfer learning, an important branch of transfer learning, can be used to guide information transfer from EUS images to BUS images. However, the clinical image samples available for training deep learning models are limited because of the high cost of data collection and annotation. Self-supervised learning (SSL) is an effective remedy and has demonstrated its potential in a variety of medical image analysis tasks. In a typical SSL pipeline, a backbone network is first trained on a pretext task whose supervision is generated from the training samples themselves without manual annotation; the weights of the trained backbone are then transferred to a downstream network and fine-tuned with a small number of annotated samples. In this work, a correlational multi-modal deep convolutional neural network (CorrMCNN) is employed for a self-supervised dual-modal image reconstruction task; during training, the model transfers effective information between the two modalities by optimizing a correlation loss, which realizes SSL-based deep transfer learning. Because BUS and EUS scan the same lesion area of the same patient simultaneously, the two modalities come in pairs and share labels. Learning using privileged information (LUPI) is a supervised transfer learning paradigm for exactly such paired source-domain (privileged information) and target-domain data with shared labels: it exploits the intrinsic knowledge correlation between the paired data to guide knowledge transfer and improve model capability. Because the label information guides transfer learning at the classifier level, existing LUPI algorithms focus on the classifier, whereas feature representation is also a key step toward a qualified CAD system. A two-stage deep transfer learning (TSDTL) algorithm is therefore proposed for human breast ultrasound CAD, which transfers effective information from EUS images to the BUS-based breast cancer CAD model and further improves the performance of that model. Method In the first stage of deep transfer learning, an SSL task is designed based on dual-modal ultrasound image reconstruction, and the CorrMCNN model is trained on it to carry out interactive information transfer between BUS and EUS images. Bi-channel encoder networks learn feature representations from the two modalities separately, the learned high-level features are concatenated to obtain a joint representation, and the original BUS and EUS images are reconstructed from this joint representation through bi-channel decoder networks. During training, the network implicitly performs deep transfer learning by optimizing a correlation loss between the high-level features of the two channels. In the second stage of deep transfer learning, the pre-trained backbone network is reused and followed by a classification sub-network. The paired BUS and EUS images are fed into this new network for breast cancer classification based on dual-modal ultrasound. In this training process, the source-domain and target-domain data are used for supervised transfer learning under the shared labels, a strategy that falls within the general LUPI paradigm; this stage can therefore be regarded as implicitly performing knowledge transfer under LUPI through the dual-modal ultrasound breast cancer classification task. In the final stage, the sub-network of the corresponding channel is fine-tuned with single-modal BUS data to obtain an accurate B-mode breast cancer image classification model. The resulting single-channel network is the final BUS-based breast cancer CAD model and can be applied directly to the diagnosis of newly acquired BUS images. Result The performance of the algorithm is evaluated on a dual-modal breast tumor ultrasound dataset. TSDTL achieves a mean classification accuracy of 87.84±2.08%, sensitivity of 88.89±3.70%, specificity of 86.71±2.21%, and Youden index of 75.60±4.07%, outperforming the classification model trained on single-modal BUS images as well as a variety of typical deep transfer learning algorithms. Conclusion Through the proposed two-stage deep transfer learning, the TSDTL algorithm effectively transfers the information of EUS images into the BUS-based human breast cancer CAD model and improves its diagnostic performance.
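A minimal sketch of the second stage and the final single-channel fine-tuning, continuing the hypothetical stage-1 code above, is given below. It reuses the pre-trained bi-channel encoders as the backbone, trains both channels with the shared tumor labels (the LUPI-style supervised transfer step), and then fine-tunes only the BUS channel on single-modal data. The per-channel classification heads and the way the EUS channel is dropped at fine-tuning time are assumptions for illustration; these details are not spelled out in the abstract.

```python
# Hypothetical stage-2 sketch: dual-modal classification with shared labels,
# followed by single-modal fine-tuning of the BUS channel.
import torch.nn as nn
import torch.nn.functional as F

class DualModalClassifier(nn.Module):
    def __init__(self, pretrained_ae, num_classes=2):
        super().__init__()
        # reuse the stage-1 pre-trained bi-channel encoders as the backbone
        self.enc_bus = pretrained_ae.enc_bus
        self.enc_eus = pretrained_ae.enc_eus
        self.head_bus = nn.Linear(128, num_classes)  # assumed per-channel heads
        self.head_eus = nn.Linear(128, num_classes)

    def forward(self, bus, eus=None):
        logits_bus = self.head_bus(self.enc_bus(bus))
        if eus is None:  # single-modal fine-tuning / deployment path
            return logits_bus
        return logits_bus, self.head_eus(self.enc_eus(eus))

def stage2_loss(model, bus, eus, labels):
    # dual-modal supervised training: both channels share the same tumor label,
    # which is the implicit LUPI-style knowledge transfer step
    logits_bus, logits_eus = model(bus, eus)
    return F.cross_entropy(logits_bus, labels) + F.cross_entropy(logits_eus, labels)

def finetune_loss(model, bus, labels):
    # final stage: only the BUS channel is fine-tuned on single-modal data;
    # the resulting single-channel network is the deployable BUS CAD model
    return F.cross_entropy(model(bus), labels)
```

For reference, the reported Youden index is consistent with its definition (sensitivity + specificity − 1): 88.89% + 86.71% − 100% = 75.60%, matching the value quoted in the Result section.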
Keywords: B-mode ultrasound imaging  elastography ultrasound imaging  computer-aided diagnosis of breast cancer  learning using privileged information (LUPI)  deep transfer learning  self-supervised learning (SSL)