Funding: National Natural Science Foundation of China (61671204); Hunan Provincial Key Research and Development Program of Science and Technology (2016WK2001); Key Scientific Research Project of the Hunan Provincial Department of Education (18A356)

Received: 2018-09-11

Segmentation of Organs at Risk on Head and Neck CT for Radiotherapy Based on 3D Deep Residual Fully Convolutional Neural Network
Tian Juanxiu,Liu Guocai,Gu Shanshan,Gu Dongdong,Gong Junhui.Segmentation of Organs at Risk on Head and Neck CT for Radiotherapy Based on 3D Deep Residual Fully Convolutional Neural Network[J].Chinese Journal of Biomedical Engineering,2019,38(3):257-265.
Authors:Tian Juanxiu  Liu Guocai  Gu Shanshan  Gu Dongdong  Gong Junhui
Affiliation: 1. College of Electrical and Information Engineering, Hunan University, Changsha 410082, China; 2. College of Computer and Communication, Hunan Institute of Engineering, Xiangtan 411104, Hunan, China; 3. Department of Radiation Oncology, Chinese PLA General Hospital, Beijing 100853, China; 4. National Engineering Laboratory for Robot Visual Perception and Control Technology, Changsha 410082, China
Abstract: Segmentation of organs at risk (OARs) is a crucial step in radiation therapy planning for head and neck cancer. However, accurate OAR segmentation in CT images is a challenging task, and manual delineation of OARs is tedious, time-consuming, and inconsistent. To tackle these challenges, we proposed an automatic deep-learning-based method for head and neck OAR segmentation. A modified V-Net architecture was constructed to effectively combine the deep and shallow features of OARs in CT images, with the model parameters determined by specially designed end-to-end supervised learning. To address the extreme class imbalance affecting small organs, a training-sample selection strategy combining organ-position-prior-constrained sampling with random sampling was proposed, and the Dice loss function was used to train the network. This strategy not only accelerated the training process and improved segmentation performance, but also ensured the segmentation accuracy of small organs. The proposed method was validated on the PDDCA dataset used in the MICCAI 2015 Head and Neck Auto-Segmentation Challenge. The mean Dice coefficients were 0.945 for the mandible, 0.884 for the left parotid gland, 0.882 for the right parotid gland, 0.863 for the brainstem, 0.825 for the left submandibular gland, 0.842 for the right submandibular gland, 0.807 for the left optic nerve, 0.847 for the right optic nerve, and 0.583 for the optic chiasm. The 95% Hausdorff distances of the mandible, parotid glands, brainstem, and submandibular glands were all within 3 mm, and the mean contour distance of every organ was less than 1.2 mm. The experimental results demonstrated that the proposed method outperformed the compared state-of-the-art algorithms in segmenting all OARs except the brainstem.
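The soft Dice loss used to train the network can be sketched as follows. This is a minimal illustrative implementation for a single binary organ mask, not the authors' code; the function name and the smoothing term `eps` are our own choices:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for one binary segmentation volume.

    pred   : predicted foreground probabilities (any shape)
    target : binary ground-truth mask, same shape as pred
    Returns 1 - Dice, so lower is better; eps avoids division by zero
    when both pred and target are empty.
    """
    intersection = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)
```

Because the loss is computed over the whole sampled patch, it directly rewards overlap and is less sensitive to the foreground/background imbalance of small organs than a plain voxel-wise cross-entropy.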
Keywords:organs at risk segmentation  convolutional neural networks  head-and-neck cancer  radiotherapy  CT  
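The reported surface metrics, the 95% Hausdorff distance and the mean contour distance, can be computed from two contour point sets roughly as below. This is a brute-force sketch under our own assumptions (function names and the N×3 point-set representation in millimetres are not from the paper):

```python
import numpy as np

def _nn_dists(a, b):
    # Distance from each point in a to its nearest neighbour in b
    # (brute force via broadcasting; fine for contour-sized point sets).
    diff = a[:, None, :] - b[None, :, :]
    return np.sqrt((diff ** 2).sum(-1)).min(axis=1)

def hd95_and_mean_distance(surf_a, surf_b):
    """95% Hausdorff distance and mean contour distance between two
    surface point sets, each an (N, 3) array of coordinates in mm."""
    d_ab = _nn_dists(surf_a, surf_b)
    d_ba = _nn_dists(surf_b, surf_a)
    hd95 = max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
    mean_dist = (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)
    return hd95, mean_dist
```

Taking the 95th percentile instead of the maximum makes the metric robust to a few outlier surface points, which is why it is the standard distance measure in the PDDCA challenge evaluation.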
This article is indexed in CNKI and other databases.