
Interactive Dual-model Learning for Semi-supervised Medical Image Segmentation

Fang Chao-Wei, Li Xue, Li Zhong-Yu, Jiao Li-Cheng, Zhang Ding-Wen

Citation: Fang Chao-Wei, Li Xue, Li Zhong-Yu, Jiao Li-Cheng, Zhang Ding-Wen. Interactive dual-model learning for semi-supervised medical image segmentation. Acta Automatica Sinica, 2023, 49(4): 805−819. doi: 10.16383/j.aas.c210667


doi: 10.16383/j.aas.c210667

Funds: Supported by National Natural Science Foundation of China (62003256, 61876140, U21B2048)
    Author Bio:

    FANG Chao-Wei Lecturer at the School of Artificial Intelligence, Xidian University. He received his bachelor's degree from Xi'an Jiaotong University in 2013 and his Ph.D. degree from the University of Hong Kong in 2019. His research interest covers image processing, medical image analysis, computer vision, and machine learning.

    LI Xue Master student at the School of Mechano-Electronic Engineering, Xidian University. She received her bachelor's degree from the School of Automation, Xi'an University of Technology in 2020. Her research interest covers medical image analysis and computer vision.

    LI Zhong-Yu Associate professor at the School of Software Engineering, Xi'an Jiaotong University. He received his bachelor's and master's degrees from Xi'an Jiaotong University in 2012 and 2015, respectively, and his Ph.D. degree from the University of North Carolina at Charlotte, USA in 2018. His research interest covers computer vision and medical image analysis.

    JIAO Li-Cheng Professor at the Key Laboratory of Intelligent Perception and Image Understanding, Ministry of Education, Xidian University. He received his bachelor's degree from Shanghai Jiao Tong University in 1982, and his master's and Ph.D. degrees from Xi'an Jiaotong University in 1984 and 1990, respectively. His research interest covers image processing, natural computation, machine learning, and intelligent information processing.

    ZHANG Ding-Wen Professor at the Brain and Artificial Intelligence Laboratory, Northwestern Polytechnical University. He received his Ph.D. degree from Northwestern Polytechnical University in 2018. His research interest covers computer vision and multimedia processing, especially saliency detection, video object segmentation, and weakly supervised learning. Corresponding author of this paper.

  • Abstract: In medical images, accurate segmentation of organs or lesion regions is crucial for clinical applications such as disease diagnosis, yet training segmentation models depends on large amounts of annotated data. To reduce the demand for annotations, this paper studies semi-supervised learning for medical image segmentation. Existing semi-supervised methods widely adopt the mean teacher model, whose drawback is that the parameter update based on the exponential moving average (EMA) causes the teacher model to accumulate the erroneous knowledge of the student model. To avoid this problem, we propose a dual-model interactive learning method that introduces a pixel-stability criterion: pixels whose predictions are more stable in one model are used to supervise the learning of the other model, which alleviates the accumulation and propagation of a single model's erroneous experience. The proposed method outperforms state-of-the-art semi-supervised methods on three datasets covering cardiac structure segmentation, liver tumor segmentation, and brain tumor segmentation. With only 30% of the training images annotated, it achieves Dice similarity coefficients (DSC) of 89.13%, 94.15%, and 87.02% on the three datasets, respectively.
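To make the pixel-stability mechanism concrete, below is a minimal PyTorch-style sketch of stability-based cross supervision on unlabeled images. The stability test used here (a pixel counts as stable when a model's class prediction agrees between the original and a noise-perturbed input) and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def stability_mask(logits_clean, logits_noisy):
    # A pixel is treated as stable when the predicted class does not change
    # under input perturbation (an assumed reading of the stability criterion).
    return logits_clean.argmax(dim=1) == logits_noisy.argmax(dim=1)

def masked_ce(logits, target, mask):
    # Cross entropy averaged only over the selected pixels.
    loss = F.cross_entropy(logits, target, reduction="none")  # (N, H, W)
    return (loss * mask.float()).sum() / mask.float().sum().clamp(min=1.0)

def cross_supervision_loss(a_clean, a_noisy, b_clean, b_noisy):
    # Stable pixels of model A supervise the unstable pixels of model B,
    # and vice versa (cf. the last row of Table 1).
    stable_a = stability_mask(a_clean, a_noisy)
    stable_b = stability_mask(b_clean, b_noisy)
    pseudo_a = a_clean.argmax(dim=1).detach()  # pseudo labels from model A
    pseudo_b = b_clean.argmax(dim=1).detach()  # pseudo labels from model B
    loss_b = masked_ce(b_clean, pseudo_a, stable_a & ~stable_b)
    loss_a = masked_ce(a_clean, pseudo_b, stable_b & ~stable_a)
    return loss_a + loss_b
```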
  • Fig. 1 Comparison of model frameworks ((a) Semi-supervised segmentation framework based on dual-model interactive learning; (b) Semi-supervised segmentation framework based on the mean teacher model [22]; (c) Single-model semi-supervised segmentation framework based on consistency constraints. Solid arrows represent the propagation of training data and the updating of the models; dashed arrows indicate the source of the supervision signals on unlabeled data)
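For contrast with framework (b), the mean teacher model [28] maintains the teacher as an exponential moving average of the student's weights, so every student update, including erroneous ones, is folded into the teacher and decays only geometrically; this is the error-accumulation problem the dual-model design avoids. A minimal sketch (the decay value 0.99 is a typical choice, not taken from the paper):

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    # theta_teacher <- alpha * theta_teacher + (1 - alpha) * theta_student.
    # Mistaken student weights, once absorbed, fade only geometrically.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)
```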

    Fig. 2 Framework of dual-model interactive learning. MSE, CE, and DICE denote the mean squared error, cross-entropy, and Dice loss functions, respectively. Solid single-directional arrows represent the forward pass of the original images ($\boldsymbol{I}^{l}$ and $\boldsymbol{I}^{u}$) through each model; dashed single-directional arrows represent the forward pass of the noise-perturbed images ($\bar{\boldsymbol{I}}^{l}$ and $\bar{\boldsymbol{I}}^{u}$)
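Following Fig. 2, a sketch of how the three loss terms could be combined is given below: CE and Dice supervise the labeled images, while MSE enforces consistency between predictions on the original and noise-perturbed inputs. The consistency weight `w_con` and the exact placement of the MSE term are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-5):
    # A common soft Dice formulation over one-hot targets (assumed here).
    probs = logits.softmax(dim=1)
    onehot = F.one_hot(target, probs.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    denom = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()

def training_loss(l_clean, l_noisy, labels, u_clean, u_noisy, w_con=0.1):
    # Supervised term on labeled images I^l: cross entropy plus Dice.
    l_seg = F.cross_entropy(l_clean, labels) + soft_dice_loss(l_clean, labels)
    # Consistency term: MSE between clean and noisy predictions on both
    # labeled and unlabeled images.
    l_con = F.mse_loss(l_noisy.softmax(dim=1), l_clean.softmax(dim=1).detach()) \
          + F.mse_loss(u_noisy.softmax(dim=1), u_clean.softmax(dim=1).detach())
    return l_seg + w_con * l_con
```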

    Fig. 3 Segmentation results of our dual-model method and other semi-supervised methods on the CSS dataset. Black, dark gray, light gray, and white represent the background, the left ventricle cavity (LV cavity), the left ventricular myocardium (LV myo), and the right ventricle cavity (RV cavity), respectively

    Fig. 4 Comparison of the outputs of the mean teacher model and the proposed dual-model method during training

    Fig. 5 Liver segmentation results of our dual-model method and other semi-supervised methods on the LiTS dataset; the white region is the liver

    Fig. 6 Whole-tumor segmentation results of our dual-model method and other semi-supervised methods on the BraTS dataset; the white region is the whole tumor

    Fig. 7 Segmentation performance of the model on the validation set with and without the adjoint variable Q

    Table 1 Comparison between our dual-model method and other dual-model methods

    | Method | Task | Networks | Loss functions | Main contribution |
    |---|---|---|---|---|
    | DML [29] | Image classification | ResNet; MobileNet; Wide ResNet; GoogLeNet | KL-divergence loss; cross-entropy loss | Proposes a dual model in which two small networks learn interactively; KL divergence measures the discrepancy between the predictions of the two networks |
    | FML [52] | Image classification | ResNet; Wide ResNet | KL-divergence loss; cross-entropy loss; adversarial loss | Proposes a dual model that, on top of DML, introduces adversarial learning between the output predictions of the two networks |
    | D-N [53] | Image classification | VGG; ResNet | Cross-entropy loss; student-teacher knowledge-distillation loss | Proposes a dual model in which each branch extracts features and predicts through an auxiliary classifier; the features of the two branches are also fused and fed to a fusion classifier for the overall result |
    | Ours | Semi-supervised medical image segmentation | U-Net; DenseU-Net; 3D U-Net | Cross-entropy loss; Dice loss; mean squared error | Proposes a dual model with a stable pseudo-label selection mechanism, in which the stable pixels of one model constrain the unstable pixels of the other |
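The KL-divergence term that distinguishes DML [29] in the first row penalizes disagreement between the two networks' predictive distributions. A sketch of the symmetric form (applied per pixel here, though DML itself targets classification):

```python
import torch.nn.functional as F

def mutual_kl(logits_a, logits_b):
    # Symmetric KL between the two networks' class distributions:
    # KL(p_a || p_b) + KL(p_b || p_a), averaged over the batch.
    log_pa = F.log_softmax(logits_a, dim=1)
    log_pb = F.log_softmax(logits_b, dim=1)
    kl_ab = F.kl_div(log_pb, log_pa.exp(), reduction="batchmean")
    kl_ba = F.kl_div(log_pa, log_pb.exp(), reduction="batchmean")
    return kl_ab + kl_ba
```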

    Table 2 Comparison with other methods on the CSS dataset under different annotation ratios; each cell gives DSC (%) / HD95 / ASD. The baseline segmentation network is U-Net or DenseU-Net

    | Baseline | Method | 5% | 10% | 20% | 30% | 50% |
    |---|---|---|---|---|---|---|
    | U-Net | MT [28] | 57.98 / 35.81 / 13.71 | 80.70 / 10.75 / 2.93 | 85.32 / 7.63 / 2.25 | 87.40 / 6.77 / 1.85 | 88.65 / 5.60 / 1.62 |
    | U-Net | DAN [25] | 53.82 / 35.72 / 14.61 | 79.35 / 9.64 / 2.69 | 84.67 / 7.56 / 2.28 | 86.31 / 6.70 / 2.05 | 88.40 / 3.69 / 1.10 |
    | U-Net | TCSM [48] | 50.82 / 23.26 / 9.02 | 79.71 / 14.27 / 3.59 | 85.51 / 6.90 / 1.95 | 87.21 / 7.10 / 1.77 | 89.03 / 5.19 / 1.46 |
    | U-Net | EM [47] | 59.95 / 12.68 / 3.77 | 82.28 / 8.32 / 2.56 | 84.72 / 7.60 / 2.36 | 87.75 / 5.98 / 1.89 | 89.12 / 9.92 / 2.51 |
    | U-Net | UAMT [21] | 55.08 / 25.24 / 8.23 | 80.04 / 8.04 / 2.21 | 84.85 / 6.35 / 1.99 | 87.52 / 6.59 / 2.01 | 89.30 / 4.76 / 1.30 |
    | U-Net | ICT [30] | 53.75 / 14.46 / 4.95 | 81.36 / 8.66 / 2.40 | 85.68 / 6.80 / 2.10 | 88.17 / 5.67 / 1.45 | 89.45 / 4.78 / 1.57 |
    | U-Net | DS [51] | 68.18 / 5.78 / 1.54 | 81.85 / 6.26 / 2.06 | 87.07 / 5.57 / 1.39 | 88.18 / 5.82 / 1.38 | 89.24 / 3.37 / 0.97 |
    | U-Net | DML [29] | 59.92 / 10.96 / 2.16 | 77.23 / 5.61 / 1.98 | 82.61 / 8.41 / 3.07 | 86.48 / 8.29 / 1.89 | 88.20 / 3.86 / 1.21 |
    | U-Net | FML [52] | 60.13 / 10.01 / 1.84 | 79.96 / 5.48 / 1.90 | 83.04 / 7.87 / 2.89 | 87.13 / 7.28 / 1.69 | 88.27 / 3.67 / 1.19 |
    | U-Net | D-N [53] | 57.61 / 15.26 / 5.36 | 75.06 / 10.49 / 3.01 | 82.13 / 8.71 / 3.41 | 86.41 / 8.05 / 1.76 | 88.08 / 4.08 / 1.27 |
    | U-Net | Dual model (ours) | 76.94 / 5.38 / 1.47 | 87.53 / 3.20 / 1.11 | 88.67 / 5.28 / 1.35 | 89.13 / 5.64 / 1.47 | 90.11 / 2.51 / 0.86 |
    | DenseU-Net | MT [28] | 51.91 / 34.69 / 11.56 | 75.19 / 19.39 / 5.57 | 83.62 / 9.56 / 3.06 | 86.97 / 5.20 / 1.45 | 88.24 / 4.24 / 1.55 |
    | DenseU-Net | UAMT [21] | 59.73 / 23.33 / 7.07 | 78.20 / 12.95 / 3.66 | 83.12 / 10.03 / 3.04 | 87.07 / 5.84 / 2.05 | 88.08 / 5.12 / 1.57 |
    | DenseU-Net | ICT [30] | 71.10 / 13.25 / 3.89 | 83.41 / 14.06 / 3.51 | 85.68 / 7.83 / 2.45 | 87.74 / 4.39 / 1.41 | 88.63 / 4.80 / 1.39 |
    | DenseU-Net | Dual model (ours) | 81.34 / 3.69 / 1.14 | 87.45 / 3.93 / 1.47 | 87.98 / 5.15 / 1.18 | 88.20 / 3.46 / 0.98 | 89.60 / 3.03 / 0.95 |
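DSC, HD95 (95th-percentile Hausdorff distance), and ASD (average surface distance) in Tables 2 through 8 are standard segmentation metrics; higher DSC and lower HD95/ASD are better. A sketch of the DSC computation for a binary mask follows (surface-distance metrics are typically delegated to a library such as MedPy, which is an assumption here, not a detail from the paper):

```python
import numpy as np

def dice_coefficient(pred, gt):
    # DSC = 2 |P ∩ G| / (|P| + |G|), in [0, 1]; both inputs are binary masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```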

    Table 3 Comparison with other methods on the LiTS dataset when 30% of the training images are annotated. The baseline segmentation network is U-Net or DenseU-Net

    | Baseline | Method | DSC (%) | HD95 | ASD |
    |---|---|---|---|---|
    | U-Net | MT [28] | 86.98 | 0.88 | 0.17 |
    | U-Net | DAN [25] | 86.15 | 2.54 | 0.62 |
    | U-Net | TCSM [48] | 84.77 | 0.96 | 0.20 |
    | U-Net | EM [47] | 87.21 | 0.70 | 0.17 |
    | U-Net | UAMT [21] | 85.69 | 0.97 | 0.20 |
    | U-Net | ICT [30] | 88.42 | 0.99 | 0.21 |
    | U-Net | DS [51] | 86.90 | 1.23 | 0.61 |
    | U-Net | DML [29] | 84.92 | 1.26 | 0.92 |
    | U-Net | FML [52] | 85.14 | 0.97 | 0.25 |
    | U-Net | D-N [53] | 84.17 | 1.33 | 0.95 |
    | U-Net | Dual model (ours) | 94.15 | 0.09 | 0.03 |
    | DenseU-Net | MT [28] | 93.69 | 0.17 | 0.04 |
    | DenseU-Net | UAMT [21] | 93.91 | 0.18 | 0.05 |
    | DenseU-Net | ICT [30] | 93.90 | 0.11 | 0.04 |
    | DenseU-Net | Dual model (ours) | 94.43 | 0.12 | 0.05 |

    Table 4 Comparison with other methods on the BraTS dataset when 30% of the training images are annotated. The baseline network is 3D U-Net

    | Method | DSC (%) | HD95 | ASD |
    |---|---|---|---|
    | MT [28] | 83.96 | 9.97 | 2.29 |
    | DAN [25] | 84.70 | 10.12 | 2.10 |
    | EM [47] | 84.35 | 9.01 | 2.21 |
    | UAMT [21] | 84.86 | 8.76 | 2.18 |
    | ICT [30] | 82.39 | 9.41 | 2.56 |
    | DS [51] | 86.20 | 7.44 | 2.14 |
    | DML [29] | 84.60 | 8.08 | 2.17 |
    | FML [52] | 84.83 | 7.99 | 2.06 |
    | D-N [53] | 84.02 | 10.77 | 2.25 |
    | Dual model (ours) | 87.02 | 7.00 | 1.83 |

    Table 5 Performance of different variants of our method on the CSS dataset when 10% of the training images are annotated. The baseline segmentation network is U-Net

    | No. | Supervised loss | Unsupervised consistency | Interactive learning | Stability selection strategy | Without adjoint variable Q | DSC (%) | HD95 | ASD |
    |---|---|---|---|---|---|---|---|---|
    | 1 | ✓ | | | | | 76.41 | 10.46 | 3.12 |
    | 2 | ✓ | ✓ | | | | 83.42 | 5.84 | 1.64 |
    | 3 | ✓ | ✓ | ✓ | | | 85.52 | 5.47 | 1.57 |
    | 4 | ✓ | ✓ | ✓ | Case 1) | | 86.62 | 4.92 | 1.44 |
    | 5 | ✓ | ✓ | ✓ | Cases 1) and 2) | ✓ | 86.21 | 3.84 | 1.33 |
    | 6 | ✓ | ✓ | ✓ | Cases 1) and 2) | | 87.53 | 3.20 | 1.11 |

    Table 6 Effect of the number of models on the CSS dataset when 10% of the training images are annotated. The baseline network is U-Net

    | Number of students | DSC (%) |
    |---|---|
    | 2 | 87.53 |
    | 4 | 87.32 |
    | 6 | 87.46 |

    Table 7 Performance of different loss functions of our method on the CSS dataset when 10% of the training images are annotated. The baseline network is U-Net

    | Loss function | DSC (%) | HD95 | ASD |
    |---|---|---|---|
    | $L_{\rm ce}$ | 73.23 | 12.51 | 4.19 |
    | $L_{\rm dice}$ | 75.00 | 10.94 | 3.63 |
    | $L_{\rm seg}$ | 76.41 | 10.46 | 3.12 |
    | $L_{\rm seg}+L_{{\rm con}\_P}$ | 80.62 | 7.35 | 2.66 |
    | $L_{\rm seg}+L_{\rm con}$ | 83.42 | 5.84 | 1.64 |
    | $L_{\rm seg}+L_{\rm con}+L_{\rm sta}$ | 87.53 | 3.20 | 1.61 |

    Table 8 Effect of network sharing on the CSS dataset when 10% of the training images are annotated. The baseline network is U-Net

    | Shared network | DSC (%) | HD95 | ASD |
    |---|---|---|---|
    | Single model | 83.04 | 6.02 | 4.13 |
    | Shared encoder | 85.05 | 4.41 | 2.65 |
    | Dual model (ours) | 87.53 | 3.20 | 1.61 |
  • [1] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, Nevada, USA: ACM, 2012. 1097−1105
    [2] Luo Jian-Hao, Wu Jian-Xin. A survey on fine-grained image categorization using deep convolutional features. Acta Automatica Sinica, 2017, 43(8): 1306-1318 (in Chinese)
    [3] He K M, Zhang X Y, Ren S Q, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 770−778
    [4] Zhang D W, Zeng W Y, Yao J R, Han J W. Weakly supervised object detection using proposal- and semantic-level relationships. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(6): 3349-3363 doi: 10.1109/TPAMI.2020.3046647
    [5] Zhang D W, Han J W, Guo G Y, Zhao L. Learning object detectors with semi-annotated weak labels. IEEE Transactions on Circuits and Systems for Video Technology, 2019, 29(12): 3622-3635 doi: 10.1109/TCSVT.2018.2884173
    [6] Liu Xiao-Bo, Liu Peng, Cai Zhi-Hua, Qiao Yu-Lin, Wang Ling, Wang Min. Research progress of optical remote sensing image object detection based on deep learning. Acta Automatica Sinica, 2021, 47(9): 2078-2089 (in Chinese)
    [7] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015. 3431−3440
    [8] Badrinarayanan V, Kendall A, Cipolla R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(12): 2481-2495 doi: 10.1109/TPAMI.2016.2644615
    [9] Li S L, Zhang C Y, He X M. Shape-aware semi-supervised 3D semantic segmentation for medical images. In: Proceedings of the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention. Lima, Peru: Springer, 2020. 552−561
    [10] Fang C W, Li G B, Pan C W, Li Y M, Yu Y Z. Globally guided progressive fusion network for 3D pancreas segmentation. In: Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention. Shenzhen, China: Springer, 2019. 210−218
    [11] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: Proceedings of the 18th International Conference on Medical Image Computing and Computer Assisted Intervention. Munich, Germany: Springer, 2015. 234−241
    [12] Zhou Z W, Siddiquee M M R, Tajbakhsh N, Liang J M. UNet++: A nested U-Net architecture for medical image segmentation. In: Proceedings of the 4th International Workshop on Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Granada, Spain: Springer, 2018. 3−11
    [13] Tian Juan-Xiu, Liu Guo-Cai, Gu Shan-Shan, Ju Zhong-Jian, Liu Jin-Guang, Gu Dong-Dong. Deep learning in medical image analysis and its challenges. Acta Automatica Sinica, 2018, 44(3): 401-424 (in Chinese)
    [14] Zhu J W, Li Y X, Hu Y F, Ma K, Zhou S K, Zheng Y F. Rubik's Cube+: A self-supervised feature learning framework for 3D medical image analysis. Medical Image Analysis, 2020, 64: Article No. 101746
    [15] Dai J F, He K M, Sun J. BoxSup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In: Proceedings of the IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015. 1635−1643
    [16] Lin D, Dai J F, Jia J Y, He K M, Sun J. ScribbleSup: Scribble-supervised convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 3159−3167
    [17] Lee J, Kim E, Lee S, Lee J, Yoon S. FickleNet: Weakly and semi-supervised semantic image segmentation using stochastic inference. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 5262−5271
    [18] Chen C, Dou Q, Chen H, Heng P A. Semantic-aware generative adversarial nets for unsupervised domain adaptation in chest X-ray segmentation. In: Proceedings of the 9th International Workshop on Machine Learning in Medical Imaging. Granada, Spain: Springer, 2018. 143−151
    [19] Ghafoorian M, Mehrtash A, Kapur T, Karssemeijer N, Marchiori E, Pesteie M, et al. Transfer learning for domain adaptation in MRI: Application in brain lesion segmentation. In: Proceedings of the 20th International Conference on Medical Image Computing and Computer Assisted Intervention. Quebec City, Canada: Springer, 2017. 516−524
    [20] Li X M, Yu L Q, Chen H, Fu C W, Xing L, Heng P A. Semi-supervised skin lesion segmentation via transformation consistent self-ensembling model. In: Proceedings of the 29th British Machine Vision Conference. Newcastle, UK: BMVC, 2018.
    [21] Yu L Q, Wang S J, Li X M, Fu C W, Heng P A. Uncertainty-aware self-ensembling model for semi-supervised 3D left atrium segmentation. In: Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention. Shenzhen, China: Springer, 2019. 605−613
    [22] Nie D, Gao Y Z, Wang L, Shen D G. ASDNet: Attention based semi-supervised deep networks for medical image segmentation. In: Proceedings of the 21st International Conference on Medical Image Computing and Computer Assisted Intervention. Granada, Spain: Springer, 2018. 370−378
    [23] Miyato T, Maeda S I, Koyama M, Ishii S. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(8): 1979-1993 doi: 10.1109/TPAMI.2018.2858821
    [24] Laine S, Aila T. Temporal ensembling for semi-supervised learning. In: Proceedings of the International Conference on Learning Representations. Toulon, France: ICLR, 2017.
    [25] Zhang Y Z, Yang L, Chen J X, Fredericksen M, Hughes D P, Chen D Z. Deep adversarial networks for biomedical image segmentation utilizing unannotated images. In: Proceedings of the 20th International Conference on Medical Image Computing and Computer Assisted Intervention. Quebec City, Canada: Springer, 2017. 408−416
    [26] Zheng H, Lin L F, Hu H J, Zhang Q W, Chen Q Q, Iwamoto Y, et al. Semi-supervised segmentation of liver using adversarial learning with deep atlas prior. In: Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention. Shenzhen, China: Springer, 2019. 148−156
    [27] Ouali Y, Hudelot C, Tami M. Semi-supervised semantic segmentation with cross-consistency training. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 12671−12681
    [28] Tarvainen A, Valpola H. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: ACM, 2017. 1195−1204
    [29] Zhang Y, Xiang T, Hospedales T M, Lu H C. Deep mutual learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 4320−4328
    [30] Verma V, Kawaguchi K, Lamb A, Kannala J, Solin A, Bengio Y, et al. Interpolation consistency training for semi-supervised learning. Neural Networks, 2022, 145: 90-106 doi: 10.1016/j.neunet.2021.10.008
    [31] Luo X D, Chen J N, Song T, Wang G T. Semi-supervised medical image segmentation through dual-task consistency. In: Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI, 2021. 8801−8809
    [32] Cui W H, Liu Y L, Li Y X, Guo M H, Li Y M, Li X L, et al. Semi-supervised brain lesion segmentation with an adapted mean teacher model. In: Proceedings of the 26th International Conference on Information Processing in Medical Imaging. Hong Kong, China: Springer, 2019. 554−565
    [33] Bernard O, Lalande A, Zotti C, Cervenansky F, Yang X, Heng P A, et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: Is the problem solved? IEEE Transactions on Medical Imaging, 2018, 37(11): 2514-2525 doi: 10.1109/TMI.2018.2837502
    [34] Bilic P, Christ P F, Vorontsov E, Chlebus G, Chen H, Dou Q, et al. The liver tumor segmentation benchmark (LiTS). arXiv: 1901.04056, 2019.
    [35] Menze B H, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Kirby J, et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging, 2015, 34(10): 1993-2024 doi: 10.1109/TMI.2014.2377694
    [36] You X G, Peng Q M, Yuan Y, Cheung Y M, Lei J J. Segmentation of retinal blood vessels using the radial projection and semi-supervised approach. Pattern Recognition, 2011, 44(10-11): 2314-2324 doi: 10.1016/j.patcog.2011.01.007
    [37] Portela N M, Cavalcanti G D C, Ren T I. Semi-supervised clustering for MR brain image segmentation. Expert Systems With Applications, 2014, 41(4): 1492-1497 doi: 10.1016/j.eswa.2013.08.046
    [38] Kohl S A A, Romera-Paredes B, Meyer C, De Fauw J, Ledsam J R, Maier-Hein K H, et al. A probabilistic U-Net for segmentation of ambiguous images. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Montréal, Canada: ACM, 2018. 6965−6975
    [39] Zhang Y, Zhou Z X, David P, Yue X Y, Xi Z R, Gong B Q, et al. PolarNet: An improved grid representation for online LiDAR point clouds semantic segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 9598−9607
    [40] Isola P, Zhu J Y, Zhou T H, Efros A A. Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 5967−5976
    [41] Lin T Y, Dollár P, Girshick R, He K M, Hariharan B, Belongie S. Feature pyramid networks for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 936−944
    [42] Li X M, Chen H, Qi X J, Dou Q, Fu C W, Heng P A. H-DenseUNet: Hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Transactions on Medical Imaging, 2018, 37(12): 2663-2674 doi: 10.1109/TMI.2018.2845918
    [43] Milletari F, Navab N, Ahmadi S A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In: Proceedings of the 4th International Conference on 3D Vision. Stanford, USA: IEEE, 2016. 565−571
    [44] Howard A G, Zhu M L, Chen B, Kalenichenko D, Wang W J, Weyand T, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv: 1704.04861, 2017.
    [45] Çiçek Ö, Abdulkadir A, Lienkamp S S, Brox T, Ronneberger O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In: Proceedings of the 19th International Conference on Medical Image Computing and Computer-Assisted Intervention. Athens, Greece: Springer, 2016. 424−432
    [46] Hang W L, Feng W, Liang S, Yu L Q, Wang Q, Choi K S, et al. Local and global structure-aware entropy regularized mean teacher model for 3D left atrium segmentation. In: Proceedings of the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention. Lima, Peru: Springer, 2020. 562−571
    [47] Vu T H, Jain H, Bucher M, Cord M, Pérez P. ADVENT: Adversarial entropy minimization for domain adaptation in semantic segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 2512−2521
    [48] Li X M, Yu L Q, Chen H, Fu C W, Xing L, Heng P A. Transformation-consistent self-ensembling model for semisupervised medical image segmentation. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(2): 523−534
    [49] Yu X R, Han B, Yao J C, Niu G, Tsang I W, Sugiyama M. How does disagreement help generalization against label corruption? In: Proceedings of the 36th International Conference on Machine Learning. Long Beach, USA: PMLR, 2019. 7164−7173
    [50] Han B, Yao Q M, Yu X R, Niu G, Xu M, Hu W H, et al. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Montréal, Canada: ACM, 2018. 8536−8546
    [51] Ke Z H, Wang D Y, Yan Q, Ren J, Lau R. Dual student: Breaking the limits of the teacher in semi-supervised learning. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. Seoul, South Korea: IEEE, 2019. 6727−6735
    [52] Chung I, Park S, Kim J, Kwak N. Feature-map-level online adversarial knowledge distillation. In: Proceedings of the 37th International Conference on Machine Learning. Vienna, Austria: PMLR, 2020. 2006−2015
    [53] Hou S H, Liu X, Wang Z L. DualNet: Learn complementary features for image recognition. In: Proceedings of the IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 502−510
    [54] Wang L C, Liu Y Y, Qin C, Sun G, Fu Y. Dual relation semi-supervised multi-label learning. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, USA: AAAI, 2020. 6227−6234
    [55] Xia Y C, Tan X, Tian F, Qin T, Yu N H, Liu T Y. Model-level dual learning. In: Proceedings of the 35th International Conference on Machine Learning. Stockholm, Sweden: PMLR, 2018. 5383−5392
Publication history
  • Received: 2021-07-16
  • Accepted: 2022-01-11
  • Published online: 2022-05-04
  • Issue date: 2023-04-20
