Multi-organ Segmentation From Abdominal CT Images Based on CE TransNet

LIAO Miao, YANG Rui-Xin, ZHAO Yu-Qian, DI Shuan-Hu, YANG Zhen

Citation: Liao Miao, Yang Rui-Xin, Zhao Yu-Qian, Di Shuan-Hu, Yang Zhen. Multi-organ segmentation from abdominal CT images based on CE TransNet. Acta Automatica Sinica, xxxx, xx(x): x−xx. doi: 10.16383/j.aas.c240489

doi: 10.16383/j.aas.c240489  cstr: 32138.14.j.aas.c240489

Funds: Supported by National Natural Science Foundation of China (62272161, U23B2063, and 62076256), Science and Technology Innovation Program of Hunan Province (2024RC3216), and Scientific Research Fund of Hunan Provincial Education Department (24A0356)

Author Biographies:

LIAO Miao  Associate professor at the School of Computer Science and Engineering, Hunan University of Science and Technology. Her research interest covers image processing and pattern recognition. E-mail: mliao@hnust.edu.cn

YANG Rui-Xin  Master student at the School of Computer Science and Engineering, Hunan University of Science and Technology. His research interest covers medical image processing and image segmentation. E-mail: 22020501025@mail.hnust.edu.cn

ZHAO Yu-Qian  Professor at the School of Automation, Central South University. His research interest covers image processing, pattern recognition, and machine learning. Corresponding author of this paper. E-mail: zyq@csu.edu.cn

DI Shuan-Hu  Lecturer at the College of Intelligence Science and Technology, National University of Defense Technology. His research interest covers pattern recognition and continual learning. E-mail: dishuanhu@nudt.edu.cn

YANG Zhen  Associate chief physician at Xiangya Hospital, Central South University. His research interest covers medical imaging and computer-assisted radiotherapy. E-mail: yangzhen@188.com

Abstract: Limited by their local receptive fields, convolutional neural networks cannot establish sufficient long-range dependencies. Some methods alleviate this problem by deploying Transformers at specific locations in a convolutional network, such as the encoder, the decoder, or the skip connections. However, these methods establish long-range dependencies only for certain specific features and struggle to capture the complex dependencies among abdominal organs of diverse sizes and shapes. To address this problem, we propose a cross-connection enhanced Transformer (CE Transformer) and use it as the feature extraction unit to build a new multi-level encoder-decoder segmentation network, CE TransNet. The CE Transformer adopts a dual-path design that deeply fuses Transformer and convolutional structures, modeling long- and short-range dependencies simultaneously. Dense cross-connections are introduced between the two paths to promote the interaction and fusion of information at different granularities and to improve the overall feature-capturing ability of the model. Deploying the CE Transformer along the entire encoding-decoding path of CE TransNet effectively captures the complex contextual relationships among multiple organs. Experimental results show that the proposed method achieves average DSC values of 82.42% and 81.94% on the WORD and Synapse abdominal CT multi-organ datasets, respectively, significantly outperforming several state-of-the-art methods.
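The dual-path idea summarized in the abstract can be illustrated with a minimal PyTorch sketch: one path applies self-attention for long-range dependencies, the other applies a depthwise convolution for short-range ones, and lightweight cross-connections exchange features between the two paths. All module names and the exact wiring below are illustrative assumptions based on the abstract, not the authors' implementation; the actual CE Transformer uses denser cross-connections and a focal modulation layer (Fig. 3).

```python
import torch
import torch.nn as nn

class DualPathBlockSketch(nn.Module):
    """Hypothetical dual-path block: attention + convolution with cross-connections."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Long-range path: multi-head self-attention over flattened tokens.
        self.norm_t = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Short-range path: depthwise 3x3 convolution on the 2D feature map.
        self.norm_c = nn.BatchNorm2d(dim)
        self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        # Cross-connections: each path receives a projection of the other.
        self.conv_to_attn = nn.Linear(dim, dim)
        self.attn_to_conv = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape                                  # x: (B, C, H, W)
        tokens = self.norm_t(x.flatten(2).transpose(1, 2))    # (B, H*W, C)
        t, _ = self.attn(tokens, tokens, tokens)              # long-range features
        s = self.conv(self.norm_c(x))                         # short-range features
        # One cross-connection each way (the paper uses dense connections).
        t = t + self.conv_to_attn(s.flatten(2).transpose(1, 2))
        s = s + self.attn_to_conv(t.transpose(1, 2).reshape(b, c, h, w))
        return x + s                                          # residual fusion
```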
Fig. 1  The network architecture of CE TransNet

Fig. 2  The structure of the CE Transformer

Fig. 3  The structure of the focal modulation layer

Fig. 4  Convolutional downsampling module

Fig. 5  Attention gate module

Fig. 6  Convolutional upsampling module

Fig. 7  Comparison of some 2D segmentation results of different models on the WORD dataset

Fig. 8  3D visualization of some segmentation results of our method on the WORD dataset

Fig. 9  Comparison of 2D segmentation results of different methods on the Synapse dataset

Fig. 10  3D visualization of some segmentation results of our method on the Synapse dataset

Fig. 11  Statistical performance comparison of segmentation results of different methods on the WORD dataset. The red star denotes that the proposed method significantly outperforms the compared method ($ p < 0.05 $); a sketch of such a paired test follows this list

Fig. 12  Structures of different feature extraction modules

Fig. 13  Intensity distribution of the target regions in the feature maps produced by different feature extraction modules

Fig. 14  Intensity distribution of the background regions in the feature maps produced by different feature extraction modules

Fig. 15  Examples of feature maps extracted by different feature extraction modules

Fig. 16  The CE Transformer with different cross-connection configurations
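The p-values marked in Fig. 11 come from paired significance tests between the per-case scores of the proposed method and each competitor. The text excerpted here does not name the test, so the sketch below assumes a Wilcoxon signed-rank test on hypothetical per-case DSC values; substitute the actual test and scores as appropriate.

```python
from scipy.stats import wilcoxon

# Hypothetical per-case DSC scores for the proposed and a compared method.
dsc_proposed = [0.83, 0.81, 0.85, 0.80, 0.84, 0.82, 0.86, 0.79]
dsc_baseline = [0.80, 0.79, 0.82, 0.78, 0.81, 0.80, 0.83, 0.77]

# Paired, non-parametric test on the per-case differences.
statistic, p_value = wilcoxon(dsc_proposed, dsc_baseline)
print(f"Wilcoxon p = {p_value:.4f}; significant at 0.05: {p_value < 0.05}")
```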

Table 1  Average segmentation performance comparison of different methods on the WORD dataset

| Method | Venue/Year | DSC (%) $\uparrow$ | mIoU (%) $\uparrow$ | NSD (%) $\uparrow$ | HD (mm) $\downarrow$ | ASD (mm) $\downarrow$ | Recall (%) $\uparrow$ | Precision (%) $\uparrow$ |
|---|---|---|---|---|---|---|---|---|
| UNet[10] | MICCAI/2015 | 76.93 | 65.35 | 62.03 | 17.16 | 4.44 | 85.13 | 78.53 |
| Att-UNet[41] | Elsevier MIA/2019 | 77.83 | 66.74 | 65.41 | 16.43 | 3.91 | 84.05 | 83.86 |
| TransUNet[23] | arXiv/2021 | 80.32 | 69.95 | 69.29 | 20.31 | 5.51 | 87.98 | 80.92 |
| UCTransNet[25] | AAAI/2022 | 81.64 | 71.34 | 69.78 | 11.30 | 2.67 | 86.10 | 84.16 |
| Crosslink-Net[42] | IEEE TIP/2022 | 78.99 | 68.15 | 65.33 | 13.13 | 2.88 | 81.62 | 83.10 |
| CPP-Net[43] | IEEE TIP/2023 | 80.36 | 70.04 | 70.76 | 12.82 | 2.98 | 85.31 | 84.53 |
| SwinUnet[28] | ECCV/2022 | 80.64 | 69.82 | 69.09 | 15.23 | 4.12 | 82.93 | 80.40 |
| TransNuSeg[44] | MICCAI/2023 | 78.63 | 68.31 | 67.73 | 14.41 | 2.89 | 85.78 | 80.06 |
| ScribFormer[31] | IEEE TMI/2024 | 81.21 | 71.07 | 73.08 | 11.78 | 2.91 | 85.34 | 84.43 |
| YOHO[45] | IEEE TIP/2024 | 78.23 | 67.45 | 65.67 | 13.68 | 3.29 | 81.86 | 81.97 |
| OUR[46] | MBEC/2023 | 80.71 | 70.06 | 71.38 | 12.06 | 2.92 | 87.38 | 84.77 |
| CSSNet[47] | CMB/2024 | 79.41 | 69.02 | 67.29 | 14.69 | 3.16 | 86.38 | 80.32 |
| Proposed | — | 82.42 | 72.48 | 74.34 | 10.91 | 2.62 | 86.47 | 85.35 |
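As a reference for the overlap metrics reported in Tables 1-4, the sketch below gives the standard voxel-wise definitions of DSC and IoU and a point-set Hausdorff distance on binary masks. The paper's exact evaluation pipeline (surface extraction for NSD/ASD, tolerance settings, voxel spacing) is not reproduced here, so treat this as a simplified approximation.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|P ∩ G| / (|P| + |G|) for boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU = |P ∩ G| / |P ∪ G|; mIoU averages this value over organs."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + 1e-8)

def hausdorff(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance between foreground voxel coordinates.
    Multiply coordinates by the voxel spacing first to obtain millimeters."""
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```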

Table 2  Per-organ DSC (%) comparison of different methods on the WORD dataset

| Method | Liver | Spleen | Kidney (L) | Kidney (R) | Stomach | Gallbladder | Esophagus | Pancreas | Duodenum | Colon | Intestine | Rectum | Bladder | Femur (L) | Femur (R) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| UNet[10] | 93.09 | 88.05 | 87.79 | 89.17 | 79.27 | 52.63 | 59.25 | 70.05 | 52.96 | 71.62 | 76.44 | 71.59 | 85.54 | 87.53 | 88.99 |
| Att-UNet[41] | 94.67 | 92.28 | 88.09 | 89.35 | 82.43 | 43.53 | 70.12 | 69.32 | 54.90 | 74.11 | 77.80 | 72.10 | 85.16 | 85.28 | 88.33 |
| TransUNet[23] | 94.80 | 93.19 | 90.09 | 90.25 | 87.04 | 48.30 | 72.66 | 73.26 | 52.21 | 77.11 | 80.00 | 74.40 | 89.31 | 90.97 | 91.24 |
| UCTransNet[25] | 94.63 | 92.58 | 90.19 | 89.38 | 87.34 | 60.78 | 73.95 | 75.09 | 59.33 | 76.74 | 78.71 | 74.60 | 89.07 | 91.00 | 91.15 |
| Crosslink-Net[42] | 94.52 | 92.98 | 90.81 | 91.53 | 83.89 | 56.44 | 70.71 | 65.15 | 48.00 | 75.21 | 77.44 | 72.47 | 85.38 | 89.80 | 90.56 |
| CPP-Net[43] | 94.56 | 92.35 | 89.75 | 90.59 | 87.42 | 50.22 | 73.01 | 70.36 | 56.75 | 77.61 | 79.83 | 73.28 | 88.37 | 90.55 | 90.74 |
| SwinUnet[28] | 94.92 | 93.99 | 89.26 | 90.82 | 85.17 | 56.71 | 68.02 | 72.48 | 57.54 | 75.56 | 79.55 | 74.81 | 88.86 | 90.46 | 91.40 |
| TransNuSeg[44] | 94.78 | 92.96 | 89.34 | 87.80 | 79.58 | 53.36 | 69.04 | 71.44 | 57.37 | 75.60 | 76.77 | 76.28 | 86.55 | 83.86 | 84.77 |
| ScribFormer[31] | 95.00 | 93.23 | 90.98 | 90.11 | 86.62 | 57.52 | 71.29 | 73.69 | 56.66 | 77.86 | 80.24 | 74.14 | 88.84 | 90.79 | 91.20 |
| YOHO[45] | 93.92 | 91.85 | 89.59 | 90.13 | 83.20 | 51.03 | 67.60 | 65.19 | 58.06 | 71.35 | 74.60 | 71.22 | 86.08 | 89.71 | 89.89 |
| OUR[46] | 95.08 | 93.60 | 90.28 | 90.10 | 86.45 | 58.92 | 68.66 | 73.41 | 55.08 | 76.86 | 78.53 | 73.62 | 88.52 | 90.41 | 91.22 |
| CSSNet[47] | 94.96 | 92.82 | 88.92 | 87.53 | 82.70 | 55.76 | 67.45 | 73.10 | 54.16 | 76.63 | 77.77 | 72.00 | 87.68 | 89.60 | 90.06 |
| Proposed | 95.43 | 94.20 | 91.26 | 91.67 | 87.08 | 63.52 | 72.14 | 75.80 | 59.89 | 78.05 | 80.83 | 74.93 | 88.39 | 91.43 | 91.75 |

Table 3  Per-organ NSD (%) comparison of different methods on the WORD dataset

| Method | Liver | Spleen | Kidney (L) | Kidney (R) | Stomach | Gallbladder | Esophagus | Pancreas | Duodenum | Colon | Intestine | Rectum | Bladder | Femur (L) | Femur (R) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| UNet[10] | 74.13 | 70.24 | 68.66 | 69.66 | 57.09 | 47.43 | 50.99 | 58.93 | 41.21 | 55.63 | 60.85 | 54.86 | 72.70 | 72.10 | 75.94 |
| Att-UNet[41] | 81.01 | 82.45 | 74.52 | 77.72 | 64.73 | 42.45 | 66.92 | 61.37 | 44.56 | 57.80 | 63.49 | 53.67 | 70.75 | 65.90 | 73.88 |
| TransUNet[23] | 82.36 | 86.55 | 81.23 | 81.54 | 71.67 | 36.12 | 69.86 | 65.23 | 23.71 | 62.06 | 67.15 | 60.18 | 81.97 | 84.80 | 84.97 |
| UCTransNet[25] | 89.27 | 86.52 | 84.85 | 77.41 | 73.77 | 50.38 | 58.74 | 68.63 | 30.81 | 64.45 | 54.61 | 61.15 | 76.79 | 85.06 | 84.27 |
| Crosslink-Net[42] | 79.65 | 84.02 | 79.95 | 80.52 | 65.09 | 35.67 | 66.88 | 31.55 | 38.25 | 60.12 | 63.82 | 56.32 | 77.13 | 79.66 | 81.33 |
| CPP-Net[43] | 81.24 | 83.77 | 80.16 | 81.42 | 71.85 | 48.97 | 70.80 | 60.59 | 45.19 | 63.48 | 67.21 | 59.60 | 79.79 | 83.88 | 83.44 |
| SwinUnet[28] | 83.13 | 85.79 | 81.83 | 73.96 | 61.29 | 43.86 | 68.50 | 68.00 | 49.58 | 63.71 | 58.25 | 60.64 | 78.30 | 79.42 | 80.33 |
| TransNuSeg[44] | 79.78 | 77.37 | 78.18 | 78.50 | 60.65 | 45.15 | 70.96 | 68.35 | 47.78 | 61.69 | 60.98 | 58.89 | 75.99 | 75.23 | 76.48 |
| ScribFormer[31] | 82.34 | 84.19 | 83.19 | 81.51 | 70.14 | 47.88 | 71.31 | 66.49 | 48.51 | 63.59 | 66.86 | 60.25 | 80.02 | 84.43 | 84.86 |
| YOHO[45] | 77.14 | 80.16 | 76.23 | 77.75 | 62.88 | 49.13 | 61.62 | 53.60 | 48.57 | 53.57 | 59.26 | 53.01 | 74.21 | 79.30 | 78.65 |
| OUR[46] | 82.65 | 84.55 | 82.21 | 81.81 | 70.07 | 47.57 | 70.16 | 65.88 | 48.51 | 64.50 | 66.18 | 60.02 | 79.61 | 82.69 | 84.32 |
| CSSNet[47] | 81.33 | 82.83 | 80.45 | 80.41 | 68.19 | 45.70 | 68.86 | 63.69 | 47.74 | 62.65 | 64.10 | 59.18 | 75.00 | 78.67 | 76.56 |
| Proposed | 84.02 | 86.70 | 82.12 | 82.91 | 73.93 | 64.38 | 70.64 | 69.14 | 50.87 | 65.14 | 68.12 | 61.97 | 82.34 | 86.50 | 86.29 |

Table 4  Segmentation performance comparison of different methods on the Synapse dataset (DSC, HD, and mIoU are averages over all organs; the remaining columns give per-organ DSC (%))

| Method | Venue/Year | DSC (%) $\uparrow$ | HD (mm) $\downarrow$ | mIoU (%) $\uparrow$ | Aorta | Gallbladder | Kidney (L) | Kidney (R) | Liver | Pancreas | Spleen | Stomach |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| UNet[10] | MICCAI/2015 | 70.11 | 44.69 | 59.39 | 84.00 | 56.70 | 72.41 | 62.64 | 86.98 | 48.73 | 81.48 | 67.96 |
| Att-UNet[41] | Elsevier MIA/2019 | 71.70 | 34.47 | 61.38 | 82.61 | 61.94 | 76.07 | 70.42 | 87.54 | 46.70 | 80.67 | 67.66 |
| TransUNet[23] | arXiv/2021 | 77.62 | 26.90 | 67.32 | 86.56 | 60.43 | 80.54 | 78.53 | 94.33 | 58.47 | 87.06 | 75.00 |
| UCTransNet[25] | AAAI/2022 | 80.21 | 23.33 | 70.46 | 87.36 | 66.49 | 83.77 | 79.95 | 94.23 | 63.72 | 89.38 | 76.75 |
| Crosslink-Net[42] | IEEE TIP/2022 | 76.60 | 18.20 | 64.83 | 86.25 | 53.35 | 84.62 | 79.63 | 92.72 | 58.56 | 86.17 | 71.49 |
| CPP-Net[43] | IEEE TIP/2023 | 80.11 | 26.41 | 71.23 | 87.59 | 67.14 | 83.09 | 82.31 | 94.03 | 67.34 | 87.53 | 71.81 |
| SwinUnet[28] | ECCV/2022 | 79.13 | 21.55 | 68.81 | 85.47 | 66.53 | 83.28 | 79.61 | 94.29 | 56.58 | 90.66 | 76.60 |
| TransNuSeg[44] | MICCAI/2023 | 78.06 | 28.69 | 69.03 | 82.47 | 65.94 | 79.05 | 79.11 | 93.12 | 58.40 | 88.85 | 77.49 |
| ScribFormer[31] | IEEE TMI/2024 | 80.08 | 20.78 | 70.63 | 87.48 | 65.15 | 86.90 | 82.09 | 94.26 | 60.48 | 88.93 | 75.37 |
| YOHO[45] | IEEE TIP/2024 | 76.85 | 27.41 | 67.79 | 85.34 | 66.33 | 83.38 | 73.66 | 93.82 | 55.57 | 82.65 | 74.07 |
| OUR[46] | MBEC/2023 | 80.06 | 27.54 | 69.72 | 88.32 | 65.96 | 87.02 | 82.50 | 94.31 | 60.23 | 88.41 | 73.76 |
| CSSNet[47] | CMB/2024 | 78.75 | 29.81 | 68.01 | 86.80 | 64.12 | 82.54 | 79.04 | 94.05 | 58.98 | 89.47 | 75.04 |
| Proposed | — | 81.94 | 22.54 | 71.42 | 89.79 | 67.97 | 88.54 | 84.12 | 94.56 | 63.09 | 91.56 | 75.90 |

Table 5  Segmentation performance comparison of different feature extraction modules

| Feature extraction module | FLOPs (G) $\downarrow$ | Params (M) $\downarrow$ | DSC (%) $\uparrow$ | HD (mm) $\downarrow$ |
|---|---|---|---|---|
| CE Transformer | 6.54 | 6.23 | 81.94 | 22.54 |
| Swin Transformer[35] | 11.28 | 15.84 | 79.97 | 27.42 |
| CSWin Transformer[36] | 11.29 | 15.87 | 80.33 | 24.31 |
| Convolutional block | 15.26 | 22.84 | 79.51 | 28.01 |
| ConvNeXt block[51] | 8.75 | 11.30 | 80.48 | 23.39 |
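The FLOPs (G) and Params (M) columns in Tables 5 and 6 can be reproduced for any PyTorch module along the following lines. The paper does not name its profiling tool, so fvcore is used here as one common choice (note that fvcore counts a fused multiply-add as a single FLOP), and the input shape is an assumption.

```python
import torch
from fvcore.nn import FlopCountAnalysis  # pip install fvcore

def profile(model: torch.nn.Module, input_shape=(1, 1, 224, 224)):
    """Return (FLOPs in G, parameters in M) for one forward pass."""
    params_m = sum(p.numel() for p in model.parameters()) / 1e6
    flops = FlopCountAnalysis(model, torch.randn(*input_shape)).total()
    return flops / 1e9, params_m
```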

Table 6  Segmentation performance comparison with different values of $N$

| $N$ (Stage 1) | $N$ (Stage 2) | $N$ (Stage 3) | $N$ (Stage 4) | FLOPs (G) $\downarrow$ | Params (M) $\downarrow$ | DSC (%) $\uparrow$ | HD (mm) $\downarrow$ |
|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 4.17 | 3.16 | 73.26 | 43.46 |
| 1 | 1 | 1 | 1 | 5.16 | 4.59 | 77.43 | 35.78 |
| 2 | 2 | 2 | 2 | 6.12 | 6.02 | 79.52 | 24.22 |
| 1 | 2 | 4 | 2 | 6.54 | 6.23 | 81.94 | 22.54 |
| 1 | 3 | 6 | 3 | 7.15 | 8.22 | 81.33 | 24.89 |
| 2 | 4 | 8 | 4 | 8.33 | 9.92 | 81.43 | 25.42 |
| 2 | 6 | 12 | 6 | 10.10 | 13.28 | 81.67 | 23.11 |

Table 7  Segmentation performance comparison of different cross-connection strategies

| Cross-connection strategy | DSC (%) $\uparrow$ | HD (mm) $\downarrow$ | mIoU (%) $\uparrow$ |
|---|---|---|---|
| No connection | 79.97 | 27.81 | 70.04 |
| CF connection | 80.87 | 23.07 | 70.83 |
| PF connection | 80.19 | 24.28 | 70.76 |
| Full connection | 81.94 | 22.54 | 71.42 |

Table 8  Impact of each sub-module in the CE Transformer on network performance

| No. | SA | FA | CF | PF | DSC (%) $\uparrow$ | HD (mm) $\downarrow$ |
|---|---|---|---|---|---|---|
| 1 | $\checkmark$ | | | | 74.59 | 40.70 |
| 2 | $\checkmark$ | $\checkmark$ | | | 78.48 | 26.13 |
| 3 | $\checkmark$ | $\checkmark$ | $\checkmark$ | | 80.31 | 23.31 |
| 4 | $\checkmark$ | $\checkmark$ | $\checkmark$ | $\checkmark$ | 81.92 | 22.54 |
[1] Fang Chao-Wei, Li Xue, Li Zhong-Yu, Jiao Li-Cheng, Zhang Ding-Wen. Interactive dual-model learning for semi-supervised medical image segmentation. Acta Automatica Sinica, 2023, 49(4): 805−819 (in Chinese)
[2] Ji Y F, Bai H T, Ge C J, Yang J, Zhu Y, Zhang R M, et al. AMOS: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. New Orleans, USA: Curran Associates Inc., 2022. Article No. 2661
    [3] Ma J, Zhang Y, Gu S, Zhu C, Ge C, Zhang Y C, et al. AbdomenCT-1K: Is abdominal organ segmentation a solved problem?. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(10): 6695−6714 doi: 10.1109/TPAMI.2021.3100536
[4] Bi Xiu-Li, Lu Meng, Xiao Bin, Li Wei-Sheng. Pancreas segmentation based on dual-decoding U-net. Journal of Software, 2022, 33(5): 1947−1958 (in Chinese)
    [5] Rayed E, Islam S M S, Niha S I, Jim J R, Kabir M, Mridha M F. Deep learning for medical image segmentation: State-of-the-art advancements and challenges. Informatics in Medicine Unlocked, 2024, 47: Article No. 101504 doi: 10.1016/j.imu.2024.101504
    [6] Li Z W, Liu F, Yang W J, Peng S H, Zhou J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(12): 6999−7019 doi: 10.1109/TNNLS.2021.3084827
    [7] Yao X J, Wang X Y, Wang S H, Zhang Y D. A comprehensive survey on convolutional neural network in medical image analysis. Multimedia Tools and Applications, 2022, 81(29): 41361−41405 doi: 10.1007/s11042-020-09634-7
    [8] Sarvamangala D R, Kulkarni R V. Convolutional neural networks in medical image understanding: A survey. Evolutionary Intelligence, 2022, 15(1): 1−22 doi: 10.1007/s12065-020-00540-3
    [9] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015. 3431−3440
    [10] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer, 2015. 234−241
[11] Yin Xiao-Hang, Wang Yong-Cai, Li De-Ying. Survey of medical image segmentation technology based on U-Net structure improvement. Journal of Software, 2021, 32(2): 519−550 (in Chinese)
    [12] Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez A N, et al. Attention is all you need. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017. 6000−6010
[13] Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X H, Unterthiner T, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In: Proceedings of the 9th International Conference on Learning Representations. Virtual Event, Austria: OpenReview.net, 2021.
    [14] Conze P H, Andrade-Miranda G, Singh V K, Jaouen V, Visvikis D. Current and emerging trends in medical image segmentation with deep learning. IEEE Transactions on Radiation and Plasma Medical Sciences, 2023, 7(6): 545−569 doi: 10.1109/TRPMS.2023.3265863
    [15] Yao W J, Bai J J, Liao W, Chen Y H, Liu M J, Xie Y. From CNN to transformer: A review of medical image segmentation models. Journal of Imaging Informatics in Medicine, 2024, 37(4): 1529−1547 doi: 10.1007/s10278-024-00981-7
    [16] Han K, Wang Y H, Chen H T, Chen X H, Guo J Y, Liu Z H, et al. A survey on vision transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(1): 87−110 doi: 10.1109/TPAMI.2022.3152247
    [17] Parvaiz A, Khalid M A, Zafar R, Ameer H, Ali M, Fraz M M. Vision Transformers in medical computer vision-A contemplative retrospection. Engineering Applications of Artificial Intelligence, 2023, 122: Article No. 106126 doi: 10.1016/j.engappai.2023.106126
    [18] Kirillov A, Mintun E, Ravi N, Mao H Z, Rolland C, Gustafson L, et al. Segment anything. arXiv preprint arXiv: 2304.02643, 2023.
    [19] Mazurowski M A, Dong H Y, Gu H X, Yang J C, Konz N, Zhang Y X. Segment anything model for medical image analysis: An experimental study. Medical Image Analysis, 2023, 89: Article No. 102918 doi: 10.1016/j.media.2023.102918
    [20] He S, Bao R N, Li J P, Grant P E, Ou Y M. Accuracy of segment-anything model (SAM) in medical image segmentation tasks. arXiv preprint arXiv: 2304.09324v1, 2023.
    [21] Zhang K D, Liu D. Customized segment anything model for medical image segmentation. arXiv preprint arXiv: 2304.13785, 2023.
    [22] Xiao H G, Li L, Liu Q Y, Zhu X H, Zhang Q H. Transformers in medical image segmentation: A review. Biomedical Signal Processing and Control, 2023, 84: Article No. 104791 doi: 10.1016/j.bspc.2023.104791
    [23] Chen J N, Lu Y Y, Yu Q H, Luo X D, Adeli E, Wang Y, et al. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv preprint arXiv: 2102.04306, 2021.
    [24] Wang W X, Chen C, Ding M, Yu H, Zha S, Li J Y. TransBTS: Multimodal brain tumor segmentation using transformer. In: Proceedings of the 24th International Conference on Medical Image Computing and Computer Assisted Intervention. Strasbourg, France: Springer, 2021. 109−119
[25] Wang H N, Cao P, Wang J Q, Zaiane O R. UCTransNet: Rethinking the skip connections in U-Net from a channel-wise perspective with transformer. In: Proceedings of the 36th AAAI Conference on Artificial Intelligence. Virtual Event: AAAI, 2022. 2441−2449
[26] Xie E Z, Wang W H, Yu Z D, Anandkumar A, Alvarez J M, Luo P. SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proceedings of the 35th International Conference on Neural Information Processing Systems. Virtual Event: Curran Associates Inc., 2021. Article No. 924
    [27] Rahman M, Marculescu R. Medical image segmentation via cascaded attention decoding. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. Waikoloa, USA: IEEE, 2023. 6222−6231
    [28] Cao H, Wang Y Y, Chen J, Jiang D S, Zhang X P, Tian Q, et al. Swin-unet: Unet-like pure transformer for medical image segmentation. In: Proceedings of the Computer Vision – ECCV 2022 Workshops. Tel Aviv, Israel: Springer, 2023. 205−218
    [29] Valanarasu J M J, Oza P, Hacihaliloglu I, Patel V M. Medical transformer: Gated axial-attention for medical image segmentation. In: Proceedings of the 24th International Conference on Medical Image Computing and Computer Assisted Intervention. Strasbourg, France: Springer, 2021. 36−46
    [30] Huang X H, Deng Z F, Li D D, Yuan X G, Fu Y. MISSFormer: An effective transformer for 2D medical image segmentation. IEEE Transactions on Medical Imaging, 2023, 42(5): 1484−1494 doi: 10.1109/TMI.2022.3230943
    [31] Li Z H, Zheng Y, Shan D D, Yang S Z, Li Q D, Wang B Z, et al. ScribFormer: Transformer makes CNN work better for scribble-based medical image segmentation. IEEE Transactions on Medical Imaging, 2024, 43(6): 2254−2265 doi: 10.1109/TMI.2024.3363190
    [32] Yao M, Zhang Y Z, Liu G F, Pang D. SSNet: A novel transformer and CNN hybrid network for remote sensing semantic segmentation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2024, 17: 3023−3037 doi: 10.1109/JSTARS.2024.3349657
    [33] Xu G A, Jia W J, Wu T, Chen L G, Gao G W. HAFormer: Unleashing the power of hierarchy-aware features for lightweight semantic segmentation. IEEE Transactions on Image Processing, 2024, 33: 4202−4214 doi: 10.1109/TIP.2024.3425048
    [34] Panayides A S, Amini A, Filipovic N D, Sharma A, Tsaftaris S A, Young A, et al. AI in medical imaging informatics: Current challenges and future directions. IEEE Journal of Biomedical and Health Informatics, 2020, 24(7): 1837−1857 doi: 10.1109/JBHI.2020.2991043
    [35] Liu Z, Lin Y T, Cao Y, Hu H, Wei Y X, Zhang Z, et al. Swin transformer: Hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE, 2021. 10012−10022
    [36] Dong X Y, Bao J M, Chen D D, Zhang W M, Yu N H, Yuan L, et al. CSWin transformer: A general vision transformer backbone with cross-shaped windows. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, USA: IEEE, 2022. 12124−12134
    [37] Yang J W, Li C Y, Dai X Y, Gao J F. Focal modulation networks. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. New Orleans, USA: Curran Associates Inc., 2022. Article No. 304
    [38] Woo S, Debnath S, Hu R H, Chen X L, Liu Z, Kweon I S, et al. ConvNeXt V2: Co-designing and scaling ConvNets with masked autoencoders. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, Canada: IEEE, 2023. 16133−16142
    [39] Luo X D, Liao W J, Xiao J H, Chen J N, Song T, Zhang X F, et al. WORD: A large scale dataset, benchmark and clinical applicable study for abdominal organ segmentation from CT image. Medical Image Analysis, 2022, 82: Article No. 102642 doi: 10.1016/j.media.2022.102642
[40] Synapse multi-organ segmentation dataset [Online], available: https://www.synapse.org/#!Synapse:syn3193805/wiki/89480, May 13, 2023.
    [41] Oktay O, Schlemper J, Le Folgoc L, Lee M, Heinrich M, Misawa K, et al. Attention U-Net: Learning where to look for the pancreas. arXiv preprint arXiv: 1804.03999, 2018.
    [42] Yu Q, Qi L, Gao Y, Wang W Z, Shi Y H. Crosslink-Net: Double-branch encoder network via fusing vertical and horizontal convolutions for medical image segmentation. IEEE Transactions on Image Processing, 2022, 31: 5893−5908 doi: 10.1109/TIP.2022.3203223
    [43] Chen S C, Ding C X, Liu M F, Cheng J, Tao D C. CPP-net: Context-aware polygon proposal network for nucleus segmentation. IEEE Transactions on Image Processing, 2023, 32: 980−994 doi: 10.1109/TIP.2023.3237013
    [44] He Z Q, Unberath M, Ke J, Shen Y Q. TransNuSeg: A lightweight multi-task transformer for nuclei segmentation. In: Proceedings of the 26th International Conference on Medical Image Computing and Computer Assisted Intervention. Vancouver, Canada: Springer, 2023. 206−215
    [45] Li H P, Liu D R, Zeng Y, Liu S C, Gan T, Rao N N, et al. Single-image-based deep learning for segmentation of early esophageal cancer lesions. IEEE Transactions on Image Processing, 2024, 33: 2676−2688 doi: 10.1109/TIP.2024.3379902
    [46] Hong Z F, Chen M Z, Hu W J, Yan S Y, Qu A P, Chen L N, et al. Dual encoder network with transformer-CNN for multi-organ segmentation. Medical & Biological Engineering & Computing, 2023, 61(3): 661−671
    [47] Shao Y Q, Zhou K Y, Zhang L C. CSSNet: Cascaded spatial shift network for multi-organ segmentation. Computers in Biology and Medicine, 2024, 179: Article No. 107955
    [48] Seidlitz S, Sellner J, Odenthal J, Özdemir B, Studier-Fischer A, Knödler S, et al. Robust deep learning-based semantic organ segmentation in hyperspectral images. Medical Image Analysis, 2022, 80: Article No. 102488 doi: 10.1016/j.media.2022.102488
    [49] Gravetter F J, Wallnau L B, Forzano L A B, Witnauer J E. Essentials of Statistics for the Behavioral Sciences (Tenth edition). Australia: Cengage Learning, 2021. 326−333
    [50] Zhou H Y, Gou J S, Zhang Y H, Han X G, Yu L Q, Wang L S, et al. nnFormer: Volumetric medical image segmentation via a 3D transformer. IEEE Transactions on Image Processing, 2023, 32: 4036−4045 doi: 10.1109/TIP.2023.3293771
    [51] Liu Z, Mao H Z, Wu C Y, Feichtenhofer C, Darrell T, Xie S N. A ConvNet for the 2020s. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, USA: IEEE, 2022. 11976−11986
Publication history
  • Received: 2024-07-15
  • Accepted: 2024-12-23
  • Published online: 2025-01-24

目录

    /

    返回文章
    返回