2023 Impact Factor (CJCR): 2.845

Indexed in:

  • Chinese Core Journals
  • EI
  • China Science and Technology Core Journals
  • Scopus
  • CSCD
  • Science Abstracts (UK)

Domain Adaptive Object Detection Based on Attention Mechanism and Cycle Domain Triplet Loss

Zhou Yang, Han Bing, Gao Xin-Bo, Yang Zheng, Chen Wei-Ming

Citation: Zhou Yang, Han Bing, Gao Xin-Bo, Yang Zheng, Chen Wei-Ming. Domain adaptive object detection based on attention mechanism and cycle domain triplet loss. Acta Automatica Sinica, 2024, 50(11): 2188−2203 doi: 10.16383/j.aas.c220938


doi: 10.16383/j.aas.c220938 cstr: 32138.14.j.aas.c220938
More Information
    Author Bio:

    ZHOU Yang Master student at the School of Electronic Engineering, Xidian University. He received his bachelor degree in electronic and information engineering from Southwest Petroleum University in 2020. His research interest covers computer vision and domain adaptive object detection. E-mail: yzhou_6@stu.xidian.edu.cn

    HAN Bing Professor at the School of Electronic Engineering, Xidian University. Her research interest covers intelligent assisted driving systems, visual perception and cognition, and cross-disciplinary research between space physics and artificial intelligence. Corresponding author of this paper. E-mail: bhan@xidian.edu.cn

    GAO Xin-Bo Professor at Xidian University. His research interest covers machine learning, image processing, computer vision, pattern recognition, and multimedia content analysis. E-mail: xbgao@ieee.org

    YANG Zheng Ph.D. candidate at the School of Electronic Engineering, Xidian University. He received his bachelor degree in intelligent science and technology from Xidian University in 2017. His research interest covers deep learning, object tracking, and reinforcement learning. E-mail: zhengy@stu.xidian.edu.cn

    CHEN Wei-Ming Master student at the School of Electronic Engineering, Xidian University. He received his bachelor degree in mechanical design, manufacture and automation from Xidian University in 2019. His research interest covers computer vision, object detection, and remote sensing. E-mail: wmchen@stu.xidian.edu.cn

Domain Adaptive Object Detection Based on Attention Mechanism and Cycle Domain Triplet Loss

Funds: Supported by National Natural Science Foundation of China (62076190, 41831072, 62036007), Key Industry Innovation Chain of Shaanxi Province (2022ZDLGY01-11), Key Industry Chain Technology Research Project of Xi'an (23ZDCYJSGG0022-2023), and Youth Open Project of National Space Science Data Center (NSSDC2302005)

  • Abstract: Most current deep learning algorithms rely on large amounts of annotated data and lack generalization ability. Unsupervised domain adaptation can extract the implicit common features between annotated and unannotated data, and thus improve an algorithm's generalization performance on the unannotated data. Existing domain adaptive object detection algorithms are designed mainly for two-stage detectors. Because one-stage detectors cannot perform instance-level feature alignment directly, some domain-invariant features are lost; to address this, an image-level domain classifier combined with a channel attention mechanism is proposed to strengthen the extraction of domain-invariant features. In addition, to counter the accuracy drop caused by incorrect alignment of category features in domain adaptive object detection, category centers are built via prototype learning, and a prototype-based cycle domain triplet loss (CDTL) is designed to achieve prototype-guided fine-grained alignment of category features. One-stage object detectors are used as the base detectors, and experiments are conducted on several public domain adaptive object detection datasets. The results show that the proposed method effectively improves the generalization ability of the original detector on the target domain, achieves higher detection accuracy than other methods, and generalizes across one-stage object detection networks.
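The cycle domain triplet loss sketched in the abstract pairs instance features with class prototypes of the opposite domain. As a rough stdlib illustration of the general idea (not the paper's exact formulation; the margin, the momentum value, and all function names below are illustrative assumptions):

```python
import math

def update_prototype(proto, feats, momentum=0.9):
    # Running class center (prototype learning): exponential moving average
    # of the features assigned to one class in one domain.
    mean = [sum(col) / len(feats) for col in zip(*feats)]
    return [momentum * p + (1.0 - momentum) * m for p, m in zip(proto, mean)]

def prototype_triplet_loss(feat, proto_same, protos_other, margin=1.0):
    # Pull the feature toward the same-class prototype of the other domain,
    # push it away from the nearest different-class prototype (hinge loss).
    d_pos = math.dist(feat, proto_same)
    d_neg = min(math.dist(feat, p) for p in protos_other)
    return max(0.0, d_pos - d_neg + margin)
```

A cycle variant would accumulate this loss in both directions: source features against target-domain prototypes and target features against source-domain prototypes.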
  • Fig.  1  The pipeline of domain adaptive object detection based on attention mechanism and cycle domain triplet loss

    Fig.  2  Principle of the cycle domain adaptive triplet loss

    Fig.  3  The subjective detection results of our method on CityScapes→FoggyCityScapes

    Fig.  4  The subjective detection results of our method on SunnyDay→DuskRainy and SunnyDay→NightRainy

    Fig.  5  The ablation experimental results of our method on KITTI→CityScapes and Sim10k→CityScapes

    Fig.  6  The subjective results of our method on VOC→Clipart1k

    Fig.  7  The results of different numbers of cycle training iterations on the YOLOv3 and YOLOv5s detectors
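The ablation tables below abbreviate the channel attention domain classifier as CADC. The channel attention itself follows the squeeze-and-excitation pattern of [35]; a minimal stdlib sketch (feature maps flattened to per-channel lists; the tiny weight matrices `w1`/`w2` are illustrative assumptions, in practice they are learned FC layers):

```python
import math

def channel_attention_weights(feat_map, w1, w2):
    # SE-style channel gate: squeeze each channel by global average pooling,
    # pass through two small FC layers (ReLU then sigmoid), one gate per channel.
    squeezed = [sum(ch) / len(ch) for ch in feat_map]
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w1]
    return [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
            for row in w2]

def apply_gate(feat_map, gates):
    # Re-weight every channel of the (flattened) feature map.
    return [[g * v for v in ch] for g, ch in zip(gates, feat_map)]
```

In the paper's setting, the gated image-level features feed a domain classifier so that channels carrying domain-invariant information are emphasized.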

    Table  1  The results of different methods on the CityScapes→FoggyCityScapes dataset (%)

    | Method | Detector | person | rider | car | truck | bus | motor | bike | train | mAP | mGP |
    |---|---|---|---|---|---|---|---|---|---|---|---|
    | DAF[10] | Faster R-CNN | 25.0 | 31.0 | 40.5 | 22.1 | 35.3 | 20.0 | 27.1 | 20.2 | 27.7 | 38.8 |
    | SWDA[11] | Faster R-CNN | 29.9 | 42.3 | 43.5 | 24.5 | 36.2 | 30.0 | 35.3 | 32.6 | 34.3 | 70.0 |
    | C2F[14] | Faster R-CNN | 34.0 | 46.9 | 52.1 | 30.8 | 43.2 | 34.7 | 37.4 | 29.9 | 38.6 | 79.1 |
    | CAFA[16] | Faster R-CNN | 41.9 | 38.7 | 56.7 | 22.6 | 41.5 | 24.6 | 35.5 | 26.8 | 36.0 | 81.9 |
    | ICCR-VDD[21] | Faster R-CNN | 33.4 | 44.0 | 51.7 | 33.9 | 52.0 | 34.2 | 36.8 | 34.7 | 40.0 | — |
    | MeGA[20] | Faster R-CNN | 37.7 | 49.0 | 52.4 | 25.4 | 49.2 | 34.5 | 39.0 | 46.9 | 41.8 | 91.1 |
    | DAYOLO[28] | YOLOv3 | 29.5 | 27.7 | 46.1 | 9.1 | 28.2 | 12.7 | 24.8 | 4.5 | 36.1 | 61.0 |
    | Ours (v3) | YOLOv3 | 34.0 | 37.2 | 55.8 | 31.4 | 44.4 | 22.3 | 30.8 | 50.7 | 38.3 | 83.9 |
    | MS-DAYOLO[31] | YOLOv4 | 39.6 | 46.5 | 56.5 | 28.9 | 51.0 | 27.5 | 36.0 | 45.9 | 41.5 | 68.6 |
    | A-DAYOLO[32] | YOLOv5 | 32.8 | 35.7 | 51.3 | 18.8 | 34.5 | 11.8 | 25.6 | 16.2 | 28.3 | — |
    | S-DAYOLO[34] | YOLOv5 | 42.6 | 42.1 | 61.9 | 23.5 | 40.5 | 24.4 | 37.3 | 39.5 | 39.0 | 69.9 |
    | Ours (v5) | YOLOv5s | 30.9 | 37.4 | 53.3 | 23.8 | 39.5 | 24.2 | 29.9 | 35.0 | 34.3 | 83.8 |

     Note: "—" indicates that the method did not report this experiment; (v3) and (v5) indicate that the detector is YOLOv3 and YOLOv5s, respectively; bold values indicate the best results in the comparison.

    Table  2  The results of different methods on the SunnyDay→DuskRainy dataset (%)

    | Method | Detector | bus | bike | car | motor | person | rider | truck | mAP | $\Delta{\rm{mAP}}$ |
    |---|---|---|---|---|---|---|---|---|---|---|
    | DAF[10] | Faster R-CNN | 43.6 | 27.5 | 52.3 | 16.1 | 28.5 | 21.7 | 44.8 | 33.5 | 5.2 |
    | SWDA[11] | Faster R-CNN | 40.0 | 22.8 | 51.4 | 15.4 | 26.3 | 20.3 | 44.2 | 31.5 | 3.2 |
    | ICCR-VDD[21] | Faster R-CNN | 47.9 | 33.2 | 55.1 | 26.1 | 30.5 | 23.8 | 48.1 | 37.8 | 9.5 |
    | Ours (v3) | YOLOv3 | 50.1 | 24.9 | 70.7 | 24.2 | 39.1 | 19.0 | 53.2 | 40.2 | 7.4 |
    | Ours (v5) | YOLOv5s | 46.2 | 22.1 | 68.2 | 16.5 | 34.8 | 17.5 | 50.5 | 36.5 | 9.4 |

     Note: $\Delta {\rm{mAP}}$ denotes the improvement in mAP.

    Table  3  The results of different methods on the SunnyDay→NightRainy dataset (%)

    | Method | Detector | bus | bike | car | motor | person | rider | truck | mAP | $\Delta {\rm{mAP}}$ |
    |---|---|---|---|---|---|---|---|---|---|---|
    | DAF[10] | Faster R-CNN | 23.8 | 12.0 | 37.7 | 0.2 | 14.9 | 4.0 | 29.0 | 17.4 | 1.1 |
    | SWDA[11] | Faster R-CNN | 24.7 | 10.0 | 33.7 | 0.6 | 13.5 | 10.4 | 29.1 | 17.4 | 1.1 |
    | ICCR-VDD[21] | Faster R-CNN | 34.8 | 15.6 | 38.6 | 10.5 | 18.7 | 17.3 | 30.6 | 23.7 | 7.4 |
    | Ours (v3) | YOLOv3 | 45.0 | 8.2 | 51.1 | 4.0 | 20.9 | 9.6 | 37.9 | 25.3 | 5.1 |
    | Ours (v5) | YOLOv5s | 40.7 | 9.3 | 45.0 | 0.6 | 12.8 | 9.2 | 32.5 | 21.5 | 4.7 |

    Table  4  The results of different methods on KITTI→CityScapes and Sim10k→CityScapes datasets (%)

    | Method | KITTI→CityScapes AP | KITTI→CityScapes GP | Sim10k→CityScapes AP | Sim10k→CityScapes GP |
    |---|---|---|---|---|
    | DAF[10] | 38.5 | 21.0 | 39.0 | 22.5 |
    | SWDA[11] | 37.9 | 19.5 | 42.3 | 30.8 |
    | C2F[14] | — | — | 43.8 | 35.3 |
    | CAFA[16] | 43.2 | 32.9 | 49.0 | 47.7 |
    | MeGA[20] | 43.0 | 32.4 | 44.8 | 37.0 |
    | DAYOLO[28] | 54.0 | 82.2 | 50.9 | 39.5 |
    | Ours (v3) | 61.1 | 29.4 | 60.8 | 37.1 |
    | A-DAYOLO[32] | 37.7 | — | 44.9 | — |
    | S-DAYOLO[34] | 49.3 | — | 52.9 | — |
    | Ours (v5) | 60.0 | 50.4 | 60.3 | 56.3 |

     Note: "—" indicates that the method did not report this result.

    Table  5  The results of ablation experiment on CityScapes→FoggyCityScapes dataset based on YOLOv3 (%)

    | Method | person | rider | car | truck | bus | motor | bike | train | mAP |
    |---|---|---|---|---|---|---|---|---|---|
    | SO | 29.8 | 35.0 | 44.7 | 20.4 | 32.4 | 14.8 | 28.3 | 21.6 | 28.4 |
    | CADC | 34.4 | 38.0 | 54.7 | 24.4 | 45.0 | 21.2 | 32.1 | 49.1 | 37.2 |
    | CDTL | 31.1 | 38.0 | 46.7 | 28.9 | 34.5 | 23.4 | 27.8 | 13.7 | 30.5 |
    | CADC + CDTL | 34.0 | 37.2 | 55.8 | 31.4 | 44.4 | 22.3 | 30.8 | 50.7 | 38.3 |
    | Oracle | 34.9 | 38.8 | 55.9 | 25.3 | 45.0 | 22.6 | 33.4 | 49.1 | 40.2 |

    Table  6  The results of ablation experiment on CityScapes→FoggyCityScapes dataset based on YOLOv5s (%)

    | Method | person | rider | car | truck | bus | motor | bike | train | mAP |
    |---|---|---|---|---|---|---|---|---|---|
    | SO | 26.9 | 33.1 | 39.9 | 8.9 | 21.1 | 11.3 | 24.8 | 4.9 | 21.4 |
    | CADC | 32.6 | 37.1 | 52.7 | 26.8 | 38.1 | 23.0 | 38.1 | 32.6 | 34.1 |
    | CDTL | 29.7 | 36.7 | 43.2 | 13.1 | 25.5 | 17.1 | 28.7 | 13.1 | 26.2 |
    | CADC + CDTL | 30.9 | 37.4 | 53.3 | 23.8 | 39.5 | 24.2 | 29.9 | 35.0 | 34.3 |
    | Oracle | 34.8 | 37.9 | 57.5 | 24.4 | 42.7 | 23.1 | 33.2 | 40.8 | 36.8 |

    Table  7  The results of ablation experiment on SunnyDay→DuskRainy dataset based on YOLOv3 (%)

    | Method | bus | bike | car | motor | person | rider | truck | mAP |
    |---|---|---|---|---|---|---|---|---|
    | SO | 43.7 | 14.3 | 68.4 | 12.0 | 31.5 | 10.9 | 48.7 | 32.8 |
    | CADC | 50.0 | 22.6 | 70.8 | 23.2 | 38.4 | 18.7 | 53.5 | 39.6 |
    | CDTL | 45.4 | 20.1 | 69.2 | 15.2 | 34.8 | 17.2 | 47.8 | 35.7 |
    | CADC + CDTL | 50.1 | 24.9 | 70.7 | 24.2 | 39.1 | 19.0 | 53.2 | 40.2 |

    Table  8  The results of ablation experiment on SunnyDay→DuskRainy dataset based on YOLOv5s (%)

    | Method | bus | bike | car | motor | person | rider | truck | mAP |
    |---|---|---|---|---|---|---|---|---|
    | SO | 37.2 | 8.4 | 63.8 | 5.5 | 23.7 | 7.9 | 43.4 | 27.1 |
    | CADC | 45.6 | 22.1 | 68.2 | 16.6 | 34.5 | 15.4 | 50.1 | 35.9 |
    | CDTL | 41.6 | 13.1 | 65.5 | 7.6 | 29.7 | 10.2 | 44.9 | 30.4 |
    | CADC + CDTL | 46.2 | 22.1 | 68.2 | 16.5 | 34.8 | 17.5 | 50.5 | 36.5 |

    Table  9  The results of ablation experiment on SunnyDay→NightRainy dataset based on YOLOv3 (%)

    | Method | bus | bike | car | motor | person | rider | truck | mAP |
    |---|---|---|---|---|---|---|---|---|
    | SO | 39.2 | 5.1 | 44.2 | 0.2 | 14.8 | 6.9 | 30.7 | 20.2 |
    | CADC | 44.4 | 8.1 | 50.9 | 0.6 | 20.2 | 11.3 | 38.3 | 24.8 |
    | CDTL | 40.4 | 8.2 | 45.8 | 0.6 | 16.2 | 7.2 | 33.4 | 21.7 |
    | CADC + CDTL | 45.0 | 8.2 | 51.1 | 4.0 | 20.9 | 9.6 | 37.9 | 25.3 |

    Table  10  The results of ablation experiment on SunnyDay→NightRainy dataset based on YOLOv5s (%)

    | Method | bus | bike | car | motor | person | rider | truck | mAP |
    |---|---|---|---|---|---|---|---|---|
    | SO | 25.4 | 3.2 | 36.3 | 0.2 | 9.1 | 4.4 | 20.8 | 14.2 |
    | CADC | 38.7 | 8.3 | 42.7 | 0.3 | 12.3 | 6.4 | 32.0 | 20.1 |
    | CDTL | 34.3 | 6.2 | 44.2 | 0.5 | 11.2 | 8.7 | 30.3 | 19.3 |
    | CADC + CDTL | 40.7 | 9.3 | 45.0 | 0.6 | 12.8 | 9.2 | 32.5 | 21.5 |

    Table  11  The results of different methods on KITTI→CityScapes and Sim10k→CityScapes datasets (%)

    | Detector | Method | KITTI | Sim10k |
    |---|---|---|---|
    | YOLOv3 | SO | 59.6 | 58.5 |
    | YOLOv3 | CADC | 60.5 | 59.6 |
    | YOLOv3 | CDTL | 60.5 | 60.8 |
    | YOLOv3 | CADC + CDTL | 61.1 | 59.8 |
    | YOLOv3 | Oracle | 64.7 | 64.7 |
    | YOLOv5s | SO | 54.0 | 53.1 |
    | YOLOv5s | CADC | 59.5 | 58.6 |
    | YOLOv5s | CDTL | 59.0 | 60.3 |
    | YOLOv5s | CADC + CDTL | 60.0 | 59.0 |
    | YOLOv5s | Oracle | 65.9 | 65.9 |

    Table  12  The experiment of our method on VOC→Clipart1k (%)

    | Method | aero | bcycle | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | hrs | bike | prsn | plnt | sheep | sofa | train | tv | mAP |
    |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
    | I3Net | 23.7 | 66.2 | 25.3 | 19.3 | 23.7 | 55.2 | 35.7 | 13.6 | 37.8 | 35.5 | 25.4 | 13.9 | 24.1 | 60.3 | 56.3 | 39.8 | 13.6 | 34.5 | 56.0 | 41.8 | 35.1 |
    | I3Net + CDTL | 23.3 | 61.6 | 27.8 | 17.1 | 24.7 | 54.3 | 39.8 | 12.3 | 41.4 | 34.1 | 32.2 | 15.5 | 27.6 | 77.9 | 57.0 | 37.4 | 5.5 | 31.3 | 51.8 | 47.8 | 36.0 |
    | I3Net + CDTL + ${\rm{CADC}}^*$ | 31.2 | 60.4 | 31.8 | 19.4 | 27.0 | 63.3 | 40.7 | 13.7 | 41.1 | 38.4 | 27.2 | 18.0 | 25.5 | 67.8 | 54.9 | 37.2 | 15.5 | 36.4 | 54.8 | 47.8 | 37.6 |

    Table  13  The experiment of our method on VOC→Comic2k (%)

    | Method | bike | bird | car | cat | dog | person | mAP |
    |---|---|---|---|---|---|---|---|
    | I3Net | 44.9 | 17.8 | 31.9 | 10.7 | 23.5 | 46.3 | 29.2 |
    | I3Net + CDTL | 43.7 | 15.1 | 31.5 | 11.7 | 18.6 | 46.9 | 27.9 |
    | I3Net + CDTL + CADC* | 47.8 | 16.0 | 33.8 | 15.1 | 24.4 | 43.5 | 30.1 |

    Table  14  The experiment of our method on VOC→Watercolor2k (%)

    | Method | bike | bird | car | cat | dog | person | mAP |
    |---|---|---|---|---|---|---|---|
    | I3Net | 81.3 | 49.6 | 43.6 | 38.2 | 31.3 | 61.7 | 51.0 |
    | I3Net + CDTL | 79.5 | 47.2 | 41.7 | 33.5 | 35.4 | 60.3 | 49.6 |
    | I3Net + CDTL + CADC* | 84.1 | 45.3 | 46.6 | 32.9 | 31.4 | 61.4 | 50.3 |

    Table  15  The impact of pixel-level alignment on the network (%)

    | Method | Detector | C→F | K→C | S→C |
    |---|---|---|---|---|
    | CDTL + CADC | YOLOv3 | 35.9 | 59.8 | 58.4 |
    | CDTL + CADC + $D_{{\rm{pixel}}}$ | YOLOv3 | 37.2 | 60.5 | 59.6 |
    | CDTL + CADC | YOLOv5s | 32.7 | 58.9 | 56.8 |
    | CDTL + CADC + $D_{{\rm{pixel}}}$ | YOLOv5s | 34.1 | 59.5 | 58.6 |

    Table  16  The choice of loss function in the channel attention domain classifier

    | Detector | $F_1$ | $F_2$ | $F_3$ | mAP (%) |
    |---|---|---|---|---|
    | YOLOv3/v5s | CE | CE | CE | 35.8/32.7 |
    | YOLOv3/v5s | CE | CE | FL | 36.4/33.2 |
    | YOLOv3/v5s | CE | FL | FL | 37.2/34.1 |
    | YOLOv3/v5s | FL | FL | FL | 37.0/33.5 |
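Table 16 selects between cross-entropy (CE) and focal loss (FL) [12] for the domain classifiers attached to the three feature levels $F_1$, $F_2$ and $F_3$. For reference, binary focal loss down-weights well-classified examples relative to cross-entropy; a stdlib sketch (the $\alpha$ and $\gamma$ defaults follow [12], the function name and domain-label convention are illustrative assumptions):

```python
import math

def binary_focal_loss(p, y, gamma=2.0, alpha=0.25):
    # p: predicted probability of the positive class (e.g. target domain).
    # Down-weights easy examples by (1 - p_t)^gamma; reduces to
    # alpha-weighted cross-entropy when gamma == 0.
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With gamma > 0, a confidently correct domain prediction contributes far less loss than an uncertain one, which keeps the adversarial signal focused on hard-to-align samples.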
  • [1] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, USA: NIPS, 2012. 1106−1114
    [2] Bottou L, Bousquet O. The tradeoffs of large scale learning. In: Proceedings of the 20th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2007. 161−168
    [3] Shen J, Qu Y R, Zhang W N, Yu Y. Wasserstein distance guided representation learning for domain adaptation. In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence. New Orleans, USA: AAAI, 2018. 4058−4065
    [4] Gao Jun, Huang Li-Li, Sun Chang-Yin. A local weighted mean based domain adaptation learning framework. Acta Automatica Sinica, 2013, 39(7): 1037−1052 (in Chinese)
    [5] Ganin Y, Ustinova E, Ajakan H, Germain P, Larochelle H, Laviolette F, et al. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 2016, 17(1): 2096−2030
    [6] Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial networks. Communications of the ACM, 2020, 63(11): 139−144 doi: 10.1145/3422622
    [7] Guo Ying-Chun, Feng Fang, Yan Gang, Hao Xiao-Ke. Cross-domain person re-identification on adaptive fusion network. Acta Automatica Sinica, 2022, 48(11): 2744−2756 (in Chinese)
    [8] Liang Wen-Qi, Wang Guang-Cong, Lai Jian-Huang. Asymmetric cross-domain transfer learning of person re-identification based on the many-to-many generative adversarial network. Acta Automatica Sinica, 2022, 48(1): 103−120 (in Chinese)
    [9] Ren S Q, He K M, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. In: Proceedings of the 28th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press, 2015. 91−99
    [10] Chen Y H, Li W, Sakaridis C, Dai D X, Van Gool L. Domain adaptive faster R-CNN for object detection in the wild. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 3339−3348
    [11] Saito K, Ushiku Y, Harada T, Saenko K. Strong-weak distribution alignment for adaptive object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 6949−6958
    [12] Lin T Y, Goyal P, Girshick R, He K M, Dollar P. Focal loss for dense object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(2): 318−327 doi: 10.1109/TPAMI.2018.2858826
    [13] Shen Z Q, Maheshwari H, Yao W C, Savvides M. SCL: Towards accurate domain adaptive object detection via gradient detach based stacked complementary losses. arXiv preprint arXiv: 1911.02559, 2019.
    [14] Zheng Y T, Huang D, Liu S T, Wang Y H. Cross-domain object detection through coarse-to-fine feature adaptation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 13763−13772
    [15] Xu C D, Zhao X R, Jin X, Wei X S. Exploring categorical regularization for domain adaptive object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 11721−11730
    [16] Hsu C C, Tsai Y H, Lin Y Y, Yang M H. Every pixel matters: Center-aware feature alignment for domain adaptive object detector. In: Proceedings of the 16th European Conference on Computer Vision (ECCV). Glasgow, UK: Springer, 2020. 733−748
    [17] Chen C Q, Zheng Z B, Ding X H, Huang Y, Dou Q. Harmonizing transferability and discriminability for adapting object detectors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 8866−8875
    [18] Zhu J Y, Park T, Isola P, Efros A A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 2242−2251
    [19] Deng J H, Li W, Chen Y H, Duan L X. Unbiased mean teacher for cross-domain object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 4089−4099
    [20] Xu M H, Wang H, Ni B B, Tian Q, Zhang W J. Cross-domain detection via graph-induced prototype alignment. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 12352−12361
    [21] Wu A M, Liu R, Han Y H, Zhu L C, Yang Y. Vector-decomposed disentanglement for domain-invariant object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, Canada: IEEE, 2021. 9322−9331
    [22] Chen C Q, Zheng Z B, Huang Y, Ding X H, Yu Y Z. I3Net: Implicit instance-invariant network for adapting one-stage object detectors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 2021. 12576−12585
    [23] Li Wei, Wang Meng. Unsupervised cross-domain object detection based on progressive multi-source transfer. Acta Automatica Sinica, 2022, 48(9): 2337−2351 (in Chinese)
    [24] Rodriguez A L, Mikolajczyk K. Domain adaptation for object detection via style consistency. In: Proceedings of the 30th British Machine Vision Conference. Cardiff, UK: BMVA Press, 2019.
    [25] Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C Y, et al. SSD: Single shot MultiBox detector. In: Proceedings of the 14th European Conference on Computer Vision (ECCV). Amsterdam, The Netherlands: Springer, 2016. 21−37
    [26] Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016. 779−788
    [27] YOLOv8 [Online], available: https://github.com/ultralytics/yolov8, February 15, 2023
    [28] Zhang S Z, Tuo H Y, Hu J, Jing Z L. Domain adaptive YOLO for one-stage cross-domain detection. In: Proceedings of the 13th Asian Conference on Machine Learning. PMLR, 2021. 785−797
    [29] Redmon J, Farhadi A. YOLOv3: An incremental improvement. arXiv preprint arXiv: 1804.02767, 2018.
    [30] Hnewa M, Radha H. Integrated multiscale domain adaptive YOLO. IEEE Transactions on Image Processing, 2023, 32: 1857−1867 doi: 10.1109/TIP.2023.3255106
    [31] Bochkovskiy A, Wang C Y, Liao H Y M. YOLOv4: Optimal speed and accuracy of object detection. arXiv preprint arXiv: 2004.10934, 2020.
    [32] Vidit V, Salzmann M. Attention-based domain adaptation for single-stage detectors. Machine Vision and Applications, 2022, 33(5): Article No. 65 doi: 10.1007/s00138-022-01320-y
    [33] YOLOv5 [Online], available: https://github.com/ultralytics/yolov5, November 28, 2022
    [34] Li G F, Ji Z F, Qu X D, Zhou R, Cao D P. Cross-domain object detection for autonomous driving: A stepwise domain adaptative YOLO approach. IEEE Transactions on Intelligent Vehicles, 2022, 7(3): 603−615 doi: 10.1109/TIV.2022.3165353
    [35] Hu J, Shen L, Sun G. Squeeze-and-excitation networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 7132−7141
    [36] Wang Q L, Wu B G, Zhu P F, Li P H, Zuo W M, Hu Q H. ECA-Net: Efficient channel attention for deep convolutional neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 11531−11539
    [37] Lee H, Kim H E, Nam H. SRM: A style-based recalibration module for convolutional neural networks. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, South Korea: IEEE, 2019. 1854−1862
    [38] Wang M Z, Wang W, Li B P, Zhang X, Lan L, Tan H B, et al. InterBN: Channel fusion for adversarial unsupervised domain adaptation. In: Proceedings of the 29th ACM International Conference on Multimedia. Virtual Event: ACM, 2021. 3691−3700
    [39] Ding S Y, Lin L, Wang G R, Chao H Y. Deep feature learning with relative distance comparison for person re-identification. Pattern Recognition, 2015, 48(10): 2993−3003 doi: 10.1016/j.patcog.2015.04.005
    [40] Snell J, Swersky K, Zemel R. Prototypical networks for few-shot learning. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017. 4080−4090
    [41] He K M, Fan H Q, Wu Y X, Xie S N, Girshick R. Momentum contrast for unsupervised visual representation learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 9726−9735
    [42] Cordts M, Omran M, Ramos S, Rehfeld T, Enzweiler M, Benenson R, et al. The cityscapes dataset for semantic urban scene understanding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016. 3213−3223
    [43] Sakaridis C, Dai D X, Van Gool L. Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision, 2018, 126(9): 973−992 doi: 10.1007/s11263-018-1072-8
    [44] Yu F, Chen H F, Wang X, Xian W Q, Chen Y Y, Liu F C, et al. Bdd100K: A diverse driving dataset for heterogeneous multitask learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 2633−2642
    [45] Geiger A, Lenz P, Stiller C, Urtasun R. Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research, 2013, 32(11): 1231−1237 doi: 10.1177/0278364913491297
    [46] Johnson-Roberson M, Barto C, Mehta R, Sridhar S N, Rosaen K, Vasudevan R. Driving in the matrix: Can virtual worlds replace human-generated annotations for real world tasks? In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Singapore: IEEE, 2017. 746−753
    [47] Everingham M, Van Gool L, Williams C K I, Winn J, Zisserman A. The Pascal visual object classes (VOC) challenge. International Journal of Computer Vision, 2010, 88(2): 303−338 doi: 10.1007/s11263-009-0275-4
    [48] Inoue N, Furuta R, Yamasaki T, Aizawa K. Cross-domain weakly-supervised object detection through progressive domain adaptation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 5001−5009
Publication history
  • Received: 2022-12-05
  • Accepted: 2023-05-18
  • Published online: 2023-08-18
  • Issue published: 2024-11-26
