Domain Adaptive Object Detection Based on Attention Mechanism and Cycle Domain Triplet Loss
-
Abstract: Most current deep learning algorithms rely on large amounts of annotated data and show limited generalization ability. Unsupervised domain adaptation algorithms can extract the implicit features shared by labeled and unlabeled data, thereby improving performance on the unlabeled data. Existing domain adaptive object detection algorithms are designed mainly for two-stage detectors. In one-stage detectors, instance-level features cannot be aligned directly, which causes the loss of a number of domain-invariant features; to address this, an image-level domain classifier combined with a channel attention mechanism is proposed to strengthen domain-invariant feature extraction. In addition, to counter the accuracy drop caused by the misalignment of category features in domain adaptive object detection, category centers are constructed through prototype learning and a prototype-based cycle domain triplet loss function is designed, realizing prototype-guided fine-grained alignment of category features. With one-stage object detectors as the base detectors, experiments are conducted on several public domain adaptive object detection datasets. The results show that the proposed method effectively improves the generalization ability of the original detector on the target domain, achieves higher detection accuracy, and generalizes across different one-stage object detection networks.
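The following is a minimal PyTorch sketch of how such a prototype-based cycle domain triplet loss can be organized; it is an illustration under stated assumptions, not the authors' exact formulation. The class name, the EMA prototype update, the margin value, and the use of pseudo labels on the unlabeled target domain are all assumptions introduced here: per-class prototypes are maintained for each domain, and instance features from one domain are pulled toward the same-class prototype of the other domain and pushed away from the nearest other-class prototype, in both directions ("cycle").

```python
import torch
import torch.nn.functional as F

class CycleDomainTripletLoss(torch.nn.Module):
    """Sketch of a prototype-based cycle domain triplet loss (illustrative only)."""

    def __init__(self, num_classes, feat_dim, margin=1.0, momentum=0.9):
        super().__init__()
        self.margin = margin
        self.momentum = momentum
        # one prototype (class center) per class for each domain, kept as buffers
        self.register_buffer("src_proto", torch.zeros(num_classes, feat_dim))
        self.register_buffer("tgt_proto", torch.zeros(num_classes, feat_dim))

    @torch.no_grad()
    def update(self, feats, labels, domain):
        """EMA update of the per-class prototypes of one domain."""
        proto = self.src_proto if domain == "source" else self.tgt_proto
        for c in labels.unique():
            mean_c = feats[labels == c].mean(dim=0)
            proto[c] = self.momentum * proto[c] + (1 - self.momentum) * mean_c

    def forward(self, feats, labels, domain):
        """Triplet loss of anchors from one domain against the other domain's prototypes."""
        proto = self.tgt_proto if domain == "source" else self.src_proto
        dist = torch.cdist(feats, proto)                      # (N, num_classes)
        pos = dist.gather(1, labels.view(-1, 1)).squeeze(1)   # same-class prototype
        mask = F.one_hot(labels, proto.size(0)).bool()
        neg = dist.masked_fill(mask, float("inf")).min(dim=1).values  # nearest other-class prototype
        return F.relu(pos - neg + self.margin).mean()
```

In training, the two directions would be summed, for example the loss of source instance features with ground-truth labels against the target prototypes plus the loss of target instance features with pseudo labels against the source prototypes, with the prototypes updated from the same batches; the pseudo-labeling step is an assumption of this sketch.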
-
Table 1 Results of different methods on CityScapes→Foggy CityScapes ("−" indicates the experiment was not reported for that method)
Method | Detector | person | rider | car | truck | bus | motor | bike | train | mAP | mGP
DAF[10] | Faster-RCNN | 25.0 | 31.0 | 40.5 | 22.1 | 35.3 | 20.0 | 27.1 | 20.2 | 27.7 | 38.8
SWDA[11] | Faster-RCNN | 29.9 | 42.3 | 43.5 | 24.5 | 36.2 | 30.0 | 35.3 | 32.6 | 34.3 | 70.0
C2F[14] | Faster-RCNN | 34.0 | 46.9 | 52.1 | 30.8 | 43.2 | 34.7 | 37.4 | 29.9 | 38.6 | 79.1
CAFA[16] | Faster-RCNN | 41.9 | 38.7 | 56.7 | 22.6 | 41.5 | 24.6 | 35.5 | 26.8 | 36.0 | 81.9
ICCR-VDD[21] | Faster-RCNN | 33.4 | 44.0 | 51.7 | 33.9 | 52.0 | 34.2 | 36.8 | 34.7 | 40.0 | —
MeGA[20] | Faster-RCNN | 37.7 | 49.0 | 52.4 | 25.4 | 49.2 | 34.5 | 39.0 | 46.9 | 41.8 | 91.1
DAYOLO[28] | YOLOv3 | 29.5 | 27.7 | 46.1 | 9.1 | 28.2 | 12.7 | 24.8 | 4.5 | 36.1 | 61.0
Ours (v3) | YOLOv3 | 34.0 | 37.2 | 55.8 | 31.4 | 44.4 | 22.3 | 30.8 | 50.7 | 38.3 | 83.9
MS-DAYOLO[31] | YOLOv4 | 39.6 | 46.5 | 56.5 | 28.9 | 51.0 | 27.5 | 36.0 | 45.9 | 41.5 | 68.6
A-DAYOLO[32] | YOLOv5 | 32.8 | 35.7 | 51.3 | 18.8 | 34.5 | 11.8 | 25.6 | 16.2 | 28.3 | —
S-DAYOLO[34] | YOLOv5 | 42.6 | 42.1 | 61.9 | 23.5 | 40.5 | 24.4 | 37.3 | 39.5 | 39.0 | 69.9
Ours (v5) | YOLOv5 | 30.9 | 37.4 | 53.3 | 23.8 | 39.5 | 24.2 | 29.9 | 35.0 | 34.3 | 83.8

Table 2 Results of different methods on SunnyDay→DuskRainy
Method | Detector | bus | bike | car | motor | person | rider | truck | mAP | $\Delta$mAP
DAF[10] | Faster-RCNN | 43.6 | 27.5 | 52.3 | 16.1 | 28.5 | 21.7 | 44.8 | 33.5 | 5.2
SWDA[11] | Faster-RCNN | 40.0 | 22.8 | 51.4 | 15.4 | 26.3 | 20.3 | 44.2 | 31.5 | 3.2
ICCR-VDD[21] | Faster-RCNN | 47.9 | 33.2 | 55.1 | 26.1 | 30.5 | 23.8 | 48.1 | 37.8 | 9.5
Ours (v3) | YOLOv3 | 50.1 | 24.9 | 70.7 | 24.2 | 39.1 | 19.0 | 53.2 | 40.2 | 7.4
Ours (v5) | YOLOv5 | 46.2 | 22.1 | 68.2 | 16.5 | 34.8 | 17.5 | 50.5 | 36.5 | 9.4

Table 3 Results of different methods on SunnyDay→NightRainy
Method | Detector | bus | bike | car | motor | person | rider | truck | mAP | $\Delta$mAP
DAF[10] | Faster-RCNN | 23.8 | 12.0 | 37.7 | 0.2 | 14.9 | 4.0 | 29.0 | 17.4 | 1.1
SWDA[11] | Faster-RCNN | 24.7 | 10.0 | 33.7 | 0.6 | 13.5 | 10.4 | 29.1 | 17.4 | 1.1
ICCR-VDD[21] | Faster-RCNN | 34.8 | 15.6 | 38.6 | 10.5 | 18.7 | 17.3 | 30.6 | 23.7 | 7.4
Ours (v3) | YOLOv3 | 45.0 | 8.2 | 51.1 | 4.0 | 20.9 | 9.6 | 37.9 | 25.3 | 5.1
Ours (v5) | YOLOv5 | 40.7 | 9.3 | 45.0 | 0.6 | 12.8 | 9.2 | 32.5 | 21.5 | 4.7

Table 4 Results of different methods on KITTI→CityScapes and Sim10k→CityScapes ("−" indicates the experiment was not reported for that method)
Table 5 Ablation experiments on CityScapes→Foggy CityScapes based on YOLOv3
Method | person | rider | car | truck | bus | motor | bike | train | mAP
Source Only | 29.8 | 35.0 | 44.7 | 20.4 | 32.4 | 14.8 | 28.3 | 21.6 | 28.4
CADC | 34.4 | 38.0 | 54.7 | 24.4 | 45.0 | 21.2 | 32.1 | 49.1 | 37.2
CDTL | 31.1 | 38.0 | 46.7 | 28.9 | 34.5 | 23.4 | 27.8 | 13.7 | 30.5
CADC+CDTL | 34.0 | 37.2 | 55.8 | 31.4 | 44.4 | 22.3 | 30.8 | 50.7 | 38.3
Oracle | 34.9 | 38.8 | 55.9 | 25.3 | 45.0 | 22.6 | 33.4 | 49.1 | 40.2

Table 6 Ablation experiments on CityScapes→Foggy CityScapes based on YOLOv5
Method | person | rider | car | truck | bus | motor | bike | train | mAP
Source Only | 26.9 | 33.1 | 39.9 | 8.9 | 21.1 | 11.3 | 24.8 | 4.9 | 21.4
CADC | 32.6 | 37.1 | 52.7 | 26.8 | 38.1 | 23.0 | 38.1 | 32.6 | 34.1
CDTL | 29.7 | 36.7 | 43.2 | 13.1 | 25.5 | 17.1 | 28.7 | 13.1 | 26.2
CADC+CDTL | 30.9 | 37.4 | 53.3 | 23.8 | 39.5 | 24.2 | 29.9 | 35.0 | 34.3
Oracle | 34.8 | 37.9 | 57.5 | 24.4 | 42.7 | 23.1 | 33.2 | 40.8 | 36.8

Table 7 Ablation experiments on SunnyDay→DuskRainy based on YOLOv3
Method | bus | bike | car | motor | person | rider | truck | mAP
Source Only | 43.7 | 14.3 | 68.4 | 12.0 | 31.5 | 10.9 | 48.7 | 32.8
CADC | 50.0 | 22.6 | 70.8 | 23.2 | 38.4 | 18.7 | 53.5 | 39.6
CDTL | 45.4 | 20.1 | 69.2 | 15.2 | 34.8 | 17.2 | 47.8 | 35.7
CADC+CDTL | 50.1 | 24.9 | 70.7 | 24.2 | 39.1 | 19.0 | 53.2 | 40.2

Table 8 Ablation experiments on SunnyDay→DuskRainy based on YOLOv5
Method | bus | bike | car | motor | person | rider | truck | mAP
Source Only | 37.2 | 8.4 | 63.8 | 5.5 | 23.7 | 7.9 | 43.4 | 27.1
CADC | 45.6 | 22.1 | 68.2 | 16.6 | 34.5 | 15.4 | 50.1 | 35.9
CDTL | 41.6 | 13.1 | 65.5 | 7.6 | 29.7 | 10.2 | 44.9 | 30.4
CADC+CDTL | 46.2 | 22.1 | 68.2 | 16.5 | 34.8 | 17.5 | 50.5 | 36.5

Table 9 Ablation experiments on SunnyDay→NightRainy based on YOLOv3
Method | bus | bike | car | motor | person | rider | truck | mAP
Source Only | 39.2 | 5.1 | 44.2 | 0.2 | 14.8 | 6.9 | 30.7 | 20.2
CADC | 44.4 | 8.1 | 50.9 | 0.6 | 20.2 | 11.3 | 38.3 | 24.8
CDTL | 40.4 | 8.2 | 45.8 | 0.6 | 16.2 | 7.2 | 33.4 | 21.7
CADC+CDTL | 45.0 | 8.2 | 51.1 | 4.0 | 20.9 | 9.6 | 37.9 | 25.3

Table 10 Ablation experiments on SunnyDay→NightRainy based on YOLOv5
Method | bus | bike | car | motor | person | rider | truck | mAP
Source Only | 25.4 | 3.2 | 36.3 | 0.2 | 9.1 | 4.4 | 20.8 | 14.2
CADC | 38.7 | 8.3 | 42.7 | 0.3 | 12.3 | 6.4 | 32.0 | 20.1
CDTL | 34.3 | 6.2 | 44.2 | 0.5 | 11.2 | 8.7 | 30.3 | 19.3
CADC+CDTL | 40.7 | 9.3 | 45.0 | 0.6 | 12.8 | 9.2 | 32.5 | 21.5

Table 11 Results of different methods on KITTI→CityScapes and Sim10k→CityScapes
Detector | Method | KITTI | Sim10k
YOLOv3 | Source Only | 59.6 | 58.5
YOLOv3 | CADC | 60.5 | 59.6
YOLOv3 | CDTL | 60.5 | 60.8
YOLOv3 | CADC+CDTL | 61.1 | 59.8
YOLOv3 | Oracle | 64.7 | 64.7
YOLOv5 | Source Only | 54.0 | 53.1
YOLOv5 | CADC | 59.5 | 58.6
YOLOv5 | CDTL | 59.0 | 60.3
YOLOv5 | CADC+CDTL | 60.0 | 59.0
YOLOv5 | Oracle | 65.9 | 65.9

Table 12 Experiments of the proposed method on VOC→Clipart1k
Method | aero | bcycle | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | hrs | bike | prsn | plnt | sheep | sofa | train | tv | mAP
I3Net | 23.7 | 66.2 | 25.3 | 19.3 | 23.7 | 55.2 | 35.7 | 13.6 | 37.8 | 35.5 | 25.4 | 13.9 | 24.1 | 60.3 | 56.3 | 39.8 | 13.6 | 34.5 | 56.0 | 41.8 | 35.1
I3Net+CDTL | 23.3 | 61.6 | 27.8 | 17.1 | 24.7 | 54.3 | 39.8 | 12.3 | 41.4 | 34.1 | 32.2 | 15.5 | 27.6 | 77.9 | 57.0 | 37.4 | 5.5 | 31.3 | 51.8 | 47.8 | 36.0
I3Net+CDTL+CADC* | 31.2 | 60.4 | 31.8 | 19.4 | 27.0 | 63.3 | 40.7 | 13.7 | 41.1 | 38.4 | 27.2 | 18.0 | 25.5 | 67.8 | 54.9 | 37.2 | 15.5 | 36.4 | 54.8 | 47.8 | 37.6

Table 13 Experiments of the proposed method on VOC→Comic2k
Method | bike | bird | car | cat | dog | person | mAP
I3Net | 44.9 | 17.8 | 31.9 | 10.7 | 23.5 | 46.3 | 29.2
I3Net+CDTL | 43.7 | 15.1 | 31.5 | 11.7 | 18.6 | 46.9 | 27.9
I3Net+CDTL+CADC* | 47.8 | 16.0 | 33.8 | 15.1 | 24.4 | 43.5 | 30.1

Table 14 Experiments of the proposed method on VOC→Watercolor2k
Method | bike | bird | car | cat | dog | person | mAP
I3Net | 81.3 | 49.6 | 43.6 | 38.2 | 31.3 | 61.7 | 51.0
I3Net+CDTL | 79.5 | 47.2 | 41.7 | 33.5 | 35.4 | 60.3 | 49.6
I3Net+CDTL+CADC* | 84.1 | 45.3 | 46.6 | 32.9 | 31.4 | 61.4 | 50.3

Table 15 Impact of pixel-level alignment on the network
Method | Detector | C→F | K→C | S→C
CDTL+CADC | YOLOv3 | 35.9 | 59.8 | 58.4
CDTL+CADC+$D_{pixel}$ | YOLOv3 | 37.2 | 60.5 | 59.6
CDTL+CADC | YOLOv5 | 32.7 | 58.9 | 56.8
CDTL+CADC+$D_{pixel}$ | YOLOv5 | 34.1 | 59.5 | 58.6

Table 16 Choice of the loss function in the channel attention domain classifier (CADC)
Detector | $F_1$ | $F_2$ | $F_3$ | mAP
YOLOv3/v5 | CE | CE | CE | 35.8/32.7
YOLOv3/v5 | CE | CE | FL | 36.4/33.2
YOLOv3/v5 | CE | FL | FL | 37.2/34.1
YOLOv3/v5 | FL | FL | FL | 37.0/33.5
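Table 16 compares cross-entropy (CE) and focal loss (FL) as the training objective of the channel attention domain classifier at the three feature levels $F_1$–$F_3$. Below is a hypothetical, minimal PyTorch sketch of one such image-level classifier, combining a gradient reversal layer, SE-style channel attention in the spirit of [35], and a switchable CE/FL objective following [12]; the module structure, the reversal coefficient, and the focal-loss parameters are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class ChannelAttentionDomainClassifier(nn.Module):
    """Per-level image domain classifier: GRL -> SE channel attention -> conv head (sketch)."""
    def __init__(self, channels, reduction=16, lam=1.0):
        super().__init__()
        self.lam = lam
        self.se = nn.Sequential(                      # squeeze-and-excitation style attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.head = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, 3, padding=1))  # per-location domain logit

    def forward(self, feat, domain_label, use_focal=True, gamma=2.0):
        feat = GradReverse.apply(feat, self.lam)        # adversarial gradient reversal
        feat = feat * self.se(feat)                     # channel recalibration
        logit = self.head(feat)
        target = torch.full_like(logit, float(domain_label))  # 0 = source, 1 = target
        if use_focal:                                   # focal loss (FL) variant
            p = torch.sigmoid(logit)
            pt = p * target + (1 - p) * (1 - target)
            return (-(1 - pt) ** gamma * torch.log(pt.clamp_min(1e-6))).mean()
        return F.binary_cross_entropy_with_logits(logit, target)  # cross-entropy (CE) variant
```

Under the configuration that Table 16 reports as best, the CE variant would be applied at the shallowest level $F_1$ and the FL variant at $F_2$ and $F_3$.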
-
[1] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS). 2012: 1097−1105.
[2] Bottou L, Bousquet O. The tradeoffs of large scale learning[J]. Advances in Neural Information Processing Systems, 2007, 20.
[3] Shen J, Qu Y, Zhang W, et al. Wasserstein distance guided representation learning for domain adaptation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2018, 32.
[4] Gao Jun, Huang Li-Li, Sun Chang-Yin. A local weighted mean based domain adaptation learning framework[J]. Acta Automatica Sinica, 2013, 39(7): 1037−1052. doi: 10.3724/SP.J.1004.2013.01037
[5] Ganin Y, Ustinova E, Ajakan H, et al. Domain-adversarial training of neural networks[J]. The Journal of Machine Learning Research, 2016, 17(1): 2096−2030.
[6] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks[J]. Communications of the ACM, 2020, 63(11): 139−144. doi: 10.1145/3422622
[7] Guo Ying-Chun, Feng Fang, Yan Gang, Hao Xiao-Ke. Cross-domain person re-identification on adaptive fusion network[J]. Acta Automatica Sinica, 2022, 48(11): 2744−2756. doi: 10.16383/j.aas.c220083
[8] Liang Wen-Qi, Wang Guang-Cong, Lai Jian-Huang. Asymmetric cross-domain transfer learning of person re-identification based on the many-to-many generative adversarial network[J]. Acta Automatica Sinica, 2022, 48(1): 103−120. doi: 10.16383/j.aas.c190303
[9] Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. Advances in Neural Information Processing Systems, 2015, 28.
[10] Chen Y, Li W, Sakaridis C, et al. Domain adaptive Faster R-CNN for object detection in the wild[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 3339−3348.
[11] Saito K, Ushiku Y, Harada T, et al. Strong-weak distribution alignment for adaptive object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 6956−6965.
[12] Lin T Y, Goyal P, Girshick R, et al. Focal loss for dense object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, PP(99): 2999−3007.
[13] Shen Z, Maheshwari H, Yao W, et al. SCL: Towards accurate domain adaptive object detection via gradient detach based stacked complementary losses[J]. arXiv preprint arXiv: 1911.02559, 2019.
[14] Zheng Y, Huang D, Liu S, et al. Cross-domain object detection through coarse-to-fine feature adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. doi: 10.1109/CVPR42600.2020.01378
[15] Xu C D, Zhao X R, Jin X, et al. Exploring categorical regularization for domain adaptive object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 11724−11733.
[16] Hsu C C, Tsai Y H, Lin Y Y, et al. Every pixel matters: Center-aware feature alignment for domain adaptive object detector[C]//Computer Vision – ECCV 2020. Cham: Springer International Publishing, 2020: 733−748.
[17] Chen C, Zheng Z, Ding X, et al. Harmonizing transferability and discriminability for adapting object detectors[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 8869−8878.
[18] Zhu J Y, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 2223−2232.
[19] Deng J, Li W, Chen Y, et al. Unbiased mean teacher for cross-domain object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 4091−4101.
[20] Xu M, Wang H, Ni B, et al. Cross-domain detection via graph-induced prototype alignment[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 12355−12364.
[21] Wu A, Liu R, Han Y, et al. Vector-decomposed disentanglement for domain-invariant object detection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 9342−9351.
[22] Chen C, Zheng Z, Huang Y, et al. I3Net: Implicit instance-invariant network for adapting one-stage object detectors[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 12576−12585.
[23] Li Wei, Wang Meng. Unsupervised cross-domain object detection based on progressive multi-source transfer[J]. Acta Automatica Sinica, 2022, 48(9): 2337−2351. doi: 10.16383/j.aas.c190532
[24] Rodriguez A L, Mikolajczyk K. Domain adaptation for object detection via style consistency[C]//British Machine Vision Conference. 2019.
[25] Liu W, Anguelov D, Erhan D, et al. SSD: Single shot multibox detector[C]//European Conference on Computer Vision. Springer, Cham, 2016: 21−37.
[26] Redmon J, Divvala S, Girshick R, et al. You only look once: Unified, real-time object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 779−788.
[27] YOLOv8 [Online], available: https://github.com/ultralytics/yolov8, Feb 15, 2023.
[28] Zhang S, Tuo H, Hu J, et al. Domain adaptive YOLO for one-stage cross-domain detection[C]//Asian Conference on Machine Learning. PMLR, 2021: 785−797.
[29] Redmon J, Farhadi A. YOLOv3: An incremental improvement[J]. arXiv preprint arXiv: 1804.02767, 2018.
[30] Hnewa M, Radha H. Integrated multiscale domain adaptive YOLO[J]. arXiv preprint arXiv: 2202.03527, 2022.
[31] Bochkovskiy A, Wang C Y, Liao H Y M. YOLOv4: Optimal speed and accuracy of object detection[J]. arXiv preprint arXiv: 2004.10934, 2020.
[32] Vidit V, Salzmann M. Attention-based domain adaptation for single-stage detectors[J]. Machine Vision and Applications, 2022, 33(5): 65. doi: 10.1007/s00138-022-01320-y
[33] YOLOv5 [Online], available: https://github.com/ultralytics/yolov5, Nov 28, 2022.
[34] Li G, Ji Z, Qu X, et al. Cross-domain object detection for autonomous driving: A stepwise domain adaptative YOLO approach[J]. IEEE Transactions on Intelligent Vehicles, 2022.
[35] Hu J, Shen L, Sun G. Squeeze-and-excitation networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 7132−7141.
[36] Wang Q, Wu B, Zhu P, et al. ECA-Net: Efficient channel attention for deep convolutional neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 11534−11542.
[37] Lee H J, Kim H E, Nam H. SRM: A style-based recalibration module for convolutional neural networks[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 1854−1862.
[38] Wang M, Wang W, Li B, et al. InterBN: Channel fusion for adversarial unsupervised domain adaptation[C]//Proceedings of the 29th ACM International Conference on Multimedia. 2021: 3691−3700.
[39] Ding S, Lin L, Wang G, et al. Deep feature learning with relative distance comparison for person re-identification[J]. Pattern Recognition, 2015, 48(10): 2993−3003. doi: 10.1016/j.patcog.2015.04.005
[40] Snell J, Swersky K, Zemel R. Prototypical networks for few-shot learning[J]. Advances in Neural Information Processing Systems, 2017, 30.
[41] He K, Fan H, Wu Y, et al. Momentum contrast for unsupervised visual representation learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 9729−9738.
[42] Cordts M, Omran M, Ramos S, et al. The Cityscapes dataset for semantic urban scene understanding[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 3213−3223.
[43] Sakaridis C, Dai D, Van Gool L. Semantic foggy scene understanding with synthetic data[J]. International Journal of Computer Vision, 2018, 126(9): 973−992. doi: 10.1007/s11263-018-1072-8
[44] Yu F, Chen H, Wang X, et al. BDD100K: A diverse driving dataset for heterogeneous multitask learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 2636−2645.
[45] Geiger A, Lenz P, Stiller C, et al. Vision meets robotics: The KITTI dataset[J]. The International Journal of Robotics Research, 2013, 32(11): 1231−1237. doi: 10.1177/0278364913491297
[46] Johnson-Roberson M, Barto C, Mehta R, et al. Driving in the matrix: Can virtual worlds replace human-generated annotations for real world tasks?[J]. arXiv preprint arXiv: 1610.01983, 2016.
[47] Everingham M, Van Gool L, Williams C K I, et al. The PASCAL visual object classes (VOC) challenge[J]. International Journal of Computer Vision, 2009, 88: 303−308.
[48] Inoue N, Furuta R, Yamasaki T, et al. Cross-domain weakly-supervised object detection through progressive domain adaptation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 5001−5009.