Asymmetric Cross-domain Transfer Learning of Person Re-identification Based on the Many-to-many Generative Adversarial Network

Liang Wen-Qi, Wang Guang-Cong, Lai Jian-Huang

Citation: Liang Wen-Qi, Wang Guang-Cong, Lai Jian-Huang. Asymmetric cross-domain transfer learning of person re-identification based on the many-to-many generative adversarial network. Acta Automatica Sinica, 2022, 48(1): 103−120. doi: 10.16383/j.aas.c190303

doi: 10.16383/j.aas.c190303

Funds: Supported by National Natural Science Foundation of China (61573387, 62076258), Key Research Projects in Guangdong Province (2017B030306018), and Project of Department of Natural Resources of Guangdong Province ([2021] 34)
More Information
    Author Bio:

LIANG Wen-Qi  Master student at the School of Computer Science and Engineering, Sun Yat-sen University. She received her bachelor's degree in intelligence science and technology from Sun Yat-sen University in 2018. Her research interests cover person re-identification and deep learning.

WANG Guang-Cong  Ph.D. candidate at the School of Computer Science and Engineering, Sun Yat-sen University. He received his bachelor's degree in communication engineering from Jilin University in 2015. His research interests cover person re-identification and deep learning.

LAI Jian-Huang  Professor at Sun Yat-sen University. He received his Ph.D. degree in mathematics from Sun Yat-sen University in 1999. He has published over 200 scientific papers in international journals and conferences, including IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), IEEE Transactions on Neural Networks and Learning Systems (TNNLS), IEEE Transactions on Image Processing (TIP), IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics (TSMC-B), Pattern Recognition (PR), IEEE International Conference on Computer Vision (ICCV), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), and IEEE International Conference on Data Mining (ICDM). His research interests cover digital image processing, computer vision, and pattern recognition. Corresponding author of this paper.

  • Abstract: Unsupervised cross-domain transfer learning is an important task in person re-identification. Given a labeled source domain and an unlabeled target domain, the key to unsupervised cross-domain transfer is to migrate as much knowledge as possible from the source domain to the target domain. However, existing cross-domain transfer methods ignore the distribution differences among camera views within each domain, which degrades the transfer performance. To address this defect, this paper formulates a new problem of multi-view asymmetric cross-domain transfer learning. To realize this asymmetric transfer, we propose a transfer method based on a many-to-many generative adversarial network (M2M-GAN). The method embeds the specified source-view and target-view labels as guiding information, and adds a view classifier to discriminate between different view distributions, so that the model automatically adopts a different transfer mapping for each combination of source view and target view. Experiments on the person re-identification benchmarks Market1501, DukeMTMC-reID, and MSMT17 verify that the proposed method effectively improves the transfer effect and achieves higher unsupervised cross-domain person re-identification accuracy.
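    The mechanism sketched in the abstract (view labels embedded as guiding information plus a view classifier) can be illustrated in a few lines of PyTorch. The sketch below is our illustration under stated assumptions, not the authors' released code: the module names, the 16-dimensional embeddings, the toy one-layer encoder/decoder, and the view counts (8 source cameras as in DukeMTMC-reID, 6 target cameras as in Market1501) are all assumptions chosen for brevity.

```python
# Minimal sketch of view-conditioned translation in the spirit of M2M-GAN.
# Illustrative only: every architectural detail here is an assumption,
# not the paper's exact network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewConditionedGenerator(nn.Module):
    """A generator told which source view an image came from and which
    target view it should be translated to (the view-embedding idea)."""

    def __init__(self, num_src_views=8, num_tgt_views=6, emb_dim=16):
        super().__init__()
        # One learnable embedding per source view and per target view.
        self.src_emb = nn.Embedding(num_src_views, emb_dim)
        self.tgt_emb = nn.Embedding(num_tgt_views, emb_dim)
        # Toy encoder/decoder standing in for a CycleGAN-style generator.
        self.encoder = nn.Sequential(nn.Conv2d(3, 64, 7, padding=3), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(64 + 2 * emb_dim, 3, 7, padding=3), nn.Tanh())

    def forward(self, x, src_view, tgt_view):
        b, _, h, w = x.shape
        feat = self.encoder(x)
        # Broadcast the two view embeddings over the spatial grid and
        # concatenate them with the image features as guiding information.
        cond = torch.cat([self.src_emb(src_view), self.tgt_emb(tgt_view)], dim=1)
        cond = cond[:, :, None, None].expand(-1, -1, h, w)
        return self.decoder(torch.cat([feat, cond], dim=1))

# A view classifier applied to generated images rewards the generator for
# matching the style of the requested target view (one network for all views).
view_classifier = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 6))

if __name__ == "__main__":
    g = ViewConditionedGenerator()
    x = torch.randn(2, 3, 128, 64)   # a batch of pedestrian crops
    src = torch.tensor([0, 3])       # source camera indices of the crops
    tgt = torch.tensor([1, 5])       # requested target camera indices
    fake = g(x, src, tgt)            # translated images, same shape as x
    loss = F.cross_entropy(view_classifier(fake), tgt)
    print(fake.shape, loss.item())
```

    Because the view pair is an input rather than being baked into the weights, a single generator covers all $M \times N$ source-target view combinations, which is what Tables 2 and 6 compare against training a separate CycleGAN per view pair.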
    1)  Manuscript received April 16, 2019; accepted September 2, 2019. Supported by National Natural Science Foundation of China (61573387, 62076258), Key Research Projects in Guangdong Province (2017B030306018), and Project of Department of Natural Resources of Guangdong Province ([2021] 34). Recommended by Associate Editor LIU Qing-Shan.
    2)  1. School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006 2. Guangzhou Xinhua University, Guangzhou 510520 3. Guangdong Provincial Key Laboratory of Information Security Technology, Guangzhou 510006 4. Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, Guangzhou 510006
  • Fig. 1  Examples of distribution differences between different camera views

    Fig. 2  Comparison of the proposed many-to-many transfer scheme with existing transfer schemes

    Fig. 4  View embedding

    Fig. 3  Framework of the proposed M2M-GAN (the generation process, cycle consistency loss, and identity-preserving loss for the target domain $\rightarrow$ source domain direction are omitted)
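    The two losses named in the caption of Fig. 3 are not defined elsewhere on this page. The cycle consistency loss takes the standard CycleGAN form [36]; writing $G$ for the source $\rightarrow$ target generator and $F$ for the target $\rightarrow$ source generator (our notation, not necessarily the paper's):

    $$ \mathcal{L}_{\rm cyc} = \mathbb{E}_{x \sim S}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y \sim T}\big[\lVert G(F(y)) - y \rVert_1\big] $$

    The identity-preserving loss constrains translation to leave a pedestrian's identity intact. One common instantiation is the CycleGAN identity-mapping term below; the paper's exact definition may differ (SPGAN [33], for example, uses a contrastive feature-level loss instead):

    $$ \mathcal{L}_{\rm id} = \mathbb{E}_{y \sim T}\big[\lVert G(y) - y \rVert_1\big] + \mathbb{E}_{x \sim S}\big[\lVert F(x) - x \rVert_1\big] $$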

    Fig. 5  Network structure of the proposed M2M-GAN

    Fig. 6  Influence of different parameters on the matching rate

    Fig. 7  Visual examples of translations from other datasets to the Market1501 dataset

    Fig. 8  Visual examples of translations from other datasets to the DukeMTMC-reID dataset

    Fig. 9  Visual examples of translations from other datasets to the MSMT17 dataset

    Table 2  Training time and model parameters of different methods on the Market1501 dataset

    | Method | Training time | Model parameters | Rank1 (%) |
    |---|---|---|---|
    | CycleGAN | 16 h | 106.3 M | 47.4 |
    | $M \times N$ CycleGAN | 14 h $\times 8 \times 6$ | 106.3 M $\times 8 \times 6$ | 58.0 |
    | M2M-GAN (ours) | 17 h | 106.6 M | 59.1 |

    Table 6  Training time and model parameters of different methods on the DukeMTMC-reID dataset

    | Method | Training time | Model parameters | Rank1 (%) |
    |---|---|---|---|
    | CycleGAN | 16 h | 106.3 M | 43.1 |
    | $M \times N$ CycleGAN | 14 h $\times 6 \times 8$ | 106.3 M $\times 6 \times 8$ | 49.9 |
    | M2M-GAN (ours) | 17 h | 106.6 M | 52.0 |

    Table 1  Matching rates of different style translation methods on the Market1501 dataset (%)

    | Method (source dataset) | Rank1 (DukeMTMC-reID) | mAP (DukeMTMC-reID) | Rank1 (MSMT17) | mAP (MSMT17) |
    |---|---|---|---|---|
    | Pre-training | 50.4 | 23.6 | 51.5 | 25.5 |
    | CycleGAN | 47.4 | 21.5 | 46.1 | 21.1 |
    | M2M-GAN (ours) | 59.1 | 29.6 | 57.9 | 28.8 |

    Table 3  Accuracy of different modules on the Market1501 dataset (%)

    | View embedding module | View classification module | Identity-preserving module | Rank1 | mAP |
    |---|---|---|---|---|
    | $\times$ | $\times$ | $\times$ | 35.7 | 12.5 |
    | $\times$ | $\times$ | $\surd$ | 47.4 | 21.5 |
    | $\surd$ | $\times$ | $\surd$ | 48.0 | 22.0 |
    | $\times$ | $\surd$ | $\surd$ | 48.6 | 22.1 |
    | $\surd$ | $\surd$ | $\surd$ | 59.1 | 29.6 |

    Table 4  Matching rates of different unsupervised methods on the Market1501 dataset (%) (the source dataset is DukeMTMC-reID)

    | Type | Method | Rank1 | mAP |
    |---|---|---|---|
    | Hand-crafted features | LOMO[12] | 27.2 | 8.0 |
    | | Bow[39] | 35.8 | 14.8 |
    | Clustering-based unsupervised learning | PUL[29] | 45.5 | 20.5 |
    | | CAMEL[28] | 54.5 | 26.3 |
    | Cross-domain transfer learning | PTGAN[34] | 38.6 | − |
    | | SPGAN+LMP[33] | 57.7 | 26.7 |
    | | TJ-AIDL[43] | 58.2 | 26.5 |
    | | ARN[44] | 70.2 | 39.4 |
    | | M2M-GAN (ours) | 59.1 | 29.6 |
    | | M2M-GAN (ours)+LMP[33] | 63.1 | 30.9 |

    Table 5  Matching rates of different style translation methods on the DukeMTMC-reID dataset (%)

    | Method (source dataset) | Rank1 (Market1501) | mAP (Market1501) | Rank1 (MSMT17) | mAP (MSMT17) |
    |---|---|---|---|---|
    | Pre-training | 38.1 | 21.4 | 53.5 | 32.5 |
    | CycleGAN | 43.1 | 24.1 | 51.1 | 30.0 |
    | M2M-GAN (ours) | 52.0 | 29.8 | 61.1 | 37.5 |

    Table 7  Accuracy of different modules on the DukeMTMC-reID dataset (%)

    | View embedding module | View classification module | Identity-preserving module | Rank1 | mAP |
    |---|---|---|---|---|
    | $\times$ | $\times$ | $\times$ | 31.8 | 12.6 |
    | $\times$ | $\times$ | $\surd$ | 43.1 | 24.1 |
    | $\surd$ | $\times$ | $\surd$ | 45.0 | 25.3 |
    | $\times$ | $\surd$ | $\surd$ | 43.5 | 24.1 |
    | $\surd$ | $\surd$ | $\surd$ | 52.0 | 29.8 |

    Table 8  Matching rates of different unsupervised methods on the DukeMTMC-reID dataset (%) (the source dataset is Market1501)

    | Type | Method | Rank1 | mAP |
    |---|---|---|---|
    | Hand-crafted features | LOMO[12] | 12.3 | 4.8 |
    | | Bow[39] | 17.1 | 8.3 |
    | Clustering-based unsupervised learning | UMDL[45] | 18.5 | 7.3 |
    | | PUL[29] | 30.0 | 16.4 |
    | Cross-domain transfer learning | PTGAN[34] | 27.4 | − |
    | | SPGAN+LMP[33] | 46.4 | 26.2 |
    | | TJ-AIDL[43] | 44.3 | 23.0 |
    | | ARN[44] | 60.2 | 33.4 |
    | | M2M-GAN (ours) | 52.0 | 29.8 |
    | | M2M-GAN (ours)+LMP[33] | 54.4 | 31.6 |

    Table 9  Matching rates of different style translation methods on the MSMT17 dataset (%)

    | Method (source dataset) | Rank1 (Market1501) | mAP (Market1501) | Rank1 (DukeMTMC-reID) | mAP (DukeMTMC-reID) |
    |---|---|---|---|---|
    | Pre-training | 14.2 | 4.5 | 20.2 | 6.7 |
    | CycleGAN | 22.7 | 7.6 | 24.7 | 7.8 |
    | M2M-GAN (ours) | 31.9 | 10.8 | 36.8 | 11.9 |

    Table 10  Matching rates of different unsupervised methods on the MSMT17 dataset (%) (the source dataset is Market1501)

    | Type | Method | Rank1 | mAP |
    |---|---|---|---|
    | Cross-domain transfer learning | PTGAN[34] | 10.2 | 2.9 |
    | | M2M-GAN (ours) | 31.9 | 10.8 |
    | | M2M-GAN (ours)+LMP[33] | 32.2 | 9.7 |
  • [1] Li You-Jiao, Zhuo Li, Zhang Jing, Li Jia-Feng, Zhang Hui. A survey of person re-identification. Acta Automatica Sinica, 2018, 44(9): 1554-1568
    [2] Qi Mei-Bin, Tan Sheng-Shun, Wang Yun-Xia, Liu Hao, Jiang Jian-Guo. Multi-feature subspace and kernel learning for person re-identification. Acta Automatica Sinica, 2016, 42(2): 299-308
    [3] Liu Yi-Min, Jiang Jian-Guo, Qi Mei-Bin, Liu Hao, Zhou Hua-Jie. Video-based person re-identification method based on GAN and pose estimation. Acta Automatica Sinica, 2020, 46(3): 576-584
    [4] Wang G C, Lai J H, Xie X H. P2SNet: Can an image match a video for person re-identification in an end-to-end way? IEEE Transactions on Circuits and Systems for Video Technology, 2018, 28(10): 2777-2787 doi: 10.1109/TCSVT.2017.2748698
    [5] Feng Z X, Lai J H, Xie X H. Learning view-specific deep networks for person re-identification. IEEE Transactions on Image Processing, 2018, 27(7): 3472-3483 doi: 10.1109/TIP.2018.2818438
    [6] Zhuo J X, Chen Z Y, Lai J H, Wang G C. Occluded person re-identification. In: Proceedings of the 2018 IEEE International Conference on Multimedia and Expo. San Diego, USA: IEEE, 2018. 1−6
    [7] Chen Y C, Zhu X T, Zheng W S, Lai J H. Person re-identification by camera correlation aware feature augmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(2): 392-408 doi: 10.1109/TPAMI.2017.2666805
    [8] Gong S G, Cristani M, Yan S C, Loy C C. Person Re-identification. London: Springer, 2014. 139−160
    [9] Chen Y C, Zheng W S, Lai J H, Yuen P C. An asymmetric distance model for cross-view feature mapping in person re-identification. IEEE Transactions on Circuits and Systems for Video Technology, 2017, 27(8): 1661-1675 doi: 10.1109/TCSVT.2016.2515309
    [10] Chen Y C, Zheng W S, Lai J H. Mirror representation for modeling view-specific transform in person re-identification. In: Proceedings of the 24th International Conference on Artificial Intelligence. Buenos Aires, Argentina: AAAI Press, 2015. 3402−3408
    [11] Zheng W S, Li X, Xiang T, Liao S C, Lai J H, Gong S G. Partial person re-identification. In: Proceedings of the 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015. 4678−4686
    [12] Liao S C, Hu Y, Zhu X Y, Li S Z. Person re-identification by local maximal occurrence representation and metric learning. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015. 2197−2206
    [13] Wu A C, Zheng W S, Lai J H. Robust depth-based person re-identification. IEEE Transactions on Image Processing, 2017, 26(6): 2588-2603 doi: 10.1109/TIP.2017.2675201
    [14] Köstinger M, Hirzer M, Wohlhart P, Roth P M, Bischof H. Large scale metric learning from equivalence constraints. In: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, USA: IEEE, 2012. 2288−2295
    [15] Prosser B, Zheng W S, Gong S G, Xiang T. Person re-identification by support vector ranking. In: Proceedings of the British Machine Vision Conference. Aberystwyth, UK: British Machine Vision Association, 2010. 1−11
    [16] Zheng L, Bie Z, Sun Y F, Wang J D, Su C, Wang S J, et al. MARS: A video benchmark for large-scale person re-identification. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, the Netherlands: Springer, 2016. 868−884
    [17] Yi D, Lei Z, Liao S C, Li S Z. Deep metric learning for person re-identification. In: Proceedings of the 22nd International Conference on Pattern Recognition. Stockholm, Sweden: IEEE, 2014. 34−39
    [18] Cheng D, Gong Y H, Zhou S P, Wang J J, Zheng N N. Person re-identification by multi-channel parts-based CNN with improved triplet loss function. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 1335−1344
    [19] Zheng Z D, Zheng L, Yang Y. Pedestrian alignment network for large-scale person re-identification. IEEE Transactions on Circuits and Systems for Video Technology, 2019, 29(10): 3037-3045 doi: 10.1109/TCSVT.2018.2873599
    [20] Zhao L M, Li X, Zhuang Y T, Wang J D. Deeply-learned part-aligned representations for person re-identification. In: Proceedings of the 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 3239−3248
    [21] Luo H, Jiang W, Zhang X, Fan X, Qian J J, Zhang C. AlignedReID++: Dynamically matching local information for person re-identification. Pattern Recognition, 2019, 94: 53-61 doi: 10.1016/j.patcog.2019.05.028
    [22] Zhong Z, Zheng L, Zheng Z D, Li S Z, Yang Y. Camera style adaptation for person re-identification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 5157−5166
    [23] Su C, Li J N, Zhang S L, Xing J L, Gao W, Tian Q. Pose-driven deep convolutional model for person re-identification. In: Proceedings of the 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 3980−3989
    [24] Su C, Yang F, Zhang S L, Tian Q, Davis L S, Gao W. Multi-task learning with low rank attribute embedding for person re-identification. In: Proceedings of the 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015. 3739−3747
    [25] Song C F, Huang Y, Ouyang W L, Wang L. Mask-guided contrastive attention model for person re-identification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 1179−1188
    [26] Kalayeh M M, Basaran E, Gökmen M, Kamasak M E, Shah M. Human semantic parsing for person re-identification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 1062−1071
    [27] Wang G C, Lai J H, Huang P G, Xie X H. Spatial-temporal person re-identification. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Hawaii, USA: AAAI, 2019. 8933−8940
    [28] Yu H X, Wu A C, Zheng W S. Cross-view asymmetric metric learning for unsupervised person re-identification. In: Proceedings of the 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 994−1002
    [29] Fan H H, Zheng L, Yan C G, Yang Y. Unsupervised person re-identification: Clustering and fine-tuning. ACM Transactions on Multimedia Computing, Communications, and Applications, 2018, 14(4): Article No. 83
    [30] Lin Y T, Dong X Y, Zheng L, Yan Y, Yang Y. A bottom-up clustering approach to unsupervised person re-identification. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Hawaii, USA: AAAI, 2019. 8738−8745
    [31] Ganin Y, Ustinova E, Ajakan H, Germain P, Larochelle H, Laviolette F, et al. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 2016, 17(1): 2096-2130
    [32] Ma A J, Li J W, Yuen P C, Li P. Cross-domain person re-identification using domain adaptation ranking SVMs. IEEE Transactions on Image Processing, 2015, 24(5): 1599-1613 doi: 10.1109/TIP.2015.2395715
    [33] Deng W J, Zheng L, Ye Q X, Kang G L, Yang Y, Jiao J B. Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 994−1003
    [34] Wei L H, Zhang S L, Gao W, Tian Q. Person transfer GAN to bridge domain gap for person re-identification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 79−88
    [35] Goodfellow I J, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. In: Proceedings of the 28th Conference on Neural Information Processing Systems. Montreal, Canada: NIPS, 2014. 2672−2680
    [36] Zhu J Y, Park T, Isola P, Efros A A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 2242−2251
    [37] Lin T Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, et al. Microsoft COCO: Common objects in context. In: Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: Springer, 2014. 740−755
    [38] He K M, Gkioxari G, Dollár P, Girshick R. Mask R-CNN. In: Proceedings of the 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 2980−2988
    [39] Zheng L, Shen L Y, Tian L, Wang S J, Wang J D, Tian Q. Scalable person re-identification: A benchmark. In: Proceedings of the 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015. 1116−1124
    [40] Zheng Z D, Zheng L, Yang Y. Unlabeled samples generated by GAN improve the person re-identification baseline in vitro. In: Proceedings of the 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 3774−3782
    [41] Kingma D P, Ba J. Adam: A method for stochastic optimization. In: Proceedings of the 3rd International Conference on Learning Representations. San Diego, USA: ICLR, 2015. 1−13
    [42] He K M, Zhang X Y, Ren S Q, Sun J. Deep residual learning for image recognition. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 770−778
    [43] Wang J Y, Zhu X T, Gong S G, Li W. Transferable joint attribute-identity deep learning for unsupervised person re-identification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 2275−2284
    [44] Li Y J, Yang F E, Liu Y C, Yeh Y Y, Du X F, Wang Y C F. Adaptation and re-identification network: An unsupervised deep transfer learning approach to person re-identification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Salt Lake City, USA: IEEE, 2018. 172−178
    [45] Peng P X, Xiang T, Wang Y W, Pontil M, Gong S G, Huang T J, et al. Unsupervised cross-dataset transfer learning for person re-identification. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 1306−1315
Publication history
  • Received: 2019-04-16
  • Accepted: 2019-09-02
  • Published online: 2021-11-19
  • Issue date: 2022-01-25