
Visual Person Re-identification Based on Spatial and Temporal Co-occurrence Patterns

Qian Jin-Hao, Song Zhan-Ren, Guo Chun-Chao, Lai Jian-Huang, Xie Xiao-Hua

Citation: Qian Jin-Hao, Song Zhan-Ren, Guo Chun-Chao, Lai Jian-Huang, Xie Xiao-Hua. Visual person re-identification based on spatial and temporal co-occurrence patterns. Acta Automatica Sinica, 2022, 48(2): 408−417. doi: 10.16383/j.aas.c200897


doi: 10.16383/j.aas.c200897

Visual Person Re-identification Based on Spatial and Temporal Co-occurrence Patterns

Funds: Supported by the National Natural Science Foundation of China (62072482, 62076258), the Opening Project of the Guangdong Province Key Laboratory of Information Security Technology (2017B030314131), and the Key Laboratory of Video and Image Intelligent Analysis and Application Technology, Ministry of Public Security, China (2019GABJC39)
More Information
    Author Bio:

    QIAN Jin-Hao Master student at the School of Computer Science and Engineering, Sun Yat-sen University. His main research interest is person re-identification

    SONG Zhan-Ren Master student at the School of Computer Science and Engineering, Sun Yat-sen University. His main research interest is person re-identification

    GUO Chun-Chao Ph.D. candidate at the School of Computer Science and Engineering, Sun Yat-sen University. His research interest covers person re-identification, optical character recognition and advertising content material understanding

    LAI Jian-Huang Professor at Sun Yat-sen University. His research interest covers computer vision and pattern recognition

    XIE Xiao-Hua Associate professor at the School of Computer Science and Engineering, Sun Yat-sen University. His research interest covers computer vision and pattern recognition. Corresponding author of this paper

  • Abstract: Visual person re-identification uses computer vision techniques to associate the same pedestrian across a network of cameras with non-overlapping fields of view, and has important applications in video surveillance and commercial customer-flow analysis. Although visual person re-identification has made considerable progress, it still faces many challenges, such as appearance variation and inaccurate matching caused by differing camera viewpoints, occlusion, and illumination changes. To overcome the difficulty of purely visual matching, this paper proposes a person re-identification method that combines pedestrian appearance features with pedestrian spatiotemporal co-occurrence patterns. The proposed method exploits the distribution of pedestrians neighboring the target pedestrian to assist pedestrian similarity computation, effectively using spatiotemporal context to strengthen visual person re-identification. Experiments on two authoritative public datasets, Market-1501 and DukeMTMC-ReID, verify the effectiveness of the proposed method.
    1) Download link: https://github.com/Cysu/open-reid
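To make the idea in the abstract concrete, the sketch below fuses an appearance similarity with a neighborhood co-occurrence score. This is a minimal illustration, not the paper's actual formulation: the function names, the best-match neighborhood scoring, and the fusion weight `alpha` are all illustrative assumptions.

```python
# Illustrative sketch only: the co-occurrence scoring and fusion weight
# `alpha` are assumptions, not the paper's exact method.
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two appearance feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cooccurrence_score(query_neighbors, gallery_neighbors):
    """Score how alike two pedestrians' neighborhoods are: for each query-side
    neighbor, take its best appearance match among gallery-side neighbors,
    then average."""
    if not query_neighbors or not gallery_neighbors:
        return 0.0
    sims = [max(cosine_sim(qn, gn) for gn in gallery_neighbors)
            for qn in query_neighbors]
    return sum(sims) / len(sims)

def fused_similarity(q_feat, g_feat, q_neighbors, g_neighbors, alpha=0.7):
    """Weighted fusion of appearance similarity and the co-occurrence score;
    a higher value suggests the two detections are the same pedestrian."""
    return alpha * cosine_sim(q_feat, g_feat) + \
           (1 - alpha) * cooccurrence_score(q_neighbors, g_neighbors)
```

In this toy form, two pedestrians who look alike *and* are repeatedly seen with similar-looking companions score higher than a purely visual match would allow.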
  • Fig. 1 Illustration of spatiotemporal co-occurrence pattern aided pedestrian matching (Each rounded rectangle represents a camera's field of view. The dotted box marks the target pedestrian, and the other pedestrians form the target pedestrian's neighborhood in the corresponding field of view)

    Fig. 2 Influence of hyper-parameters on model performance (rank-1 accuracy)

    Table 1 Comparison with state-of-the-art methods on the Market-1501 and DukeMTMC-ReID datasets (%)

    | Category | Method | Market-1501 rank-1 | Market-1501 mAP | DukeMTMC-ReID rank-1 | DukeMTMC-ReID mAP |
    |---|---|---|---|---|---|
    | Hand-crafted features | BoW+kissme[8] | 44.4 | 20.8 | 25.1 | 12.2 |
    | | KLFDA[36] | 46.5 | – | – | – |
    | | Null space[43] | 55.4 | 29.9 | – | – |
    | | WARCA[44] | 45.2 | – | – | – |
    | Pose estimation | GLAD[45] | 89.9 | 73.9 | – | – |
    | | PIE[46] | 87.7 | 69.0 | 79.8 | 62.0 |
    | | PSE[47] | 78.7 | 56.0 | – | – |
    | Mask-based | SPReID[48] | 92.5 | 81.3 | 84.4 | 71.0 |
    | | MaskReID[49] | 90.0 | 75.3 | 78.8 | 61.9 |
    | Local features | AlignedReID[50] | 90.6 | 77.7 | 81.2 | 67.4 |
    | | SCPNet[51] | 91.2 | 75.2 | 80.3 | 62.6 |
    | | PCB[40] | 93.8 | 81.6 | 83.3 | 69.2 |
    | | Pyramid[52] | 95.7 | 88.2 | 89.0 | 79.0 |
    | | Batch dropblock[53] | 94.5 | 85.0 | 88.7 | 75.8 |
    | Attention-based | MANCS[54] | 93.1 | 82.3 | 84.9 | 71.8 |
    | | DuATM[55] | 91.4 | 76.6 | 81.2 | 62.3 |
    | | HA-CNN[56] | 91.2 | 75.7 | 80.5 | 63.8 |
    | GAN-based | Camstyle[57] | 88.1 | 68.7 | 75.3 | 53.5 |
    | | PN-GAN[58] | 89.4 | 72.6 | 73.6 | 53.2 |
    | Global features | IDE[61] | 79.5 | 59.9 | – | – |
    | | SVDNet[60] | 82.3 | 62.1 | 76.7 | 56.8 |
    | | CAN[1] | 84.9 | 69.1 | – | – |
    | | MTMCReID[59] | 89.5 | 75.7 | 79.8 | 63.4 |
    | Ours | | 96.2 | 89.2 | 89.2 | 80.1 |

    Table 2 Ablation experiments for the proposed method with different baseline network models on the Market-1501 and DukeMTMC-ReID datasets (%)

    | Model | Market-1501 rank-1 | Market-1501 mAP | DukeMTMC-ReID rank-1 | DukeMTMC-ReID mAP |
    |---|---|---|---|---|
    | Baseline | 86.7 | 71.7 | 76.4 | 60.9 |
    | Baseline + spatiotemporal co-occurrence | 91.3 | 76.1 | 79.4 | 64.2 |
    | Baseline (*) | 94.4 | 85.4 | 86.6 | 75.5 |
    | Baseline (*) + spatiotemporal co-occurrence | 96.2 | 89.2 | 89.2 | 80.1 |

    Table 3 Comparison of different post-processing strategies for pedestrian neighborhoods on the Market-1501 and DukeMTMC-ReID datasets (%)

    | Dataset | Post-processing strategy | mAP | rank-1 | rank-5 | rank-10 |
    |---|---|---|---|---|---|
    | Market-1501 | Non-maximum suppression | 88.8 | 96.0 | 98.9 | 99.4 |
    | Market-1501 | Pedestrian co-occurrence pattern | 89.2 | 96.2 | 99.1 | 99.5 |
    | DukeMTMC-ReID | Non-maximum suppression | 79.1 | 87.9 | 95.0 | 96.9 |
    | DukeMTMC-ReID | Pedestrian co-occurrence pattern | 80.1 | 89.2 | 95.4 | 97.3 |
  • [1] Liu H, Feng J S, Qi M B, Jiang J G, Yan S C. End-to-end comparative attention networks for person re-identification. IEEE Transactions on Image Processing, 2017, 26(7): 3492-3506 doi: 10.1109/TIP.2017.2700762
    [2] Chen G Y, Lu J W, Yang M, Zhou J. Spatial-temporal attention-aware learning for video-based person re-identification. IEEE Transactions on Image Processing, 2019, 28(9): 4192-4205
    [3] Lv J M, Chen W H, Li Q, Yang C. Unsupervised cross-dataset person re-identification by transfer learning of spatial-temporal patterns. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 7948−7956
    [4] Cho Y J, Kim S A, Park J H, Lee K, Yoon K J. Joint person re-identification and camera network topology inference in multiple cameras. Computer Vision and Image Understanding, 2019, 180: 34-46 doi: 10.1016/j.cviu.2019.01.003
    [5] Zheng W S, Gong S G, Xiang T. Associating groups of people. In: Proceedings of the 2009 British Machine Vision Conference. London, UK: BMVA, 2009. 23.1−23.11
    [6] Meng J K, Wu S, Zheng W S. Weakly supervised person re-identification. In: Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 760−769
    [7] Wang G R, Wang G C, Zhang X J, Lai J H, Yu Z T, Lin L. Weakly supervised person Re-ID: Differentiable graphical learning and a new benchmark. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(5): 2142-2156 doi: 10.1109/TNNLS.2020.2999517
    [8] Zheng L, Shen L Y, Tian L, Wang S J, Wang J D, Tian Q. Scalable person re-identification: A benchmark. In: Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV). Santiago, Chile: IEEE, 2015. 1116−1124
    [9] Zheng Z D, Zheng L, Yang Y. Unlabeled samples generated by GAN improve the person Re-identification baseline in vitro. In: Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 3774−3782
    [10] Yang Y, Yang J M, Yan J J, Liao S C, Yi D, Li S Z. Salient color names for person re-identification. In: Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: Springer, 2014. 536−551
    [11] Fogel I, Sagi D. Gabor filters as texture discriminator. Biological Cybernetics, 1989, 61(2): 103-113
    [12] Ojala T, Pietikainen M, Harwood D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognition, 1996, 29(1): 51-59 doi: 10.1016/0031-3203(95)00067-4
    [13] Schmid C. Constructing models for content-based image retrieval. In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Kauai, USA: IEEE, 2001. 39−45
    [14] Dalal N, Triggs B. Histograms of oriented gradients for human detection. In: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). San Diego, USA: IEEE, 2005. 886−893
    [15] Lowe D G. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 2004, 60(2): 91-110 doi: 10.1023/B:VISI.0000029664.99615.94
    [16] Bay H, Ess A, Tuytelaars T, Van Gool L. Speeded-up robust features (SURF). Computer Vision and Image Understanding, 2008, 110(3): 346-359 doi: 10.1016/j.cviu.2007.09.014
    [17] Lin Y T, Zheng L, Zheng Z D, Wu Y, Hu Z L, Yan C G, Yang Y. Improving person re-identification by attribute and identity learning. Pattern Recognition, 2019, 95: 151-161 doi: 10.1016/j.patcog.2019.06.006
    [18] Li D W, Chen X T, Huang K Q. Multi-attribute learning for pedestrian attribute recognition in surveillance scenarios. In: Proceedings of the 3rd IAPR Asian Conference on Pattern Recognition (ACPR). Kuala Lumpur, Malaysia: IEEE, 2015. 111−115
    [19] Deng Y B, Luo P, Loy C C, Tang X O. Pedestrian attribute recognition at far distance. In: Proceedings of the 22nd ACM International Conference on Multimedia. Orlando, USA: ACM, 2014. 789−792
    [20] Chen H R, Wang Y W, Shi Y M, Yan K, Geng M Y, Tian Y H, et al. Deep transfer learning for person re-identification. In: Proceedings of the 4th IEEE Fourth International Conference on Multimedia Big Data (BigMM). Xi'an, China: IEEE, 2018. 1−5
    [21] Jing X Y, Zhu X K, Wu F, You X G, Liu Q L, Yue D, et al. Super-resolution person re-identification with semi-coupled low-rank discriminant dictionary learning. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, USA: IEEE, 2015. 695−704
    [22] Ma F, Jing X Y, Zhu X, Tang Z M, Peng Z P. True-color and grayscale video person re-identification. IEEE Transactions on Information Forensics and Security, 2020, 15: 115-129 doi: 10.1109/TIFS.2019.2917160
    [23] Zhu X K, Jing X Y, Wu F, Feng H. Video-based person re-identification by simultaneously learning intra-video and inter-video distance metrics. In: Proceedings of the 25th International Joint Conference on Artificial Intelligence. New York, USA: ACM, 2016. 3552−3558
    [24] Zhang W, He X Y, Yu X D, Lu W Z, Zha Z J, Tian Q. A multi-scale spatial-temporal attention model for person re-identification in videos. IEEE Transactions on Image Processing, 2020, 29: 3365-3373 doi: 10.1109/TIP.2019.2959653
    [25] Wu Y M, El Farouk Bourahla O, Li X, Wu F, Tian Q, Zhou X. Adaptive graph representation learning for video person re-identification. IEEE Transactions on Image Processing, 2020, 29: 8821-8830 doi: 10.1109/TIP.2020.3001693
    [26] Wang G C, Lai J H, Huang P G, Xie X H. Spatial-temporal person re-identification. In: Proceedings of the 2019 AAAI Conference on Artificial Intelligence. Hawaii, USA: AAAI, 2019. 8933−8940
    [27] Varior R R, Haloi M, Wang G. Gated siamese convolutional neural network architecture for human re-identification. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016. 791−808
    [28] Schroff F, Kalenichenko D, Philbin J. FaceNet: A unified embedding for face recognition and clustering. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, USA: IEEE, 2015. 815−823
    [29] Cheng D, Gong Y H, Zhou S P, Wang J J, Zheng N N. Person re-identification by multi-channel parts-based CNN with improved triplet loss function. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016. 1335−1344
    [30] Hermans A, Beyer L, Leibe B. In defense of the triplet loss for person re-identification [online] available: https://arxiv.org/abs/1703.07737, April 16, 2021
    [31] Zhu X K, Jing X Y, Zhang F, Zhang X Y, You X G, Cui X. Distance learning by mining hard and easy negative samples for person re-identification. Pattern Recognition, 2019, 95: 211-222 doi: 10.1016/j.patcog.2019.06.007
    [32] Chen W H, Chen X T, Zhang J G, Huang K Q. Beyond triplet loss: A deep quadruplet network for person re-identification. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE, 2017. 1320−1329
    [33] Xiao Q Q, Luo H, Zhang C. Margin sample mining loss: A deep learning based method for person re-identification [online] available: https://arxiv.org/abs/1710.00478, April 16, 2021
    [34] Fan X, Jiang W, Luo H, Fei M J. SphereReID: Deep hypersphere manifold embedding for person re-identification. Journal of Visual Communication and Image Representation, 2019, 60: 51-58 doi: 10.1016/j.jvcir.2019.01.010
    [35] He K M, Zhang X Y, Ren S Q, Sun J. Deep residual learning for image recognition. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016. 770−778
    [36] Karanam S, Gou M R, Wu Z Y, Rates-Borras A, Camps O, Radke R J. A systematic evaluation and benchmark for person re-identification: Features, metrics, and datasets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(3): 523-536 doi: 10.1109/TPAMI.2018.2807450
    [37] Zhong Z, Zheng L, Kang G L, Li S Z, Yang Y. Random erasing data augmentation. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence, AAAI 2020, The 32nd Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The 10th AAAI Symposium on Educational Advances in Artificial Intelligence. New York, USA: AAAI, 2020. 13001−13008
    [38] Zheng Z D, Zheng L, Yang Y. A discriminatively learned CNN embedding for person re-identification. ACM Transactions on Multimedia Computing, Communications, and Applications, 2018, 14(1): 13
    [39] Luo H, Jiang W, Gu Y Z, Liu F X, Liao X Y, Lai S Q, et al. A strong baseline and batch normalization neck for deep person re-identification. IEEE Transactions on Multimedia, 2020, 22(10): 2597-2609 doi: 10.1109/TMM.2019.2958756
    [40] Sun Y F, Zheng L, Yang Y, Tian Q, Wang S J. Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline). In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 501−518
    [41] Felzenszwalb P F, Girshick R B, McAllester D, Ramanan D. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(9), 1627-1645 doi: 10.1109/TPAMI.2009.167
    [42] Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S A, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 2015, 115(3): 211-252 doi: 10.1007/s11263-015-0816-y
    [43] Zhang L, Xiang T, Gong S G. Learning a discriminative null space for person re-identification. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016. 1239−1248
    [44] Jose C, Fleuret F. Scalable metric learning via weighted approximate rank component analysis. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016. 875−890
    [45] Wei L H, Zhang S L, Yao H T, Gao W, Tian Q. GLAD: Global-local-alignment descriptor for pedestrian retrieval. In: Proceedings of the 25th ACM International Conference on Multimedia. Mountain View, USA: ACM, 2017. 420−428
    [46] Zheng L, Huang Y J, Lu H C, Yang Y. Pose-invariant embedding for deep person re-identification. IEEE Transactions on Image Processing, 2019, 28(9): 4500-4509 doi: 10.1109/TIP.2019.2910414
    [47] Sarfraz M S, Schumann A, Eberle A, Stiefelhagen R. A pose-sensitive embedding for person re-identification with expanded cross neighborhood re-ranking. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 420−429
    [48] Kalayeh M M, Basaran E, Gokmen E, Kamasak M E, Shah M. Human semantic parsing for person re-identification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 1062−1071
    [49] Qi L, Huo J, Wang L, Shi Y H, Gao Y. A mask based deep ranking neural network for person retrieval. In: Proceedings of the 2019 IEEE International Conference on Multimedia and Expo (ICME). Shanghai, China: IEEE, 2019. 496−501
    [50] Zhang X, Luo H, Fan X, Xiang W L, Sun Y X, Xiao Q Q, et al. AlignedReID: Surpassing human-level performance in person reidentification [online] available: https://arxiv.org/abs/1711.08184, April 16, 2021
    [51] Fan X, Luo H, Zhang X, He L X, Zhang C, Jiang W. SCPNet: Spatial-channel parallelism network for joint holistic and partial person re-identification. In: Proceedings of the 14th Asian Conference on Computer Vision. Perth, Australia: Springer, 2018. 19−34
    [52] Zheng F, Sun X, Jiang X Y, Guo X W, Yu Z Q, Huang F Y. Pyramidal person re-identification via multi-loss dynamic training. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 8514−8522
    [53] Dai Z Z, Chen M Q, Gu X D, Zhu S Y, Tan P. Batch DropBlock network for person re-identification and beyond. In: Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea: IEEE, 2019. 3690−3700
    [54] Wang C, Zhang Q, Huang C, Liu W Y, Wang X G. Mancs: A multi-task attentional network with curriculum sampling for person re-identification. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 384−400
    [55] Si J L, Zhang H G, Li C G, Kuen J, Kong X F, Kot A C, et al. Dual attention matching network for context-aware feature sequence based person re-identification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 5363−5372
    [56] Li W, Zhu X T, Gong S G. Harmonious attention network for person re-identification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 2285−2294
    [57] Zhong Z, Zheng L, Zheng Z D, Li S Z, Yang Y. CamStyle: A novel data augmentation method for person re-identification. IEEE Transactions on Image Processing, 2019, 28(3): 1176-1190 doi: 10.1109/TIP.2018.2874313
    [58] Qian X L, Fu Y W, Xiang T, Wang W X, Qiu J, Wu Y, et al. Pose-normalized image generation for person re-identification. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 661−678
    [59] Ristani E, Tomasi C. Features for multi-target multi-camera tracking and re-identification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 6036−6046
    [60] Sun Y F, Zheng L, Deng W J, Wang S J. SVDNet for pedestrian retrieval. In: Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 3820−3828
    [61] Gray D, Tao H. Viewpoint invariant pedestrian recognition with an ensemble of localized features. In: Proceedings of the 10th European Conference on Computer Vision. Marseille, France: Springer, 2008. 262−275
Publication history
  • Received: 2020-10-26
  • Accepted: 2021-04-16
  • Published online: 2021-06-08
  • Issue date: 2022-02-18
