

LiDAR-based 3D Multi-object Tracking for Unmanned Vehicles

Xiong Zhen-Kai, Cheng Xiao-Qiang, Wu You-Dong, Zuo Zhi-Qiang, Liu Jia-Sheng

Citation: Xiong Zhen-Kai, Cheng Xiao-Qiang, Wu You-Dong, Zuo Zhi-Qiang, Liu Jia-Sheng. LiDAR-based 3D multi-object tracking for unmanned vehicles. Acta Automatica Sinica, 2023, 49(10): 2073−2083 doi: 10.16383/j.aas.c210783

doi: 10.16383/j.aas.c210783


Funds: Supported by National Natural Science Foundation of China (62036008, 62173243, 61933014), Science and Technology Research Project of China State Shipbuilding Corporation Limited (202118J), and Scientific Research Foundation for High-level Talents of Anhui University of Science and Technology (2023yjrc55)
    Author Bio:

    XIONG Zhen-Kai Professor at the College of New Energy and Intelligent Connected Vehicle, Anhui University of Science and Technology, and the 713 Research Institute, China State Shipbuilding Corporation Limited. He received his Ph.D. degree from Harbin Engineering University in 2012. His research interest covers special vehicles and unmanned systems. E-mail: zhkxiong@sina.com

    CHENG Xiao-Qiang Engineer. He received his master's degree from the School of Electrical and Information Engineering, Tianjin University in 2021. His research interest covers 3D object detection and multi-object tracking. E-mail: chengxq@tju.edu.cn

    WU You-Dong Professor at the 713 Research Institute, China State Shipbuilding Corporation Limited. His research interest covers intelligent control, special vehicles, and unmanned systems. E-mail: wyd@sina.com

    ZUO Zhi-Qiang Professor at the School of Electrical and Information Engineering, Tianjin University. He received his Ph.D. degree from Peking University in 2004. His research interest covers autonomous vehicles and multi-agent systems. Corresponding author of this paper. E-mail: zqzuo@tju.edu.cn

    LIU Jia-Sheng Professor at the 713 Research Institute, China State Shipbuilding Corporation Limited. His research interest covers special vehicles and unmanned systems. E-mail: ljs@sina.com

  • Abstract: An autonomous vehicle moves continuously through three-dimensional space and time, so the objects around it cannot suddenly appear or disappear. Stable and reliable multi-object tracking (MOT) is therefore of great importance to the perception layer. To overcome the shortcomings of conventional object association and fixed birth and death memory (BDM) management, this paper proposes an object association metric based on the border intersection over union (BIoU) together with an adaptive birth and death memory management strategy. BIoU combines the advantages of the Euclidean distance and the intersection over union (IoU), which improves the accuracy of object association. The adaptive birth and death memory links the confidence of an object trajectory to its life cycle, which significantly reduces lost targets and false detections. Experiments on the KITTI multi-object tracking dataset verify the effectiveness of the proposed method.
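    The motivation behind BIoU (cf. Fig. 1) is that plain IoU cannot rank candidate detections once boxes no longer overlap, while a pure Euclidean distance ignores box extent. The exact BIoU definition is the one given in the paper (Fig. 3); the snippet below is only a rough 2D (bird's-eye view) sketch that assumes a DIoU-like form, i.e. IoU minus a distance penalty weighted by the factor $ \gamma $ from Table 1. The function names and the penalty term are illustrative assumptions, not the paper's formula.

```python
import numpy as np

def iou_2d(a, b):
    """Axis-aligned IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def biou_like(a, b, gamma=0.05):
    """Hypothetical BIoU-style affinity: IoU minus a gamma-weighted,
    normalized center-distance penalty (a DIoU-flavoured stand-in for
    the paper's border-based definition)."""
    cxa, cya = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    cxb, cyb = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    center_dist = np.hypot(cxa - cxb, cya - cyb)
    # Diagonal of the smallest enclosing box normalizes the distance penalty.
    ex1, ey1 = min(a[0], b[0]), min(a[1], b[1])
    ex2, ey2 = max(a[2], b[2]), max(a[3], b[3])
    diag = np.hypot(ex2 - ex1, ey2 - ey1) + 1e-9
    return iou_2d(a, b) - gamma * center_dist / diag

# Two candidates that both have zero IoU with the track: IoU alone cannot
# rank them, but the distance penalty still prefers the nearer one.
track = (0.0, 0.0, 2.0, 2.0)
near, far = (2.5, 0.0, 4.5, 2.0), (8.0, 0.0, 10.0, 2.0)
print(biou_like(track, near), biou_like(track, far))  # near scores higher
```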
  • Fig. 1  Failure cases of the IoU metric and the Euclidean distance metric

    Fig. 2  3D multi-object tracking based on BIoU and adaptive birth and death memory

    Fig. 3  Schematic diagram of BIoU

    Fig. 4  Adaptive birth and death memory

    Fig. 5  Overall pipeline for LiDAR-based 3D multi-object tracking

    Fig. 6  Tracking comparison between our improved method and the baseline (false detections)

    Fig. 7  Tracking comparison between our improved method and the baseline (missed detections)
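    The pipeline of Fig. 5 follows the same detect, predict, associate, and manage structure as the AB3DMOT baseline [6]: 3D detections are matched to Kalman-filter-predicted trajectories [20] with the Hungarian algorithm under the association metric, and surviving or newly born trajectories are handled by the birth and death memory. The sketch below is only a schematic of that loop under simplifying assumptions (a placeholder distance score instead of BIoU, a trivial state update instead of the full Kalman filter); names such as `Track`, `affinity` and `step` are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

F_MAX, F_MIN = 3, 3  # Car-class life-cycle bounds from Table 1

class Track:
    _next_id = 0
    def __init__(self, box):
        self.box = np.asarray(box, dtype=float)  # stand-in for the 3D Kalman state
        self.id = Track._next_id
        Track._next_id += 1
        self.hits, self.misses = 1, 0

def affinity(track_box, det_box):
    # Placeholder score (negative center distance); the paper uses BIoU here.
    return -float(np.linalg.norm(track_box[:2] - det_box[:2]))

def step(tracks, detections, score_thres=-2.0):
    """One frame of an AB3DMOT-style loop: predict, associate, manage tracks."""
    # A constant-velocity Kalman filter [20] would predict each track.box here.
    matches = []
    if tracks and detections:
        score = np.array([[affinity(t.box, d) for d in detections] for t in tracks])
        rows, cols = linear_sum_assignment(-score)          # Hungarian assignment
        matches = [(r, c) for r, c in zip(rows, cols) if score[r, c] > score_thres]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    for r, c in matches:                                    # simplified Kalman update
        tracks[r].box = np.asarray(detections[c], dtype=float)
        tracks[r].hits += 1
        tracks[r].misses = 0
    for i, t in enumerate(tracks):
        if i not in matched_t:
            t.misses += 1                                   # unmatched track ages
    for j, d in enumerate(detections):
        if j not in matched_d:
            tracks.append(Track(d))                         # birth of a new track
    tracks = [t for t in tracks if t.misses < F_MAX]        # death after F_MAX misses
    confirmed = [t for t in tracks if t.hits >= F_MIN]      # report confirmed tracks
    return tracks, confirmed

# Toy usage with (x, y) ground-plane centers over three frames.
tracks, _ = step([], [np.array([0.0, 0.0]), np.array([5.0, 5.0])])
tracks, _ = step(tracks, [np.array([0.3, 0.1]), np.array([5.2, 5.1])])
tracks, confirmed = step(tracks, [np.array([0.6, 0.2]), np.array([5.4, 5.2])])
print([t.id for t in confirmed])  # both tracks confirmed after F_MIN hits
```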

    Table 1  Model parameters

    Parameter                  Value                  Description
    $ \gamma $                 0.05                   BIoU penalty factor
    $ \alpha $                 0.5                    Birth/death memory (life cycle) scale coefficient
    $ \beta $                  4                      Birth/death memory (life cycle) offset coefficient
    $ F_{\max} $               3 (Car), 5 (Others)    Maximum birth/death memory: 3 for the Car class, 5 for the other classes
    $ F_{\min} $               3                      Minimum tracking period of a trajectory; the same value as in AB3DMOT
    $ {BIoU}_{\rm{thres}} $    $ -0.01 $              BIoU threshold; a pairing whose BIoU falls below this value is treated as a failed match
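    The role of $ \alpha $ and $ \beta $ is to make the life cycle adaptive rather than fixed: a trajectory with higher accumulated detection confidence is allowed to survive more consecutive missed frames, up to $ F_{\max} $. The exact mapping is the one given in the paper; the snippet below merely illustrates one plausible sigmoid-shaped form, and both the function name and the sigmoid itself are hypothetical assumptions, not the paper's formula.

```python
import math

def adaptive_max_misses(conf_sum, alpha=0.5, beta=4, f_max=3):
    """Hypothetical adaptive life cycle: the number of consecutive missed
    frames a trajectory may survive grows with its accumulated detection
    confidence and saturates at F_max. The sigmoid form is an assumption."""
    gate = 1.0 / (1.0 + math.exp(-alpha * (conf_sum - beta)))
    return max(1, round(f_max * gate))

for conf_sum in (1.0, 4.0, 8.0, 16.0):
    print(conf_sum, adaptive_max_misses(conf_sum))  # -> 1, 2, 3, 3
```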

    Table 2  Tracking performance comparison for three object classes (Car, Pedestrian, Cyclist) on the KITTI dataset

    Class        Method          MOTA (%) $ \uparrow $ 1   MOTP (%) $ \uparrow $   MT (%) $ \uparrow $   ML (%) $ \downarrow $ 2   IDS $ \downarrow $   FRAG $ \downarrow $
    Car          FANTrack[21]    76.52   84.81   73.14    9.25    1    54
                 DiTNet[22]      81.08   87.83   79.35    4.21   20   120
                 AB3DMOT[6]      85.70   86.99   75.68    3.78    2    24
                 Ours            85.69   86.96   76.22    3.78    2    24
    Pedestrian   AB3DMOT[6]      59.76   67.27   40.14   20.42   52   371
                 Ours            59.93   67.22   42.25   20.42   52   377
    Cyclist      AB3DMOT[6]      74.75   79.89   62.42   14.02   54   403
                 Ours            76.43   79.89   64.49   11.63   54   409

    1 $ \uparrow $ indicates that a larger value is better;
    2 $ \downarrow $ indicates that a smaller value is better.
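    As a reminder of how the headline metric above is computed, MOTA follows the standard CLEAR MOT definition, accumulating missed targets (FN), false positives (FP) and identity switches (IDS) over all frames $ t $ relative to the number of ground-truth objects:

$$ {\rm MOTA} = 1 - \frac{\sum_t \left({\rm FN}_t + {\rm FP}_t + {\rm IDS}_t\right)}{\sum_t {\rm GT}_t} $$

    MOTP, in contrast, only measures the localization quality of correctly matched track-detection pairs, which is why it varies little across the methods in the table.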

    Table 3  Ablation experiments

    Class        BIoU             $ F_{\rm{Amax}} $    MOTA (%) $ \uparrow $   MOTP (%) $ \uparrow $   MT (%) $ \uparrow $   ML (%) $ \downarrow $   IDS $ \downarrow $   FRAG $ \downarrow $
    Car                                                    81.55   86.72   79.46    4.32    2    21
                 $ \checkmark $                            81.69   86.69   80.00    4.32    3    22
                                  $ \checkmark $           84.09   86.98   75.68    4.32    2    24
                 $ \checkmark $   $ \checkmark $           84.31   86.96   76.22    4.32    2    24
    Pedestrian                                             57.54   67.19   42.96   21.13   99   411
                 $ \checkmark $                            57.73   67.18   46.48   20.42   99   417
                                  $ \checkmark $           59.59   67.24   40.14   21.13   59   372
                 $ \checkmark $   $ \checkmark $           59.77   67.21   44.37   16.90   61   393
    Cyclist                                                73.44   85.40   75.00   14.29    0     5
                 $ \checkmark $                            77.82   85.61   75.00   14.29    0     5
                                  $ \checkmark $           77.97   85.41   71.43   17.86    0     7
                 $ \checkmark $   $ \checkmark $           82.94   85.51   75.00   10.71    0     7
  • [1] Zhao Z Q, Zheng P, Xu S T, Wu X D. Object detection with deep learning: A review. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(11): 3212-3232 doi: 10.1109/TNNLS.2018.2876865
    [2] Simon M, Milz S, Amende K, Gross H M. Complex-YOLO: An Euler-region-proposal for real-time 3D object detection on point clouds. In: Proceedings of the European Conference on Computer Vision (ECCV). Munich, Germany: Springer, 2018. 197−209
    [3] Shi S S, Wang X G, Li H S. PointRCNN: 3D object proposal generation and detection from point cloud. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 770−779
    [4] Bewley A, Ge Z Y, Ott L, Ramos F, Upcroft B. Simple online and realtime tracking. In: Proceedings of the IEEE International Conference on Image Processing (ICIP). Phoenix, USA: IEEE, 2016. 3464−3468
    [5] Wojke N, Bewley A, Paulus D. Simple online and realtime tracking with a deep association metric. In: Proceedings of the IEEE International Conference on Image Processing (ICIP). Beijing, China: IEEE, 2017. 3645−3649
    [6] Weng X S, Wang J R, Held D, Kitani K. 3D multi-object tracking: A baseline and new evaluation metrics. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Las Vegas, USA: IEEE, 2020. 10359−10366
    [7] Rezatofighi H, Tsoi N, Gwak J, Sadeghian A, Reid I, Savarese S. Generalized intersection over union: A metric and a loss for bounding box regression. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 658−666
    [8] Zheng Z H, Wang P, Liu W, Li J Z, Ye R G, Ren D W. Distance-IoU loss: Faster and better learning for bounding box regression. In: Proceedings of the American Association for Artificial Intelligence (AAAI). New York, USA: AAAI, 2020. 12993−13000
    [9] Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Providence, USA: IEEE, 2012. 3354−3361
    [10] Luo W H, Xing J L, Milan A, Zhang X Q, Liu W, Kim T K. Multiple object tracking: A literature review. Artificial Intelligence, 2021, 293: 103448 doi: 10.1016/j.artint.2020.103448
    [11] Leal-Taixé L, Canton-Ferrer C, Schindler K. Learning by tracking: Siamese CNN for robust target association. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Las Vegas, USA: IEEE, 2016. 33−40
    [12] Meng L, Yang X. A survey of object tracking algorithms. Acta Automatica Sinica, 2019, 45(7): 1244-1260 doi: 10.16383/j.aas.c180277
    [13] Azim A, Aycard O. Detection, classification and tracking of moving objects in a 3D environment. In: Proceedings of the IEEE Intelligent Vehicles Symposium (IV). Madrid, Spain: IEEE, 2012. 802−807
    [14] Song S Y, Xiang Z Y, Liu J L. Object tracking with 3D LiDAR via multi-task sparse learning. In: Proceedings of the IEEE International Conference on Mechatronics and Automation (ICMA). Beijing, China: IEEE, 2015. 2603−2608
    [15] Sharma S, Ansari J A, Murthy J K, Krishna K M. Beyond pixels: Leveraging geometry and shape cues for online multi-object tracking. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Brisbane, Australia: IEEE, 2018. 3508−3515
    [16] Hou J H, Zhang G S, Xiang J. Designing affinity model for multiple object tracking based on deep learning. Acta Automatica Sinica, 2020, 46(12): 2690-2700 doi: 10.16383/j.aas.c180528
    [17] Zhang W J, Zhong S, Xu W H, Wu Y. Correlation filter based visual tracking integrating saliency and motion cues. Acta Automatica Sinica, 2021, 47(7): 1572-1588 doi: 10.16383/j.aas.c190122
    [18] Wu H, Han W K, Wen C L, Li X, Wang C. 3D multi-object tracking in point clouds based on prediction confidence-guided data association. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(6): 5668-5677 doi: 10.1109/TITS.2021.3055616
    [19] Li S M, Chu J, Leng L, Tu X J. Accurate scale estimation with IoU and distance between centroids for object tracking. Acta Automatica Sinica, doi: 10.16383/j.aas.c210356
    [20] Peng D C. Basic principle and application of Kalman filter. Software Guide, 2009, 8(11): 32-34
    [21] Baser E, Balasubramanian V, Bhattacharyya P, Czarnecki K. FANTrack: 3D multi-object tracking with feature association network. In: Proceedings of the IEEE Intelligent Vehicles Symposium (IV). Paris, France: IEEE, 2019. 1426−1433
    [22] Wang S K, Cai P D, Wang L J, Liu M. DiTNet: End-to-end 3D object detection and track ID assignment in spatio-temporal world. IEEE Robotics and Automation Letters, 2021, 6(2): 3397-3404 doi: 10.1109/LRA.2021.3062016
Publication history
  • Received: 2021-08-17
  • Accepted: 2022-05-25
  • Published online: 2022-07-18
  • Issue date: 2023-10-24
