
A Target Tracking Method Based on Dual-Branch Collaborative Filtering Network

Zhang Wen-An, Qiao Xiao-Long, Lin An-Di, Yang Xu-Sheng

Zhang Wen-An, Qiao Xiao-Long, Lin An-Di, Yang Xu-Sheng. A target tracking method based on dual-branch collaborative filtering network. Acta Automatica Sinica, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c250590


doi: 10.16383/j.aas.c250590 cstr: 32138.14.j.aas.c250590

A Target Tracking Method Based on Dual-Branch Collaborative Filtering Network

Funds: Supported by National Natural Science Foundation of China (U25A20456, 62473335, W2421117) and Hangzhou Science and Technology Development Plan Project (2022AIZD0080)
    Author Bio:

    ZHANG Wen-An Professor at the College of Information Engineering, Zhejiang University of Technology. His research interest covers multi-source information fusion estimation and networked systems. E-mail: wazhang@zjut.edu.cn

    QIAO Xiao-Long Master student at the College of Information Engineering, Zhejiang University of Technology. His research interest covers multi-source information fusion estimation and deep learning. E-mail: 211123030055@zjut.edu.cn

    LIN An-Di Ph.D. candidate at the College of Information Engineering, Zhejiang University of Technology. His main research interest is multi-source information fusion estimation. E-mail: 201706061126@zjut.edu.cn

    YANG Xu-Sheng Associate professor at the College of Information Engineering, Zhejiang University of Technology. His research interest covers multi-source information fusion estimation and target positioning. Corresponding author of this paper. E-mail: xsyang@zjut.edu.cn

  • Abstract: To address the degradation of target tracking performance caused by insufficient extraction of temporal and state correlations, a target tracking method based on a dual-branch collaborative filtering network (DBCF-Net) is proposed. First, to enable dynamic adjustment of the motion model and the process noise parameters, a non-Markovian information network and a state-correlation information network are designed to learn, respectively, the temporal dependencies in the state evolution of a moving target and the local correlations among its state variables. Second, a collaborative network-weight update mechanism based on the maximum mean discrepancy (MMD) is designed: by differentiating the output features of the two branch networks, it strengthens the complementarity of their learning and thereby improves the adaptability of DBCF-Net to unknown motion patterns. Furthermore, to combine the advantages of Bayesian filtering and neural networks, unbiased measurement conversion is incorporated into DBCF-Net to enhance the robustness of target tracking. Finally, target tracking experiments verify the effectiveness of DBCF-Net.
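The abstract's MMD-based collaborative update differentiates the feature distributions produced by the two branch networks. As a rough illustration only (not the authors' implementation, whose kernel choice and bandwidth are not given here), the empirical squared MMD between two feature batches under an RBF kernel can be computed as follows; the function name `mmd_rbf` and the bandwidth `sigma` are assumptions:

```python
import numpy as np

def mmd_rbf(x, y, sigma=1.0):
    """Empirical squared maximum mean discrepancy between feature batches
    x and y (each of shape [n, d]) under an RBF kernel; larger values mean
    the two branches' feature distributions differ more."""
    def kernel(a, b):
        # Pairwise squared Euclidean distances -> RBF kernel matrix
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-d2 / (2 * sigma**2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()
```

Used as a training penalty, maximizing this quantity would push the two branches toward complementary features, which matches the mechanism the abstract describes at a high level.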
  • Fig.  1  (a) Illustration of temporal correlation in the S-curve motion pattern; (b) illustration of local correlation among state variables


    Fig.  2  DBCF-Net block diagram


    Fig.  3  Internal block diagram of the dual-branch collaborative network


    Fig.  4  Training data trajectory chart


    Fig.  5  Tracking results for the six trajectories; each enlarged subplot contains a 30-sample trajectory segment. In the main plots, sampling points are marked every 2.5 seconds (25 samples); in the subplots, every 0.5 seconds.


    Fig.  6  RMSE of the target state estimation for the six test trajectories


    Fig.  7  Visualization of the tracking results obtained in the ablation experiments


    Table  1  Maneuver parameters of the test trajectories

    Trajectory Initial state Segment 1 Segment 2 Segment 3
    1 $ [-17000.0\;{\mathrm{m}},\;2600.0\;{\mathrm{m}},\;200.0\;{\mathrm{m}}/{\mathrm{s}},\;120.0\;{\mathrm{m}}/{\mathrm{s}}] $ $ 20\text{s},\;\text{CV} $ $ 25\text{s},\;\text{CT},\;\omega=3.6^\circ/\text{s} $ $ 30\text{s},\;\text{CT},\;\omega=-6.4^\circ/\text{s} $
    2 $ [-6860.0\;{\mathrm{m}},\;24320.0\;{\mathrm{m}},\;90.0\;{\mathrm{m}}/{\mathrm{s}},\;-130.0\;{\mathrm{m}}/{\mathrm{s}}] $ $ 25\text{s},\;\text{CT},\;\omega=1.0^\circ/\text{s} $ $ 25\text{s},\;\text{CT},\;\omega=-1.6^\circ/\text{s} $ $ 25\text{s},\;\text{CT},\;\omega=-6.4^\circ/\text{s} $
    3 $ [17155.0\;{\mathrm{m}},\;-9300.0\;{\mathrm{m}},\;-169.0\;{\mathrm{m}}/{\mathrm{s}},\;140.0\;{\mathrm{m}}/{\mathrm{s}}] $ $ 10\text{s},\;\text{CV} $ $ 50\text{s},\;\text{CT},\;\omega=8.00^\circ/\text{s} $ $ 15\text{s},\;\text{CV} $
    4 $ [13345.0\;{\mathrm{m}},\;-11300.0\;{\mathrm{m}},\;69.0\;{\mathrm{m}}/{\mathrm{s}},\;140.0\;{\mathrm{m}}/{\mathrm{s}}] $ $ 25\text{s},\;\text{CV} $ $ 30\text{s},\;\text{CT},\;\omega=-7.0^\circ/\text{s} $ $ 20\text{s},\;\text{CT},\;\omega=6.48^\circ/\text{s} $
    5 $ [19134.0\;{\mathrm{m}},\;19144.0\;{\mathrm{m}},\;-235.0\;{\mathrm{m}}/{\mathrm{s}},\;-33.0\;{\mathrm{m}}/{\mathrm{s}}] $ $ 20\text{s},\;\text{CT},\;\omega=6.08^\circ/\text{s} $ $ 30\text{s},\;\text{CV} $ $ 25\text{s},\;\text{CT},\;\omega=-9.01^\circ/\text{s} $
    6 $ [9360.0\;{\mathrm{m}},\;-8740.0\;{\mathrm{m}},\;-140.0\;{\mathrm{m}}/{\mathrm{s}},\;-1.0\;{\mathrm{m}}/{\mathrm{s}}] $ $ 20\text{s},\;\text{CT},\;\omega=9.08^\circ/\text{s} $ $ 30\text{s},\;\text{CT},\;\omega=-8.1^\circ/\text{s} $ $ 25\text{s},\;\text{CT},\;\omega=1.08^\circ/\text{s} $
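The CV/CT segments listed in Table 1 follow the standard constant-velocity and coordinated-turn kinematic models. A minimal sketch of the noise-free coordinated-turn update for a planar state $[x,\;y,\;v_x,\;v_y]$ could look like the following; the function name `ct_step`, the 0.1 s default sampling period (implied by the 25-samples-per-2.5-s marking in Fig. 5), and the turn-direction sign convention are assumptions:

```python
import numpy as np

def ct_step(state, omega_deg, T=0.1):
    """Propagate one noise-free step of the coordinated-turn (CT) model.
    state = [x, y, vx, vy]; omega_deg is the turn rate in deg/s, as in
    Table 1. omega_deg = 0 reduces to the constant-velocity (CV) model."""
    x, y, vx, vy = state
    w = np.deg2rad(omega_deg)
    if abs(w) < 1e-12:  # CV limit of the CT equations
        return np.array([x + vx * T, y + vy * T, vx, vy])
    s, c = np.sin(w * T), np.cos(w * T)
    return np.array([
        x + vx * s / w - vy * (1 - c) / w,  # position along the turn arc
        y + vx * (1 - c) / w + vy * s / w,
        vx * c - vy * s,                    # velocity rotated by w*T
        vy * c + vx * s,
    ])
```

Iterating `ct_step` over each segment's duration with the listed initial state and turn rate reproduces the kind of piecewise-maneuvering trajectories used for testing; the CT update rotates the velocity vector, so the speed stays constant within a segment.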


    Table  2  The ARMSE of the states for different methods on the test trajectories

    Method Traj. 1 Traj. 2 Traj. 3 Traj. 4 Traj. 5 Traj. 6
    IMM-EKF Position (m) 4.872 5.208 12.942 4.942 5.969 4.236
    Velocity (m/s) 9.606 6.569 21.949 8.336 11.263 8.919
    IMM-UKF Position (m) 5.089 5.267 5.564 5.082 6.310 4.437
    Velocity (m/s) 10.149 6.689 11.320 8.707 12.177 9.404
    DeepMTT Position (m) 6.061 5.576 7.240 4.889 9.473 5.797
    Velocity (m/s) 3.676 4.493 6.904 4.400 7.595 6.045
    KalmanNet Position (m) 11.302 14.067 6.641 5.863 17.151 4.977
    Velocity (m/s) 12.279 13.168 15.105 13.652 14.708 9.856
    DBCF-Net Position (m) 2.678 4.400 3.339 3.365 4.364 2.682
    Velocity (m/s) 3.806 4.430 4.900 3.956 5.103 3.938
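The position and velocity ARMSE values reported in Table 2 average the per-step RMSE of Fig. 6 over time. One common definition, sketched below, first takes the RMSE across Monte Carlo runs at each step and then averages over steps; whether the paper averages over runs, time, or both in this exact order is an assumption, as is the function name `armse`:

```python
import numpy as np

def armse(estimates, truths):
    """Average RMSE of a state block (e.g. the two position components).
    estimates, truths: arrays of shape [runs, steps, dim];
    returns a scalar in the same units as the state components."""
    err2 = np.sum((estimates - truths) ** 2, axis=-1)  # squared error norm per step
    rmse_t = np.sqrt(np.mean(err2, axis=0))            # RMSE over runs, per step
    return float(np.mean(rmse_t))                      # average over time steps
```

Applying this separately to the position block (m) and the velocity block (m/s) yields the two rows per method shown in the table.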


    Table  3  Test trajectory maneuver parameters of ablation experiment

    Trajectory Initial state Segment 1 Segment 2 Segment 3
    1 [−19280.0 m, 18250.0 m, 180.0 m/s, 50.0 m/s] $ 5\;\text{s},\;\text{CV} $ $ 20\;\text{s},\;\text{CT},\;\omega=-9.0^\circ/\text{s} $ $ 15\;\text{s},\;\text{CT},\;\omega=8.4^\circ/\text{s} $
    2 [−16900.0 m, 15500.0 m, 220.0 m/s, 300.0 m/s] $ 5\;\text{s},\;\text{CV} $ $ 15\;\text{s},\;\text{CT},\;\omega=5.0^\circ/\text{s} $ $ 20\;\text{s},\;\text{CT},\;\omega=-3.4^\circ/\text{s} $


    Table  4  ARMSE on the ablation experiment test trajectories

    Method Traj. 1 Traj. 2
    DBCF-Net Position (m) 5.106 5.317
    Velocity (m/s) 6.161 7.265
    Single1 Position (m) 6.758 8.169
    Velocity (m/s) 8.908 9.801
    Single2 Position (m) 6.920 8.409
    Velocity (m/s) 10.813 7.631
    No MMD Position (m) 7.233 9.209
    Velocity (m/s) 9.666 9.189
Publication history
  • Received:  2025-10-31
  • Accepted:  2025-12-31
  • Published online:  2026-03-17
