Tracking via Multitask Discriminative Local Joint Sparse Appearance Model

HUANG Dan-Dan, SUN Yi

HUANG Dan-Dan, SUN Yi. Tracking via Multitask Discriminative Local Joint Sparse Appearance Model. ACTA AUTOMATICA SINICA, 2016, 42(3): 402-415. doi: 10.16383/j.aas.2016.c150416

Tracking via Multitask Discriminative Local Joint Sparse Appearance Model

doi: 10.16383/j.aas.2016.c150416
More Information
    Author Bio:

    HUANG Dan-Dan   Ph.D. candidate at the School of Information and Communication Engineering, Dalian University of Technology. She received her bachelor degree from Changchun University of Science and Technology in 2007. Her research interest covers object detection and object tracking in video sequences. E-mail: dlut_huang@163.com

    Corresponding author: SUN Yi   Professor at the School of Information and Communication Engineering, Dalian University of Technology. She received her bachelor degree from Dalian University of Technology in 1986. Her research interest covers image processing, pattern recognition, and wireless communication. Corresponding author of this paper. E-mail: lslwf@dlut.edu.cn
  • Abstract: Appearance modeling is a central problem in sparse-representation-based tracking. To address it, this paper proposes a discriminative local joint sparse appearance model and, within the particle filter framework, a multitask tracking method based on this model (discriminative local joint sparse appearance model based multitask tracking, DLJSM). The model learns a discriminative dictionary for each local patch inside the target region, thereby introducing discriminative information into the local sparse model, and encodes all local patches jointly to strengthen their structural relationship. During tracking, the target appearance is first modeled as above. Then, exploiting the continuity of appearance changes, the sampled particles are pre-screened to improve efficiency. Next, the joint sparse codes of the remaining candidate states are computed, and a similarity function is defined to measure the similarity between each candidate state and the target model. Finally, the current target state is estimated by maximum a posteriori probability. In addition, to avoid the accumulated error introduced by frequent model updates, the update condition is checked only every five frames, and the first-frame information is retained during updates to reduce model drift. Experimental results show that DLJSM tracks targets stably and accurately even under drastic appearance changes, and comparisons with 13 state-of-the-art trackers verify its efficiency.
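The abstract outlines a per-frame pipeline: sample particles around the previous state, pre-screen them using the continuity of appearance changes, sparse-code the survivors over a learned dictionary, score each with a similarity function, and take the maximum a posteriori candidate. The following is a minimal sketch of that control flow only, not the authors' implementation: the single-task ISTA coder, the Gaussian-bump observation, and all names (`sparse_code`, `track_frame`, `keep`) are simplified stand-ins for the paper's joint L2,1-regularized solver and discriminative local dictionaries.

```python
import numpy as np

def sparse_code(D, y, lam=0.1, n_iter=100):
    """Simplified sparse coding of observation y over dictionary D via ISTA
    (a stand-in for the paper's joint L2,1 solver)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ c - y)              # gradient of 0.5*||Dc - y||^2
        z = c - g / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c

def track_frame(D, prev_state, observe, n_particles=200, sigma=2.0, keep=0.5):
    """One DLJSM-style step: sample -> pre-screen -> sparse-code -> MAP pick."""
    rng = np.random.default_rng(0)
    # 1) Sample candidate states around the previous state (particle filter).
    particles = prev_state + sigma * rng.standard_normal((n_particles, prev_state.size))
    feats = np.stack([observe(p) for p in particles])
    # 2) Pre-screen: appearance changes are continuous, so discard candidates
    #    whose features lie far from the previous target appearance.
    ref = observe(prev_state)
    dist = np.linalg.norm(feats - ref, axis=1)
    kept = np.argsort(dist)[: int(keep * n_particles)]
    # 3) Code the survivors and score them by reconstruction quality.
    scores = []
    for i in kept:
        c = sparse_code(D, feats[i])
        err = np.linalg.norm(feats[i] - D @ c)
        scores.append(np.exp(-err))        # similarity: small error -> high score
    # 4) MAP estimate: the highest-scoring candidate state.
    return particles[kept[int(np.argmax(scores))]]
```

On a toy 1-D "sequence" where the observation is a Gaussian bump and the dictionary holds templates near the true position, the loop recovers a state close to the target; the real method replaces steps 1 and 3 with affine state sampling and joint coding of all local patches.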
  • Fig. 1  The flowchart of joint dictionary learning

    Fig. 2  The sketch map of modeling the target appearance

    Fig. 3  The elimination of extra particles

    Fig. 4  The flowchart of the DLJSM tracking algorithm

    Fig. 5  Tracking results when targets undergo drastic changes of illumination and scale

    Fig. 6  Tracking results when the targets' appearance deforms

    Fig. 7  Tracking results when targets are occluded

    Fig. 8  Tracking results when targets undergo rapid movement

    Fig. 9  Tracking results when targets are in complex backgrounds

    Fig. 10  Performance of all the tracking methods on the test sequences

    Fig. 11  One-pass evaluation (OPE) curves

    Table 1  Comparison of the results between the DLJSM algorithm and the methods not based on sparse representation

    Center error (pixel):

    | Sequence | IVT   | VTD   | Frag  | MIL   | TLD  | DLJSM |
    | Girl     | 29.6  | 23.8  | 81.6  | 31.3  | -    | 14.4  |
    | Singer1  | 9.1   | 3.7   | 42.1  | 241.0 | 27.5 | 3.2   |
    | Faceocc1 | 11.2  | 9.5   | 89.5  | 18.6  | 16.0 | 6.3   |
    | Car4     | 4.0   | 144.8 | 180.5 | 142.1 | -    | 4.5   |
    | Sylv     | 5.9   | 21.5  | 45.1  | 6.9   | 5.6  | 5.1   |
    | Race     | 176.4 | 82.2  | 221.4 | 310.6 | -    | 2.7   |
    | Jumping  | 34.8  | 111.9 | 21.2  | 41.8  | -    | 5.2   |
    | Animal   | 10.5  | 11.8  | 45.7  | 252.6 | -    | 9.7   |

    F-measure:

    | Sequence | IVT   | VTD   | Frag  | MIL   | TLD   | DLJSM |
    | Girl     | 0.703 | 0.740 | 0.134 | 0.681 | -     | 0.836 |
    | Singer1  | 0.642 | 0.898 | 0.394 | 0.021 | 0.444 | 0.904 |
    | Faceocc1 | 0.891 | 0.903 | 0.940 | 0.838 | 0.786 | 0.938 |
    | Car4     | 0.937 | 0.341 | 0.263 | 0.262 | -     | 0.939 |
    | Sylv     | 0.837 | 0.672 | 0.809 | 0.837 | 0.835 | 0.867 |
    | Race     | 0.025 | 0.372 | 0.053 | 0.013 | -     | 0.721 |
    | Jumping  | 0.273 | 0.175 | 0.429 | 0.255 | -     | 0.787 |
    | Animal   | 0.736 | 0.765 | 0.120 | 0.014 | -     | 0.748 |

    Table 2  Comparison of the results between the DLJSM algorithm and the methods based on single sparse representation

    Center error (pixel):

    | Sequence | l1    | APG-l1 | SCM  | ASLA  | LSK   | DLJSM |
    | Animal   | 23.1  | 23.9   | 20.2 | 289.5 | 10.2  | 9.7   |
    | David    | 20.1  | 13.7   | 9.8  | 11.4  | 11.8  | 9.3   |
    | Car11    | 33.7  | 2.9    | 2.1  | 2.3   | 73.3  | 2.0   |
    | Singer1  | 5.6   | 3.8    | 3.7  | 5.1   | 7.7   | 3.2   |
    | Race     | 214.7 | 203.9  | 28.7 | 245.5 | 217.2 | 2.7   |
    | Jumping  | 38.0  | 16.4   | 6.1  | 12.3  | 63.5  | 5.2   |
    | Skating1 | 137.5 | 60.5   | 37.0 | 64.5  | 106.4 | 8.1   |

    F-measure:

    | Sequence | l1    | APG-l1 | SCM   | ASLA  | LSK   | DLJSM |
    | Animal   | 0.583 | 0.619  | 0.652 | 0.046 | 0.732 | 0.748 |
    | David    | 0.605 | 0.652  | 0.759 | 0.707 | 0.713 | 0.772 |
    | Car11    | 0.501 | 0.857  | 0.895 | 0.897 | 0.09  | 0.897 |
    | Singer1  | 0.780 | 0.870  | 0.910 | 0.887 | 0.742 | 0.904 |
    | Race     | 0.049 | 0.059  | 0.628 | 0.062 | 0.017 | 0.721 |
    | Jumping  | 0.256 | 0.582  | 0.767 | 0.748 | 0.214 | 0.787 |
    | Skating1 | 0.221 | 0.475  | 0.628 | 0.580 | 0.335 | 0.789 |

    Table 3  Comparison of the results between the DLJSM algorithm and the methods based on joint sparse representation

    Center error (pixel):

    | Sequence | MTT  | MTMV | DSSM | DLJSM |
    | Car11    | 17.4 | 27.7 | 2.0  | 2.0   |
    | David    | 21.4 | 10.2 | 10.4 | 9.3   |
    | Race     | -    | 41.2 | 4.3  | 2.7   |
    | Skating1 | -    | 81.9 | 73.8 | 8.1   |
    | Animal   | 19.4 | 19.5 | 23.7 | 9.7   |
    | Stone    | 3.3  | 12.5 | 43.9 | 2.8   |

    F-measure:

    | Sequence | MTT   | MTMV  | DSSM  | DLJSM |
    | Car11    | 0.612 | 0.514 | 0.896 | 0.897 |
    | David    | 0.565 | 0.745 | 0.663 | 0.772 |
    | Race     | -     | 0.163 | 0.695 | 0.721 |
    | Skating1 | -     | 0.451 | 0.569 | 0.789 |
    | Animal   | 0.630 | 0.635 | 0.574 | 0.748 |
    | Stone    | 0.746 | 0.50  | 0.166 | 0.720 |
  • [1] Yilmaz A, Javed O, Shah M. Object tracking:a survey. ACM Computing Surveys (CSUR), 2006, 38(4):Article No. 13
    [2] Wu Y, Lim J, Yang M H. Online object tracking:a benchmark. In:Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, USA:IEEE, 2013. 2411-2418
    [3] Smeulders A W M, Chu D M, Cucchiara R, Calderara S, Dehghan A, Shah M. Visual tracking:an experimental survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(7):1442-1468 doi: 10.1109/TPAMI.2013.230
    [4] Adam A, Rivlin E, Shimshoni I. Robust fragments-based tracking using the integral histogram. In:Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. New York, USA:IEEE, 2006. 798-805
    [5] Kwon J, Lee K M. Visual tracking decomposition. In:Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition. San Francisco, USA:IEEE, 2010. 1269-1276
    [6] Ross D A, Lim J, Lin R S, Yang M H. Incremental learning for robust visual tracking. International Journal of Computer Vision, 2008, 77(1-3):125-141 doi: 10.1007/s11263-007-0075-7
    [7] Babenko B, Yang M H, Belongie S. Visual tracking with online multiple instance learning. In:Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami FL, USA:IEEE, 2009. 983-990
    [8] Kalal Z, Matas J, Mikolajczyk K. P-N learning:bootstrapping binary classifiers by structural constraints. In:Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition. San Francisco, USA:IEEE, 2010. 49-56
    [9] Mei X, Ling H B. Robust visual tracking using L1 minimization. In:Proceedings of the 12th IEEE International Conference on Computer Vision. Kyoto, Japan:IEEE, 2009. 1436-1443
    [10] Bao C L, Wu Y, Ling H B, Ji H. Real time robust L1 tracker using accelerated proximal gradient approach. In:Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, USA:IEEE, 2012. 1830-1837
    [11] Zhang S P, Yao H X, Zhou H Y, Sun X, Liu S H. Robust visual tracking based on online learning sparse representation. Neurocomputing, 2013, 100:31-40 doi: 10.1016/j.neucom.2011.11.031
    [12] Wang D, Lu H C, Yang M H. Online object tracking with sparse prototypes. IEEE Transactions on Image Processing, 2013, 22(1):314-325 doi: 10.1109/TIP.2012.2202677
    [13] Wang L F, Yan H P, Lv K, Pan C H. Visual tracking via kernel sparse representation with multikernel fusion. IEEE Transactions on Circuits and Systems for Video Technology, 2014, 24(7):1132-1141 doi: 10.1109/TCSVT.2014.2302496
    [14] Liu B Y, Huang J Z, Yang L, Kulikowsk C. Robust tracking using local sparse appearance model and k-selection. In:Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition. Providence, USA:IEEE, 2011. 1313-1320
    [15] Jia X, Lu H C, Yang M H. Visual tracking via adaptive structural local sparse appearance model. In:Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, USA:IEEE, 2012. 1822-1829
    [16] Xie Y, Zhang W S, Li C H, Lin S Y, Qu Y Y, Zhang Y H. Discriminative object tracking via sparse representation and online dictionary learning. IEEE Transactions on Cybernetics, 2014, 44(4):539-553 doi: 10.1109/TCYB.2013.2259230
    [17] Zhong W, Lu H C, Yang M H. Robust object tracking via sparsity-based collaborative model. In:Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, USA:IEEE, 2012. 1838-1845
    [18] Zhang T Z, Ghanem B, Liu S, Ahuja N. Robust visual tracking via multi-task sparse learning. In:Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, USA:IEEE, 2012. 2042-2049
    [19] Hong Z B, Mei X, Prokhorov D, Tao D C. Tracking via robust multi-task multi-view joint sparse representation. In:Proceedings of the 2013 IEEE International Conference on Computer Vision. Sydney, NSW:IEEE, 2013. 649-656
    [20] Dong W H, Chang F L, Zhao Z J. Visual tracking with multifeature joint sparse representation. Journal of Electronic Imaging, 2015, 24(1):013006 doi: 10.1117/1.JEI.24.1.013006
    [21] Zhuang B H, Lu H C, Xiao Z Y, Wang D. Visual tracking via discriminative sparse similarity map. IEEE Transactions on Image Processing, 2014, 23(4):1872-1881 doi: 10.1109/TIP.2014.2308414
    [22] Zhang T Z, Liu S, Xu C S, Yan S C, Ghanem B, Ahuja N, Yang M H. Structural sparse tracking. In:Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA:IEEE, 2015. 150-158
    [23] Wang Meng. Multi-Task Visual Tracking Using Composite Sparse Model [Master dissertation], Shanghai Jiao Tong University, China, 2014.
    [24] Yuan X T, Liu X B, Yan S C. Visual classification with multitask joint sparse representation. IEEE Transactions on Image Processing, 2012, 21(10):4349-4360 doi: 10.1109/TIP.2012.2205006
    [25] Doucet A, de Freitas N, Gordon N. Sequential Monte Carlo Methods in Practice. New York:Springer-Verlag, 2001.
    [26] Zhang T Z, Liu S, Ahuja N, Yang M H, Ghanem B. Robust visual tracking via consistent low-rank sparse learning. International Journal of Computer Vision, 2015, 111(2):171-190 doi: 10.1007/s11263-014-0738-0
Publication History
  • Received: 2015-06-29
  • Accepted: 2015-10-23
  • Published: 2016-03-20
