Robust Visual Tracking via Weighted Spatio-temporal Context Learning

XU Jian-Qiang (徐建强), LU Yao (陆耀)

Citation: XU Jian-Qiang, LU Yao. Robust Visual Tracking via Weighted Spatio-temporal Context Learning. ACTA AUTOMATICA SINICA, 2015, 41(11): 1901-1912. doi: 10.16383/j.aas.2015.c150073

doi: 10.16383/j.aas.2015.c150073
Funds:

Supported by National Natural Science Foundation of China (61273273, 61271374) and Research Fund for the Doctoral Program of Higher Education of China (20121101110034)

Details
    Author Biography:

    XU Jian-Qiang  Ph.D. candidate at the School of Computer Science, Beijing Institute of Technology. His main research interests are object tracking, computer vision, and pattern recognition. E-mail: xujq@bit.edu.cn

    Corresponding author:

    LU Yao  Professor at the School of Computer Science, Beijing Institute of Technology. His main research interests are neural networks, image and signal processing, and pattern recognition. Corresponding author of this paper. E-mail: vis_ly@bit.edu.cn

  • Abstract: Illumination and appearance changes, cluttered backgrounds, target rotation, and occlusion make robust visual tracking difficult. Effectively exploiting the useful information contained in the context around the target helps improve tracking robustness under such conditions. The spatio-temporal context (STC) algorithm is a recently proposed tracker that exploits the dense context surrounding the target and achieves good tracking performance. Its weakness is that it treats the entire context region uniformly, without further distinguishing different parts of the context, which weakens the contribution of the context. Following a dynamic partitioning idea, this paper assigns different weights to different regions of the context according to the similarity between their motion and that of the tracked target, and proposes a robust visual tracking algorithm based on weighted spatio-temporal context (WSTC). Comparative experiments on public datasets show that the proposed algorithm achieves better tracking performance and robustness.
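The abstract only outlines the weighting idea, so the sketch below is a minimal, hypothetical illustration of how a motion-similarity weighting could be combined with an STC-style frequency-domain context model. The 4x4 grid partition, the optical-flow-based similarity measure, and all function and variable names (gaussian_prior, motion_weights, learn_context_model, locate_target) are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np


def gaussian_prior(shape, center, sigma):
    """Isotropic Gaussian weight centred on the target location,
    used as the context prior weighting in STC-style trackers."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (xs - center[1]) ** 2 + (ys - center[0]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))


def motion_weights(flow, target_flow, grid=(4, 4)):
    """Hypothetical weighting step: partition the context window into a grid and
    weight each cell by how similar its mean motion (e.g. optical flow) is to the
    target's motion; cells that move like the target get weights close to 1."""
    h, w = flow.shape[:2]
    weights = np.ones((h, w))
    for i in range(grid[0]):
        for j in range(grid[1]):
            rows = slice(i * h // grid[0], (i + 1) * h // grid[0])
            cols = slice(j * w // grid[1], (j + 1) * w // grid[1])
            cell_motion = flow[rows, cols].reshape(-1, 2).mean(axis=0)
            weights[rows, cols] = np.exp(-np.linalg.norm(cell_motion - target_flow))
    return weights


def learn_context_model(context_patch, prior, conf_map, eps=1e-6):
    """Learn a spatial context model in the frequency domain (STC-style):
    H = F(confidence) / F(intensity * prior), evaluated element-wise."""
    return np.fft.fft2(conf_map) / (np.fft.fft2(context_patch * prior) + eps)


def locate_target(context_patch, prior, H):
    """Compute a new confidence map from the learned model; the target is its peak."""
    conf = np.real(np.fft.ifft2(H * np.fft.fft2(context_patch * prior)))
    return np.unravel_index(np.argmax(conf), conf.shape)


# Illustrative usage on synthetic data.
patch = np.random.rand(64, 64)                    # grey-level context window
flow = np.random.randn(64, 64, 2) * 0.1           # per-pixel motion estimates
prior = gaussian_prior(patch.shape, (32, 32), sigma=10.0)
prior = prior * motion_weights(flow, target_flow=np.zeros(2))  # WSTC-style weighting
conf0 = gaussian_prior(patch.shape, (32, 32), sigma=2.0)       # desired confidence map
H = learn_context_model(patch, prior, conf0)
print(locate_target(patch, prior, H))
```

The point the sketch tries to capture is that the Gaussian context prior of STC is modulated by per-region weights, so context pixels whose motion resembles the target's contribute more to the learned model; the paper's actual partitioning and similarity definitions are given in the full text.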
Publication History
  • Received:  2015-02-04
  • Revised:  2015-07-11
  • Published:  2015-11-20
