一种基于词袋模型的新的显著性目标检测方法

杨赛 赵春霞 徐威

引用本文: 杨赛, 赵春霞, 徐威. 一种基于词袋模型的新的显著性目标检测方法. 自动化学报, 2016, 42(8): 1259-1273. doi: 10.16383/j.aas.2016.c150387
Citation: YANG Sai, ZHAO Chun-Xia, XU Wei. A Novel Salient Object Detection Method Using Bag-of-features. ACTA AUTOMATICA SINICA, 2016, 42(8): 1259-1273. doi: 10.16383/j.aas.2016.c150387

一种基于词袋模型的新的显著性目标检测方法

doi: 10.16383/j.aas.2016.c150387
基金项目: 

国家自然科学基金 61272220

详细信息
    作者简介:

    赵春霞 南京理工大学计算机科学与工程学院教授.主要研究方向为智能机器人技术和图像处理.E-mail:zhaochx@mail.njust.edu.cn;

    徐威 南京理工大学计算机科学与工程学院博士研究生.2009年获得南京理工大学计算机科学与技术学院学士学位.主要研究方向为图像处理和计算机视觉.E-mail:xuwei904@163.com

    通讯作者:

    杨赛 南通大学电气工程学院讲师.2015年获得南京理工大学计算机科学与工程学院博士学位,主要研究方向为计算机视觉与机器学习.本文通信作者.E-mail:yangsai166@126.com

A Novel Salient Object Detection Method Using Bag-of-features

Funds: 

National Natural Science Foundation of China 61272220

More Information
    Author Bio:

    ZHAO Chun-Xia Professor at the School of Computer Science and Engineering, Nanjing University of Science and Technology. Her research interest covers intelligent robotics and image processing. E-mail: zhaochx@mail.njust.edu.cn

    XU Wei Ph.D. candidate at the School of Computer Science and Engineering, Nanjing University of Science and Technology. He received his bachelor degree from the School of Computer Science and Technology, Nanjing University of Science and Technology in 2009. His research interest covers image processing and computer vision. E-mail: xuwei904@163.com

    Corresponding author: YANG Sai Lecturer at the School of Electrical Engineering, Nantong University. She received her Ph.D. degree from the School of Computer Science and Engineering, Nanjing University of Science and Technology in 2015. Her research interest covers computer vision and machine learning. E-mail: yangsai166@126.com
  • 摘要: 提出一种基于词袋模型的新的显著性目标检测方法.该方法首先利用目标性计算先验概率显著图,然后在图像的超像素区域内建立词袋模型,并基于此特征计算条件概率显著图,最后根据贝叶斯推断将先验概率和条件概率显著图进行合成.在ASD、SED以及SOD显著性目标公开数据库上与目前16种主流方法进行对比,实验结果表明本文方法具有更高的精度和更好的查全率,能够一致高亮地凸显图像中的显著性目标.

    Abstract: A novel salient object detection method based on the bag-of-features model is proposed. The method first computes a prior probability saliency map using objectness, then builds a bag-of-features model over the superpixel regions of the image and computes a conditional probability saliency map from this feature, and finally combines the prior and conditional probability saliency maps by Bayesian inference. Compared with sixteen current mainstream methods on the public ASD, SED, and SOD salient object databases, experimental results show that the proposed method achieves higher precision and better recall, and consistently highlights the salient objects in an image.
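
    As a rough illustration of the Bayesian fusion step summarized above, the following Python sketch combines a prior saliency map with foreground/background feature likelihoods via Bayes' rule. It is not the authors' implementation: the function name bayesian_saliency and all array names are hypothetical, and it assumes the prior (e.g. from objectness) and the two likelihood maps (e.g. from the superpixel bag-of-features model) are already given as H x W arrays with values in [0, 1].

        import numpy as np

        def bayesian_saliency(prior, likelihood_fg, likelihood_bg, eps=1e-8):
            # Posterior saliency by Bayes' rule:
            #   p(fg | x) = p(fg) p(x | fg) / (p(fg) p(x | fg) + (1 - p(fg)) p(x | bg))
            # prior, likelihood_fg, likelihood_bg: H x W arrays with values in [0, 1].
            prior = np.clip(prior, 0.0, 1.0)
            numerator = prior * likelihood_fg
            denominator = numerator + (1.0 - prior) * likelihood_bg + eps
            return numerator / denominator

        # Toy usage: under a uniform 0.3 prior, the central block whose features are far
        # more likely under the foreground model receives a much higher posterior saliency.
        prior = np.full((4, 4), 0.3)
        lik_fg = np.full((4, 4), 0.2)
        lik_bg = np.full((4, 4), 0.8)
        lik_fg[1:3, 1:3], lik_bg[1:3, 1:3] = 0.9, 0.1
        print(np.round(bayesian_saliency(prior, lik_fg, lik_bg), 2))
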
  • 图  1  背景超像素示意图

    Fig.  1  Illustration of background superpixels

    图  2  不同超像素数目下的平均F值

    Fig.  2  F-measure under different superpixel numbers

    图  3  不同单词数目下的平均F值

    Fig.  3  F-measure under different visual words numbers

    图  4  ASD数据库上本文方法与其他16种流行算法的PR曲线

    Fig.  4  Precision-recall curves of our method and sixteen state-of-the-art methods on ASD database

    图  5  SED1数据库上本文方法与其他16种流行算法的PR曲线

    Fig.  5  Precision-recall curves of our method and sixteen state-of-the-art methods on SED1 database

    图  6  SED2数据库上本文方法与其他16种流行算法的PR曲线

    Fig.  6  Precision-recall curves of our method and sixteen state-of-the-art methods on SED2 database

    图  7  SOD数据库上本文方法与其他16种流行算法的PR曲线

    Fig.  7  Precision-recall curves of our method and sixteen state-of-the-art methods on SOD database

    图  8  本文方法与16种流行算法的平均查准率、平均查全率、F度量值对比图

    Fig.  8  Precision,recall and F-measure of our method and sixteen state-of-the-art methods

    图  9  ASD数据库上本文方法与16种流行算法的Fβ-K曲线

    Fig.  9  Fβ-K curves of our method and sixteen state-of-the-art methods on ASD database

    图  10  SED1数据库上本文方法与16种流行算法的Fβ-K曲线

    Fig.  10  Fβ-K curves of our method and sixteen state-of-the-art methods on SED1 database

    图  11  SED2数据库上本文方法与16种流行算法的Fβ-K曲线

    Fig.  11  Fβ-K curves of our method and sixteen state-of-the-art methods on SED2 database

    图  12  SOD数据库上本文方法与16种流行算法的Fβ-K曲线

    Fig.  12  Fβ-K curves of our method and sixteen state-of-the-art methods on SOD database

    图  13  本文方法与其他16种流行算法的MAE值对比图

    Fig.  13  MAE of our method and sixteen state-of-the-art methods

    图  14  ASD数据库上本文方法与基于像素的典型显著性检测算法的视觉效果对比图

    Fig.  14  Visual comparison with detection methods based on pixels on ASD database

    图  15  SED1数据库上本文方法与基于像素的典型显著性检测算法的视觉效果对比图

    Fig.  15  Visual comparison with detection methods based on pixels on SED1 database

    图  16  SED2数据库上本文方法与基于像素的典型显著性检测算法的视觉效果对比图

    Fig.  16  Visual comparison with detection methods based on pixels on SED2 database

    图  17  SOD数据库上本文方法与基于像素的典型显著性检测算法的视觉效果对比图

    Fig.  17  Visual comparison with detection methods based on pixels on SOD database

    图  18  ASD数据库上本文方法与基于区域的典型显著性检测算法的视觉效果对比图

    Fig.  18  Visual comparison with detection methods based on regions on ASD database

    图  19  SED1数据库上本文方法与基于区域的典型显著性检测算法的视觉效果对比图

    Fig.  19  Visual comparison with detection methods based on regions on SED1 database

    图  20  SED2数据库上本文方法与基于区域的典型显著性检测算法的视觉效果对比图

    Fig.  20  Visual comparison with detection methods based on regions on SED2 database

    图  21  SOD数据库上本文方法与基于区域的典型显著性检测算法的视觉效果对比图

    Fig.  21  Visual comparison with detection methods based on regions on SOD database

    图  22  ASD数据库上本文方法与基于贝叶斯模型的典型显著性检测算法的视觉效果对比图

    Fig.  22  Visual comparison with detection methods based on Bayesian model on ASD database

    图  23  SED1数据库上本文方法与基于贝叶斯模型的典型显著性检测算法的视觉效果对比图

    Fig.  23  Visual comparison with detection methods based on Bayesian model on SED1 database

    图  24  SED2数据库上本文方法与基于贝叶斯模型的典型显著性检测算法的视觉效果对比图

    Fig.  24  Visual comparison with detection methods based on Bayesian model on SED2 database

    图  25  SOD数据库上本文方法与基于贝叶斯模型的典型显著性检测算法的视觉效果对比图

    Fig.  25  Visual comparison with detection methods based on Bayesian model on SOD database

Publication History
  • Received:  2015-06-23
  • Accepted:  2015-10-10
  • Published:  2016-08-01
