
Target Segmentation of Infrared Image Using Fused Saliency Map and Efficient Subwindow Search

LIU Song-Tao, LIU Zhen-Xing, JIANG Ning

Citation: LIU Song-Tao, LIU Zhen-Xing, JIANG Ning. Target Segmentation of Infrared Image Using Fused Saliency Map and Efficient Subwindow Search. ACTA AUTOMATICA SINICA, 2018, 44(12): 2210-2221. doi: 10.16383/j.aas.2018.c170142

doi: 10.16383/j.aas.2018.c170142


Funds: 

China Postdoctoral Science Foundation 2016T90979

China Postdoctoral Science Foundation 2015M572694

National Natural Science Foundation of China 61303192

More Information
    Author Bio:

     LIU Zhen-Xing  Lecturer in the Department of Information System, Dalian Naval Academy. He received his bachelor, master, and Ph.D. degrees from Dalian Naval Academy in 2002, 2007, and 2010, respectively. His research interest covers optoelectronic engineering and electronic countermeasures. E-mail: liuzhenxing@msn.com

     JIANG Ning  Professor in the Department of Information System, Dalian Naval Academy. He received his bachelor degree from Naval Electronic Engineering Academy in 1987, and his master and Ph.D. degrees from Dalian University of Technology in 1996 and 2000, respectively. He completed postdoctoral research at Nanjing University of Science and Technology. His research interest covers electronic countermeasures and information operations. E-mail: jiangning68@sohu.com

    Corresponding author: LIU Song-Tao  Associate professor in the Department of Information System, Dalian Naval Academy. He received his bachelor, master, and Ph.D. degrees from Naval Aeronautical Engineering Institute in 2000, 2003, and 2006, respectively, and completed postdoctoral research at Dalian University of Technology. His research interest covers image processing, optoelectronic engineering, and electronic countermeasures. Corresponding author of this paper. E-mail: navylst@163.com
  • Abstract: To segment infrared image targets quickly and accurately, an infrared target segmentation method based on a fused saliency map and efficient subwindow search is proposed. After obtaining image superpixels, an enhanced Sigma feature is extracted for each region, and a local saliency map is constructed by considering neighborhood contrast, background contrast, spatial distance, and region size. A global saliency map is then built using global kernel density estimation, and the local and global saliency maps are fused to achieve saliency detection. Finally, the efficient subwindow search method is applied to detect and select targets, completing the infrared target segmentation. Experimental results show that the saliency maps produced by the new method highlight the target region uniformly with clear edges and suppress background clutter effectively, enabling fast and accurate target segmentation. (A rough illustrative sketch of this pipeline is given below.)
    1)  Recommended by Associate Editor LIU Yue-Hu
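
The following is a minimal, illustrative Python/NumPy sketch of the local/global saliency fusion idea outlined in the abstract. The region features, contrast weighting, kernel bandwidth, and fusion weight are simplified placeholders rather than the paper's actual formulations, and `best_window` is a naive exhaustive stand-in for the branch-and-bound efficient subwindow search of Lampert et al. [22].

```python
import numpy as np

def local_saliency(feats, centers, sizes):
    """Toy local saliency: each region's feature contrast to all other regions,
    weighted by spatial proximity and region size (illustrative only; not the
    paper's exact neighborhood/background contrast terms)."""
    n = len(feats)
    sal = np.zeros(n)
    for i in range(n):
        d_feat = np.linalg.norm(feats - feats[i], axis=1)
        d_spat = np.linalg.norm(centers - centers[i], axis=1)
        w = sizes * np.exp(-d_spat / (d_spat.max() + 1e-9))
        sal[i] = np.sum(w * d_feat) / (np.sum(w) + 1e-9)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-9)

def global_saliency(feats, bandwidth=0.5):
    """Toy global saliency via kernel density estimation: regions whose
    feature vectors are rare (low density) receive high saliency."""
    n = len(feats)
    density = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(feats - feats[i], axis=1)
        density[i] = np.mean(np.exp(-(d / bandwidth) ** 2))
    sal = 1.0 - density / (density.max() + 1e-9)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-9)

def best_window(saliency_map, win=16, step=4):
    """Naive stand-in for efficient subwindow search: exhaustively score
    fixed-size windows by mean saliency and return the best (y, x, h, w)."""
    h, w = saliency_map.shape
    best, best_score = None, -np.inf
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            score = saliency_map[y:y + win, x:x + win].mean()
            if score > best_score:
                best, best_score = (y, x, win, win), score
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(50, 7))        # stand-in for enhanced Sigma features
    centers = rng.uniform(0.0, 1.0, size=(50, 2))
    sizes = rng.uniform(0.5, 1.5, size=50)
    # The 0.5/0.5 fusion weight is an assumption, not the paper's fusion rule.
    fused = 0.5 * local_saliency(feats, centers, sizes) + 0.5 * global_saliency(feats)
    print("fused region saliency (first 5 regions):", np.round(fused[:5], 3))
```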
  • Fig.  1  Flow chart of infrared target segmentation algorithm

    Fig.  2  Saliency maps considering different influencing factors

    Fig.  3  Saliency map of enhanced Sigma feature and gray feature

    Fig.  4  The importance of local and global saliency maps

    Fig.  5  Saliency maps of fourteen saliency detection methods

    Fig.  6  PR and ROC curves of fourteen saliency detection methods

    Fig.  7  The segmentation results of different segmentation methods

    Table  1  Segmentation precision and computational time of different segmentation methods (an illustrative F-measure computation is sketched below)

    Metric                                New method  Ref. [11]  Ref. [23]  Ref. [32]  Ref. [21]  Ref. [25]
    Segmentation precision (F-measure)    0.7268      0.2288     0.5970     0.5709     0.1988     0.5536
    Computation time (s)                  0.65        0.31       6.7        4.29       2.6        0.13
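
The segmentation precision in Table 1 is reported as an F-measure. As a worked illustration only (this page does not restate the exact definition or β used by the authors), the sketch below computes a weighted F-measure between a toy predicted mask and a toy ground-truth mask; the β² = 0.3 weighting is a convention common in saliency-detection evaluation (cf. reference [25]) and is an assumption here, as are the toy masks.

```python
import numpy as np

def f_measure(pred, gt, beta2=0.3):
    """Weighted F-measure between a predicted binary mask and ground truth.
    beta2 = 0.3 emphasizes precision, a common choice in saliency evaluation;
    set beta2 = 1.0 for the ordinary F1 score."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / (pred.sum() + 1e-9)
    recall = tp / (gt.sum() + 1e-9)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-9)

if __name__ == "__main__":
    gt = np.zeros((64, 64), dtype=bool)
    gt[20:40, 20:40] = True      # toy ground-truth target region
    pred = np.zeros_like(gt)
    pred[22:42, 18:38] = True    # toy segmentation result
    print(f"F-measure: {f_measure(pred, gt):.4f}")
```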
  • [1] Liu Song-Tao, Yang Shao-Qing. Segmentation of infrared weak and small target image based on cellular automata. Journal of Infrared and Millimeter Waves, 2008, 27(1):42-46 doi: 10.3321/j.issn:1001-9014.2008.01.010
    [2] Rajchl M, Lee M C H, Oktay O, Kamnitsas K, Passerat-Palmbach J, Bai W J, et al. DeepCut:object segmentation from bounding box annotations using convolutional neural networks. IEEE Transactions on Medical Imaging, 2016, 36(2):674-683 http://d.old.wanfangdata.com.cn/NSTLQK/NSTL_QKJJ029696105/
    [3] Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11):1254-1259 doi: 10.1109/34.730558
    [4] Ma Y F, Zhang H J. Contrast-based image attention analysis by using fuzzy growing. In: Proceedings of the 11th ACM International Conference on Multimedia. Berkeley, CA, USA: ACM, 2003. 374-381 http://www.mendeley.com/catalog/contrastbased-image-attention-analysis-using-fuzzy-growing/
    [5] Hou X D, Zhang L Q. Saliency detection: a spectral residual approach. In: Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition. Minneapolis, Minnesota, USA: IEEE, 2007. 1-8 http://www.mendeley.com/catalog/saliency-detection-spectral-residual-approach/
    [6] Guo C L, Ma Q, Zhang L M. Spatio-temporal saliency detection using phase spectrum of quaternion fourier transform. In: Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition. Anchorage, AK, USA: IEEE, 2008. 1-8 http://www.mendeley.com/catalog/spatiotemporal-saliency-detection-using-phase-spectrum-quaternion-fourier-transform/
    [7] Jung C, Kim C. A unified spectral-domain approach for saliency detection and its application to automatic object segmentation. IEEE Transactions on Image Processing, 2012, 21(3):1272-1283 doi: 10.1109/TIP.2011.2164420
    [8] He X, Jing H Y, Han Q, Niu X M. Salient region detection combining spatial distribution and global contrast. Optical Engineering, 2012, 51(4):Article No. 047007 http://d.old.wanfangdata.com.cn/NSTLQK/NSTL_QKJJ0226257555/
    [9] Zhang Y B, Han J W, Guo L. Image saliency detection based on histogram. Journal of Computational Information Systems, 2014, 10(6):2417-2424
    [10] Xu L F, Li H L, Zeng L Y, Ngan K N. Saliency detection using joint spatial-color constraint and multi-scale segmentation. Journal of Visual Communication and Image Representation, 2013, 24(4):465-476 doi: 10.1016/j.jvcir.2013.02.007
    [11] Liu S T, Shen T S, Dai Y. Infrared image segmentation method based on spatial coherence histogram and maximum entropy. In: Proceedings of the 2014 SPIE 9275, Infrared, Millimeter-Wave, and Terahertz Technologies Ⅲ. Beijing, China: SPIE, 2014. 1-8 http://www.deepdyve.com/lp/spie/infrared-image-segmentation-method-based-on-spatial-coherence-JDWSMYXGT0
    [12] Li J, Levine M D, An X J, He H G. Saliency detection based on frequency and spatial domain analyses. In: Proceedings of the 2011 British Machine Vision Conference. Dundee, Scotland, UK: BMVA, 2011. 1-11
    [13] Cheng M M, Zhang G X, Mitra N J, Huang X L, Hu S M. Global contrast based salient region detection. In: Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition. Providence, Rhode Island, USA: IEEE, 2011. 409-416 https://www.ncbi.nlm.nih.gov/pubmed/26353262
    [14] Perazzi F, Krähenbuhl P, Pritch Y, Hornung A. Saliency filters: contrast based filtering for salient region detection. In: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, Rhode Island, USA: IEEE, 2012. 733-740 http://dl.acm.org/citation.cfm?id=2355041
    [15] Liu T, Sun J, Zheng N N, Tang X O, Shum H Y. Learning to detect a salient object. In: Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition. Minneapolis, Minnesota, USA: IEEE, 2007. 1-8 http://ieeexplore.ieee.org/xpls/icp.jsp?arnumber=4270072
    [16] Yang J M, Yang M H. Top-down visual saliency via joint CRF and dictionary learning. In: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Rhode Island, USA: IEEE, 2012. 2296-2303 https://www.mendeley.com/catalogue/topdown-visual-saliency-via-joint-crf-dictionary-learning/
    [17] Kocak A, Cizmeciler K, Erdem A, Erdem E. Top down saliency estimation via superpixel-based discriminative dictionaries. In: Proceedings of the 2014 British Machine Vision Conference. Nottingham, UK: BMVA, 2014. 1-12
    [18] Jiang H Z, Wang J D, Yuan Z J, Wu Y, Zheng N N, Li S P. Salient object detection: a discriminative regional feature integration approach. In: Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, Oregon, USA: IEEE, 2013. 2083-2090 doi: 10.1007/s11263-016-0977-3
    [19] Zhang L, Tong M H, Marks T K, Shan H, Cottrell G W. SUN:a Bayesian framework for saliency using natural statistics. Journal of Vision, 2008, 8(7):Article No. 32 doi: 10.1167/8.7.32
    [20] Cholakkal H, Johnson J, Rajan D. Backtracking ScSPM image classifier for weakly supervised top-down saliency. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016. 5278-5287 http://ieeexplore.ieee.org/document/7780939/
    [21] Peng H P, Li B, Ling H B, Hu W M, Xiong W H, Maybank S J. Salient object detection via structured matrix decomposition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(4):818-832 doi: 10.1109/TPAMI.2016.2562626
    [22] Lampert C H, Blaschko M B, Hofmann T. Beyond sliding windows: object localization by efficient subwindow search. In: Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition. Anchorage, Alaska, USA: IEEE, 2008. 1-8
    [23] Ye L, Yuan J S, Xue P, Tian Q. Saliency density maximization for object detection and localization. In: Proceedings of the 10th Asian Conference on Computer Vision. Queenstown, New Zealand: Springer-Verlag, 2010. 396-408 https://www.mendeley.com/catalogue/saliency-density-maximization-object-detection-localization/
    [24] Gidaris S, Komodakis N. LocNet: improving localization accuracy for object detection. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016. 789-798
    [25] Achanta R, Hemami S, Estrada F, Susstrunk S. Frequency-tuned salient region detection. In: Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami, FL, USA: IEEE, 2009. 1597-1604 http://www.mendeley.com/catalog/frequencytuned-salient-region-detection/
    [26] Liu Z, Shi R, Shen L Q, Xue Y Z, Ngan K N, Zhang Z Y. Unsupervised salient object segmentation based on kernel density estimation and two-phase graph cut. IEEE Transactions on Multimedia, 2012, 14(4):1275-1289 doi: 10.1109/TMM.2012.2190385
    [27] Erdem E, Erdem A. Visual saliency estimation by nonlinearly integrating features using region covariances. Journal of Vision, 2013, 13(4):Article No. 11 doi: 10.1167/13.4.11
    [28] Achanta R, Shaji A, Smith K, Lucchi A, Fua P, Süsstrunk S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(11):2274-2282 doi: 10.1109/TPAMI.2012.120
    [29] Tuzel O, Porikli F, Meer P. Region covariance: a fast descriptor for detection and classification. In: Proceedings of the 9th European Conference on Computer Vision. Graz, Austria: Springer, 2006. 589-600 doi: 10.1007/11744047_45
    [30] Liu Song-Tao, Chang Chun, Shen Tong-Sheng. An image feature fusion method based on region covariance. Electronics Optics and Control, 2015, 22(2):7-11, 16 doi: 10.3969/j.issn.1671-637X.2015.02.002
    [31] Hong X P, Chang H, Shan S G, Chen X L, Gao W. Sigma set: a small second order statistical region descriptor. In: Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami, FL, USA: IEEE, 2009. 1802-1809 http://www.mendeley.com/catalog/sigma-set-small-second-order-statistical-region-descriptor/
    [32] Liu Song-Tao, Huang Jin-Tao, Liu Zhen-Xing. An ESS target detection method based on Itti's saliency map and maximum saliency density. Electronics Optics and Control, 2015, 22(12):9-14 http://www.cnki.com.cn/Article/CJFDTOTAL-DGKQ201512002.htm
    [33] Harel J, Koch C, Perona P. Graph-based visual saliency. In: Proceedings of the 2006 Conference Advances in Neural Information Processing Systems. Vancouver, Canada: MIT, 2006. 545-552 https://ieeexplore.ieee.org/document/6287326?reload=true&arnumber=6287326
    [34] Li J, Levine M D, An X J, Xu X, He H G. Visual saliency based on scale-space analysis in the frequency domain. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(4):996-1010 doi: 10.1109/TPAMI.2012.147
    [35] Achanta R, Süsstrunk S. Saliency detection using maximum symmetric surround. In: Proceedings of the 17th IEEE International Conference on Image Processing. Hong Kong, China: IEEE, 2010. 2653-2656 https://www.mendeley.com/catalogue/saliency-detection-using-maximum-symmetric-surround/
    [36] Rahtu E, Kannala J, Salo M. Segmenting salient objects from images and videos. In: Proceedings of the 11th European Conference on Computer Vision: Part V. Crete, Greece: Springer, 2010. 366-379 https://www.mendeley.com/catalogue/segmenting-salient-objects-images-videos/
    [37] Hou X D, Harel J, Koch C. Image signature:highlighting sparse salient regions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(1):194-201 doi: 10.1109/TPAMI.2011.146
    [38] Murray N, Vanrell M, Otazu X, Parraga C A. Saliency estimation using a non-parametric low-level vision model. In: Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition. Providence, Rhode Island, USA: IEEE, 2011. 433-440 https://www.mendeley.com/catalogue/saliency-estimation-using-nonparametric-lowlevel-vision-model/
    [39] Alpert S, Galun M, Basri R, Brandt A. Image segmentation by probabilistic bottom-up aggregation and cue integration. In: Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition. Minneapolis, Minnesota, USA: IEEE, 2007. 1-8 https://www.ncbi.nlm.nih.gov/pubmed/21690639
Figures (7) / Tables (1)
Metrics
  • Article views:  1935
  • Full-text HTML views:  274
  • PDF downloads:  457
  • Citations:  0
Publication history
  • Received:  2017-03-17
  • Accepted:  2017-11-06
  • Published:  2018-12-20
