
Interactive Multi-label Image Segmentation With Multi-layer Tumors Automata

Chan Sixian, Zhou Xiaolong, Zhang Zhuo, Chen Shengyong

Citation: Chan Sixian, Zhou Xiaolong, Zhang Zhuo, Chen Shengyong. Interactive Multi-label Image Segmentation With Multi-layer Tumors Automata. ACTA AUTOMATICA SINICA, 2017, 43(10): 1829-1840. doi: 10.16383/j.aas.2017.e160186


doi: 10.16383/j.aas.2017.e160186

Funds: 

Supported by the National Natural Science Foundation of China (11302195, 61273286, 61325019, U1509207, 61403342) and the Hubei Key Laboratory of Intelligent Vision Based Monitoring for Hydroelectric Engineering (2014KLA09)

More Information
    Author Bio:

    Xiaolong Zhou received the Ph.D. degree in mechanical engineering from the Department of Mechanical and Biomedical Engineering, City University of Hong Kong, Hong Kong, in 2013. He joined Zhejiang University of Technology, Zhejiang, China, in February 2014, where he currently serves as an Associate Professor at the College of Computer Science and Technology. From April 2015 to May 2016, he worked as a Senior Research Fellow at the School of Computing, University of Portsmouth, Portsmouth, UK. He is a member of the IEEE and the ACM. He received the T. J. Tarn Best Paper Award at ROBIO 2012 and the ICRA 2016 CEB Award for Best Reviewers. His research interests include visual tracking, gaze estimation, 3D reconstruction, and their applications in various fields. He has authored over 50 peer-reviewed international journal and conference papers. He has served as a Program Committee Member for ROBIO 2015, ICIRA 2015, SMC 2015, HSI 2016, ICIA 2016, and ROBIO 2016. E-mail: zxl@zjut.edu.cn

    Zhuo Zhang received the B.E. degree in computer science and technology from Zhejiang University of Technology in 2015. He is currently pursuing the M.E. degree in computer science and technology at Zhejiang University of Technology. His research interests include machine learning and visual object detection. E-mail: imzhuo@foxmail.com

    Shengyong Chen received the Ph.D. degree in computer vision from City University of Hong Kong, Hong Kong, in 2003. He is currently a Professor at Tianjin University of Technology and Zhejiang University of Technology, China. He received a fellowship from the Alexander von Humboldt Foundation of Germany and worked at the University of Hamburg from 2006 to 2007. His research interests include computer vision, robotics, and image analysis. Dr. Chen is a Fellow of the IET and a Senior Member of the IEEE and CCF. He has published over 100 scientific papers in international journals. He received the National Outstanding Youth Foundation Award of China in 2013. E-mail: sy@ieee.org

    Corresponding author: Sixian Chan received the bachelor's degree from Anhui University of Architecture in 2012. He is a Ph.D. candidate in the Department of Computer Science and Technology, Zhejiang University of Technology. His research interests include image processing, machine learning, and video tracking. Corresponding author of this paper. E-mail: sxchan@163.com
  • Abstract: Interactive segmentation is valuable for selecting objects of interest in an image. It occupies an important place in image processing, has a wide range of applications, and remains an active research topic. However, performing interactive segmentation pixel by pixel is usually time-consuming. This paper proposes a new segmentation method: within the Growcut framework, a super-pixel-based tumor automata (TA) segmentation is developed. Super-pixels provide strong boundary information to guide the segmentation and can easily be obtained by an over-segmentation algorithm (see the sketch below). TA follows a principle similar to that of cellular automata (CA): given a small number of user-marked super-pixels on the target, TA completes the segmentation task and runs faster than Growcut. In addition, to obtain the best result, a level set and a multi-layer TA are applied to refine the object boundary. Experiments on the VOC challenge segmentation dataset show that the proposed algorithm is efficient, accurate, and able to handle multi-object segmentation tasks.
    Recommended by Associate Editor Qianchuan Zhao
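
    The over-segmentation step mentioned in the abstract can be illustrated with a minimal sketch. The references point to SLIC [18] as one super-pixel algorithm; the Python snippet below assumes scikit-image is available, and the file name, the choice of n_segments=300 (loosely matching the roughly 300 super-pixels reported in Table II), and the variable names are illustrative placeholders rather than the authors' implementation.

    import numpy as np
    from skimage import io, segmentation

    # Over-segment the image into super-pixels with SLIC (placeholder file name).
    image = io.imread("example.jpg")
    label_map = segmentation.slic(image, n_segments=300, compactness=10, start_label=0)

    # Mean colour of every super-pixel, used as its feature vector C_p in the automata.
    num_sp = int(label_map.max()) + 1
    colors = np.array([image[label_map == s].mean(axis=0) for s in range(num_sp)])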
  • Fig.  1  The final results of our proposed interactive segmentation system. (a), (b) and (c) are single object segmentations and (d) is the multi-object segmentation.

    Fig.  2  The process of our interactive segmentation algorithm.

    Fig.  3  The framework of proposed algorithm.

    Fig.  4  The super-pixel neighborhood in our algorithm.

    Fig.  5  Object's boundary optimized by the level set.

    Fig.  6  The results given by the multi-layer TA.

Fig.  7  Overlap score vs. number of seeds.

Fig.  8  Multi-label segmentation results.

    Fig.  9  The process of the convergence and corresponding results.

Fig.  10  The segmentation results with different neighborhood measures. (a), (g) and (b), (h) are the original images and the ground truth. (c), (i) and (d), (j) are the results obtained by using the feature-space neighborhood measure. (e), (k) and (f), (l) are our results using the new neighborhood measure.

Fig.  11  Failure examples. The results show that our method is somewhat sensitive to color.

    Fig.  12  The results compared with different segmentation methods. The first row is the result of the BGPA [17]. The second row is the result of the graph-cut [6]. The third row is the result of the regioncut [10]. The last row is the result of our method.

    Fig.  13  The results compared with different segmentation methods. The first row is the result of the BGPA [17]. The second row is the result of the graph-cut [6]. The third row is the result of the regioncut [10]. The last row is the result of our method.

    Fig.  14  The results compared with different segmentation methods. The first row is the result of the BGPA [17]. The second row is the result of the graph-cut [6]. The third row is the result of the regioncut [10]. The last row is the result of our method.

    Algorithm 1. Search super-pixel neighborhoods algorithm
    $//$ For each tumor, $s$ is a super-pixel; $p$ denotes a pixel on the shared boundary.
    $//$ $Neighborhood\_S$ stores the final neighborhood information of each super-pixel.
    for $\forall\, s\in{S}$ do
        $//$ Search the super-pixels adjacent to $s$
        $Temp\_S \leftarrow {L}(s)$ $//$ the candidate neighbors of $s$
        for $\forall\, s'\in Temp\_S$ do
            $Temp\_P \leftarrow P(boundary)$ $//$ the number of boundary pixels shared by $s$ and $s'$
            if $Temp\_P > threshold$ then
                $Neighborhood\_S(s) \leftarrow Neighborhood\_S(s)\cup\{s'\}$
            end if
        end for
    end for
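
    As an illustration of Algorithm 1, the sketch below builds the super-pixel neighborhood from a label map by counting the pixels on every shared border and keeping only the pairs whose border is longer than a threshold. This is a hedged Python reconstruction; the function name build_superpixel_adjacency and the default boundary_threshold are assumptions, not the authors' code.

    from collections import defaultdict

    def build_superpixel_adjacency(label_map, boundary_threshold=5):
        # Count the pixels on the border shared by every pair of adjacent labels.
        border_len = defaultdict(int)
        for a_arr, b_arr in ((label_map[:, :-1], label_map[:, 1:]),   # horizontally adjacent pixels
                             (label_map[:-1, :], label_map[1:, :])):  # vertically adjacent pixels
            mask = a_arr != b_arr
            for a, b in zip(a_arr[mask].ravel(), b_arr[mask].ravel()):
                border_len[(min(int(a), int(b)), max(int(a), int(b)))] += 1
        # Keep only pairs whose shared border exceeds the threshold (Algorithm 1).
        neighbours = defaultdict(set)
        for (a, b), length in border_len.items():
            if length > boundary_threshold:
                neighbours[a].add(b)
                neighbours[b].add(a)
        return neighbours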
    Algorithm 2. Tumors automata evolution rule
    $//$ For each tumor
    for $\forall\, p\in P$ do
         $//$ Copy previous state
         $l_p^{t+1}=l_p^t$
         $\theta_p^{t+1}=\theta_p^t$
         $//$ Neighbors try to attack the current tumor
         for $\forall\, q\in N(p)$ do
             if $g(\|\vec C_p-\vec C_q\|_2)\cdot \theta_q^t > \theta_p^t$ then
                 $l_p^{t+1}=l_q^t$
                 $\theta_p^{t+1}=g(\|\vec C_p - \vec C_q\|_2)\cdot \theta_q^t$
             end if
         end for
    end for
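
    A minimal Python sketch of the evolution rule in Algorithm 2 is given below. It performs one synchronous sweep over the super-pixels: every neighbour q tries to attack p with strength g(||C_p - C_q||_2) * theta_q, and p adopts the label of the strongest successful attacker. The function and variable names are assumptions for illustration; in Growcut-style automata, g is typically a monotonically decreasing function bounded in [0, 1], e.g. g(x) = 1 - x / max_color_norm.

    import numpy as np

    def tumor_automata_step(labels, strengths, colors, neighbours, g):
        # labels[p], strengths[p]: current label and strength of super-pixel p.
        # colors[p]: mean colour vector of p; neighbours: output of Algorithm 1.
        new_labels = labels.copy()
        new_strengths = strengths.copy()
        changed = False
        for p, neigh in neighbours.items():
            for q in neigh:
                # Neighbour q tries to attack super-pixel p.
                attack = g(np.linalg.norm(colors[p] - colors[q])) * strengths[q]
                if attack > new_strengths[p]:  # strongest attacker wins this sweep
                    new_labels[p] = labels[q]
                    new_strengths[p] = attack
                    changed = True
        return new_labels, new_strengths, changed

    # Usage: seed the user-marked super-pixels with their labels and strength 1, all
    # others with strength 0, and iterate tumor_automata_step until changed is False
    # (Table II reports convergence in 14 iterations for the proposed method).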

    Table  Ⅰ  Accuracy Rates on the Harder Dataset [23] Based on Different Methods

    Method                        Mean accuracy rate (%)
    Pixel-based
        Growcut                   83.55
        RW                        85.23
        Graph-cut                 84.34
    Super-pixel based
        BGPA                      87.38
        Regioncut                 89.26
        Our method                89.85

    Table  Ⅱ  Comparison of Segmentation Efficiency

    Method               Time of convergence (s)   Iterations   Loops per iteration                         Total time (s)
    Graph-cut            2.97                      345          number of image pixels ($180\times 271$)    3.28
    Regioncut            1.45                      169          number of image pixels ($180\times 271$)    5.82
    Proposed algorithm   1.03                      14           number of super-pixels ($\approx 300$)      4.75
  • [1] V. Kolmogorov and R. Zabih, "What energy functions can be minimized via graph cuts? " IEEE Trans. Patt. Anal. Mach. Intell., vol. 26, no. 2, pp. 147-159, Feb. 2004. doi: 10.1109/TPAMI.2004.1262177
    [2] M. A. G. Carvalho and A. L. Costa, "Combining hierarchical structures on graphs and normalized cut for image segmentation, " New Frontiers in Graph Theory, Y. G. Zhang, Ed. Rijeka, Croatia: InTech Open Access Publisher, 2012.
    [3] J. Carreira and C. Sminchisescu, "Constrained parametric min-cuts for automatic object segmentation, " in Proc. 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 2010, pp. 3241-3248.
    [4] D. Kuettel and V. Ferrari, "Figure-ground segmentation by transferring window masks, " in Proc. 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 2012, pp. 558-565.
    [5] C. Rother, V. Kolmogorov, and A. Blake, ""GrabCut": Interactive foreground extraction using iterated graph cuts, " ACM Trans. Graph., vol. 23, no. 3, pp. 309-314, Aug. 2004. doi: 10.1145/1015706
    [6] X. Bai and G. Sapiro, "A geodesic framework for fast interactive image and video segmentation and matting, " University of Minnesota, Minnesota, USA, Tech. Rep. 2171, 2007.
    [7] L. Yu and C. S. Li, "Low depth of field image automatic segmentation based on graph cut, " J. Autom., no. 10, pp. 1471-1481, 2014.
    [8] O. Sener, K. Ugur, and A. A. Alatan, "Error-tolerant interactive image segmentation using dynamic and iterated graph-cuts, " in Proc. 2nd ACM International Workshop on Interactive Multimedia on Mobile and Portable Devices, New York, NY, USA, 2012, pp. 9-16.
    [9] L. Grady, "Random walks for image segmentation, " IEEE Trans. Patt. Anal. Mach. Intell., vol. 28, no. 11, pp. 1768-1783, Nov. 2006. doi: 10.1109/TPAMI.2006.233
    [10] O. J. Arndt, B. Scheuermann, and B. Rosenhahn, "'Regioncut'-interactive multi-label segmentation utilizing cellular automaton, " in Proc. 2013 IEEE Workshop on Applications of Computer Vision (WACV), Tampa, FL, USA, 2013, pp. 309-316.
    [11] V. Vezhnevets and V. Konouchine, ""GrowCut": Interactive multi-label N-D image segmentation by cellular automata, " in Proc. Graphicon, Novosibirsk Akademgorodok, Russia, 2005, pp. 150-156.
    [12] J. Von Neumann and A. W. Burks, Theory of Self-Reproducing Automata. Champaign, IL, USA: University of Illinois Press, 1966.
    [13] A. Blake, C. C. E. Rother, and P. Anandan, "Foreground extraction using iterated graph cuts, " U. S. Patent 7 660 463, Feb. 9, 2010.
    [14] R. Dondera, V. Morariu, Y. L. Wang, and L. Davis, "Interactive video segmentation using occlusion boundaries and temporally coherent superpixels, " in Proc. 2014 IEEE Winter Conference on Applications of Computer Vision (WACV), Steamboat Springs, CO, USA, 2014, pp. 784-791.
    [15] M. Ghafarianzadeh, M. B. Blaschko, and G. Sibley, "Unsupervised spatio-temporal segmentation with sparse spectral clustering, " in Proc. British Machine Vision Conference (BMVC), Nottingham, UK, 2014.
    [16] I. Gallo, A. Zamberletti, and L. Noce, "Interactive object class segmentation for mobile devices, " in Proc. 27th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Rio de Janeiro, Brazil, 2014, pp. 73-79.
    [17] Z. G. Li, X. M. Wu, and S. F. Chang, "Segmentation using superpixels: A bipartite graph partitioning approach, " in Proc. 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 2012, pp. 789-796.
    [18] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods, " IEEE Trans. Patt. Anal. Mach. Intell., vol. 34, no. 11, pp. 2274-2282, Nov. 2012. doi: 10.1109/TPAMI.2012.120
    [19] C. M. Li, C. Y. Xu, C. F. Gui, and M. D. Fox, "Distance regularized level set evolution and its application to image segmentation, " IEEE Trans. Image Process., vol. 19, no. 12, pp. 3243-3254, Dec. 2010. doi: 10.1109/TIP.2010.2069690
    [20] P. L. Rosin, "Image processing using 3-state cellular automata, " Comp. Vision Image Understand., vol. 114, no. 7, pp. 790-802, Jul. 2010. doi: 10.1016/j.cviu.2010.02.005
    [21] Y. Qin, H. C. Lu, Y. Q. Xu, and H. Wang, "Saliency detection via cellular automata, " in Proc. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 2015, pp. 110-119.
    [22] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The Pascal visual object classes challenge 2009 (VOC2009), " in Summary Presentation at the 2009 PASCAL VOC Workshop, 2009.
    [23] V. Gulshan, C. Rother, A. Criminisi, A. Blake, and A. Zisserman, "Geodesic star convexity for interactive image segmentation, " in Proc. 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 2010, pp. 3129-3136.
    [24] J. Santner, T. Pock, and H. Bischof, "Interactive multi-label segmentation, " in Asian Conference on Computer Vision, R. Kimmel, R. Klette, and A. Sugimoto, Eds. Berlin, Heidelberg, Germany: Springer, 2010, pp. 397-410.
    [25] Y. Y. Boykov and M. P. Jolly, "Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images, " in Proc. 8th IEEE International Conference on Computer Vision, Vancouver, BC, Canada, vol. 1, pp. 105-112, Jul. 2001.
    [26] P. A. V. de Miranda, A. X. Falcão, and J. K. Udupa, "Synergistic arc-weight estimation for interactive image segmentation using graphs, " Comp. Vision Image Understand., vol. 114, no. 1, pp. 85-99, Jan. 2010. doi: 10.1016/j.cviu.2009.08.001
Publication history
  • Received: June 14, 2016
  • Accepted: February 6, 2017
  • Published: October 20, 2017
