An Adaptive Method for Moving Object Detection in Atmospheric Turbulence Environment
Abstract: Image sequences acquired over long distances are affected by atmospheric turbulence: pixel intensities fluctuate and flicker randomly, and objects drift in position within the image. As a result, traditional background modeling methods have difficulty detecting moving objects accurately in turbulent environments. Exploiting the fact that turbulence affects different image regions differently, a hierarchical decision-making method is proposed. First, flat background regions are modeled with a Gaussian model and object edges in the background with a double Gaussian model; a discriminant rule classifies each pixel, and the model parameters are updated online. Second, an adaptive discriminant model is built for the abrupt intensity changes caused by turbulence; combined with the first-level results, it yields a decision criterion that eliminates the abrupt-change pixels and segments the candidate object pixels. Finally, the object region is obtained through a connected component constraint. Experimental results show that the proposed method achieves good detection performance under different turbulence strengths, with different numbers of objects and different moving directions.
1) Recommended by Associate Editor Lai Jian-Huang.
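The abstract sketches a two-level, per-pixel decision scheme: a single Gaussian for flat background regions, a double Gaussian for background edge regions, an adaptive test for turbulence-induced abrupt intensity changes, and a final connected component constraint. As a rough illustration only, the sketch below implements one plausible reading of the first and last of these steps; the thresholds `K_FLAT`, `K_EDGE`, `MIN_AREA`, the edge mask, and the overall structure are assumptions made for illustration, not the paper's actual discriminants.

```python
import numpy as np
from scipy import ndimage

# Illustrative constants (assumed, not taken from the paper).
K_FLAT, K_EDGE, MIN_AREA = 2.5, 2.5, 25

def classify_frame(frame, mu1, sig1, mu2, sig2, edge_mask):
    """Hypothetical hierarchical pixel test: returns a binary foreground mask.

    frame     : current grayscale frame (float array)
    mu1, sig1 : per-pixel mean / std of the first Gaussian
    mu2, sig2 : per-pixel mean / std of the second Gaussian (edge regions)
    edge_mask : boolean mask marking background edge regions
    """
    d1 = np.abs(frame - mu1)
    d2 = np.abs(frame - mu2)

    # Flat background pixels: foreground if they deviate from the single Gaussian.
    fg_flat = (~edge_mask) & (d1 > K_FLAT * sig1)

    # Edge pixels: foreground only if they fit neither of the two Gaussians,
    # which tolerates turbulence-induced position jitter of edges.
    fg_edge = edge_mask & (d1 > K_EDGE * sig1) & (d2 > K_EDGE * sig2)

    fg = fg_flat | fg_edge

    # Connected component constraint: keep only sufficiently large regions,
    # a crude stand-in for the removal of isolated "flashing" pixels.
    labels, n = ndimage.label(fg)
    sizes = ndimage.sum(fg, labels, index=np.arange(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes >= MIN_AREA))
```

The second-level adaptive discriminant for abrupt intensity changes is not reproduced here, since its exact form depends on details given in the body of the paper.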
Table 1 Description of the image sequences

| No. | Resolution | Frame rate (Hz) | Frames | Turbulence strength / SNR (dB) | Description |
| --- | --- | --- | --- | --- | --- |
| Sequence 1 | $720 \times 576$ | 30 | 425 | Weak / 26.70 | One small target moving slowly to the right. |
| Sequence 2 | $672 \times 480$ | 30 | 397 | Weak / 25.59 | Several targets moving slowly left and right and several targets moving quickly left and right. |
| Sequence 3 | $672 \times 480$ | 30 | 390 | Weak / 28.08 | Several targets moving slowly left and right and one target moving quickly to the left. |
| Sequence 4 | $672 \times 480$ | 30 | 401 | Strong / 19.59 | Several targets moving quickly to the right. |
| Sequence 5 | $672 \times 480$ | 30 | 400 | Strong / 15.31 | Only a small portion of the frames contain a target moving quickly to the right. |
| Sequence 6 | $672 \times 480$ | 30 | 493 | Strong / 16.66 | A target moving quickly to the right is present only before frame 120. |
Table 2 Comparison between updating and not updating the background models for sequence 1

| | Precision | Recall | $F1$ | $SR$ |
| --- | --- | --- | --- | --- |
| With model updating | 0.89 | 0.95 | 0.92 | 0.97 |
| Without model updating | 0.47 | 0.95 | 0.63 | 0.73 |
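Table 2 indicates that updating the background models online roughly doubles precision on sequence 1 (0.47 to 0.89) at the same recall. The paper's exact update rule is not reproduced here; the snippet below is a generic running-Gaussian update with learning rate `alpha`, shown only to illustrate what online updating of per-pixel mean and variance can look like.

```python
import numpy as np

def update_gaussian(mu, var, frame, bg_mask, alpha=0.05):
    """Generic running update of per-pixel Gaussian parameters (illustrative only).

    mu, var : per-pixel mean and variance of the background model
    frame   : current grayscale frame (float array)
    bg_mask : pixels classified as background in this frame; only these are updated
    alpha   : assumed learning rate controlling how quickly the model adapts
    """
    diff = frame - mu
    mu = np.where(bg_mask, mu + alpha * diff, mu)
    var = np.where(bg_mask, (1.0 - alpha) * var + alpha * diff ** 2, var)
    return mu, var
```

Freezing the model instead lets slow drift and turbulence-induced fluctuations accumulate as false foreground, which is consistent with the precision drop reported in Table 2.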
Table 3 Precision/Recall of different methods on the six image sequences

| Method | Seq. 1 | Seq. 2 | Seq. 3 | Seq. 4 | Seq. 5 | Seq. 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Proposed method | 0.89/0.95 | 0.97/0.85 | 0.87/0.97 | 0.92/0.98 | 0.62/0.56 | 0.83/0.95 |
| Chen et al. [19] | 0.93/0.36 | 0.90/0.50 | 0.98/0.20 | 0.95/0.63 | 0.77/0.38 | 0.87/0.89 |
| Oreifej et al. [17] | 0.56/0.60 | 0.85/0.57 | 0.92/0.80 | 0.20/0.52 | 0.00/0.00 | 0.00/0.00 |
| Stauffer et al. [3] | 0.00/0.00 | 0.90/0.46 | 0.96/0.48 | 0.25/0.68 | 0.06/0.81 | 0.03/0.59 |
| Barnich et al. [8] (threshold 40) | 0.19/0.48 | 0.71/0.86 | 0.63/0.67 | 0.19/0.83 | 0.02/0.44 | 0.02/0.73 |
| Barnich et al. [8] (threshold 80) | 0.00/0.00 | 0.81/0.47 | 1.00/0.03 | 0.53/0.82 | 0.03/0.58 | 0.06/0.76 |
| Noh et al. [13] | 0.00/0.00 | 0.98/0.67 | 1.00/0.19 | 0.67/0.56 | 0.00/0.02 | 0.21/0.73 |
| St-Charles et al. [14] (SuBSENSE) | 0.76/0.93 | 0.90/0.90 | 0.96/0.64 | 0.35/0.75 | 0.05/0.63 | 0.04/0.84 |
| St-Charles et al. [15] (PAWCS) | 0.40/0.99 | 0.86/0.87 | 0.85/0.97 | 0.18/0.69 | 0.02/0.40 | 0.02/0.70 |
Table 4 $F1$ scores of different methods on the six image sequences

| Method | Seq. 1 | Seq. 2 | Seq. 3 | Seq. 4 | Seq. 5 | Seq. 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Proposed method | 0.92 | 0.91 | 0.92 | 0.95 | 0.59 | 0.89 |
| Chen et al. [19] | 0.52 | 0.64 | 0.33 | 0.76 | 0.51 | 0.88 |
| Oreifej et al. [17] | 0.58 | 0.68 | 0.86 | 0.29 | 0.00 | 0.00 |
| Stauffer et al. [3] | 0.00 | 0.61 | 0.64 | 0.36 | 0.12 | 0.05 |
| Barnich et al. [8] (threshold 40) | 0.28 | 0.78 | 0.65 | 0.31 | 0.03 | 0.04 |
| Barnich et al. [8] (threshold 80) | 0.00 | 0.59 | 0.05 | 0.65 | 0.06 | 0.11 |
| Noh et al. [13] | 0.00 | 0.80 | 0.32 | 0.61 | 0.01 | 0.32 |
| St-Charles et al. [14] (SuBSENSE) | 0.84 | 0.90 | 0.77 | 0.47 | 0.09 | 0.08 |
| St-Charles et al. [15] (PAWCS) | 0.57 | 0.86 | 0.91 | 0.29 | 0.03 | 0.04 |
Table 5 $SR$ values of different methods on the six image sequences

| Method | Seq. 1 | Seq. 2 | Seq. 3 | Seq. 4 | Seq. 5 | Seq. 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Proposed method | 0.97 | 0.64 | 0.76 | 0.85 | 0.84 | 0.98 |
| Chen et al. [19] | 0.72 | 0.18 | 0.06 | 0.43 | 0.84 | 0.99 |
| Oreifej et al. [17] | 0.78 | 0.22 | 0.64 | 0.01 | 0.04 | 0.00 |
| Stauffer et al. [3] | 0.55 | 0.15 | 0.33 | 0.05 | 0.14 | 0.27 |
| Barnich et al. [8] (threshold 40) | 0.75 | 0.41 | 0.24 | 0.00 | 0.00 | 0.00 |
| Barnich et al. [8] (threshold 80) | 0.59 | 0.08 | 0.02 | 0.12 | 0.00 | 0.05 |
| Noh et al. [13] | 0.59 | 0.20 | 0.12 | 0.27 | 0.50 | 0.83 |
| St-Charles et al. [14] (SuBSENSE) | 0.93 | 0.72 | 0.50 | 0.16 | 0.19 | 0.24 |
| St-Charles et al. [15] (PAWCS) | 0.76 | 0.63 | 0.74 | 0.02 | 0.00 | 0.18 |
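For reference, the Precision, Recall, and $F1$ values in Tables 2-4 follow the usual definitions built from true/false positives and false negatives; the exact counting unit (pixel or object level) and the definition of $SR$ follow the paper and are not reproduced here. A minimal sketch, assuming pixel-level counting on binary masks:

```python
import numpy as np

def precision_recall_f1(detected, ground_truth):
    """Standard Precision/Recall/F1 from two boolean masks (pixel-level counting assumed)."""
    detected, ground_truth = detected.astype(bool), ground_truth.astype(bool)
    tp = np.count_nonzero(detected & ground_truth)    # true positives
    fp = np.count_nonzero(detected & ~ground_truth)   # false positives
    fn = np.count_nonzero(~detected & ground_truth)   # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```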
[1] Brutzer S, Höferlin B, Heidemann G. Evaluation of background subtraction techniques for video surveillance. In: Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Providence, RI, USA: IEEE, 2011. 1937-1944. doi: 10.1109/CVPR.2011.5995508
[2] Wren C R, Azarbayejani A, Darrell T, Pentland A P. Pfinder: real-time tracking of the human body. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, 19(7): 780-785. doi: 10.1109/34.598236
[3] Stauffer C, Grimson W E L. Adaptive background mixture models for real-time tracking. In: Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Fort Collins, USA: IEEE, 1999, 2: 246-252
[4] Haritaoglu I, Harwood D, Davis L S. W4: real-time surveillance of people and their activities. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(8): 809-830. doi: 10.1109/34.868683
[5] Kim K, Chalidabhongse T H, Harwood D, Davis L. Real-time foreground-background segmentation using codebook model. Real-Time Imaging, 2005, 11(3): 172-185. doi: 10.1016/j.rti.2004.12.004
[6] Elgammal A, Harwood D, Davis L. Non-parametric model for background subtraction. In: Proceedings of the 2000 European Conference on Computer Vision. Dublin, Ireland: Springer-Verlag, 2000. 751-767. doi: 10.1007/3-540-45053-X_48
[7] Sheikh Y, Shah M. Bayesian modeling of dynamic scenes for object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(11): 1778-1792. doi: 10.1109/TPAMI.2005.213
[8] Barnich O, Van Droogenbroeck M. ViBe: a universal background subtraction algorithm for video sequences. IEEE Transactions on Image Processing, 2011, 20(6): 1709-1724. doi: 10.1109/TIP.2010.2101613
[9] Oliver N M, Rosario B, Pentland A P. A Bayesian computer vision system for modeling human interactions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(8): 831-843. doi: 10.1109/34.868684
[10] Heikkilä M, Pietikäinen M. A texture-based method for modeling the background and detecting moving objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(4): 657-662. doi: 10.1109/TPAMI.2006.68
[11] Javed O, Shafique K, Shah M. A hierarchical approach to robust background subtraction using color and gradient information. In: Proceedings of the 2002 Workshop on Motion and Video Computing. Orlando, USA: IEEE, 2002. 22-27. doi: 10.1109/MOTION.2002.1182209
[12] Mason M, Duric Z. Using histograms to detect and track objects in color video. In: Proceedings of the 30th Applied Imagery Pattern Recognition Workshop. Washington, USA: IEEE, 2001. 154-159. doi: 10.1109/AIPR.2001.991219
[13] Noh S J, Jeon M. A new framework for background subtraction using multiple cues. In: Proceedings of the 2012 Asian Conference on Computer Vision. Daejeon, Korea: Springer-Verlag, 2012. 493-506
[14] St-Charles P L, Bilodeau G A, Bergevin R. SuBSENSE: a universal change detection method with local adaptive sensitivity. IEEE Transactions on Image Processing, 2015, 24(1): 359-373. doi: 10.1109/TIP.2014.2378053
[15] St-Charles P L, Bilodeau G A, Bergevin R. Universal background subtraction using word consensus models. IEEE Transactions on Image Processing, 2016, 25(10): 4768-4781. doi: 10.1109/TIP.2016.2598691
[16] Li D L, Mersereau R M, Simske S. Atmospheric turbulence-degraded image restoration using principal components analysis. IEEE Geoscience and Remote Sensing Letters, 2007, 4(3): 340-344. doi: 10.1109/LGRS.2007.895691
[17] Oreifej O, Li X, Shah M. Simultaneous video stabilization and moving object detection in turbulence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(2): 450-462. doi: 10.1109/TPAMI.2012.97
[18] Zhu X, Milanfar P. Image reconstruction from videos distorted by atmospheric turbulence. In: Proceedings of SPIE 7543, Visual Information Processing and Communication. San Jose, USA: SPIE, 2010. Article No. 75430S
[19] Chen E L, Haik O, Yitzhaky Y. Detecting and tracking moving objects in long-distance imaging through turbulent medium. Applied Optics, 2014, 53(6): 1181-1190. doi: 10.1364/AO.53.001181
[20] Elkabetz A, Yitzhaky Y. Background modeling for moving object detection in long-distance imaging through turbulent medium. Applied Optics, 2014, 53(6): 1132-1141. doi: 10.1364/AO.53.001132
[21] Qi Yu-Juan, Wang Yan-Jiang, Li Yong-Ping. Memory-based Gaussian mixture background modeling. Acta Automatica Sinica, 2010, 36(11): 1520-1526 (in Chinese)
[22] Wang Yong-Zhong, Liang Yan, Pan Quan, Cheng Yong-Mei, Zhao Chun-Hui. Spatiotemporal background modeling based on adaptive mixture of Gaussians. Acta Automatica Sinica, 2009, 35(4): 371-378 (in Chinese)
[23] Bouwmans T, Baf F E, Vachon B. Background modeling using mixture of Gaussians for foreground detection: a survey. Recent Patents on Computer Science, 2008, 1(3): 219-237. doi: 10.2174/2213275910801030219
[24] Bouwmans T. Traditional and recent approaches in background modeling for foreground detection: an overview. Computer Science Review, 2014, 11-12: 31-66. doi: 10.1016/j.cosrev.2014.04.001
[25] Zhu X, Milanfar P. Removing atmospheric turbulence via space-invariant deconvolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(1): 157-170. doi: 10.1109/TPAMI.2012.82
[26] Çaliskan T, Arica N. Atmospheric turbulence mitigation using optical flow. In: Proceedings of the 22nd International Conference on Pattern Recognition (ICPR). Stockholm, Sweden: IEEE, 2014. 883-888
[27] Arica N, Caliskan T. Moving object detection in turbulence degraded video. International Journal of Applied Mathematics, Electronics and Computers, 2015, 3(4): 232-236. doi: 10.18100/ijamec.97614
[28] Deshmukh A S, Medasani S S, Reddy G R. Moving object detection from images distorted by atmospheric turbulence. In: Proceedings of the 2013 International Conference on Intelligent Systems and Signal Processing (ISSP). Gujarat, India: IEEE, 2013. 122-127
[29] Gonzalez R C, Woods R E. Digital Image Processing (Third Edition). Translated by Ruan Qiu-Qi, Ruan Yu-Zhi. Beijing: Publishing House of Electronics Industry, 2011 (in Chinese)
[30] Zamek S, Yitzhaky Y. Turbulence strength estimation from an arbitrary set of atmospherically degraded images. Journal of the Optical Society of America A, 2006, 23(12): 3106-3113. doi: 10.1364/JOSAA.23.003106