Abstract: To fuse the color information of RGB images with the depth information of depth images effectively, we propose a saliency detection method for RGB-D images based on Bayesian fusion. By analyzing how 3D saliency is distributed across RGB and depth images, class-conditional mutual information (CMI) is computed to measure the dependence between the color and depth deep features extracted by a convolutional neural network (CNN), and the posterior probability of RGB-D saliency is then formulated by applying Bayes' theorem. Assuming that the color and depth features follow Gaussian distributions, a discriminative mixed-membership naive Bayes (DMNB) generative model is used to compute the final saliency map; its parameters are estimated with a variational inference-based expectation-maximization (EM) algorithm. Experimental results on the public RGB-D saliency detection datasets NLPR and NJU-DS2000 show that the proposed method achieves higher precision and recall than existing models.
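The Bayesian fusion described in the abstract can be sketched as follows. This is a minimal illustration under our own stated assumptions, not the paper's implementation: `fuse_saliency` is a hypothetical helper that combines class-conditional likelihoods of the color and depth features, treating them as conditionally independent given the class (the low-CMI case the paper's analysis is meant to justify).

```python
def fuse_saliency(p_c_sal, p_d_sal, p_c_bg, p_d_bg, prior=0.5):
    """Posterior P(salient | x_c, x_d) via Bayes' theorem, assuming the
    color feature x_c and depth feature x_d are conditionally independent
    given the class (illustrative sketch only)."""
    num = prior * p_c_sal * p_d_sal                # P(sal) P(x_c|sal) P(x_d|sal)
    den = num + (1.0 - prior) * p_c_bg * p_d_bg    # + P(bg) P(x_c|bg) P(x_d|bg)
    return num / den

# Agreeing color and depth evidence yields a confident posterior:
print(fuse_saliency(0.8, 0.8, 0.2, 0.2))  # 0.941...
```

When the CMI between the two features is large, this factorized likelihood is no longer valid, which is why the method measures the dependence first.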
Key words:
- Bayesian fusion /
- deep learning /
- generative model /
- saliency detection /
- RGB-D images
1) Recommended by Associate Editor Liu Yue-Hu
Fig. 4 Superpixel segmentation of RGB-D images (Within the rectangle in the RGB image, the foreground-background boundary obtained by segmenting with both color and depth cues is more accurate than that obtained with color alone. Similarly, within the rectangle in the depth image, the boundary obtained with both cues is more accurate than that obtained with depth alone)
Fig. 5 Architecture for supervision transfer ((a) The architecture of the depth CNN, where ReLU denotes the rectified linear function ReLU$(x) = \max(x, 0)$, which keeps the feature maps non-negative; LRN denotes a local response normalization layer; and Dropout is applied in the fully connected layers with a rate of 0.5 to prevent overfitting. (b) The flow of an image through the depth CNN. A 227$\times$227 crop of an image (with 3 channels) is presented as the input. It is convolved with 96 first-layer filters, each of size 7$\times$7, using a stride of 2 in both $x$ and $y$. The resulting feature maps are passed through the rectified linear function, max-pooled (within 3$\times$3 regions, using stride 2), and local-response normalized across feature maps to give 96 feature maps of size 55$\times$55. Similar operations are repeated in layers 2, 3, 4, and 5. The last two layers are fully connected, taking the features of pooling layer 5 in vector form as input. The final layer is a 2-way softmax that indicates whether the input is salient. (c) We train a CNN for depth images by supervision transfer: starting from the Clarifai network trained on RGB images, the network is retrained on the paired depth images to reproduce the mid-level semantic representation learned from RGB images)
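The layer sizes quoted in the caption follow from standard convolution arithmetic; a small sketch (`conv_out` is our own helper, not part of the network code):

```python
def conv_out(size, kernel, stride, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

# Layer 1 of the depth CNN: 227x227 input, 7x7 conv with stride 2,
# then 3x3 max pooling with stride 2 -> 96 feature maps of 55x55,
# matching the numbers in Fig. 5
conv1 = conv_out(227, 7, 2)    # 111
pool1 = conv_out(conv1, 3, 2)  # 55
print(conv1, pool1)
```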
Fig. 6 Distributions of the class-conditional mutual information between color and depth deep features on the NLPR and NJU-DS2000 RGB-D image datasets ((a) Color-depth saliency on the NLPR dataset; (b) Color saliency on the NLPR dataset; (c) Depth saliency on the NLPR dataset; (d) Color-depth saliency on the NJU-DS2000 dataset; (e) Color saliency on the NJU-DS2000 dataset; (f) Depth saliency on the NJU-DS2000 dataset)
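Under a per-class Gaussian assumption, the class-conditional mutual information between a color feature and a depth feature has a closed form in the within-class correlation. The sketch below is our own illustrative estimator for 1-D features, not the paper's code:

```python
import numpy as np

def class_conditional_mi(xc, xd, y):
    """I(Xc; Xd | Y) for 1-D features under a bivariate-Gaussian
    assumption per class: I = -0.5 * log(1 - rho^2) within each class,
    weighted by the empirical class prior."""
    xc, xd, y = map(np.asarray, (xc, xd, y))
    mi = 0.0
    for c in np.unique(y):
        m = (y == c)
        rho = np.corrcoef(xc[m], xd[m])[0, 1]  # within-class correlation
        mi += m.mean() * (-0.5 * np.log(1.0 - rho ** 2))
    return mi
```

A CMI close to zero supports fusing color and depth as conditionally independent cues; a large CMI indicates the two features should not be treated independently in the Bayesian fusion.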
Fig. 7 Graphical model of DMNB for saliency estimation ($y$ and $\pmb{x}$ are observed variables and $\pmb{z}$ is the hidden variable, where $\pmb{x}_{1:N} = (\pmb{x}_c, \pmb{x}_d)$ denotes the RGB-D saliency features and each feature $\pmb{x}_j$ is assumed to be generated from one of $C$ Gaussian distributions with means $\{\mu_{jk}\,|\,j = 1, \cdots, N\}$ and variances $\{\sigma_{jk}^2\,|\,j = 1, \cdots, N\}$; $y$ is a binary label taking the value 1 for salient and 0 for non-salient superpixels)
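The generative process depicted in Fig. 7 can be sketched as a sampler. This is an illustrative sketch under our stated assumptions (a logistic link from the mean component assignment to $y$), not the authors' code:

```python
import numpy as np

def dmnb_generate(alpha, mu, sigma2, eta, rng):
    """One draw from a DMNB-style generative process:
    theta ~ Dirichlet(alpha); per feature j, z_j ~ Multinomial(theta);
    x_j ~ N(mu[z_j, j], sigma2[z_j, j]); y ~ Bernoulli(sigmoid(eta . zbar))."""
    C, N = mu.shape
    theta = rng.dirichlet(alpha)                # mixed-membership proportions
    z = rng.choice(C, size=N, p=theta)          # component for each feature
    j = np.arange(N)
    x = rng.normal(mu[z, j], np.sqrt(sigma2[z, j]))
    zbar = np.bincount(z, minlength=C) / N      # mean membership vector
    p_y = 1.0 / (1.0 + np.exp(-eta @ zbar))     # logistic link to the label
    return x, int(rng.random() < p_y)
```

At test time the direction is reversed: the fitted parameters give $p(y = 1 \mid \pmb{x})$, which serves as the saliency value of a superpixel.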
Fig. 8 Determining the number of mixture components $C$ in the DMNB model: generative clusters vs. DPMM clustering ((a) Generative clusters for the NLPR RGB-D image dataset. (b) DPMM clustering for the NLPR dataset, where the number of distinct colors and shapes of the points corresponds to the number of components $C$; DPMM yields $C = 24$ on the NLPR dataset. (c) Generative clusters for the NJU-DS2000 RGB-D image dataset. (d) DPMM clustering for the NJU-DS2000 dataset, with the same convention; DPMM yields $C = 28$ on the NJU-DS2000 dataset)
Fig. 9 Cross-validation of the parameter $C$ of the DMNB model on the NLPR dataset: the value of $C$ found by the DPMM is varied over a wide range and evaluated with 10-fold cross-validation
Fig. 10 Visual comparison of saliency detection in the color-depth saliency situation on the NLPR dataset ((a) RGB image; (b) Depth image; (c) Ground truth; (d) ACSD; (e) GMR; (f) MC; (g) MDF; (h) LMH; (i) GP; (j) BFSD (ours))
Fig. 13 Visual comparison of saliency detection in the color saliency situation on the NLPR dataset ((a) RGB image; (b) Depth image; (c) Ground truth; (d) ACSD; (e) GMR; (f) MC; (g) MDF; (h) LMH; (i) GP; (j) BFSD (ours))
Fig. 14 Visual comparison of saliency detection in the depth saliency situation on the NLPR dataset ((a) RGB image; (b) Depth image; (c) Ground truth; (d) ACSD; (e) GMR; (f) MC; (g) MDF; (h) LMH; (i) GP; (j) BFSD (ours))
Fig. 15 Visual comparison of saliency detection in the color-depth saliency situation on the NJU-DS2000 dataset ((a) RGB image; (b) Depth image; (c) Ground truth; (d) ACSD; (e) GMR; (f) MC; (g) MDF; (h) BFSD (ours))
Fig. 16 Visual comparison of saliency detection in the color saliency situation on the NJU-DS2000 dataset ((a) RGB image; (b) Depth image; (c) Ground truth; (d) ACSD; (e) GMR; (f) MC; (g) MDF; (h) BFSD (ours))
Fig. 17 Visual comparison of saliency detection in the depth saliency situation on the NJU-DS2000 dataset ((a) RGB image; (b) Depth image; (c) Ground truth; (d) ACSD; (e) GMR; (f) MC; (g) MDF; (h) BFSD (ours))
Table 1 Distribution of 3D saliency situations in the RGB-D image datasets
Table 2 Summary of parameters
Variable | Range | Description
$\tau$ | (0, 1) | Class-conditional mutual information threshold
$\alpha$ | (0, 20) | Dirichlet distribution parameter
$\theta$ | (0, 1) | Multinomial distribution parameter
$\eta$ | (-10.0, 3.0) | Bernoulli distribution parameter
$\Omega$ | ((0, 1), (0, 0.2)) | Gaussian distribution parameters
$N$ | $> 2$ | Feature dimension
$C$ | $> 2$ | Number of DMNB mixture components
$\varepsilon_\mathcal{L}$ | (0, 1) | EM convergence threshold

Table 3 Overview of the NLPR and NJU-DS2000 RGB-D saliency datasets
Dataset | Images | Salient objects | Scene types | Center bias
NLPR | 1000 | One (mostly) | 11 | Yes
NJU-DS2000 | 2000 | One (mostly) | $>$ 20 | Yes

Table 4 Comparison of the average running time per RGB-D image on the NLPR dataset
Dataset | GMR | MC | MDF | ACSD | LMH | GP | BFSD
NLPR | 2.9 s | 72.7 s | 942.3 s | 0.2 s | 2.8 s | 38.9 s | 80.1 s

Table 5 Comparison of AUC on the NLPR dataset
Saliency situation | ACSD | GMR | MC | MDF | LMH | GP | BFSD
Color-depth saliency | 0.61 | 0.73 | 0.81 | 0.82 | 0.70 | 0.79 | 0.83
Color saliency | 0.56 | 0.74 | 0.84 | 0.83 | 0.61 | 0.65 | 0.84
Depth saliency | 0.63 | 0.71 | 0.76 | 0.74 | 0.75 | 0.81 | 0.90
Overall | 0.60 | 0.73 | 0.81 | 0.80 | 0.69 | 0.78 | 0.85
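AUC values such as those above can be computed from any saliency map and binary ground truth using the rank-sum (Mann-Whitney) identity; a minimal scorer (ties broken arbitrarily; our own helper, not the benchmark's evaluation code):

```python
import numpy as np

def auc_score(saliency, gt):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity."""
    s = np.asarray(saliency, dtype=float).ravel()
    g = np.asarray(gt, dtype=bool).ravel()
    ranks = s.argsort().argsort() + 1.0     # 1-based ranks, ties arbitrary
    n_pos, n_neg = g.sum(), (~g).sum()
    return (ranks[g].sum() - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)
```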