2023 Impact Factor (CJCR): 2.845

Indexed in:
  • 中文核心 (Chinese Core Journals)
  • EI
  • 中国科技核心 (China Science and Technology Core Journals)
  • Scopus
  • CSCD
  • INSPEC (英国科学文摘)


Hierarchical Feature Feedback Network for Depth Super-resolution Reconstruction

Zhang Shuai-Yong, Liu Mei-Qin, Yao Chao, Lin Chun-Yu, Zhao Yao

Citation: Zhang Shuai-Yong, Liu Mei-Qin, Yao Chao, Lin Chun-Yu, Zhao Yao. Hierarchical feature feedback network for depth super-resolution reconstruction. Acta Automatica Sinica, 2022, 48(4): 992−1003. doi: 10.16383/j.aas.c200542


doi: 10.16383/j.aas.c200542
Funds: Supported by the National Natural Science Foundation of China (61972028, 61902022, U1936212), the National Key Research and Development Program of China (2018AAA0102100), and the Fundamental Research Funds for the Central Universities (2019JBM018, FRF-TP-19-015A1)

Author Biographies:

    ZHANG Shuai-Yong  Master student at the Institute of Information Science, Beijing Jiaotong University. His research interest covers image/video super-resolution reconstruction. E-mail: 19125150@bjtu.edu.cn

    LIU Mei-Qin  Associate professor at the Institute of Information Science, Beijing Jiaotong University. Her research interest covers image/video coding and three-dimensional video processing. Corresponding author of this paper. E-mail: mqliu@bjtu.edu.cn

    YAO Chao  Assistant professor at the School of Computer and Communication Engineering, University of Science and Technology Beijing. His research interest covers image and video processing and computer vision. E-mail: yaochao@ustb.edu.cn

    LIN Chun-Yu  Professor at the Institute of Information Science, Beijing Jiaotong University. His research interest covers image/video coding and three-dimensional vision processing. E-mail: cylin@bjtu.edu.cn

    ZHAO Yao  Professor at the Institute of Information Science, Beijing Jiaotong University. His research interest covers image coding, digital watermarking, and multimedia information processing. E-mail: yzhao@bjtu.edu.cn

  • Abstract: Limited by acquisition devices, captured depth maps suffer from low resolution and are easily corrupted by noise. This paper constructs a hierarchical feature feedback network (HFFN) for depth map super-resolution reconstruction. The network uses a pyramid structure to mine hierarchical depth and texture features at different scales, building a hierarchical depth-texture feature representation. To exploit the structural information at different scales effectively, a feedback fusion strategy for the hierarchical features is designed: it integrates the depth and texture edge features to generate edge guidance for the reconstructed depth map and completes the depth reconstruction process. Experimental results show that, compared with the competing methods, HFFN improves both the subjective and objective quality of the reconstructed depth maps.
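The pyramid decomposition and coarse-to-fine feedback fusion that the abstract describes can be illustrated with a toy NumPy sketch. This is not the paper's network: `avg_pool2` and `upsample2` stand in for the learned convolution/deconvolution layers, and `alpha` is a hypothetical weight balancing the depth and texture branches.

```python
import numpy as np

def avg_pool2(x):
    """2x downsampling by average pooling (odd edges are cropped)."""
    h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
    x = x[:h, :w]
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def upsample2(x):
    """Nearest-neighbour 2x upsampling (stand-in for learned deconvolution)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def build_pyramid(x, levels):
    """Hierarchical multi-scale representation; level 0 is the finest scale."""
    pyr = [x]
    for _ in range(levels - 1):
        pyr.append(avg_pool2(pyr[-1]))
    return pyr

def feedback_fuse(depth, color, levels=3, alpha=0.8):
    """Coarse-to-fine fusion: each coarse-level result is fed back (upsampled)
    into the next finer level and combined with that scale's depth/texture maps."""
    dp, cp = build_pyramid(depth, levels), build_pyramid(color, levels)
    fused = alpha * dp[-1] + (1 - alpha) * cp[-1]            # coarsest level
    for d, c in zip(reversed(dp[:-1]), reversed(cp[:-1])):
        up = upsample2(fused)[:d.shape[0], :d.shape[1]]      # feedback path
        fused = 0.5 * (alpha * d + (1 - alpha) * c) + 0.5 * up
    return fused

depth = np.linspace(0.0, 1.0, 32 * 32).reshape(32, 32)  # synthetic depth map
color = np.sqrt(depth)                                  # hypothetical registered texture
out = feedback_fuse(depth, color, levels=3)
```

In the actual HFFN each of these steps is a stack of residual blocks with learned fusion weights, but the data flow — build two pyramids, fuse at the coarsest scale, feed the result back up level by level — mirrors the structure sketched in Fig. 1.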
  • Fig. 1  Hierarchical feature feedback network

    Fig. 2  Depth-color feedback fusion module

    Fig. 3  Feature reconstruction results

    Fig. 4  Visual quality comparison results of “Art”

    Fig. 5  The trend of the RMSE for “Books”

    Fig. 6  Visual quality comparison results of “Art” at scale 4×

    Fig. 7  Visual quality comparison results of “Laundry” at scale 8×

    Table 1  Influence of the number of residual blocks on the performance of HFFN

    Model               H_R3    H_R5    H_R7    H_R10
    Training time (h)   3       3.8     4.4     5.4
    Parameters (M)      2.67    2.96    3.26    3.70
    Result (dB)         48.14   48.23   48.45   48.54

    Table 2  Influence of pyramid layers on the performance of HFFN

    Model               H_P2    H_P3    H_P4
    Training time (h)   3.6     3.8     8.6
    Parameters (M)      1.62    2.96    8.34
    Result (dB)         48.04   48.24   48.32

    Table 3  Results of ablation study

    Network     Texture   Hierarchical   Deep   Fusion   RMSE/PSNR (dB)
    Basic         ✓                                      3.2238/40.071
    H_Color       ✓           ✓                          2.8239/41.438
    H_CP          ✓           ✓           ✓              2.8544/41.578
    H_Res         ✓           ✓                  ✓       2.9352/41.285
    HFFN-100      ✓           ✓           ✓      ✓       2.7483/41.671
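The tables report RMSE (and, for Tables 1–3, PSNR in dB). Both metrics reduce to the per-pixel reconstruction error; a minimal sketch, assuming 8-bit depth values with peak 255:

```python
import numpy as np

def rmse(pred, gt):
    """Root-mean-square error between reconstructed and ground-truth depth."""
    diff = pred.astype(np.float64) - gt.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(pred, gt, peak=255.0):
    """Peak signal-to-noise ratio in dB, derived directly from the RMSE."""
    e = rmse(pred, gt)
    return float("inf") if e == 0.0 else 20.0 * np.log10(peak / e)

gt = np.zeros((8, 8))
pred = gt + 2.0               # every pixel off by exactly 2 depth levels
print(rmse(pred, gt))         # → 2.0
```

Lower RMSE and higher PSNR are better, which is how the HFFN/HFFN+ rows in the comparison tables should be read.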

    Table 4  Objective comparison results (RMSE) on test dataset A

    Method       |  Art                     |  Books                   |  Moebius
                 |  2×    3×    4×    8×    |  2×    3×    4×    8×    |  2×    3×    4×    8×
    Bicubic      |  2.66  3.34  3.90  5.50  |  1.08  1.39  1.63  2.36  |  0.85  1.08  1.29  1.89
    GF[6]        |  3.63  3.84  4.14  5.49  |  1.49  1.59  1.73  2.35  |  1.25  1.32  1.42  1.91
    TGV[14]      |  3.03  3.31  3.78  4.79  |  1.29  1.41  1.60  1.99  |  1.13  1.25  1.46  1.91
    JID[16]      |  1.24  1.63  2.01  3.23  |  0.65  0.76  0.92  1.27  |  0.64  0.71  0.89  1.27
    SRCNN[17]    |  2.48  3.05  3.71  5.28  |  1.03  1.26  1.58  2.30  |  0.81  1.03  1.23  1.84
    Huang[21]    |  0.66  /     1.59  2.71  |  0.54  /     0.83  1.19  |  0.52  /     0.86  1.21
    MSG[25]      |  0.66  /     1.47  2.46  |  0.37  /     0.67  1.03  |  0.36  /     0.66  1.02
    RDN-GDE[26]  |  0.56  /     1.47  2.60  |  0.36  /     0.62  1.00  |  0.38  /     0.69  1.05
    MFR-SR[27]   |  0.71  /     1.54  2.71  |  0.42  /     0.63  1.05  |  0.42  /     0.72  1.10
    PMBA[28]     |  0.61  /     2.04  3.63  |  0.41  /     0.92  1.68  |  0.39  /     0.84  1.41
    DepthSR[29]  |  0.53  0.89  1.20  2.22  |  0.42  0.56  0.60  0.89  |  /     /     /     /
    HFFN         |  0.41  0.84  1.28  2.29  |  0.28  0.37  0.49  0.87  |  0.31  0.45  0.57  0.89
    HFFN+        |  0.38  0.81  1.24  2.19  |  0.27  0.36  0.47  0.84  |  0.30  0.44  0.55  0.85
    (“/”: result not reported by the method)
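The Bicubic rows above are the plain interpolation baseline. As a self-contained illustration of interpolation-based upsampling, here is a bilinear upsampler in NumPy (bilinear rather than bicubic only to keep the sketch short — true bicubic interpolation needs a 4×4 kernel); many SR networks either start from such an interpolated input or learn to replace it:

```python
import numpy as np

def bilinear_upsample(img, scale):
    """Upsample a 2-D array by `scale` using bilinear interpolation
    with half-pixel-centred sampling and edge clamping."""
    h, w = img.shape
    H, W = int(round(h * scale)), int(round(w * scale))
    ys = (np.arange(H) + 0.5) / scale - 0.5   # source coordinate of each output row
    xs = (np.arange(W) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]  # vertical blend weights
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]  # horizontal blend weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

lr = np.ones((4, 4))              # toy low-resolution depth map
hr = bilinear_upsample(lr, 2)     # 2x upsampling, as in the 2x columns
assert hr.shape == (8, 8)
```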

    Table 6  Objective comparison results (RMSE) on test dataset C

    Method       |  Tsukuba                 |  Venus                   |  Teddy                   |  Cones
                 |  2×    3×    4×    8×    |  2×    3×    4×    8×    |  2×    3×    4×    8×    |  2×    3×    4×    8×
    Bicubic      |  5.81  7.17  8.56  12.3  |  1.32  1.64  1.91  2.76  |  1.99  2.48  2.90  4.07  |  2.45  3.06  3.60  5.30
    GF[6]        |  8.12  8.63  9.40  12.5  |  1.63  1.75  1.93  2.69  |  2.49  2.67  2.93  3.98  |  3.33  3.55  3.87  5.29
    TGV[14]      |  7.20  7.78  10.3  17.5  |  2.15  2.34  2.52  4.04  |  2.71  2.99  3.3   5.39  |  3.51  3.97  4.45  7.14
    JID[16]      |  3.48  4.91  5.95  10.9  |  0.8   0.91  1.17  1.76  |  1.28  1.53  2.94  2.76  |  1.69  2.42  4.17  5.11
    SRCNN[17]    |  5.47  6.32  8.11  11.8  |  1.27  1.43  1.85  2.67  |  1.88  2.25  2.77  3.95  |  2.34  2.52  3.43  5.15
    Huang[21]    |  1.41  /     3.73  7.79  |  0.56  /     0.72  1.09  |  0.85  /     1.58  2.88  |  0.88  /     2.38  4.66
    MSG[25]      |  1.85  /     4.29  8.42  |  0.14  /     0.35  1.04  |  0.71  /     1.49  2.76  |  0.90  /     2.60  4.23
    DepthSR[29]  |  1.33  2.25  3.26  6.89  |  /     /     /     /     |  0.83  1.15  1.37  1.85  |  /     /     /     /
    HFFN         |  1.37  2.49  3.53  7.67  |  0.21  0.28  0.42  0.84  |  0.61  0.93  1.21  2.27  |  0.65  1.24  1.71  3.91
    HFFN+        |  1.14  2.22  3.21  7.60  |  0.20  0.28  0.40  0.78  |  0.56  0.86  1.13  2.12  |  0.61  1.14  1.59  3.66
    (“/”: result not reported by the method)

    Table 5  Objective comparison results (RMSE) on test dataset B

    Method       |  Dolls                   |  Laundry                 |  Reindeer
                 |  2×    3×    4×    8×    |  2×    3×    4×    8×    |  2×    3×    4×    8×
    Bicubic      |  0.94  1.15  1.33  1.87  |  1.61  2.05  2.39  3.43  |  1.97  2.46  2.86  4.05
    GF[6]        |  1.25  1.31  1.41  1.86  |  2.21  2.36  2.54  3.42  |  2.68  2.84  3.05  4.06
    TGV[14]      |  1.12  1.21  1.36  1.86  |  1.99  2.22  2.51  3.76  |  2.40  2.56  2.71  3.79
    JID[16]      |  0.70  0.79  0.92  1.26  |  0.75  0.94  1.21  2.08  |  0.92  1.21  1.56  2.58
    SRCNN[17]    |  0.90  1.01  1.28  1.82  |  1.52  1.74  2.31  3.32  |  1.84  2.17  2.73  3.92
    Huang[21]    |  0.58  /     0.91  1.31  |  0.52  /     0.92  1.52  |  0.59  /     1.11  1.80
    MSG[25]      |  0.35  /     0.69  1.05  |  0.37  /     0.79  1.51  |  0.42  /     0.98  1.76
    RDN-GDE[26]  |  0.56  /     0.88  1.21  |  0.48  /     0.96  1.63  |  0.51  /     1.17  2.05
    MFR-SR[27]   |  0.60  /     0.89  1.22  |  0.61  /     1.11  1.75  |  0.65  /     1.23  2.06
    PMBA[28]     |  0.36  /     0.95  1.47  |  0.38  /     1.14  2.19  |  0.40  /     1.39  2.74
    DepthSR[29]  |  /     /     /     /     |  0.44  0.62  0.78  1.31  |  0.51  0.77  0.96  1.57
    HFFN         |  0.36  0.59  0.75  1.11  |  0.32  0.52  0.73  1.33  |  0.35  0.66  0.96  1.64
    HFFN+        |  0.34  0.57  0.74  1.09  |  0.30  0.51  0.70  1.26  |  0.34  0.63  0.92  1.58
    (“/”: result not reported by the method)

    Table 7  Average running time

    Method      Time (s)    Method        Time (s)
    Bicubic     0.01        MSG[25]       0.29
    GF[6]       0.13        PMBA[28]      0.46
    TGV[14]     894.43      DepthSR[29]   1.84
    SRCNN[17]   0.13        HFFN          0.21
  • [1] Kopf J, Cohen M F, Lischinski D, Uyttendaele M. Joint bilateral upsampling. ACM Transactions on Graphics, 2007, 26(3): 96-es. doi: 10.1145/1276377.1276497
    [2] Liu M Y, Tuzel O, Taguchi Y. Joint geodesic upsampling of depth images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Portland, USA: IEEE, 2013: 169−176
    [3] Lei J, Li L, Yue H, Wu F, Ling N, Hou C. Depth map super-resolution considering view synthesis quality. IEEE Transactions on Image Processing, 2017, 26(4): 1732-1745. doi: 10.1109/TIP.2017.2656463
    [4] Yang Yu-Xiang, Zeng Yu, He Zhi-Wei, Gao Ming-Yu. Depth map super-resolution via adaptive weighting filter. Journal of Image and Graphics, 2014, 19(8): 1210-1218.
    [5] Lu J, Forsyth D. Sparse depth super resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015: 2245−2253
    [6] He K, Sun J, Tang X. Guided image filtering. In: Proceedings of the European Conference on Computer Vision. Heraklion, Greece: Springer, 2010: 1−14
    [7] Barron J T, Poole B. The fast bilateral solver. In: Proceedings of the European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016: 617−632
    [8] Yang Y, Lee H S, Oh B T. Depth map upsampling with a confidence-based joint guided filter. Signal Processing: Image Communication, 2019, 77: 40-48. doi: 10.1016/j.image.2019.05.014
    [9] Diebel J, Thrun S. An application of Markov random fields to range sensing. In: Proceedings of the Advances in Neural Information Processing Systems. Vancouver, Canada: NIPS, 2006: 291−298
    [10] An Yao-Zu, Lu Yao, Zhao Hong. An adaptive-regularized image super-resolution algorithm. Acta Automatica Sinica, 2012, 38(4): 601-608. doi: 10.3724/SP.J.1004.2012.00601
    [11] Park J, Kim H, Tai Y W, Brown M S, Kweon I. High quality depth map upsampling for 3D-TOF cameras. In: Proceedings of the IEEE International Conference on Computer Vision. Barcelona, Spain: IEEE, 2011: 1623−1630
    [12] Yan Xu-Le, An Ping, Zheng Shuai, Zuo Yi-Fan, Shen Li-Quan. Super-resolution reconstruction for depth map based on edge enhancement. Journal of Optoelectronics·Laser, 2016, 27(4): 437-447.
    [13] Yang J, Ye X, Li K, Hou C, Wang Y. Color-guided depth recovery from RGB-D data using an adaptive autoregressive model. IEEE Transactions on Image Processing, 2014, 23(8): 3443-3458. doi: 10.1109/TIP.2014.2329776
    [14] Ferstl D, Reinbacher C, Ranftl R, Ruether M, Bichof H. Image guided depth upsampling using anisotropic total generalized variation. In: Proceedings of the IEEE International Conference on Computer Vision. Sydney, Australia: IEEE, 2013: 993−1000
    [15] Li Y, Min D, Do M N, Lu J. Fast guided global interpolation for depth and motion. In: Proceedings of the European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016: 717−733
    [16] Kiechle M, Hawe S, Kleinsteuber M. A joint intensity and depth co-sparse analysis model for depth map super-resolution. In: Proceedings of the IEEE International Conference on Computer Vision. Sydney, Australia: IEEE, 2013: 1545−1552
    [17] Dong C, Loy C C, He K, Tang X. Learning a deep convolutional network for image super-resolution. In: Proceedings of the European Conference on Computer Vision. Zurich, Switzerland: Springer, 2014: 184−199
    [18] Shi W, Caballero J, Huszár F, Totz J, Aitken A P, Bishop R et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016: 1874−1883
    [19] Lim B, Son S, Kim H, Nah S, Mu Lee K. Enhanced deep residual networks for single image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Honolulu, USA: IEEE, 2017: 136−144
    [20] Zhou Deng-Wen, Zhao Li-Juan, Duan Ran, Chai Xiao-Liang. Image super-resolution based on recursive residual networks. Acta Automatica Sinica, 2019, 45(6): 1157-1165.
    [21] Zhang Yi-Feng, Liu Yuan, Jiang Cheng, Cheng Xu. A curriculum learning approach for single image super resolution. Acta Automatica Sinica, 2020, 46(2): 274-282.
    [22] Huang L, Zhang J, Zuo Y, Wu Q. Pyramid-structured depth map super-resolution based on deep dense-residual network. IEEE Signal Processing Letters, 2019, 26(12): 1723-1727. doi: 10.1109/LSP.2019.2944646
    [23] Song X, Dai Y, Zhou D, Liu L, Li W, Li H et al. Channel attention based iterative residual learning for depth map super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020: 5631−5640
    [24] Zhao L, Bai H, Liang J, Zeng B, Wang A, Zhao Y. Simultaneous color-depth super-resolution with conditional generative adversarial networks. Pattern Recognition, 2019, 88: 356-369. doi: 10.1016/j.patcog.2018.11.028
    [25] Hui T W, Loy C C, Tang X. Depth map super-resolution by deep multi-scale guidance. In: Proceedings of the European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016: 353−369
    [26] Zuo Y, Fang Y, Yang Y, Shang X, Wang B. Residual dense network for intensity-guided depth map enhancement. Information Sciences, 2019, 495: 52-64. doi: 10.1016/j.ins.2019.05.003
    [27] Zuo Y, Wu Q, Fang Y, An P, Huang L, Chen Z. Multi-scale frequency reconstruction for guided depth map super-resolution via deep residual network. IEEE Transactions on Circuits and Systems for Video Technology, 2019, 30(2): 297-306.
    [28] Ye X, Sun B, Wang Z, Yang J, Xu R, Li H, Li B. PMBANet: Progressive multi-branch aggregation network for scene depth super-resolution. IEEE Transactions on Image Processing, 2020, 29: 7427-7442. doi: 10.1109/TIP.2020.3002664
    [29] Guo C, Li C, Guo J, Cong R, Fu H, Han P. Hierarchical features driven residual learning for depth map super-resolution. IEEE Transactions on Image Processing, 2018, 28(5): 2545-2557.
    [30] Kingma D P, Ba J. Adam: A method for stochastic optimization. In: Proceedings of the 3rd International Conference on Learning Representations. San Diego, CA, USA, 2015: 1−15
    [31] Scharstein D, Szeliski R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 2002, 47(1-3): 7-42.
    [32] Scharstein D, Pal C. Learning conditional random fields for stereo. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, USA: IEEE, 2007: 1−8
    [33] Scharstein D, Hirschmüller H, Kitajima Y, Krathwohl G, Nešić N, Wang X, et al. High-resolution stereo datasets with subpixel-accurate ground truth. In: Proceedings of the German Conference on Pattern Recognition. Münster, Germany: Springer, 2014: 31−42
    [34] Butler D J, Wulff J, Stanley G B, Black M J. A naturalistic open source movie for optical flow evaluation. In: Proceedings of the European Conference on Computer Vision. Florence, Italy: Springer, 2012: 611−625
    [35] Scharstein D, Szeliski R. High-accuracy stereo depth maps using structured light. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Madison, USA: IEEE, 2003
    [36] Hirschmuller H, Scharstein D. Evaluation of cost functions for stereo matching. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Minneapolis, USA: IEEE, 2007: 1−8
    [37] Timofte R, Rothe R, Van Gool L. Seven ways to improve example-based single image super resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016: 1865−1873
Publication history
  • Received: 2020-07-13
  • Revised: 2020-09-30
  • Available online: 2020-12-07
  • Issue date: 2022-04-13
