

Blurred Image Blind Super-Resolution Network via Kernel Estimation

Li Gong-Ping, Lu Yao, Wang Zi-Jian, Wu Zi-Wei, Wang Shun-Zhou

Citation: Li Gong-Ping, Lu Yao, Wang Zi-Jian, Wu Zi-Wei, Wang Shun-Zhou. Blurred image blind super-resolution network via kernel estimation. Acta Automatica Sinica, 2021, x(x): 1−13 doi: 10.16383/j.aas.c200987


doi: 10.16383/j.aas.c200987
Funds: Supported by the National Natural Science Foundation of China (No. 61273273), the National Key Research and Development Plan (No. 2017YFC0112001), and China Central Television (JG2018-0247)
Author Biographies:

    LI Gong-Ping Master candidate at the School of Computer Science and Technology, Beijing Institute of Technology. His research interests include computer vision and deep learning. E-mail: gongping_li@bit.edu.cn

    LU Yao Professor and Ph.D. supervisor at the School of Computer Science and Technology, Beijing Institute of Technology. His research interests include visual neural computation, image and graphics processing, video analysis, pattern recognition, and machine learning. Corresponding author of this paper. E-mail: vis_yl@bit.edu.cn

    WANG Zi-Jian Ph.D. candidate at the School of Computer Science and Technology, Beijing Institute of Technology. His research interests include computer vision and deep learning. E-mail: wangzijian@bit.edu.cn

    WU Zi-Wei Master candidate at the School of Computer Science and Technology, Beijing Institute of Technology. Her research interests include computer vision and deep learning. E-mail: wzw_cs@bit.edu.cn

    WANG Shun-Zhou Ph.D. candidate at the School of Computer Science and Technology, Beijing Institute of Technology. His research interests include computer vision and deep learning. E-mail: shunzhouwang@bit.edu.cn


  • Abstract: Super-resolution reconstruction of blurred images is challenging and of great practical value. This paper proposes a blind image super-resolution network based on blur kernel estimation, consisting of two parts: a blur kernel estimation sub-network and a kernel-adaptive image reconstruction sub-network. Given an arbitrary low-resolution image, the network first estimates the actual blur kernel from the input with the kernel estimation sub-network; conditioned on the estimated kernel, the kernel-adaptive reconstruction sub-network then performs super-resolution reconstruction of the input. Unlike other blind super-resolution methods, the proposed kernel estimation sub-network explicitly estimates the complete blur kernel from the low-resolution input, and the kernel-adaptive reconstruction sub-network dynamically adjusts the image features of each layer according to the estimated kernel, adapting to the blur of different inputs. Qualitative and quantitative experiments on multiple benchmark datasets show that the network outperforms comparable blind super-resolution networks.
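The degradation process the abstract assumes (a high-resolution image blurred by an unknown kernel, then downsampled) is commonly modeled as y = (x ⊛ k)↓s. A minimal NumPy sketch, with kernel size, σ, and scale chosen purely for illustration:

```python
import numpy as np

def gaussian_kernel(size=21, sigma=2.0):
    """Isotropic Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def degrade(hr, kernel, scale=4):
    """Blur with `kernel` (reflect-padded convolution), then downsample by `scale`."""
    ksize = kernel.shape[0]
    padded = np.pad(hr, ksize // 2, mode="reflect")
    blurred = np.empty_like(hr)
    for i in range(hr.shape[0]):
        for j in range(hr.shape[1]):
            blurred[i, j] = np.sum(padded[i:i + ksize, j:j + ksize] * kernel)
    return blurred[::scale, ::scale]

hr = np.random.rand(64, 64)          # stand-in HR image
k = gaussian_kernel(21, 2.0)
lr = degrade(hr, k, scale=4)         # 16 x 16 low-resolution result
```

The blind setting studied here means k is unknown at test time and must itself be estimated from `lr`.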
  • Fig.  1  Overview of the BESRNet

    Fig.  2  Architecture of the BKENet

    Fig.  3  Architecture of the proposed kernel-adaptive feature selection (KAFS) module

    Fig.  4  Architecture of the proposed dynamic feature selector (DFS)

    Fig.  5  Visualization of the Gaussian blur kernels used for training

    Fig.  6  (×4) Visual comparison of different SISR methods

    Fig.  7  (×4) Visual comparison on the real-world image "chip"

    Fig.  8  Visual comparison of blur kernels estimated by different methods on Set5[39]

    Fig.  9  Visual comparison of blur kernels estimated by different methods on different benchmark datasets

    Fig.  10  (×4) Visual comparison of different SISR methods with the ground-truth blur kernel as prior; zoom in for best view
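Fig. 5 visualizes the Gaussian blur kernels used for training. One common way to generate such kernels, isotropic or anisotropic with an arbitrary rotation (the exact parameter ranges used by the paper are not reproduced here), is:

```python
import numpy as np

def anisotropic_gaussian_kernel(size=21, sigma_x=3.0, sigma_y=1.0, theta=0.5):
    """Rotated anisotropic Gaussian kernel, normalized to sum to 1.
    theta is the rotation angle in radians; sigma_x = sigma_y gives
    the isotropic case."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    # rotate the coordinate grid by theta
    xr = np.cos(theta) * xx + np.sin(theta) * yy
    yr = -np.sin(theta) * xx + np.cos(theta) * yy
    k = np.exp(-0.5 * ((xr / sigma_x) ** 2 + (yr / sigma_y) ** 2))
    return k / k.sum()

k = anisotropic_gaussian_kernel()
```

Sampling sigma_x, sigma_y, and theta at random per training image yields a family of kernels like the ones shown in Fig. 5.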

    Table  1  Performance comparison of different super-resolution methods at ×2, ×4, and ×8 on the benchmark datasets (PSNR/SSIM)

    Method         Scale  Set5[39]     Set14[40]    BSD100[41]   Urban100[42]  Div2k_val[37]
    Bicubic        ×2     25.76/0.800  23.73/0.699  24.15/0.681  21.51/0.670   25.73/0.776
    RDN[14]        ×2     28.03/0.840  25.20/0.713  25.44/0.697  23.04/0.699   27.93/0.807
    RCAN[17]       ×2     24.53/0.751  23.05/0.668  23.49/0.653  21.04/0.633   24.70/0.733
    DRN[8]         ×2     —            —            —            —             —
    HAN[19]        ×2     24.45/0.714  22.90/0.650  23.29/0.634  20.91/0.615   24.54/0.708
    RDNMD          ×2     29.00/0.879  25.89/0.803  25.97/0.798  24.16/0.818   28.23/0.863
    ZSSR[30]       ×2     26.06/0.804  24.02/0.707  24.43/0.688  21.90/0.685   25.99/0.785
    IKC[22]        ×2     —            —            —            —             —
    BESRNet(ours)  ×2     30.96/0.903  27.73/0.834  27.20/0.827  25.38/0.845   29.96/0.886
    Bicubic        ×4     24.72/0.755  22.83/0.647  23.34/0.628  20.65/0.613   24.79/0.733
    RDN[14]        ×4     27.46/0.808  24.72/0.694  25.03/0.671  22.53/0.690   27.24/0.775
    RCAN[17]       ×4     22.83/0.619  21.62/0.548  22.16/0.541  19.77/0.521   23.25/0.619
    DRN[8]         ×4     23.07/0.679  21.92/0.596  22.50/0.580  20.07/0.562   23.96/0.683
    HAN[19]        ×4     22.65/0.603  20.81/0.524  22.09/0.536  19.33/0.497   22.83/0.605
    RDNMD          ×4     28.63/0.834  25.33/0.716  25.51/0.690  23.29/0.718   27.68/0.793
    ZSSR[30]       ×4     25.09/0.710  23.75/0.640  24.15/0.620  21.52/0.622   26.72/0.752
    IKC[22]        ×4     28.93/0.844  25.94/0.719  25.73/0.696  23.49/0.729   28.15/0.800
    BESRNet(ours)  ×4     29.18/0.860  26.10/0.742  25.74/0.714  23.81/0.751   28.23/0.813
    Bicubic        ×8     21.90/0.622  20.68/0.535  21.58/0.530  18.73/0.493   22.66/0.640
    RDN[14]        ×8     —            —            —            —             —
    RCAN[17]       ×8     20.91/0.518  20.15/0.468  21.10/0.463  18.51/0.434   22.26/0.567
    DRN[8]         ×8     21.09/0.536  20.76/0.499  21.31/0.493  18.81/0.471   22.67/0.594
    HAN[19]        ×8     20.30/0.492  19.88/0.486  19.53/0.467  18.17/0.401   21.47/0.529
    RDNMD          ×8     23.86/0.710  21.79/0.560  22.70/0.569  20.29/0.586   24.18/0.686
    ZSSR[30]       ×8     —            —            —            —             —
    IKC[22]        ×8     —            —            —            —             —
    BESRNet(ours)  ×8     24.15/0.722  22.64/0.600  22.87/0.571  20.54/0.599   24.75/0.691
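PSNR and SSIM, the metrics reported in Table 1, can be sketched as follows. The SSIM here is a single-window simplification for clarity; the numbers reported in such tables use the standard locally windowed version:

```python
import numpy as np

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio in dB between images with values in [0, peak]."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=1.0):
    """Global (single-window) SSIM; the standard metric averages this
    statistic over local Gaussian-weighted windows."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
ref = rng.random((32, 32))                                        # stand-in ground truth
noisy = np.clip(ref + 0.05 * rng.standard_normal((32, 32)), 0.0, 1.0)
p, s = psnr(ref, noisy), ssim_global(ref, noisy)
```

Higher is better for both metrics, which is how the rows above rank the compared methods.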

    Table  2  Quantitative comparison of blur kernel estimation methods on the benchmark datasets (MSE$\times10^{-5}$/MAE$\times10^{-3}$)

    Method Set5[39] Set14[40] BSD100[41] Urban100[42] Div2k_val[37]
    Pan et al.[33] 3.83/3.85 2.56/3.87 3.23/3.58 2.55/3.32 2.13/2.89
    BKENet w/o R 1.91/2.69 2.12/2.66 1.83/2.73 2.15/2.90 2.00/2.67
    BKENet w/ R 1.76/2.61 1.80/2.53 1.78/2.70 2.13/2.88 1.89/2.59
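The MSE/MAE figures in Table 2 compare an estimated kernel with the ground-truth kernel element-wise; given the ×10⁻⁵ and ×10⁻³ factors in the header, the tabulated numbers are presumably the raw errors expressed in those units (an assumption about the reporting convention). A sketch:

```python
import numpy as np

def kernel_errors(k_est, k_true):
    """Element-wise MSE and MAE between two blur kernels of equal shape."""
    mse = float(np.mean((k_est - k_true) ** 2))
    mae = float(np.mean(np.abs(k_est - k_true)))
    return mse, mae

size = 21
k_true = np.full((size, size), 1.0 / size ** 2)  # flat stand-in kernel
k_est = k_true + 1e-3                            # uniformly biased estimate
mse, mae = kernel_errors(k_est, k_true)
# Reported as in Table 2: mse / 1e-5 and mae / 1e-3
```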

    Table  3  (×4) Quantitative comparison of different methods with the ground-truth blur kernel as prior (PSNR/SSIM)

    Method Set5[39] Set14[40] Div2k_val[37]
    KZNet 26.45/0.818 22.59/0.702 22.75/0.752
    ZSSR w/ k 24.38/0.734 23.17/0.672 25.50/0.771
    SRNet w/o k 25.14/0.796 23.09/0.688 24.72/0.762
    SRNet w/ k 29.65/0.864 26.39/0.747 28.45/0.814

    Table  4  (×4) Quantitative comparison of the KAFS module with different numbers of DFS branches on Set5[39]

    DFSs 1 2 4
    PSNR/SSIM 29.50/0.861 29.61/0.863 29.54/0.862
    Params 12.92M 12.98M 13.12M
    Multi-Adds 151.04G 151.05G 151.06G

    Table  5  (×4) Quantitative comparison of the KAFS module with different numbers of inactive channels on Set5[39]

    Inactive channels 4 8 16 24
    PSNR/SSIM 29.60/0.860 29.65/0.864 29.61/0.863 29.56/0.857
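Tables 4 and 5 ablate the number of DFS branches and of deactivated channels in the KAFS module. The paper's exact mechanism is not reproduced on this page; the following is a purely illustrative sketch of kernel-conditioned channel deactivation, where the function name and the random projection are hypothetical stand-ins for learned components:

```python
import numpy as np

def kernel_adaptive_mask(kernel_embedding, num_channels=64, num_inactive=8, seed=0):
    """Hypothetical sketch: score feature channels from a blur-kernel
    embedding and zero out the `num_inactive` lowest-scoring channels."""
    rng = np.random.default_rng(seed)
    # stand-in for a learned linear layer mapping the embedding to channel scores
    proj = rng.standard_normal((num_channels, kernel_embedding.size))
    scores = proj @ kernel_embedding
    mask = np.ones(num_channels)
    mask[np.argsort(scores)[:num_inactive]] = 0.0
    return mask

mask = kernel_adaptive_mask(np.ones(16), num_channels=64, num_inactive=8)
# feature maps would then be gated per-channel, e.g. feat * mask[:, None, None]
```

Under this reading, Table 5 varies `num_inactive` and finds 8 deactivated channels a good trade-off.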

    Table  6  (×4) Performance comparison of BESRNet trained with different $\delta$ values on the DIV2K[37] validation set

    δ 0.01 0.05 0.1 0.5 1
    PSNR/SSIM 28.01/0.809 28.09/0.811 28.23/0.813 28.12/0.811 27.99/0.810
  • [1] Luo Y, Zhou L, Wang S, et al. Video satellite imagery super resolution via convolutional neural networks. IEEE Geoscience and Remote Sensing Letters, 2017, 14(12): 2398−2402 doi: 10.1109/LGRS.2017.2766204
    [2] Shi W, Caballero J, Ledig C, et al. Cardiac image super-resolution with global correspondence using multi-atlas patchmatch. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Berlin, Heidelberg, 2013. 9−16.
    [3] Zou W W W, Yuen P C. Very low resolution face recognition problem. IEEE Transactions on Image Processing, 2011, 21(1): 327−340
    [4] Dong C, Loy C C, He K, et al. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 38(2): 295−307
    [5] Kim J, Kwon Lee J, Mu Lee K. Accurate image super-resolution using very deep convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 1646−1654.
    [6] Ledig C, Theis L, Huszár F, et al. Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Hawaii, USA: IEEE, 2017. 4681−4690.
    [7] Lim B, Son S, Kim H, et al. Enhanced deep residual networks for single image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition workshops. Hawaii, USA: IEEE, 2017. 136−144.
    [8] Guo Y, Chen J, Wang J, et al. Closed-loop matters: Dual regression networks for single image super-resolution. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 5407−5416.
    [9] Kim J, Lee J K, Lee K M. Deeply-recursive convolutional network for image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 1637−1645.
    [10] Tai Y, Yang J, Liu X. Image super-resolution via deep recursive residual network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Hawaii, USA: IEEE, 2017. 3147−3155.
    [11] Zhou Deng-Wen, Zhao Li-Juan, Duan Ran, Chai Xiao-Liang. Image super-resolution based on recursive residual networks. Acta Automatica Sinica, 2019, 45(6): 1157−1165
    [12] Han W, Chang S, Liu D, et al. Image super-resolution via dual-state recurrent networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 1654−1663.
    [13] Tong T, Li G, Liu X, et al. Image super-resolution using dense skip connections. In: Proceedings of the IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 4799−4807.
    [14] Zhang Y, Tian Y, Kong Y, et al. Residual dense network for image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 2472−2481.
    [15] Liu J, Zhang W, Tang Y, et al. Residual feature aggregation network for image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 2359−2368.
    [16] Li Jin-Xin, Huang Zhi-Yong, Li Wen-Bin, Zhou Deng-Wen. Image super-resolution based on multi-hierarchical features fusion network. Acta Automatica Sinica, 2021, x(x): 1−11
    [17] Zhang Y, Li K, Li K, et al. Image super-resolution using very deep residual channel attention networks. In: Proceedings of the European Conference on Computer Vision. Munich, Germany: Springer, 2018. 286−301.
    [18] Dai T, Cai J, Zhang Y, et al. Second-order attention network for single image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 11065−11074.
    [19] Niu B, Wen W, Ren W, et al. Single image super-resolution via a holistic attention network. In: Proceedings of European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 191−207.
    [20] Bulat A, Yang J, Tzimiropoulos G. To learn image super-resolution, use a GAN to learn how to do image degradation first. In: Proceedings of the European Conference on Computer Vision. Munich, Germany: Springer, 2018. 185−200.
    [21] Zhang K, Zuo W, Zhang L. Learning a single convolutional super-resolution network for multiple degradations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 3262−3271.
    [22] Gu J, Lu H, Zuo W, et al. Blind super-resolution with iterative kernel correction. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 1604−1613.
    [23] Wang X, Yu K, Dong C, et al. Recovering realistic texture in image super-resolution by deep spatial feature transform. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 606−615.
    [24] Luo Z, Huang Y, Li S, et al. Unfolding the alternating optimization for blind super resolution. arXiv preprint arXiv: 2010.02631, 2020.
    [25] Chang Zhen-Chun, Yu Jing, Xiao Chuang-Bai, Sun Wei-Dong. Single image blind deconvolution using sparse representation and structural self-similarity. Acta Automatica Sinica, 2017, 43(11): 1908−1919
    [26] Pan J, Lin Z, Su Z, et al. Robust kernel estimation with outliers handling for image deblurring. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 2800−2808.
    [27] Yan R, Shao L. Blind image blur estimation via deep learning. IEEE Transactions on Image Processing, 2016, 25(4): 1910−1921
    [28] Yuan Y, Liu S, Zhang J, et al. Unsupervised image super-resolution using cycle-in-cycle generative adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Salt Lake City, USA: IEEE, 2018. 701−710.
    [29] Zhang Y, Liu S, Dong C, et al. Multiple cycle-in-cycle generative adversarial networks for unsupervised image super-resolution. IEEE Transactions on Image Processing, 2019, 29: 1101−1112
    [30] Shocher A, Cohen N, Irani M. "Zero-shot" super-resolution using deep internal learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 3118−3126.
    [31] Soh J W, Cho S, Cho N I. Meta-transfer learning for zero-shot super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 3516−3525.
    [32] Efrat N, Glasner D, Apartsin A, et al. Accurate blur models vs. image priors in single image super-resolution. In: Proceedings of the IEEE International Conference on Computer Vision. Sydney, NSW, Australia: IEEE, 2013. 2832−2839.
    [33] Pan J, Sun D, Pfister H, et al. Blind image deblurring using dark channel prior. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 1628−1636.
    [34] He K, Zhang X, Ren S, et al. Identity mappings in deep residual networks. In: Proceedings of European Conference on Computer Vision. Springer, Cham, 2016. 630−645.
    [35] Su Z, Fang L, Kang W, et al. Dynamic group convolution for accelerating convolutional neural networks. arXiv preprint arXiv: 2007.04242, 2020.
    [36] Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In: Proceedings of International Conference on Medical image computing and computer-assisted intervention. Springer, Cham, 2015. 234−241.
    [37] Agustsson E, Timofte R. NTIRE 2017 challenge on single image super-resolution: Dataset and study. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Hawaii, USA: IEEE, 2017. 126−135.
    [38] Timofte R, Agustsson E, Van Gool L, et al. NTIRE 2017 challenge on single image super-resolution: Methods and results. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Hawaii, USA: IEEE, 2017. 114−125.
    [39] Bevilacqua M, Roumy A, Guillemot C, et al. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In: British Machine Vision Conference. Springer, Cham, 2012. 131−1.
    [40] Zeyde R, Elad M, Protter M. On single image scale-up using sparse-representations. In: International conference on curves and surfaces. Springer, Berlin, Heidelberg, 2010: 711−730.
    [41] Martin D, Fowlkes C, Tal D, et al. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proceedings of the IEEE International Conference on Computer Vision. Vancouver, British Columbia, Canada: IEEE, 2001. 416−423.
    [42] Huang J B, Singh A, Ahuja N. Single image super-resolution from transformed self-exemplars. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015. 5197−5206.
    [43] Kingma D P, Ba J. Adam: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980, 2014.
Publication History
  • Received: 2020-11-26
  • Accepted: 2021-04-16
  • Published online: 2021-05-26
