Image Super-resolution Based on Multi-hierarchical Features Fusion Network

Li Jin-Xin, Huang Zhi-Yong, Li Wen-Bin, Zhou Deng-Wen

Citation: Li Jin-Xin, Huang Zhi-Yong, Li Wen-Bin, Zhou Deng-Wen. Image super-resolution based on multi-hierarchical features fusion network. Acta Automatica Sinica, 2023, 49(1): 161−171 doi: 10.16383/j.aas.c200585

doi: 10.16383/j.aas.c200585

More Information
    Author Bio:

    LI Jin-Xin  Master student at the School of Control and Computer Engineering, North China Electric Power University. He received his bachelor degree from Hebei University of Architecture in 2018. His research interest covers computer vision and deep learning. E-mail: 1182227091@ncepu.edu.cn

    HUANG Zhi-Yong  Master student at the School of Control and Computer Engineering, North China Electric Power University. He received his bachelor degree from North China Electric Power University in 2018. His research interest covers computer vision and deep learning. E-mail: 1182227193@ncepu.edu.cn

    LI Wen-Bin  Master student at the School of Control and Computer Engineering, North China Electric Power University. He received his bachelor degree from Shanghai University of Electric Power in 2017. His research interest covers computer vision and deep learning. E-mail: 1182227108@ncepu.edu.cn

    ZHOU Deng-Wen  Professor at the School of Control and Computer Engineering, North China Electric Power University. His main research interest is the application of neural networks in image processing. Corresponding author of this paper. E-mail: zdw@ncepu.edu.cn

  • Abstract: Deep convolutional neural networks have significantly improved the performance of single image super-resolution. Deeper networks generally achieve better performance, but deepening a network sharply increases its parameter count, which limits its use on resource-constrained devices such as smartphones. This paper proposes a lightweight single image super-resolution network that fuses multi-hierarchical features; its main building block is a dual-nested residual block. To extract features more effectively while reducing the number of parameters, each residual block adopts a symmetric structure: the number of channels is first expanded twice and then compressed twice. Within each residual block, an autocorrelation weight (ACW) unit is added to fuse the feature information of different channels with learned weights. Experiments show that the proposed method significantly outperforms current methods of the same kind.
    1) This structure first applies a convolution and an activation to the feature information $x$, and then a second convolution to obtain $\hat x$; the final output of the structure is $x + \hat x$ (an illustrative sketch of this residual pattern is given below).
    2) https://github.com/thstkdgus35/EDSR-PyTorch
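
As a rough illustration of the residual pattern in footnote 1 and the block design described in the abstract (channels expanded twice, then compressed twice, with a per-channel weighting step), the following PyTorch snippet is a minimal sketch only: the expansion factor, layer widths, and the squeeze-and-excitation-style gate standing in for the paper's autocorrelation weight (ACW) unit are assumptions made for illustration, not the authors' released implementation.

```python
# Illustrative sketch only (PyTorch), NOT the authors' code.
import torch
import torch.nn as nn


class ChannelWeight(nn.Module):
    """Weights each channel by a learned, input-dependent scalar (ACW stand-in)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global average per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)                            # weighted channel fusion


class SymmetricResidualBlock(nn.Module):
    """Expand-expand-compress-compress residual block: y = x + f(x)."""
    def __init__(self, channels: int = 48, expand: int = 2):
        super().__init__()
        mid, wide = channels * expand, channels * expand * expand
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 3, padding=1),   # expand 1
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, wide, 3, padding=1),       # expand 2
            nn.ReLU(inplace=True),
            nn.Conv2d(wide, mid, 3, padding=1),       # compress 1
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 3, padding=1),   # compress 2
            ChannelWeight(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)                       # identity skip, as in footnote 1


if __name__ == "__main__":
    block = SymmetricResidualBlock(channels=48)
    print(block(torch.randn(1, 48, 32, 32)).shape)    # torch.Size([1, 48, 32, 32])
```

Because the identity skip keeps the output at the input width, blocks of this form can be nested or chained without extra projection layers.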
  • Fig. 1  Our multi-hierarchical feature fusion network structure and the residual group structure ((a) The architecture of the multi-hierarchical feature fusion network; (b) The structure of the residual group)

    Fig. 2  The structures of different residual blocks

    Fig. 3  The structure of the autocorrelation weight unit

    Fig. 4  Visual qualitative comparison of $\times$4 super-resolution on the standard test datasets

    Fig. 5  Visual qualitative comparison of $\times$8 super-resolution on the standard test datasets

    Table 1  Average PSNRs and parameter counts of models with different numbers of DRBs in the residual group, for ×4 super-resolution on the Set5 and DIV2K-10 datasets under 200 epochs

    Number of DRBs  Params (M)  Set5 PSNR (dB)  DIV2K-10 PSNR (dB)
    5  1.23  32.23  29.51
    6  1.47  32.26  29.55
    7  1.71  32.25  29.55

    Table 2  Average PSNRs of the models with different convolutional kernel settings for the SFMU branches, for ×4 super-resolution on the Set5 and DIV2K-10 datasets under 200 epochs

    Kernel settings  Set5 PSNR (dB)  DIV2K-10 PSNR (dB)
    —      32.22  29.52
    1 1 1  32.18  29.50
    3 3 3  32.24  29.53
    5 5 5  32.25  29.53
    1 3 5  32.26  29.55
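
Table 2 above compares kernel settings for the three shallow feature mapping unit (SFMU) branches, with the mixed 1/3/5 setting performing best. The following is a minimal sketch of one plausible reading of that design; the branch count, the 1×1 fusion convolution, and the channel width are assumptions made for illustration rather than the paper's exact SFMU.

```python
# Illustrative sketch only (PyTorch): parallel shallow-feature branches
# with kernel sizes 1, 3 and 5, concatenated and fused back to the base width.
import torch
import torch.nn as nn


class ShallowFeatureMappingUnit(nn.Module):
    def __init__(self, in_channels: int = 3, channels: int = 48,
                 kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_channels, channels, k, padding=k // 2)
             for k in kernel_sizes]
        )
        # Fuse the concatenated branch outputs back to `channels` feature maps.
        self.fuse = nn.Conv2d(channels * len(kernel_sizes), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


if __name__ == "__main__":
    sfmu = ShallowFeatureMappingUnit()
    print(sfmu(torch.randn(1, 3, 48, 48)).shape)   # torch.Size([1, 48, 48, 48])
```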

    Table 3  Average PSNRs of different models for ×4 super-resolution on the Set5 and DIV2K-10 datasets under 200 epochs

    Model  Residual block params (K)  Set5 PSNR (dB)  DIV2K-10 PSNR (dB)
    Model I   73.8  32.11  29.42
    Model II  53.5  32.12  29.47

    Table 4  Average PSNRs of the models with/without the ACW unit for ×4 super-resolution on the Set5 and DIV2K-10 datasets under 200 epochs

    Model  Set5 PSNR (dB)  DIV2K-10 PSNR (dB)
    Without ACW  32.11  29.42
    With ACW     32.13  29.45

    Table 5  Average PSNRs of the models with different reconstruction modules for ×4 super-resolution on the Set5 and DIV2K-10 datasets under 200 epochs

    Reconstruction module  Params (K)  Set5 PSNR (dB)  DIV2K-10 PSNR (dB)
    EDSR reconstruction unit  297.16  32.11  29.42
    MPRU                      9.36    32.13  29.47
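
Table 5's baseline is the EDSR reconstruction unit, which in the public EDSR code upsamples by sub-pixel convolution (a 3×3 convolution followed by PixelShuffle for each ×2 stage, then a convolution to RGB). The sketch below reproduces that generic pattern; with 64 feature channels its parameter count comes to roughly 297 K, consistent with Table 5, but the channel width is inferred, and the paper's lighter MPRU is not specified here and therefore not reproduced.

```python
# Illustrative sketch only (PyTorch): an EDSR-style reconstruction unit.
import torch
import torch.nn as nn


class EDSRStyleUpsampler(nn.Sequential):
    """Conv + PixelShuffle per x2 stage, then a final conv to 3 RGB channels."""
    def __init__(self, channels: int = 64, scale: int = 4):
        assert scale in (2, 4, 8), "power-of-two scales only in this sketch"
        layers = []
        while scale > 1:
            layers += [nn.Conv2d(channels, channels * 4, 3, padding=1),
                       nn.PixelShuffle(2)]      # doubles the spatial resolution
            scale //= 2
        layers.append(nn.Conv2d(channels, 3, 3, padding=1))  # RGB output
        super().__init__(*layers)


if __name__ == "__main__":
    up = EDSRStyleUpsampler(channels=64, scale=4)
    print(up(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 3, 128, 128])
    print(sum(p.numel() for p in up.parameters()))  # ~297 K parameters
```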

    Table 6  The average PSNRs/SSIMs of different SISR methods

    Scale  Model  Params (K)  Set14 PSNR (dB)/SSIM  B100 PSNR (dB)/SSIM  Urban100 PSNR (dB)/SSIM  Manga109 PSNR (dB)/SSIM
    ×2  SRCNN  57  32.42/0.9063  31.36/0.8879  29.50/0.8946  35.74/0.9661
    ×2  FSRCNN  12  32.63/0.9088  31.53/0.8920  29.88/0.9020  36.67/0.9694
    ×2  VDSR  665  33.03/0.9124  31.90/0.8960  30.76/0.9140  37.22/0.9729
    ×2  DRCN  1774  33.04/0.9118  31.85/0.8942  30.75/0.9133  37.63/0.9723
    ×2  LapSRN  813  33.08/0.9130  31.80/0.8950  30.41/0.9100  37.27/0.9740
    ×2  DRRN  297  33.23/0.9136  32.05/0.8973  31.23/0.9188  37.92/0.9760
    ×2  MemNet  677  33.28/0.9142  32.08/0.8978  31.31/0.9195  37.72/0.9740
    ×2  SRMDNF  1513  33.32/0.9150  32.05/0.8980  31.33/0.9200  38.07/0.9761
    ×2  CARN  1592  33.52/0.9166  32.09/0.8978  31.92/0.9256  38.36/0.9765
    ×2  MSRN  5930  33.70/0.9186  32.23/0.9002  32.29/0.9303  38.69/0.9772
    ×2  SRFBN-S  282  33.35/0.9156  32.00/0.8970  31.41/0.9207  38.06/0.9757
    ×2  CBPN  1036  33.60/0.9171  32.17/0.8989  32.14/0.9279  —
    ×2  IMDN  694  33.63/0.9177  32.19/0.8996  32.17/0.9283  38.88/0.9774
    ×2  MHFN (ours)  1463  33.79/0.9196  32.20/0.8998  32.40/0.9301  38.88/0.9774
    ×3  SRCNN  57  29.28/0.8209  28.41/0.7863  26.24/0.7989  30.59/0.9107
    ×3  FSRCNN  12  29.43/0.8242  28.53/0.7910  26.43/0.8080  30.98/0.9212
    ×3  VDSR  665  29.77/0.8314  28.82/0.7976  27.14/0.8279  32.01/0.9310
    ×3  DRCN  1774  29.76/0.8311  28.80/0.7963  27.15/0.8276  32.31/0.9328
    ×3  DRRN  297  29.96/0.8349  28.95/0.8004  27.53/0.8378  32.74/0.9390
    ×3  MemNet  677  30.00/0.8350  28.96/0.8001  27.56/0.8376  32.51/0.9369
    ×3  SRMDNF  1530  30.04/0.8370  28.97/0.8030  27.57/0.8400  33.00/0.9403
    ×3  CARN  1592  30.29/0.8407  29.06/0.8034  27.38/0.8404  33.50/0.9440
    ×3  MSRN  6114  30.41/0.8437  29.15/0.8064  28.33/0.8561  33.67/0.9456
    ×3  SRFBN-S  376  30.10/0.8372  28.96/0.8010  27.66/0.8415  33.02/0.9404
    ×3  IMDN  703  30.32/0.8417  29.09/0.8046  28.17/0.8519  33.61/0.9445
    ×3  MHFN (ours)  1465  30.40/0.8428  29.13/0.8056  28.35/0.8557  33.85/0.9460
    ×4  SRCNN  57  27.49/0.7503  26.90/0.7101  24.52/0.7221  27.66/0.8505
    ×4  FSRCNN  12  27.59/0.7535  26.98/0.7150  24.62/0.7280  27.90/0.8517
    ×4  VDSR  665  28.01/0.7674  27.29/0.7251  25.18/0.7524  28.83/0.8809
    ×4  DRCN  1774  28.02/0.7670  27.23/0.7233  25.14/0.7510  28.98/0.8816
    ×4  LapSRN  813  28.19/0.7720  27.32/0.7280  25.21/0.7560  29.09/0.8845
    ×4  DRRN  297  28.21/0.7720  27.38/0.7284  25.44/0.7638  29.46/0.8960
    ×4  MemNet  677  28.26/0.7723  27.40/0.7281  25.50/0.7630  29.42/0.8942
    ×4  SRMDNF  1555  28.35/0.7770  27.49/0.7340  25.68/0.7730  30.09/0.9024
    ×4  CARN  1592  28.60/0.7806  27.58/0.7349  26.07/0.7837  30.47/0.9084
    ×4  MSRN  6078  28.63/0.7836  27.61/0.7380  26.22/0.7911  30.57/0.9103
    ×4  SRFBN-S  483  28.45/0.7779  27.44/0.7313  25.71/0.7719  29.91/0.9008
    ×4  CBPN  1197  28.63/0.7813  27.58/0.7356  26.14/0.7869  —
    ×4  IMDN  715  28.58/0.7811  27.56/0.7353  26.04/0.7838  30.45/0.9075
    ×4  MHFN (ours)  1468  28.66/0.7830  27.61/0.7371  26.27/0.7909  30.74/0.9114
    ×8  SRCNN  57  23.86/0.5443  24.14/0.5043  21.29/0.5133  22.46/0.6606
    ×8  FSRCNN  12  23.94/0.5482  24.21/0.5112  21.32/0.5090  22.39/0.6357
    ×8  VDSR  655  23.20/0.5110  24.34/0.5169  21.48/0.5289  22.73/0.6688
    ×8  DRCN  1774  24.25/0.5510  24.49/0.5168  21.71/0.5289  23.20/0.6686
    ×8  LapSRN  813  24.45/0.5792  24.54/0.5293  21.81/0.5555  23.39/0.7068
    ×8  MSRN  6226  24.88/0.5961  24.70/0.5410  22.37/0.5977  24.28/0.7517
    ×8  MHFN (ours)  1490  25.02/0.6426  24.80/0.5968  22.46/0.6170  24.60/0.7811
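
For context on the numbers in Tables 1–6, PSNR is reported in dB and SSIM is the index of Wang et al. [34]. The snippet below is a minimal sketch of how PSNR is commonly computed in the SR literature (on the luminance channel, with a scale-sized border cropped); the exact evaluation protocol of each compared method may differ slightly.

```python
# Illustrative sketch only: typical Y-channel PSNR for super-resolution evaluation.
import numpy as np


def rgb_to_y(img: np.ndarray) -> np.ndarray:
    """ITU-R BT.601 luminance of an RGB image with values in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0


def psnr(sr: np.ndarray, hr: np.ndarray, scale: int) -> float:
    """PSNR (dB) between a super-resolved image and its ground truth."""
    sr_y = rgb_to_y(sr.astype(np.float64))[scale:-scale, scale:-scale]
    hr_y = rgb_to_y(hr.astype(np.float64))[scale:-scale, scale:-scale]
    mse = np.mean((sr_y - hr_y) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)


if __name__ == "__main__":
    hr = np.random.randint(0, 256, (128, 128, 3))
    sr = np.clip(hr + np.random.randint(-3, 4, hr.shape), 0, 255)
    print(f"{psnr(sr, hr, scale=4):.2f} dB")
```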
  • [1] Shi W Z, Caballero J, Ledig C, Zhuang X H, Bai W J, Bhatia K, et al. Cardiac image super-resolution with global correspondence using multi-atlas PatchMatch. In: Proceedings of the 16th International Conference on Medical Image Computing and Computer-Assisted Intervention. Nagoya, Japan: 2013. 9−16
    [2] Luo Y M, Zhou L G, Wang S, Wang Z Y. Video satellite imagery super resolution via convolutional neural networks. IEEE Geoscience and Remote Sensing Letters, 2017, 14(12): 2398-2402 doi: 10.1109/LGRS.2017.2766204
    [3] Zou W W W, Yuen P C. Very low resolution face recognition problem. IEEE Transactions on Image Processing, 2012, 21(1): 327-340 doi: 10.1109/TIP.2011.2162423
    [4] Sun Xu, Li Xiao-Guang, Li Jia-Feng, Zhuo Li. Review on deep learning based image super-resolution restoration algorithms. Acta Automatica Sinica, 2017, 43(5): 697-709
    [5] Zhou Deng-Wen, Zhao Li-Juan, Duan Ran, Chai Xiao-Liang. Image super-resolution based on recursive residual networks. Acta Automatica Sinica, 2019, 45(6): 1157-1165
    [6] Zhang Yi-Feng, Liu Yuan, Jiang Cheng, Cheng Xu. A curriculum learning approach for single image super resolution. Acta Automatica Sinica, 2020, 46(2): 274-282
    [7] Dong C, Loy C C, He K M, Tang X O. Learning a deep convolutional network for image super-resolution. In: Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: 2014. 184−199
    [8] Kim J, Kwon Lee J, Mu Lee K. Deeply-recursive convolutional network for image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: 2016. 1637−1645
    [9] Tai Y, Yang J, Liu X M. Image super-resolution via deep recursive residual network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: 2017. 2790−2798
    [10] Ahn N, Kang B, Sohn K A. Fast, accurate, and lightweight super-resolution with cascading residual network. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: 2018. 256−272
    [11] Li J C, Fang F M, Mei K F, Zhang G X. Multi-scale residual network for image super-resolution. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: 2018. 527−542
    [12] Lim B, Son S, Kim H, Nah S, Lee K M. Enhanced deep residual networks for single image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Honolulu, USA: 2017. 1132−1140
    [13] Zhang Y L, Tian Y P, Kong Y, Zhong B N, Fu Y. Residual dense network for image super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: 2018. 2472−2481
    [14] Lecun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11): 2278-2324 doi: 10.1109/5.726791
    [15] Kim J, Kwon Lee J, Mu Lee K. Accurate image super-resolution using very deep convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: 2016. 1646−1654
    [16] He K M, Zhang X Y, Ren S Q, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 770−778
    [17] Li Z, Yang J L, Liu Z, Yang X M, Jeon G, Wu W. Feedback network for image super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: 2019. 3862−3871
    [18] Hui Z, Gao X B, Yang Y C, Wang X M. Lightweight image super-resolution with information multi-distillation network. In: Proceedings of the 27th ACM International Conference on Multimedia. Nice, France: 2019. 2024−2032
    [19] Zhu F Y, Zhao Q J. Efficient single image super-resolution via hybrid residual feature learning with compact back-projection network. In: Proceedings of the IEEE/CVF International Conference on Computer Vision Workshop. Seoul, South Korea: 2019. 2453−2460
    [20] Lai W S, Huang J B, Ahuja N, Yang M H. Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: 2017. 5835−5843
    [21] Timofte R, Agustsson E, van Gool L, Yang M H, Zhang L, Lim B, et al. NTIRE 2017 challenge on single image super-resolution: Methods and results. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Honolulu, USA: 2017. 1110−1121
    [22] Liu J, Zhang W J, Tang Y T, Tang J, Wu G S. Residual feature aggregation network for image super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: 2020. 2356−2365
    [23] Zhang Y L, Li K P, Li K, Wang L C, Zhong B N, Fu Y. Image super-resolution using very deep residual channel attention networks. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: 2018. 294−310
    [24] Sandler M, Howard A, Zhu M L, Zhmoginov A, Chen L C. MobileNetV2: Inverted residuals and linear bottlenecks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: 2018. 4510−4520
    [25] Szegedy C, Ioffe S, Vanhoucke V, Alemi A A. Inception-v4, inception-ResNet and the impact of residual connections on learning. In: Proceedings of the 31st AAAI Conference on Artificial Intelligence. San Francisco, USA: 2017. 4278−4284
    [26] Wang Z H, Chen J, Hoi S C H. Deep learning for image super-resolution: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(10): 3365-3387 doi: 10.1109/TPAMI.2020.2982166
    [27] Kingma D P, Ba J. Adam: A method for stochastic optimization. In: Proceedings of the 3rd International Conference on Learning Representations. San Diego, USA: 2015.
    [28] Salimans T, Kingma D P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In: Proceedings of the 30th International Conference on Neural Information Processing Systems. Barcelona, Spain: 2016. 901− 909
    [29] Bevilacqua M, Roumy A, Guillemot C, Alberi Morel M L. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In: Proceedings of the British Machine Vision Conference. Surrey, UK: 2012. 1−10
    [30] Zeyde R, Elad M, Protter M. On single image scale-up using sparse-representations. In: Proceedings of the 7th International Conference on Curves and Surfaces. Avignon, France: 2010. 711−730
    [31] Martin D, Fowlkes C, Tal D, Malik J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proceedings of the 8th International Conference on Computer Vision. Vancouver, Canada: 2001. 416−423
    [32] Huang J B, Singh A, Ahuja N. Single image super-resolution from transformed self-exemplars. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: 2015. 5197−5206
    [33] Matsui Y, Ito K, Aramaki Y, Fujimoto A, Ogawa T, Yamasaki T, et al. Sketch-based manga retrieval using manga109 dataset. Multimedia Tools and Applications, 2017, 76(20): 21811-21838 doi: 10.1007/s11042-016-4020-z
    [34] Wang Z, Bovik A C, Sheikh H R, Simoncelli E P. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 2004, 13(4): 600-612 doi: 10.1109/TIP.2003.819861
    [35] Dong C, Loy C C, Tang X O. Accelerating the super-resolution convolutional neural network. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, Netherlands: 2016. 391−407
    [36] Tai Y, Yang J, Liu X M, Xu C Y. MemNet: A persistent memory network for image restoration. In: Proceedings of the IEEE International Conference on Computer Vision. Venice, Italy: 2017. 4539−4547
    [37] Zhang K, Zuo W M, Zhang L. Learning a single convolutional super-resolution network for multiple degradations. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: 2018. 3262−3271
Publication history
  • Received: 2020-07-24
  • Accepted: 2020-12-14
  • Published online: 2021-01-14
  • Issue date: 2023-01-07
