基于自适应级联的注意力网络的超分辨率重建

陈一鸣 周登文

引用本文: 陈一鸣, 周登文. 基于自适应级联的注意力网络的超分辨率重建. 自动化学报, 2022, 48(8): 1950−1960 doi: 10.16383/j.aas.c200035
Citation: Chen Yi-Ming, Zhou Deng-Wen. Adaptive attention network for image super-resolution. Acta Automatica Sinica, 2022, 48(8): 1950−1960 doi: 10.16383/j.aas.c200035

基于自适应级联的注意力网络的超分辨率重建

doi: 10.16383/j.aas.c200035
详细信息
    作者简介:

    陈一鸣:现为北京大学计算机学院硕士研究生. 主要研究方向为计算机视觉, 深度学习和生物计算. E-mail: 88143221@163.com

    周登文:华北电力大学控制与计算机工程学院教授. 主要研究方向为图像处理, 神经网络和深度学习在图像处理和计算机视觉中的应用, 以及图像超分辨率技术. 本文通信作者. E-mail: zdw@ncepu.edu.cn

Adaptive Attention Network for Image Super-resolution

More Information
    Author Bio:

    CHEN Yi-Ming Master student at the School of Computer Science, Peking University. His research interest covers computer vision, deep learning, and biocomputing

    ZHOU Deng-Wen Professor at the School of Control and Computer Engineering, North China Electric Power University. His research interest covers image processing, the application of neural networks and deep learning to image processing and computer vision, and image super-resolution technology. Corresponding author of this paper

  • Abstract: Deep convolutional neural networks have significantly improved the performance of single image super-resolution. In general, the deeper the network, the better the performance. However, deepening a network usually increases the number of parameters and the computational load sharply, which limits its use on resource-constrained mobile devices. This paper proposes a single image super-resolution method based on a lightweight adaptive cascading attention network. In particular, a local pixel-wise attention (LPA) block is proposed, which assigns a separate weight to every pixel in every channel of the input features, so that more accurate high-frequency information is selected for reconstructing high-quality images. In addition, an adaptive cascading residual connection is designed, which adaptively combines the hierarchical features produced by the network and enables better feature reuse. Finally, to make full use of the information produced by the network, a multi-scale global adaptive reconstruction (MGAR) block is proposed: it processes the information produced at different depths of the network with convolution kernels of different sizes, improving reconstruction quality. Compared with the best current methods of a similar kind, the proposed method uses fewer parameters and achieves significantly better objective and subjective results.
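The three components named in the abstract, the local pixel-wise attention (LPA) block, the adaptive cascading residual connection, and the multi-scale global adaptive reconstruction (MGAR) block, can be illustrated with the PyTorch sketch below. It is only a plausible rendering of the textual description, not the authors' code: the kernel sizes, channel counts, fusion layer, and the zero-initialized learnable scalar are assumptions.

```python
# Hedged sketch of the ideas described in the abstract; structural details are assumptions.
import torch
import torch.nn as nn

class LocalPixelAttention(nn.Module):
    """LPA idea: a small convolutional branch ending in a sigmoid gate produces a
    weight map with the same shape as the input, so every pixel of every channel
    is re-weighted individually (cf. Table 4 for the sigmoid-gate ablation)."""
    def __init__(self, channels: int):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.mask(x)  # element-wise re-weighting


class AdaptiveResidual(nn.Module):
    """Adaptive residual idea: a learnable scalar lets the network decide how
    strongly the transformed features are mixed back into the identity path."""
    def __init__(self, body: nn.Module):
        super().__init__()
        self.body = body
        self.lam = nn.Parameter(torch.zeros(1))  # zero initialization is an assumption

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.lam * self.body(x)


class MultiScaleReconstruction(nn.Module):
    """MGAR idea: feature maps taken from different depths are processed with
    convolutions of different kernel sizes and then fused into the output image
    (the 3/5/7/9 kernel sizes follow the ablations in Tables 1 and 2)."""
    def __init__(self, channels: int, kernel_sizes=(3, 5, 7, 9), out_channels: int = 3):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2) for k in kernel_sizes]
        )
        self.fuse = nn.Conv2d(channels * len(kernel_sizes), out_channels, 3, padding=1)

    def forward(self, feats):
        # feats: one feature map per branch, taken from different network depths
        outs = [conv(f) for conv, f in zip(self.branches, feats)]
        return self.fuse(torch.cat(outs, dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)
    print(LocalPixelAttention(64)(x).shape)                            # torch.Size([1, 64, 48, 48])
    print(AdaptiveResidual(nn.Conv2d(64, 64, 3, padding=1))(x).shape)  # torch.Size([1, 64, 48, 48])
    feats = [torch.randn(1, 64, 48, 48) for _ in range(4)]
    print(MultiScaleReconstruction(64)(feats).shape)                   # torch.Size([1, 3, 48, 48])
```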
  • 图  1  自适应级联的注意力网络架构(ACAN)

    Fig.  1  Adaptive cascading attention network architecture (ACAN)

    图  2  提取及掩膜模块

    Fig.  2  The extract and mask block

    图  3  特征提取模块

    Fig.  3  Feature extracting block

    图  4  局部像素级注意力模块

    Fig.  4  Local pixel-wise attention block

    图  5  多尺度全局自适应重建模块

    Fig.  5  Multi-scale global adaptive reconstruction block

    图  6  非线性映射模块中每个HFEB输出特征的可视化结果

    Fig.  6  Visual results of each HFEB's output feature in the non-linear mapping block

    图  7  包含不同个数的HFEB的ACAN在验证集上的性能比较

    Fig.  7  Performance comparison of ACAN on validation set with different numbers of HFEB

    图  8  包含不同个数的HFEB的ACAN在Set5测试集上的性能比较

    Fig.  8  Performance comparison of ACAN on the Set5 test set with different numbers of HFEB

    图  9  视觉比较结果

    Fig.  9  Visual comparison of images

    表  1  不同卷积核的排列顺序对重建效果的影响

    Table  1  Effect of convolution kernels with different order on reconstruction performance

    Ordering of convolution groups (kernel sizes)    9753     3579     3333     9999
    PSNR (dB)                                        35.569   35.514   35.530   35.523

    表  2  不同层次特征对重建效果的影响

    Table  2  Impact of different hierarchical features on reconstruction performance

    Kernel size of the removed convolution group    3        5        7        9
    PSNR (dB)                                       35.496   35.517   35.541   35.556

    表  3  原始DBPN (O-DBPN)和使用MGAR模块的DBPN (M-DBPN)的客观效果比较

    Table  3  Objective comparison between the original DBPN (O-DBPN) and DBPN with the MGAR module (M-DBPN)

    DBPN with different reconstruction modules    PSNR (dB)
    O-DBPN                                        35.343
    M-DBPN                                        35.399

    表  4  Sigmoid门函数的有无对LPA模块性能的影响

    Table  4  Influence of the Sigmoid gate function on the LPA block

    Sigmoid gate function    PSNR (dB)
    With                     35.569
    Without                  35.497

    表  5  不同残差的连接方式对重建效果的影响

    Table  5  Effect of different residual connection methods on reconstruction performance

    Residual connection type                        PSNR (dB)
    Residual connection                             35.515
    No residual connection                          35.521
    Residual connection with adaptive parameters    35.569

    表  6  使用和未使用LPA模块的客观效果比较

    Table  6  Comparison of objective effects of ACAN with and without LPA module

    LPA block    PSNR (dB)
    With         35.569
    Without      35.489

    表  7  NLMB使用3种不同连接方式对重建效果的影响

    Table  7  Impact of three different connection methods in the NLMB on reconstruction performance

    Skip connection used                      PSNR (dB)
    Residual connection                       35.542
    Cascading connection                      35.502
    Adaptive cascading residual connection    35.569
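    Schematically, the three variants compared in Table 7 differ as follows; this is a sketch based only on the block names, and the exact placement of the learnable scalars in ACAN is an assumption.

```latex
% x: block input, F(x): transformed features, [x, F(x)]: channel concatenation,
% W: a fusion convolution, \lambda_i: learnable scalars (assumed placement)
\begin{aligned}
\text{residual:}                    &\quad y = x + F(x)\\
\text{cascading:}                   &\quad y = W\,[\,x,\ F(x)\,]\\
\text{adaptive cascading residual:} &\quad y = \lambda_{1}\,x + \lambda_{2}\,W\,[\,x,\ F(x)\,]
\end{aligned}
```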

    表  8  不同网络模型深度对重建性能的影响

    Table  8  Impact of different network depths on reconstruction performance

    T            6        7        8        9
    PSNR (dB)    35.530   35.538   35.569   35.551

    表  9  各种SISR方法的平均PSNR值与SSIM值

    Table  9  Average PSNR/SSIM of various SISR methods

    Scale   Model          Params    Set5              Set14             B100              Urban100          Manga109
                                     PSNR / SSIM       PSNR / SSIM       PSNR / SSIM       PSNR / SSIM       PSNR / SSIM
    ×2      SRCNN          57 K      36.66 / 0.9524    32.42 / 0.9063    31.36 / 0.8879    29.50 / 0.8946    35.74 / 0.9661
    ×2      FSRCNN         12 K      37.00 / 0.9558    32.63 / 0.9088    31.53 / 0.8920    29.88 / 0.9020    36.67 / 0.9694
    ×2      VDSR           665 K     37.53 / 0.9587    33.03 / 0.9124    31.90 / 0.8960    30.76 / 0.9140    37.22 / 0.9729
    ×2      DRCN           1774 K    37.63 / 0.9588    33.04 / 0.9118    31.85 / 0.8942    30.75 / 0.9133    37.63 / 0.9723
    ×2      LapSRN         813 K     37.52 / 0.9590    33.08 / 0.9130    31.80 / 0.8950    30.41 / 0.9100    37.27 / 0.9740
    ×2      DRRN           297 K     37.74 / 0.9591    33.23 / 0.9136    32.05 / 0.8973    31.23 / 0.9188    37.92 / 0.9760
    ×2      MemNet         677 K     37.78 / 0.9597    33.28 / 0.9142    32.08 / 0.8978    31.31 / 0.9195    37.72 / 0.9740
    ×2      SRMDNF         1513 K    37.79 / 0.9600    33.32 / 0.9150    32.05 / 0.8980    31.33 / 0.9200    38.07 / 0.9761
    ×2      CARN           1592 K    37.76 / 0.9590    33.52 / 0.9166    32.09 / 0.8978    31.92 / 0.9256    38.36 / 0.9765
    ×2      SRFBN-S        282 K     37.78 / 0.9597    33.35 / 0.9156    32.00 / 0.8970    31.41 / 0.9207    38.06 / 0.9757
    ×2      ACAN (ours)    800 K     38.10 / 0.9608    33.60 / 0.9177    32.21 / 0.9001    32.29 / 0.9297    38.81 / 0.9773
    ×2      ACAN+ (ours)   800 K     38.17 / 0.9611    33.69 / 0.9182    32.26 / 0.9006    32.47 / 0.9315    39.02 / 0.9778
    ×3      SRCNN          57 K      32.75 / 0.9090    29.28 / 0.8209    28.41 / 0.7863    26.24 / 0.7989    30.59 / 0.9107
    ×3      FSRCNN         12 K      33.16 / 0.9140    29.43 / 0.8242    28.53 / 0.7910    26.43 / 0.8080    30.98 / 0.9212
    ×3      VDSR           665 K     33.66 / 0.9213    29.77 / 0.8314    28.82 / 0.7976    27.14 / 0.8279    32.01 / 0.9310
    ×3      DRCN           1774 K    33.82 / 0.9226    29.76 / 0.8311    28.80 / 0.7963    27.15 / 0.8276    32.31 / 0.9328
    ×3      DRRN           297 K     34.03 / 0.9244    29.96 / 0.8349    28.95 / 0.8004    27.53 / 0.8378    32.74 / 0.9390
    ×3      MemNet         677 K     34.09 / 0.9248    30.00 / 0.8350    28.96 / 0.8001    27.56 / 0.8376    32.51 / 0.9369
    ×3      SRMDNF         1530 K    34.12 / 0.9250    30.04 / 0.8370    28.97 / 0.8030    27.57 / 0.8400    33.00 / 0.9403
    ×3      CARN           1592 K    34.29 / 0.9255    30.29 / 0.8407    29.06 / 0.8034    27.38 / 0.8404    33.50 / 0.9440
    ×3      SRFBN-S        376 K     34.20 / 0.9255    30.10 / 0.8372    28.96 / 0.8010    27.66 / 0.8415    33.02 / 0.9404
    ×3      ACAN (ours)    1115 K    34.46 / 0.9277    30.39 / 0.8435    29.11 / 0.8055    28.28 / 0.8550    33.61 / 0.9447
    ×3      ACAN+ (ours)   1115 K    34.55 / 0.9283    30.46 / 0.8444    29.16 / 0.8065    28.45 / 0.8577    33.91 / 0.9464
    ×4      SRCNN          57 K      30.48 / 0.8628    27.49 / 0.7503    26.90 / 0.7101    24.52 / 0.7221    27.66 / 0.8505
    ×4      FSRCNN         12 K      30.71 / 0.8657    27.59 / 0.7535    26.98 / 0.7150    24.62 / 0.7280    27.90 / 0.8517
    ×4      VDSR           665 K     31.35 / 0.8838    28.01 / 0.7674    27.29 / 0.7251    25.18 / 0.7524    28.83 / 0.8809
    ×4      DRCN           1774 K    31.53 / 0.8854    28.02 / 0.7670    27.23 / 0.7233    25.14 / 0.7510    28.98 / 0.8816
    ×4      LapSRN         813 K     31.54 / 0.8850    28.19 / 0.7720    27.32 / 0.7280    25.21 / 0.7560    29.09 / 0.8845
    ×4      DRRN           297 K     31.68 / 0.8888    28.21 / 0.7720    27.38 / 0.7284    25.44 / 0.7638    29.46 / 0.8960
    ×4      MemNet         677 K     31.74 / 0.8893    28.26 / 0.7723    27.40 / 0.7281    25.50 / 0.7630    29.42 / 0.8942
    ×4      SRMDNF         1555 K    31.96 / 0.8930    28.35 / 0.7770    27.49 / 0.7340    25.68 / 0.7730    30.09 / 0.9024
    ×4      CARN           1592 K    32.13 / 0.8937    28.60 / 0.7806    27.58 / 0.7349    26.07 / 0.7837    30.47 / 0.9084
    ×4      SRFBN-S        483 K     31.98 / 0.8923    28.45 / 0.7779    27.44 / 0.7313    25.71 / 0.7719    29.91 / 0.9008
    ×4      ACAN (ours)    1556 K    32.24 / 0.8955    28.62 / 0.7824    27.59 / 0.7366    26.17 / 0.7891    30.53 / 0.9086
    ×4      ACAN+ (ours)   1556 K    32.35 / 0.8969    28.68 / 0.7838    27.65 / 0.7379    26.31 / 0.7922    30.82 / 0.9117
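    For reference, the PSNR values reported above follow the standard mean-squared-error definition. The NumPy sketch below uses the common SISR evaluation convention (8-bit Y channel, a border of `scale` pixels cropped); the exact cropping and color-space choices in the paper are assumptions.

```python
# Hedged sketch of the conventional PSNR computation in SISR evaluation.
import numpy as np

def psnr(sr: np.ndarray, hr: np.ndarray, scale: int, max_val: float = 255.0) -> float:
    """PSNR between a super-resolved image and its ground truth.
    Both arrays are H x W (e.g. the Y channel) with values in [0, max_val];
    a border of `scale` pixels is cropped, a common SISR convention."""
    sr = sr[scale:-scale, scale:-scale].astype(np.float64)
    hr = hr[scale:-scale, scale:-scale].astype(np.float64)
    mse = np.mean((sr - hr) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```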
  • [1] Freeman W T, Pasztor E C, Carmichael O T. Learning low-level vision. International Journal of Computer Vision, 2000, 40(1): 25-47 doi: 10.1023/A:1026501619075
    [2] Peyré G, Bougleux S, Cohen L. Non-local regularization of inverse problems. In: Proceedings of the European Conference on Computer Vision. Berlin, Germany: Springer, Heidelberg, 2008. 57−68
    [3] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature, 2015, 521(7553): 436-444 doi: 10.1038/nature14539
    [4] Dong C, Loy C C, He K, Tang X. Learning a deep convolutional network for image super-resolution. In: Proceedings of the European Conference on Computer Vision. Zurich, Switzerland: Springer, Cham, 2014. 184−199
    [5] Li Z, Yang J, Liu Z, Yang X, et al. Feedback network for image super-resolution. In: Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 3867−3876
    [6] Kim J, Kwon Lee J, Mu Lee K. Deeply-recursive convolutional network for image super-resolution. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 1637−1645
    [7] Tai Y, Yang J, Liu X. Image super-resolution via deep recursive residual network. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 3147−3155
    [8] Tai Y, Yang J, Liu X, Xu C. Memnet: A persistent memory network for image restoration. In: Proceedings of the 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 4539−4547
    [9] Ahn N, Kang B, Sohn K A. Fast, accurate, and lightweight super-resolution with cascading residual network. In: Proceedings of the European Conference on Computer Vision. Munich, Germany: Springer, Cham, 2018. 252−268
    [10] Cao C, Liu X, Yang Y, Yu Y, Wang J, Wang Z, et al. Look and think twice: Capturing top-down visual attention with feedback convolutional neural networks. In: Proceedings of the 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015. 2956−2964
    [11] Wang F, Jiang M, Qian C, Yang S, Li C, Zhang H, et al. Residual attention network for image classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 3156−3164
    [12] Hu J, Shen L, Sun G. Squeeze-and-excitation networks. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 7132−7141
    [13] Li K, Wu Z, Peng K C, Ernst J, Fu Y. Tell me where to look: Guided attention inference network. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 9215−9223
    [14] Liu Y, Wang Y, Li N, Cheng X, Zhang Y, Huang Y, et al. An attention-based approach for single image super resolution. In: Proceedings of the 2018 24th International Conference on Pattern Recognition. Beijing, China: IEEE, 2018. 2777−2784
    [15] Zhang Y, Li K, Li K, Wang L, Zhong B, Fu Y. Image super-resolution using very deep residual channel attention networks. In: Proceedings of the European Conference on Computer Vision. Munich, Germany: Springer, Cham, 2018. 286−301
    [16] Kim J, Kwon Lee J, Mu Lee K. Accurate image superresolution using very deep convolutional networks. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 1646−1654
    [17] Wang Z, Chen J, Hoi S C H. Deep learning for image super-resolution: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020
    [18] Dong C, Loy C C, Tang X. Accelerating the super-resolution convolutional neural network. In: Proceedings of the European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, Cham, 2016. 391−407
    [19] Shi W, Caballero J, Huszár F, Totz J, Aitken A P, Bishop R, et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 1874−1883
    [20] Tong T, Li G, Liu X, Gao Q. Image super-resolution using dense skip connections. In: Proceedings of the 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 4799−4807
    [21] Li J, Fang F, Mei K, Zhang G. Multi-scale residual network for image super-resolution. In: Proceedings of the European Conference on Computer Vision. Munich, Germany: Springer, Cham, 2018. 517−532
    [22] Haris M, Shakhnarovich G, Ukita N. Deep back-projection networks for super-resolution. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 1664−1673
    [23] Agustsson E, Timofte R. Ntire 2017 challenge on single image super-resolution: Dataset and study. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Honolulu, USA: IEEE, 2017. 126−135
    [24] Bevilacqua M, Roumy A, Guillemot C, Alberi-Morel M L. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In: Proceedings of the 23rd British Machine Vision Conference. Guildford, UK: BMVA Press, 2012. (135): 1−10
    [25] Zeyde R, Elad M, Protter M. On single image scale-up using sparse-representations. In: Proceedings of International Conference on Curves and Surfaces. Berlin, Germany: Springer, Heidelberg, 2010. 711−730
    [26] Huang J B, Singh A, Ahuja N. Single image super-resolution from transformed self-exemplars. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015. 5197−5206
    [27] Martin D, Fowlkes C, Tal D, Malik J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proceedings of the 2001 International Conference on Computer Vision. Vancouver, Canada: IEEE, 2001. 416−423
    [28] Matsui Y, Ito K, Aramaki Y, et al. Sketch-based manga retrieval using manga109 dataset. Multimedia Tools and Applications, 2017, 76(20): 21811-21838 doi: 10.1007/s11042-016-4020-z
    [29] Kingma D P, Ba J. Adam: A method for stochastic optimization. arXiv preprint, 2014, arXiv: 1412.6980
    [30] Lai W S, Huang J B, Ahuja N, Yang M H. Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 5835−5843
    [31] Zhang K, Zuo W, Zhang L. Learning a single convolutional super-resolution network for multiple degradations. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 3262−3271
    [32] Timofte R, Rothe R, Van Gool L. Seven ways to improve example-based single image super resolution. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 1865−1873
    [33] Wang Z, Bovik A C, Sheikh H R, et al. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 2004, 13(4): 600-612 doi: 10.1109/TIP.2003.819861
    [34] Wu H, Zou Z, Gui J, et al. Multi-grained attention networks for single image super-resolution. IEEE Transactions on Circuits and Systems for Video Technology, 2020
Publication history
  • Received:  2020-01-16
  • Accepted:  2020-06-28
  • Published online:  2022-07-13
  • Issue date:  2022-06-01
