

基于像素对比学习的图像超分辨率算法

周登文 刘子涵 刘玉铠

周登文, 刘子涵, 刘玉铠. 基于像素对比学习的图像超分辨率算法. 自动化学报, 2024, 50(1): 181−193 doi: 10.16383/j.aas.c230395
Zhou Deng-Wen, Liu Zi-Han, Liu Yu-Kai. Pixel-wise contrastive learning for single image super-resolution. Acta Automatica Sinica, 2024, 50(1): 181−193 doi: 10.16383/j.aas.c230395


doi: 10.16383/j.aas.c230395
详细信息
    作者简介:

    周登文:华北电力大学控制与计算机工程学院教授. 主要研究方向为图像去噪, 图像去马赛克, 图像插值和图像超分辨率. 本文通信作者. E-mail: zdw@ncepu.edu.cn

    刘子涵:华北电力大学控制与计算机工程学院硕士研究生. 主要研究方向为计算机视觉, 深度学习. E-mail: 120212227102@ncepu.edu.cn

    刘玉铠:华北电力大学控制与计算机工程学院硕士研究生. 主要研究方向为计算机视觉, 深度学习. E-mail: liuyk@ncepu.edu.cn

Pixel-wise Contrastive Learning for Single Image Super-resolution

More Information
    Author Bio:

    ZHOU Deng-Wen Professor at the School of Control and Computer Engineering, North China Electric Power University. His research interest covers image denoising, image demosaicking, image interpolation, and image super-resolution. Corresponding author of this paper

    LIU Zi-Han Master student at the School of Control and Computer Engineering, North China Electric Power University. His research interest covers computer vision and deep learning

    LIU Yu-Kai Master student at the School of Control and Computer Engineering, North China Electric Power University. His research interest covers computer vision and deep learning

  • Abstract: Deep convolutional neural networks (CNN) currently dominate research on single image super-resolution (SISR) and have achieved great progress. However, SISR remains an open problem: reconstructed super-resolution (SR) images often suffer from blurring, loss of texture detail, and distortion. We propose a new pixel-wise contrastive loss which, within a local region, pulls each pixel of the SR image toward the corresponding pixel of the original high-resolution (HR) image and pushes it away from the other pixels in that region, improving both the fidelity and the visual quality of SR images. We also propose a progressive residual feature fusion network (PRFFN) trained with this combined contrastive loss. The main contributions are: 1) a general contrastive-learning-based pixel-wise loss function that improves the fidelity and visual quality of SR images; 2) a lightweight multi-scale residual channel attention block (MRCAB) that better extracts and exploits multi-scale feature information; 3) a spatial attention fusion block (SAFB) that better exploits the correlations among neighboring spatial features. Experimental results show that PRFFN significantly outperforms other representative methods.
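The pixel-wise contrastive loss described in the abstract can be sketched as an InfoNCE-style objective over a local region: for each SR pixel, the matching HR pixel is the positive and the other HR pixels in the region are negatives. The function name, the negative-L1 similarity, and the temperature `tau` below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def pixel_contrastive_loss(sr, hr, tau=0.1):
    """InfoNCE-style pixel-wise contrastive loss over one local region.

    sr, hr: arrays of shape (S, C) holding S pixel vectors from the same
    local region of the SR and HR images. For SR pixel i, HR pixel i is
    the positive; the other S-1 HR pixels are negatives.
    """
    # Similarity: negative mean absolute difference between pixel vectors,
    # scaled by the temperature tau (an assumed design choice).
    sim = -np.abs(sr[:, None, :] - hr[None, :, :]).mean(axis=2) / tau  # (S, S)
    # Row-wise log-softmax; the diagonal entries are the positive pairs.
    sim -= sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Loss is high when an SR pixel is closer to some other HR pixel
    # than to its own HR counterpart.
    return -log_prob.diagonal().mean()
```

A perfectly reconstructed region (`sr == hr`) yields a near-zero loss, while a noisy reconstruction yields a larger one, which is the "pull toward the matching HR pixel, push away from the rest" behavior the abstract describes.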
  • 图  1  不同损失及其组合的PSNR/SSIM和视觉效果

    Fig.  1  PSNR/SSIM and visual effects for different losses and their combinations

    图  2  在Set14数据集上, 不同SISR方法2倍SR结果的平均PSNR值和参数量

    Fig.  2  Average PSNRs and parameter counts of 2 times SR models for each state-of-the-art SISR method on the Set14 dataset

    图  3  网络架构细节

    Fig.  3  Network architecture details

    图  4  多尺度残差通道注意力块

    Fig.  4  Multi-scale residual channel attention block

    图  5  空间注意力融合块

    Fig.  5  Spatial attention fusion block

    图  6  像素级对比损失

    Fig.  6  Pixel-wise contrastive loss

    图  7  Urban100数据集中, SwinIR-light使用不同损失函数, img004图像的3倍SR结果

    Fig.  7  The 3 times SR results of SwinIR-light using different losses on the img004 image in the Urban100 data set

    图  8  2倍SR的视觉效果比较

    Fig.  8  Visual comparison for 2 times SR

    图  9  3倍SR的视觉效果比较

    Fig.  9  Visual comparison for 3 times SR

    图  10  4倍SR的视觉效果比较

    Fig.  10  Visual comparison for 4 times SR

    表  1  DIV2K_val5验证集上, 不同模型, 3倍SR的平均PSNR和参数量

    Table  1  The average PSNRs and parameter counts of 3 times SR for different models on the DIV2K_val5 validation data set

    | 模型 | $L_{1}$ | $L_{cntr}$ | 参数量(K) | PSNR (dB) |
    | --- | --- | --- | --- | --- |
    | PRFFN0 | $\checkmark$ | | 2975 | 32.259 |
    | PRFFN1 | $\checkmark$ | | 3222 | 32.307 |
    | PRFFN2 | $\checkmark$ | | 3167 | 32.342 |
    | PRFFN3 | $\checkmark$ | | 3167 | 32.364 |
    | PRFFN | $\checkmark$ | $\checkmark$ | 3167 | 32.451 |

    表  2  DIV2K_val5验证集上, 不同损失函数及其组合, 3倍SR的平均PSNR和LPIPS结果

    Table  2  The average PSNRs and LPIPSs of 3 times SR for different losses and their combinations on the DIV2K_val5 validation data set

    | $L_{1}$ | $L_{perc}$ | $L_{CSD}$ | $L_{cntr}$ | PSNR (dB) | LPIPS |
    | --- | --- | --- | --- | --- | --- |
    | $\checkmark$ | | | | 32.364 | 0.0978 |
    | $\checkmark$ | | | $\checkmark$ | 32.451 | 0.0969 |
    | $\checkmark$ | $\checkmark$ | | | 32.236 | 0.0672 |
    | $\checkmark$ | $\checkmark$ | | $\checkmark$ | 32.305 | 0.0656 |
    | $\checkmark$ | | $\checkmark$ | | 32.387 | 0.0624 |
    | $\checkmark$ | | $\checkmark$ | $\checkmark$ | 32.432 | 0.0613 |

    表  3  DIV2K_val5验证集上, 不同$ \lambda_{C} $, 3倍SR的平均PSNR结果

    Table  3  The average PSNRs of 3 times SR for different $\lambda_{C}$ on the DIV2K_val5 validation data set

    | $\lambda_{C}$ | PSNR (dB) |
    | --- | --- |
    | $10^{-2}$ | 32.386 |
    | $10^{-1}$ | 32.451 |
    | $1$ | 32.381 |
    | $10$ | 32.372 |
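Table 3 varies the weight $\lambda_{C}$ on the contrastive term. From the ablation setup (our reading of the tables, not a formula quoted from the paper), the combined training objective has the form

$$L_{total} = L_{1} + \lambda_{C}\, L_{cntr}$$

with $\lambda_{C} = 10^{-1}$ giving the best PSNR (32.451 dB, matching the full PRFFN row of Table 1).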

    表  4  DIV2K_val5验证集上, 不同模型包含与不包含$ L_{cntr} $损失, 3倍SR的平均PSNR和SSIM结果

    Table  4  The average PSNRs and SSIMs of 3 times SR for different models with and without $ L_{cntr} $ loss on the DIV2K_val5 validation data set

    | 模型 | $L_{cntr}$ | PSNR (dB) | SSIM |
    | --- | --- | --- | --- |
    | EDSR | | 32.273 | 0.9057 |
    | EDSR | $\checkmark$ | 32.334 ($\uparrow 0.061$) | 0.9067 ($\uparrow 0.0010$) |
    | SwinIR-light | | 32.442 | 0.9062 |
    | SwinIR-light | $\checkmark$ | 32.489 ($\uparrow 0.047$) | 0.9069 ($\uparrow 0.0007$) |
    | RCAN | | 32.564 | 0.9088 |
    | RCAN | $\checkmark$ | 32.628 ($\uparrow 0.064$) | 0.9096 ($\uparrow 0.0008$) |

    表  5  DIV2K_val5验证集上, 不同大小局部区域, 3倍SR的平均PSNR结果

    Table  5  The average PSNRs of 3 times SR for different size local regions on the DIV2K_val5 validation data set

    | $S$ | PSNR (dB) |
    | --- | --- |
    | $16$ | 32.425 |
    | $64$ | 32.451 |
    | $256$ | 32.458 |

    表  6  3倍SR训练10个迭代周期, 训练占用的内存和使用的训练时间

    Table  6  Memory usage and training time for 10 epochs of 3 times SR training

    | $L_{cntr}$ | $S$ | 内存(MB) | 时间(s) |
    | --- | --- | --- | --- |
    | | — | 4126 | 1962 |
    | $\checkmark$ | 16 | 4836 | 2081 |
    | $\checkmark$ | 64 | 5216 | 2219 |
    | $\checkmark$ | 256 | 7541 | 2893 |

    表  7  DIV2K_val5验证集上, MRCAB不同分支和不同扩张率组合, 3倍SR的平均PSNR结果

    Table  7  The average PSNRs of 3 times SR for the different branches of MRCAB with different dilation rate combinations on the DIV2K_val5 validation data set

    | 不同的扩张率卷积组合 | PSNR (dB) |
    | --- | --- |
    | $1$ | 32.375 |
    | $1, 2$ | 32.370 |
    | $1, 2, 3$ | 32.392 |
    | $1, 2, 4$ | 32.451 |
    | $1, 2, 5$ | 32.415 |

    表  8  5个标准测试数据集上, 不同SISR方法的2倍、3倍和4倍SR的平均PSNR和SSIM结果

    Table  8  The average PSNRs and SSIMs of 2 times, 3 times, and 4 times SR for different SISR methods on five standard test data sets

    | 放大倍数 | 方法 | 参数量(K) | 计算量(G) | 推理时间(ms) | Set5 PSNR/SSIM | Set14 PSNR/SSIM | B100 PSNR/SSIM | Urban100 PSNR/SSIM | Manga109 PSNR/SSIM |
    | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
    | 2 | FSRCNN | 13 | 6.0 | 26 | 37.00/0.9558 | 32.63/0.9088 | 31.53/0.8920 | 29.88/0.9020 | 36.67/0.9694 |
    | 2 | SMSR | 985 | 224.1 | 536 | 38.00/0.9601 | 33.64/0.9179 | 32.17/0.8990 | 32.19/0.9284 | 38.76/0.9771 |
    | 2 | ACAN | 800 | 2108.0 | — | 38.10/0.9608 | 33.60/0.9177 | 32.21/0.9001 | 32.29/0.9297 | 38.81/0.9773 |
    | 2 | AWSRN | 1397 | 320.5 | 506 | 38.11/0.9608 | 33.78/0.9189 | 32.26/0.9006 | 32.49/0.9316 | 38.87/0.9776 |
    | 2 | DRCN | 1774 | 17974.0 | — | 37.63/0.9588 | 33.04/0.9118 | 31.85/0.8942 | 30.75/0.9133 | 37.55/0.9732 |
    | 2 | CARN | 1592 | 222.8 | 352 | 37.76/0.9590 | 33.52/0.9166 | 32.09/0.8978 | 31.92/0.9256 | 38.36/0.9765 |
    | 2 | OISR-RK2 | 4971 | 1145.7 | 715 | 38.12/0.9609 | 33.80/0.9193 | 32.26/0.9007 | 32.48/0.9317 | 38.79/0.9773 |
    | 2 | OISR-LF | 4971 | 1145.7 | 722 | 38.12/0.9609 | 33.78/0.9196 | 32.26/0.9007 | 32.52/0.9320 | 38.80/0.9774 |
    | 2 | MSRN | 6078 | 1356.8 | 810 | 38.08/0.9607 | 33.70/0.9186 | 32.23/0.9002 | 32.29/0.9303 | 38.69/0.9772 |
    | 2 | SeaNet | 7471 | 3709.1 | 1920 | 38.08/0.9609 | 33.75/0.9190 | 32.27/0.9008 | 32.50/0.9318 | 38.76/0.9774 |
    | 2 | TSAN | 3989 | 1013.1 | 1183 | 38.22/0.9619 | 33.84/0.9218 | 32.32/0.9015 | 32.77/0.9345 | —/— |
    | 2 | PRFFN | 2988 | 656.4 | 792 | 38.18/0.9611 | 33.90/0.9207 | 32.30/0.9012 | 32.75/0.9337 | 39.02/0.9777 |
    | 3 | FSRCNN | 13 | 4.6 | 14 | 33.16/0.9140 | 29.43/0.8242 | 28.53/0.7910 | 26.43/0.8080 | 30.98/0.9212 |
    | 3 | SMSR | 993 | 100.5 | 281 | 34.40/0.9270 | 30.33/0.8412 | 29.10/0.8050 | 28.25/0.8536 | 33.68/0.9445 |
    | 3 | ACAN | 1115 | 1051.7 | — | 34.46/0.9277 | 30.39/0.8435 | 29.11/0.8065 | 28.28/0.8550 | 33.61/0.9447 |
    | 3 | AWSRN | 1476 | 150.6 | 263 | 34.52/0.9281 | 30.38/0.8426 | 29.16/0.8069 | 28.42/0.8580 | 33.85/0.9463 |
    | 3 | DRCN | 1774 | 17974.0 | — | 33.85/0.9215 | 29.89/0.8304 | 28.81/0.7954 | 27.16/0.8311 | 32.31/0.9328 |
    | 3 | CARN | 1592 | 118.8 | 177 | 34.29/0.9255 | 30.29/0.8407 | 29.06/0.8034 | 27.38/0.8493 | 33.50/0.9440 |
    | 3 | OISR-RK2 | 5640 | 578.6 | 366 | 34.55/0.9282 | 30.46/0.8443 | 29.18/0.8075 | 28.50/0.8597 | 33.80/0.9442 |
    | 3 | OISR-LF | 5640 | 578.6 | 367 | 34.56/0.9284 | 30.46/0.8450 | 29.20/0.8077 | 28.56/0.8606 | 33.78/0.9441 |
    | 3 | MSRN | 6078 | 621.2 | 476 | 34.38/0.9262 | 30.34/0.8395 | 29.08/0.8041 | 28.08/0.8554 | 33.44/0.9427 |
    | 3 | SeaNet | 7397 | 3233.2 | 994 | 34.55/0.9282 | 30.42/0.8444 | 29.17/0.8071 | 28.50/0.8594 | 33.73/0.9463 |
    | 3 | TSAN | 4174 | 565.6 | 650 | 34.64/0.9282 | 30.52/0.8454 | 29.20/0.8080 | 28.55/0.8602 | —/— |
    | 3 | PRFFN | 3167 | 312.5 | 441 | 34.67/0.9288 | 30.54/0.8460 | 29.23/0.8084 | 28.65/0.8621 | 34.03/0.9473 |
    | 4 | FSRCNN | 13 | 4.6 | 8 | 30.71/0.8657 | 27.59/0.7535 | 26.98/0.7150 | 24.62/0.7280 | 27.90/0.8517 |
    | 4 | SMSR | 1006 | 57.2 | 192 | 32.12/0.8932 | 28.55/0.7808 | 27.55/0.7351 | 26.11/0.7868 | 30.54/0.9085 |
    | 4 | ACAN | 1556 | 616.5 | — | 32.24/0.8955 | 28.62/0.7824 | 27.59/0.7379 | 26.31/0.7922 | 30.53/0.9086 |
    | 4 | AWSRN | 1587 | 91.1 | 188 | 32.27/0.8960 | 28.69/0.7843 | 27.64/0.7385 | 26.29/0.7930 | 30.72/0.9109 |
    | 4 | DRCN | 1774 | 17974.0 | — | 31.56/0.8810 | 28.15/0.7627 | 27.23/0.7150 | 25.14/0.7510 | 28.98/0.8816 |
    | 4 | CARN | 1592 | 90.9 | 121 | 32.13/0.8937 | 28.60/0.7806 | 27.58/0.7349 | 26.07/0.7837 | 30.47/0.9084 |
    | 4 | OISR-RK2 | 5500 | 412.2 | 241 | 32.32/0.8965 | 28.72/0.7843 | 27.66/0.7390 | 26.37/0.7953 | 30.75/0.9082 |
    | 4 | OISR-LF | 5500 | 412.2 | 239 | 32.33/0.8968 | 28.73/0.7845 | 27.66/0.7389 | 26.38/0.7953 | 30.76/0.9080 |
    | 4 | MSRN | 6078 | 365.1 | 352 | 32.07/0.8903 | 28.60/0.7751 | 27.52/0.7273 | 26.04/0.7896 | 30.17/0.9034 |
    | 4 | SeaNet | 7397 | 3065.6 | 704 | 32.33/0.8981 | 28.72/0.7855 | 27.65/0.7388 | 26.32/0.7942 | 30.74/0.9129 |
    | 4 | TSAN | 4137 | 415.1 | 452 | 32.40/0.8975 | 28.73/0.7847 | 27.67/0.7398 | 26.39/0.7955 | —/— |
    | 4 | PRFFN | 3131 | 200.1 | 316 | 32.43/0.8983 | 28.75/0.7857 | 27.70/0.7398 | 26.55/0.7974 | 30.93/0.9133 |
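The fidelity metric reported throughout Tables 1 to 8 is the peak signal-to-noise ratio in dB. As a reference, the standard definition can be computed as follows (a minimal sketch; `max_val=255.0` assumes 8-bit images, and this is not the authors' evaluation code):

```python
import math

def psnr(mse, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE).

    mse: mean squared error between the SR and HR images.
    max_val: peak pixel value (255 for 8-bit images).
    """
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Lower reconstruction error gives higher PSNR, so a gain such as the $\uparrow 0.061$ dB for EDSR in Table 4 corresponds to a small but consistent reduction in mean squared error.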
图(10) / 表(8)
Publication history
  • Received: 2023-06-27
  • Accepted: 2023-10-15
  • Published online: 2023-11-09
  • Issue published: 2024-01-29
