模糊失真图像无参考质量评价综述

陈健 李诗云 林丽 王猛 李佐勇

杨杰, 赵磊, 郭文彬. 基于图谱域移位的带限图信号重构算法. 自动化学报, 2021, 47(9): 2132−2142 doi: 10.16383/j.aas.c200802
引用本文: 陈健, 李诗云, 林丽, 王猛, 李佐勇. 模糊失真图像无参考质量评价综述. 自动化学报, 2022, 48(3): 689−711 doi: 10.16383/j.aas.c201030
Yang Jie, Zhao Lei, Guo Wen-Bin. Graph band-limited signals reconstruction method based graph spectral domain shifting. Acta Automatica Sinica, 2021, 47(9): 2132−2142 doi: 10.16383/j.aas.c200802
Citation: Chen Jian, Li Shi-Yun, Lin Li, Wang Meng, Li Zuo-Yong. A review on no-reference quality assessment for blurred image. Acta Automatica Sinica, 2022, 48(3): 689−711 doi: 10.16383/j.aas.c201030

doi: 10.16383/j.aas.c201030
基金项目: 国家自然科学基金(61972187), 福建省自然科学基金(2020J02024, 2018J01637), 福州市科技计划项目(2020-RC-186), 福建省信息处理与智能控制重点实验室(闽江学院)开放课题(MJUKF-IPIC202110)资助
    作者简介:

    陈健:福建工程学院电子电气与物理学院副教授. 2015年获得福州大学通信与信息系统专业博士学位. 主要研究方向为计算机视觉, 深度学习, 医学图像处理与分析. 本文通信作者. E-mail: jchen321@126.com

    李诗云:福建工程学院电子电气与物理学院硕士研究生. 主要研究方向为图像处理和机器学习. E-mail: 13997691527@163.com

    林丽:福建工程学院电子电气与物理学院讲师. 2009年获得福州大学信号与信息处理专业硕士学位. 主要研究方向为机器视觉及信号处理. E-mail: linli@fjut.edu.cn

    王猛:福建工程学院电子电气与物理学院硕士研究生. 主要研究方向为计算机视觉. E-mail: wm15720503705@163.com

    李佐勇:闽江学院计算机与控制工程学院教授. 2010年获得南京理工大学计算机应用专业博士学位. 主要研究方向为图像处理, 模式识别及深度学习. E-mail: fzulzytdq@126.com

A Review on No-reference Quality Assessment for Blurred Image

Funds: Supported by National Natural Science Foundation of China (61972187), Natural Science Foundation of Fujian Province (2020J02024, 2018J01637), Fuzhou Science and Technology Project (2020-RC-186), and Open Fund Project of Fujian Provincial Key Laboratory of Information Processing and Intelligent Control (Minjiang University) (MJUKF-IPIC202110)
    Author Bio:

    CHEN Jian Associate professor at the School of Electronic, Electrical Engineering and Physics, Fujian University of Technology. He received his Ph.D. degree in communication and information system from Fuzhou University in 2015. His research interest covers computer vision, deep learning, and medical image processing and analysis. Corresponding author of this paper

    LI Shi-Yun Master student at the School of Electronic, Electrical Engineering and Physics, Fujian University of Technology. His research interest covers image processing and machine learning

    LIN Li Lecturer at the School of Electronic, Electrical and Physics, Fujian University of Technology. She received her master degree in signal and information processing from Fuzhou University in 2009. Her research interest covers machine vision and signal processing

    WANG Meng Master student at the School of Electronic, Electrical Engineering and Physics, Fujian University of Technology. His main research interest is computer vision

    LI Zuo-Yong Professor at the College of Computer and Control Engineering, Minjiang University. He received his Ph.D. degree in computer application from Nanjing University of Science and Technology in 2010. His research interest covers image processing, pattern recognition, and deep learning

  • 摘要: 图像的模糊问题影响人们对信息的感知、获取及图像的后续处理. 无参考模糊图像质量评价是该问题的主要研究方向之一. 本文分析了近20年来无参考模糊图像质量评价相关技术的发展. 首先, 本文结合主要数据集对图像模糊失真进行分类说明; 其次, 对主要的无参考模糊图像质量评价方法进行分类介绍与详细分析; 随后, 介绍了用来比较无参考模糊图像质量评价方法性能优劣的主要评价指标; 接着, 选择典型数据集及评价指标, 并采用常见的无参考模糊图像质量评价方法进行性能比较; 最后, 对无参考模糊图像质量评价的相关技术及发展趋势进行总结与展望.
  • 随着信息技术的高速发展, 各领域中所产生的数据维度正在以前所未有的速度增长, 例如社交网络数据、金融交易数据和城市交通流量数据等.

    然而, 传统的数据表征方法无法适用于具有复杂关联特征的网络数据集. 所以, 图网络[1]——一种非规则域中用于表征关联数据的模型应运而生. 如何更好地分析这些基于图网络表征的数据集, 从而更加高效地挖掘数据集的深度信息成为当下研究的热点问题之一.

    近年来, 随着图信号处理的兴起和发展, 图网络中的信号(数据)分析与处理引起了研究者们的广泛关注. 图信号处理是将传统的信号处理理论衍生至基于图网络表征的非规则域信号处理理论[2]. 目前,图信号处理的理论研究主要包括图滤波器(组)的设计[3]、图信号采样/恢复[4]、图信号压缩[5]和图拓扑学习[6]等. 相关的应用研究有传感网络中的异常数据检测[7]及修复[8], 基于图数据的机器学习等[9-10]. 然而, 目前该研究领域中仍然存在着许多亟待探索和解决的理论问题和应用瓶颈[11]. 例如, 图信号处理中尚未出现类似于奈奎斯特采样定理的统一采样理论[12]. 相关的挑战还包括图信号的大规模分布式计算[13]、异构网络中的图信号处理[14]、如何融合多尺度下的图信号特征而进行信号多分辨分析[15], 以及如何分析张量图网络中的多层图数据之间的关联性[16]等. 随着图信号处理的不断发展, 必将成为有效应对数据泛滥现象和降低数据冗余的重要工具, 并为网络数据的高效处理提供理论支撑.

    由于图网络的拓扑结构复杂多变, 且数据维度增长带来的计算消耗大, 如何利用尽可能少的采样节点信号和网络拓扑信息更加高效和完备地表征未采样节点信号, 从而为网络数据的传输和处理提供高效的技术支撑, 是图信号处理中的核心问题[17]. 在图信号重构的相关研究中, 由于带限图信号重构问题可作为其他类型图信号重构问题的源问题进行推广, 如何设计高效的带限图信号重构算法是一个重要的研究课题, 它为设计平滑图信号重构算法和实际网络数据重构方法提供了理论基础.

    基于Papoulis-Gerchberg信号重构算法[18], Narang等[19]提出一种基于空域迭代图滤波的信号重构方法(Iteration least square reconstruction, ILSR). 该方法通过将采样信号和每次迭代后产生的采样信号残差进行累加后, 再进行图谱域带限滤波处理, 从而达到重构目的. 在ILSR重构算法的基础上, Wang等[20]提出了基于迭代加权策略的信号重构算法(Iteration weighting reconstruction, IWR)和基于迭代传播策略的信号重构算法(Iteration propagating reconstruction, IPR), 两种算法优于ILSR算法的原因在于对采样节点进行了残差滤波处理. 在IWR算法中, Wang等[20]首先将采样信号的残差扩大相应的权重, 然后进行图滤波处理; 而在IPR算法中, 首先是基于预先划分好的局部集将采样节点的信号残差传递给相邻的未采样节点, 然后进行图滤波处理. 由于两种算法在每步迭代中加入了对于采样信号残差的处理, 增大了未采样信号在插值过程中的增量, 进而提高了重构的效率和精度. 为了进一步地提高对于残差信号的估计精度, Yang等[21]提出了基于扩散算子的迭代重构算法(Iteration graph reconstruction based diffusion operator, IGDR). IGDR算法修正了IWR和IPR算法中由于采样信号残差在局部集内均匀传递而导致的过平滑现象, 在每步迭代中基于局部扩散算子和全局扩散算子对信号采样进行了联合处理, 使得迭代滤波得到的未采样信号为图带限滤波信号和残差扩散信号的总和. 不同于IWR、IPR和IGDR算法聚焦于迭代残差信号的处理方法, Brugnoli等[22]同样在ILSR算法的基础上提出了基于最优参数的Papoulis-Gerchberg信号迭代重构算法(Optimal Papoulis-Gerchberg iterative reconstruction, O-PGIR), 该算法通过在每步迭代中设置松弛参数的最优解而达到较高的迭代效率.

    不同于基于空域滤波的重构算法研究, 为了完善图信号谱域理论框架及提升图信号的谱域特征分析能力, 基于图傅里叶变换的图谱域重构算法同样是近年来的研究热点.

    Tseng等先后提出基于压缩感知的硬阈值截断图谱域重构算法[23]和基于图傅里叶变换的图谱域重构算法[24]. 在硬阈值截断图谱域重构算法中,作者首先将图信号重构问题转化为图谱域中的稀疏优化问题, 然后采用经典压缩感知理论中的基追踪算法和正交匹配算法或迭代硬阈值截断法分别进行求解. 通过上述方法估计出未采样图信号在图谱域中的频率分量, 最后基于图傅里叶逆变换将估计的频率分量转换为空域图信号. 在正交匹配算法的基础上作者又提出了基于图傅里叶变换的信号重构算法; 在正交匹配算法中, 完整频率分量是通过逐步重构出每个图频率分量值而实现的. 而在基于图傅里叶变换的信号重构算法中, 作者通过重构出小于截止图频率内的频率分量值实现信号重构. 该算法实质上是将ILSR算法转化到图频域进行处理. 然而, 两种方法并没有针对低通带限图信号的谱域特性进行更深入的分析, 只是将空域重构算法转化到变换域进行.

    本文首先基于图傅里叶变换的分块矩阵形式和图带限信号特性分析得出图带限分量的恒等不变性. 基于该特性, 本文将重构问题建模为一个最小二乘模型. 本文所提出的重构模型是根据图高频部分的恒等关系, 相比于基于图低频段相似性的ILSR重构模型, 更加能够准确地表征信号的图谱域带限特性, 提高了重构精度. 此外, 由于根据重构模型而设计的迭代算法采用拟牛顿法进行求解, 在避免海森矩阵求解的同时高效利用了模型的二阶梯度信息, 相比于ILSR和O-PGIR提高了迭代效率. 而在基于残差信号的重构算法中, 本文根据残差信号同样具备图带限分量的恒等不变性, 设计了一种基于残差谱移位的重构算法. 相比于IWR/IPR和IGDR算法, 本文算法具有较好的重构性能. 此外, 由于本文提出的图带限分量的恒等不变性不需要考虑带限频率所在的频段, 所以针对分段带限图信号的重构问题同样适用, 并且具有良好的重构性能.

    图信号是指定义在网络拓扑结构上的信号集合, 其拓扑结构采用图模型$G = (V,\;E,\;{\boldsymbol{W}})$进行表征. 其中, 节点集为$V = \{ {v_1}, \cdots ,{v_N}\} $, $E = \{ e(i,j)\} $是图模型中的边集合, $e(i,j)$表示节点${v_i}$和节点${v_j}$之间有边相连. 信号${\boldsymbol{f}} = \{ f(i)\} \in {{\mathbf{R}}^N}$, 其中$f(i)$为图模型$G$中节点${v_i}$上的信号值. 邻接矩阵${\boldsymbol{W}} = \{ w(i,j)\} \in {{\mathbf{R}}^{N \times N}}$用于表征节点之间的相关性, ${\boldsymbol{W}}$中的元素$w(i,j)$如式(1)所示.

    $$w(i,j) = \left\{ {\begin{aligned} &1,\;\;\;\;{e(i,j) \in {{E}}}\\ &0,\;\;\;\;{\text{其他}} \end{aligned}} \right.$$ (1)

    由矩阵${\boldsymbol{W}}$可得到图拉普拉斯矩阵${\boldsymbol{L}} = {\boldsymbol{D}} - {\boldsymbol{W}}$和归一化图拉普拉斯矩阵${{\boldsymbol{L}}_{Nor}} = {{\boldsymbol{D}}^{ - 1/2}}{\boldsymbol{L}}{{\boldsymbol{D}}^{ - 1/2}}$, 其中度矩阵定义为${\boldsymbol{D}} = {\rm{diag}}\{ {d_i}\}$, 对角线元素${d_i}$为邻接矩阵中第$i$行元素之和. 通过对归一化图拉普拉斯矩阵${{\boldsymbol{L}}_{Nor}}$进行特征值分解, 得到特征向量矩阵${\boldsymbol{U}} = [{{\boldsymbol{u}}_1}\;\cdots\;{{\boldsymbol{u}}_N}]$和与其对应的特征值矩阵${\boldsymbol{\Lambda }} = {\rm{diag}}\{ {\lambda _1},\cdots,{\lambda _N}\}$.

    在图信号处理理论中, 图傅里叶变换对建立了图信号在空间域和图谱域之间的联系, 从谱聚类的角度分析和处理图信号[25]. 其正变换和逆变换分别如式(2)和式(3)所示, 其中${\boldsymbol{f}}$和${\tilde{\boldsymbol f}}$分别表示空域信号和图频率分量.

    $${\tilde{\boldsymbol f}} = {{\boldsymbol{U}}^{\rm{T}}}{\boldsymbol{f}}$$ (2)
    $${\boldsymbol{f}} = {\boldsymbol{U\tilde f}}$$ (3)
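    为便于理解上述定义, 下面给出一段基于 NumPy 的示意性代码 (非原文实现, 图结构与变量名均为举例假设), 演示如何由邻接矩阵构造归一化图拉普拉斯矩阵, 并计算式(2)和式(3)的图傅里叶变换对:

```python
import numpy as np

# 构造一个小型无向图的 0/1 邻接矩阵 W (示例: 环形图)
N = 8
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[(i + 1) % N, i] = 1.0   # 按式(1)取 0/1 权重

# 度矩阵 D、图拉普拉斯矩阵 L 及其归一化形式 L_nor = D^{-1/2} L D^{-1/2}
D = np.diag(W.sum(axis=1))
L = D - W
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_nor = D_inv_sqrt @ L @ D_inv_sqrt

# 特征分解得到特征向量矩阵 U 与特征值 lam (对称矩阵用 eigh, 特征值升序排列)
lam, U = np.linalg.eigh(L_nor)

# 图傅里叶正变换 (式(2)) 与逆变换 (式(3))
f = np.random.randn(N)          # 空域图信号
f_tilde = U.T @ f               # \tilde{f} = U^T f
f_rec = U @ f_tilde             # f = U \tilde{f}
assert np.allclose(f, f_rec)
```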

    根据图傅里叶变换对的定义, 图带限信号 (Band-limited graph signals) ${{\boldsymbol{f}}_{BLG}} \in P{W_\omega }$的定义为: 当${\lambda _i} > \omega $时, ${{\tilde{\boldsymbol f}}_{BLG}}(i) = 0$, 其中$\omega $为带限图信号${{\boldsymbol{f}}_{BLG}}$的截止图频率. 如图1所示, 图1(a)为空间域中节点信号的分布图, 下层为图信号的拓扑结构, 上层为将各节点信号连接而成的平面图; 图1(b)为图信号经过图傅里叶变换后得到的图谱域示意图, 其高频段的图频率分量为零. 在图信号重构问题中, 采样策略的设计对能否实现精确重构有着一定的影响, 本文采用基于重构唯一性条件设计的采样策略. 在满足该条件的情况下, 任意的带限图信号均可实现精确重构. 重构唯一性条件为[26]: 当带限图信号${{\boldsymbol{f}}_{BLG}} \in P{W_\omega }$的截止图频率满足${\omega ^2} \leq \eta$时, 从任意的采样节点集合重构得到的带限图信号具有唯一性. 其中$\eta $是矩阵${\boldsymbol{L}}_{Nor}^ *$的最小特征值, ${\boldsymbol{L}}_{Nor}^ *$是由${\boldsymbol{L}}_{Nor}^2$中对应于未采样节点集合的行和列所构成的子矩阵.
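    按照上述定义, 带限图信号可通过将高频段的图频率分量置零后再做逆变换得到. 下面给出一个示意性函数 (假设沿用上一段代码中的特征分解结果 U 与 lam):

```python
import numpy as np

def make_bandlimited(U, lam, omega, rng=None):
    """按带限图信号的定义生成截止图频率为 omega 的信号:
    lam[i] > omega 的图频率分量置零, 再做图傅里叶逆变换."""
    rng = np.random.default_rng(rng)
    f_tilde = rng.standard_normal(len(lam))
    f_tilde[lam > omega] = 0.0          # 高频段分量为零
    return U @ f_tilde                  # 逆变换回空域

# 用法示例 (U, lam 取自前一段代码的特征分解, omega 取值仅为示例):
# f_blg = make_bandlimited(U, lam, omega=0.4)
```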

    图 1  带限图信号
    Fig. 1  Graph band-limited signals

    本文所研究的是带限图信号重构问题, 即在已知图信号${\boldsymbol{f}}$的先验信息 (图带限特性, ${\boldsymbol{f}} \in P{W_\omega }$) 和采样信号${\boldsymbol{f}}(S)$的情况下, 重构得到未采样信号${\boldsymbol{f}}({S^c})$. 采样矩阵${{\boldsymbol{P}}_S} = {\rm{diag}}\{ {{\boldsymbol{1}}_S}\} \in {{\mathbf{R}}^{N \times N}}$ (对应于采样节点的主对角线元素为1, 其余为0). 本文定义带限图信号的带宽为$B$, 即共有$B$个${\lambda _i} \leq \omega$; 采样节点个数为$M$, 未采样节点个数为$N - M$.

    若将图信号${\boldsymbol{f}}$中的采样信号${\boldsymbol{f}}(S)$和未采样信号${\boldsymbol{f}}({S^c})$进行适当的重新排序, 可得图信号${\boldsymbol{f}} = {[{({\boldsymbol{\Phi }}(S){\boldsymbol{f}}(S))^{\rm{T}}}\quad {({\boldsymbol{\Phi }}({S^c}){\boldsymbol{f}}({S^c}))^{\rm{T}}}]^{\rm{T}}}$, 其中${\boldsymbol{\Phi }}(S) \in {{\mathbf{R}}^{M \times N}}$是由${{\boldsymbol{P}}_S}$中$M$个非全零的行向量构成, ${\boldsymbol{\Phi }}({S^c}) \in {{\mathbf{R}}^{(N - M) \times N}}$是由$({\boldsymbol{I}} - {{\boldsymbol{P}}_S})$中$N - M$个非全零行向量组成.
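    为明确后文公式中采样矩阵与行选择矩阵的作用, 这里给出一个构造${{\boldsymbol{P}}_S}$、${\boldsymbol{\Phi}}(S)$和${\boldsymbol{\Phi}}({S^c})$的示意性代码片段 (函数名与参数均为举例假设):

```python
import numpy as np

def sampling_operators(N, sample_idx):
    """按文中定义构造采样矩阵 P_S 及行选择矩阵 Phi(S)、Phi(S^c)."""
    mask = np.zeros(N, dtype=bool)
    mask[np.asarray(sample_idx)] = True

    P_S = np.diag(mask.astype(float))   # 对应采样节点的对角元为 1, 其余为 0
    I = np.eye(N)
    Phi_S = I[mask]                     # P_S 中 M 个非全零行
    Phi_Sc = I[~mask]                   # (I - P_S) 中 N-M 个非全零行
    return P_S, Phi_S, Phi_Sc

# 例: N = 8, 采样节点 {0, 2, 5}
# P_S, Phi_S, Phi_Sc = sampling_operators(8, [0, 2, 5])
# 此时 Phi_S @ f 即为采样信号值, Phi_Sc @ f 为未采样信号值
```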

    图傅里叶变换对的分块矩阵表示形式如式(4)和式(5)所示. ${{\boldsymbol{U}}_L}(S) \in {{\mathbf{R}}^{M \times B}}$和${{\boldsymbol{U}}_L}({S^c}) \in {{\mathbf{R}}^{(N - M) \times B}}$分别是由矩阵${\boldsymbol{U}}$的子矩阵$[{{\boldsymbol{u}}_1}\;\cdots\;{{\boldsymbol{u}}_B}]$中对应于采样节点和未采样节点的行向量所构成的子矩阵; 子矩阵${{\boldsymbol{U}}_H}(S)$和${{\boldsymbol{U}}_H}({S^c})$分别是由${\boldsymbol{U}}$的子矩阵$[{{\boldsymbol{u}}_{B + 1}}\;\cdots\;{{\boldsymbol{u}}_N}]$中对应于采样节点和未采样节点的行向量所构成的子矩阵. ${{\tilde{\boldsymbol f}}_L} \in {{\mathbf{R}}^B}$和${{\tilde{\boldsymbol f}}_H} \in {{\mathbf{R}}^{(N - B)}}$分别表示前$B$个图频率分量 (图低频分量) 和第$(B + 1)$至第$N$个图频率分量 (图高频分量).

    $$\left[ \begin{array}{c} {{\tilde{\boldsymbol f}}_L} \\ {{\tilde{\boldsymbol f}}_H} \end{array} \right] = \left[ \begin{array}{cc} {\boldsymbol{U}}_L^{\rm{T}}(S) & {\boldsymbol{U}}_L^{\rm{T}}({S^c}) \\ {\boldsymbol{U}}_H^{\rm{T}}(S) & {\boldsymbol{U}}_H^{\rm{T}}({S^c}) \end{array} \right]\left[ \begin{array}{c} {\boldsymbol{\Phi}}(S){\boldsymbol{f}}(S) \\ {\boldsymbol{\Phi}}({S^c}){\boldsymbol{f}}({S^c}) \end{array} \right]$$ (4)
    $$\left[ \begin{array}{c} {\boldsymbol{\Phi}}(S){\boldsymbol{f}}(S) \\ {\boldsymbol{\Phi}}({S^c}){\boldsymbol{f}}({S^c}) \end{array} \right] = \left[ \begin{array}{cc} {{\boldsymbol{U}}_L}(S) & {{\boldsymbol{U}}_H}(S) \\ {{\boldsymbol{U}}_L}({S^c}) & {{\boldsymbol{U}}_H}({S^c}) \end{array} \right]\left[ \begin{array}{c} {{\tilde{\boldsymbol f}}_L} \\ {{\tilde{\boldsymbol f}}_H} \end{array} \right]$$ (5)

    在ILSR算法中[19], Narang等根据图带限特性 (${{\tilde{\boldsymbol f}}_H} = {\mathbf{0}}$) 对式(5)进行化简, 其重构准则为带限图信号的低频分量${{\tilde{\boldsymbol f}}_L}$保持恒定. 以${{\tilde{\boldsymbol f}}_L}$为中间变量, 建立了采样信号${\boldsymbol{f}}(S)$和未采样信号${\boldsymbol{f}}({S^c})$之间的联系, 得到重构信号的闭式解, 如式(6)所示.

    $$\begin{split} {\boldsymbol{f}}({S^c}) =\;&\Phi^{\rm{T}} ({S^c}){{\boldsymbol{U}}_L}({S^c}){[{{\boldsymbol{U}}^{\rm{T}}_L}{(S)}{{\boldsymbol{U}}_L}(S)]^{ - 1}}\times \\ &{{\boldsymbol{U}}^{\rm{T}}_L}{(S)}\Phi (S){\boldsymbol{f}}(S) \end{split} $$ (6)

    与ILSR算法的重构准则不同, 本文提出的重构准则基于采样信号与未采样信号的图高频分量之和为零, 即图带限分量的恒等不变性, 如式(7)所示. 根据此特性, 可得重构信号${\boldsymbol{f}}({{{S}}^c})$的闭式解, 如式(8)所示.

    $$ {{\tilde{\boldsymbol f}}_H} = [{{\boldsymbol{U}}^{\rm{T}}_H}{(S)}{\rm{ }}\quad{{\boldsymbol{U}}^{\rm{T}}_H}{({S^c})}]\left[ \begin{array}{l} {\Phi (S)}{\boldsymbol{f}}(S) \\ {\Phi ({S^c})}{\boldsymbol{f}}({S^c}) \end{array} \right] $$ (7)
    $$ \begin{split} {\boldsymbol{f}}({S^c}) =\;& - \Phi^{\rm{T}} {({S^c})}{[{{\boldsymbol{U}}_H}({S^c}){{\boldsymbol{U}}^{\rm{T}}_H}{({S^c})}]^{ - 1}} \times\\ &{{\boldsymbol{U}}_H}({S^c}){{\boldsymbol{U}}^{\rm{T}}_H}{(S)}\Phi (S){\boldsymbol{f}}(S) \end{split} $$ (8)
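    下面给出式(6)与式(8)两种闭式解的示意性实现 (仅为按公式直接翻译的参考代码, 非原文实现; 假设信号严格带限且相应矩阵可逆), 以便对比两种重构准则:

```python
import numpy as np

def ilsr_closed_form(U, B, S, f_S):
    """式(6): 基于低频分量恒定准则的闭式重构.
    U: N×N 特征向量矩阵; B: 带宽; S: 采样节点下标; f_S: 采样信号值 (长度 M)."""
    N = U.shape[0]
    Sc = np.setdiff1d(np.arange(N), S)
    U_L_S, U_L_Sc = U[S, :B], U[Sc, :B]
    # f(S^c) = U_L(S^c) [U_L^T(S) U_L(S)]^{-1} U_L^T(S) f(S)
    return U_L_Sc @ np.linalg.solve(U_L_S.T @ U_L_S, U_L_S.T @ f_S)

def gfs_closed_form(U, B, S, f_S):
    """式(8): 基于图带限分量恒等不变性 (高频分量之和为零) 的闭式重构."""
    N = U.shape[0]
    Sc = np.setdiff1d(np.arange(N), S)
    U_H_S, U_H_Sc = U[S, B:], U[Sc, B:]
    # f(S^c) = -[U_H(S^c) U_H^T(S^c)]^{-1} U_H(S^c) U_H^T(S) f(S)
    rhs = U_H_Sc @ (U_H_S.T @ f_S)
    return -np.linalg.solve(U_H_Sc @ U_H_Sc.T, rhs)
```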

    然而, 由于闭式解中涉及到矩阵逆运算, 导致求解的计算开销大. 尤其是当处理大规模图网络数据时, 计算和存储的成本都较高. 为了避免此问题, 本文基于图带限分量的恒等不变性提出如式(9)所示的重构模型, 采用迭代求解实现重构带限图信号的目的. 该模型的目标函数利用了图带限分量的恒等不变性, 将其建模为最小二乘模型. 进而估计出未采样信号${\boldsymbol{f}}({S^c})$.

    $$ \begin{split} & \mathop {{\rm{min}}}\limits_{{\boldsymbol{f}}({S^c})} {\rm{ }}\left\| {{\boldsymbol{Y}} - {{\boldsymbol{U}}^{\rm{T}}_H}{{({S^c})}}\Phi ({S^c}){\boldsymbol{f}}({S^c})} \right\|_2^2 \\ & {\rm{ }}{\rm{s.t}}\;\;\;{\rm{ }}{\boldsymbol{Y}} = - {{\boldsymbol{U}}^{\rm{T}}_H}{(S)}\Phi (S){\boldsymbol{f}}(S) \end{split} $$ (9)

    通过设计该重构模型的求解算法, 本文提出了基于谱移位的带限图信号重构算法(Reconstruction algorithm of band-limited graph signals based graph frequency shifting, BGSR-GFS). BGSR-GFS算法流程如算法1所示.

    算法 1. BGSR-GFS算法

    输入. ${{\boldsymbol{U}}_1} = {{\boldsymbol{U}}_H}({S^c}){{\boldsymbol{U}}^{\rm{T}}_H}{({S^c})},$ ${{\boldsymbol{U}}_2} = {{\boldsymbol{U}}_{{H}}}({S^c}){{\boldsymbol{U}}^{\rm{T}}_{{H}}}{(S)},$ $\Phi (S),$ $\Phi ({S^c}),$ ${\boldsymbol{f}}(S),$ $K,$ $\sigma$

    输出. ${{\boldsymbol{f}}_{{R}}}({S^c})$

    初始化. ${\boldsymbol{f}}({S^c}) = {\mathbf{0}},$ ${{\boldsymbol{H}}^{(1)}} = {\bf {I}},$ ${{\boldsymbol{G}}^{(1)}} = {{\boldsymbol{U}}_2}{\boldsymbol{\Phi }}(S){\boldsymbol{f}}(S),$ $k = 1$

    若$k \leq K$, 则:

    步骤 1. ${{\boldsymbol{d}}^{(k)}} = - {{\boldsymbol{H}}^{(k)}}{{\boldsymbol{G}}^{(k)}}$;

    步骤 2. ${\alpha ^{(k)}} = \frac{{{({\boldsymbol{G}}^{(k)})}^{{\rm{ T}}}{{\boldsymbol{d}}^{(k)}}}}{{{({\boldsymbol{d}}^{(k)})}^{{\rm{ T}}}{{\boldsymbol{U}}_1}{{\boldsymbol{d}}^{(k)}}}}$;

    步骤 3. ${\boldsymbol{f}}^{(k + 1)}{({S^c})} = {\boldsymbol{f}}^{(k)}{({S^c})} + {\alpha ^{(k)}}{{\boldsymbol{d}}^{(k)}} ;$

    步骤 4. ${{\boldsymbol{G}}^{(k + 1)}} = {{\boldsymbol{U}}_1}{\boldsymbol{f}}^{(k + 1)}{({S^c})} + {{\boldsymbol{G}}^{(1)}}$;

    步骤 5. ${{\boldsymbol{p}}^{(k)}} = {\boldsymbol{f}}^{(k + 1)}{({S^c})} - {\boldsymbol{f}}{({S^c})^{(k)}}$;

    步骤 6. ${{\boldsymbol{q}}^{(k)}} = {{\boldsymbol{G}}^{(k + 1)}} - {{\boldsymbol{G}}^{(k)}}$;

    步骤 7.

    $$\Delta {{\boldsymbol{H}}^{(k)}} = \frac{{{{\boldsymbol{p}}^{(k)}}{({\boldsymbol{p}}^{(k)})}^{\rm{T}}}}{{{({\boldsymbol{p}}^{(k)})}^{\rm{T}}{{\boldsymbol{q}}^{(k)}}}} - \frac{{{{\boldsymbol{H}}^{(k)}}{{\boldsymbol{q}}^{(k)}}{({\boldsymbol{q}}^{(k)})}^{\rm{T}}{{\boldsymbol{H}}^{(k)}}}}{{{({\boldsymbol{p}}^{(k)})}^{\rm{T}}{{\boldsymbol{H}}^{(k)}}{{\boldsymbol{q}}^{(k)}}}}$$

    步骤 8. ${{\boldsymbol{H}}^{(k + 1)}} = {{\boldsymbol{H}}^{(k)}} + \Delta {{\boldsymbol{H}}^{(k)}} ;$

    步骤 9. 若$\big\| {{\boldsymbol{f}}^{(k + 1)}{{({S^c})}} - {\boldsymbol{f}}^{(k)}{{({S^c})}}}\big\|_2^2$小于门限阈值$\sigma $或达到最大迭代次数$K\,(k > K)$, 则终止迭代, 输出参数${\boldsymbol{f}}^{(k + 1)}{({S^c})},$ 否则继续迭代, 跳转至步骤1;

    步骤 10. ${{\boldsymbol{f}}_{{R}}}({S^c}) = {\boldsymbol{f}}^{(k + 1)}{({S^c})}$.

    该重构算法基于拟牛顿法进行迭代求解. 在高效利用其重构模型二阶梯度信息的同时, 避免了海森矩阵的求解.

    当${{\boldsymbol{U}}^{\rm{T}}_H}{({S^c})}$列满秩 (要求其行数不小于列数) 时, 该最小二乘问题有唯一解. 可知BGSR-GFS重构算法的适用条件为图信号具有带限特性且带宽$B$小于等于采样节点个数$M$.
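    下面给出算法1的一个示意性实现 (非作者原始代码; 原文实验采用MATLAB, 此处以 NumPy 为例). 其中步长取二次目标的精确线搜索公式, Hessian 逆近似采用教科书形式的 DFP 更新:

```python
import numpy as np

def bgsr_gfs(U, B, S, f_S, K=50, sigma=1e-8):
    """BGSR-GFS 的示意性实现: 用拟牛顿(DFP)迭代求解式(9)的最小二乘模型.
    U: N×N 特征向量矩阵; B: 带宽; S: 采样节点下标; f_S: 采样信号值."""
    N = U.shape[0]
    Sc = np.setdiff1d(np.arange(N), S)
    U_H_S, U_H_Sc = U[S, B:], U[Sc, B:]
    U1 = U_H_Sc @ U_H_Sc.T                 # U_1 = U_H(S^c) U_H^T(S^c)
    G1 = U_H_Sc @ (U_H_S.T @ f_S)          # 初始梯度 G^(1) = U_2 Phi(S) f(S)

    x = np.zeros(len(Sc))                  # f(S^c) 初始化为零向量
    H = np.eye(len(Sc))                    # 初始 Hessian 逆近似
    G = G1.copy()
    for _ in range(K):
        d = -H @ G                                     # 搜索方向 d = -H G
        alpha = -(G @ d) / (d @ (U1 @ d))              # 二次目标的精确线搜索步长
        x_new = x + alpha * d
        G_new = U1 @ x_new + G1                        # 新梯度 G = U1 x + G1
        if np.sum((x_new - x) ** 2) < sigma:           # 终止条件
            x = x_new
            break
        p, q = x_new - x, G_new - G
        H = H + np.outer(p, p) / (p @ q) \
              - (H @ np.outer(q, q) @ H) / (q @ (H @ q))   # DFP 更新
        x, G = x_new, G_new
    return x                                           # 重构的未采样信号 f_R(S^c)
```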

    由于ILSR重构算法并没有对迭代过程中的残差信号进行分析和处理, 所以无论在重构精度还是迭代效率上都较为有限. 因此, 针对如何根据迭代残差信号的相关特性提升重构精度和迭代效率这一问题, 研究者们先后提出了IPR/IWR[20]和IGDR重构算法[21]. 此类基于残差估计的重构算法的关键在于如何根据采样节点的残差信号${{\boldsymbol{f}}^{(k)}_{{{\rm{Re}}{\rm{s}}}}}(S)$估计未采样节点的残差信号${{\boldsymbol{f}}^{(k)}_{{{\rm{Re}} {\rm{s}}}}}({S^c})$.

    基于残差估计的重构算法的迭代步骤归纳为公式(10), 不同算法之间的差异在于如何更好地估计采样残差信号${{\boldsymbol{f}}^{(k)}_{{\rm{Re}} {\rm{s}}}}{(S)}$和未采样残差信号${{\boldsymbol{f}}^{(k)}_{{\rm{Re}} {\rm{s}}}}{({{\rm{S}}^c})}$.

    $$ \begin{split} {{\boldsymbol{f}}^{(k + 1)}} = \;&\Phi^{\rm{T}} {({S^c})}{\boldsymbol{U}}^{{S^c}}_L\Phi ({S^c}){\boldsymbol{f}}^{(k)}{({S^c})} + \\ &\Phi^{\rm{T}}{(S)}{\boldsymbol{U}}^S_L\Phi (S){\boldsymbol{f}}^{(k)}{(S)} +\\ & {{\boldsymbol{f}}^{(k)}_{{{\rm{Re}}{\rm{s}}}}}{({S^c})} + {{\boldsymbol{f}}^{(k)}_{{{\rm{Re}} {\rm{s}}}}}{(S)} \end{split} $$ (10)
    $$ \begin{split} &{\boldsymbol{U}}_L^{{S^c}} = {{\boldsymbol{U}}_L}({S^c}){{\boldsymbol{U}}^{\rm{T}}_L}{({S^c})} \\ &{\boldsymbol{U}}_L^S = {{\boldsymbol{U}}_L}({S}){{\boldsymbol{U}}^{\rm{T}}_L}{(S)} \\ & {\boldsymbol{f}}^{(k)}{(S)} = {{\boldsymbol{P}}_S}{{\boldsymbol{f}}^{(k)}} \\ &{\boldsymbol{f}}^{(k)}{({S^c})} = ({\boldsymbol{I}} - {{\boldsymbol{P}}_S}){{\boldsymbol{f}}^{(k)}} \end{split} $$ (11)

    Wang等[20]基于局部聚合的处理方法, 提出了基于局部集采样的IPR和IWR重构算法. 在IWR重构算法中, 首先对采样残差按相应的权重进行放大 (权重矩阵${{\boldsymbol{W}}_{{\rm{IPR}}}}$), 然后再进行图带限滤波, 如式(12)所示.

    $$ \begin{split} & {{\boldsymbol{f}}^{(k)}_{{\rm{Res}}}}{(S)} = \Phi^{\rm{T}} {(S)}{\boldsymbol{U}}_L^S\Phi (S) \times\\ &\qquad\qquad\;\;{{\boldsymbol{W}}_{{\rm{IPR}}}}[{\boldsymbol{f}}(S) - {\boldsymbol{f}}^{(k)}{(S)}] \\ & {{\boldsymbol{f}}^{(k)}_{{\rm{Res}}}}{({S^c})} = {\boldsymbol{0}} \end{split} $$ (12)

    不同于IWR算法, IPR重构算法通过采样残差${{\boldsymbol{f}}_{{\rm{Re}}{\rm{ s}}}}({{S}})$和网络拓扑特性, 估计未采样残差${{\boldsymbol{f}}_{{\rm{Re}} {\rm{s}}}}({{{S}}^c})$. 具体而言, 首先是基于局部集内平滑特性, 将未采样残差设置为局部集内的采样残差, 然后再进行图带限滤波. 如式(13)所示, 其中${\boldsymbol{V}}({v_d})$为采样节点${v_d}$的未采样邻居节点集.

    $$\begin{split} &{{\boldsymbol{f}}^{(k)}_{{\rm{Res}}}}{(S)} = \Phi^{\rm{T}} {(S)}{\boldsymbol{U}}_L^S\Phi (S)[{\boldsymbol{f}}(S) - {\boldsymbol{f}}^{(k)}{(S)}] \\ &{{\boldsymbol{f}}^{(k)}_{{\rm{Res}}}}{({S^c})} = \Phi^{\rm{T}} {({S^c})}{\boldsymbol{U}}_L^{{S^c}}\Phi ({S^c})[{{{\boldsymbol{f}}^{{\rm{Prop}}}}{(S)}]^{(k)}} \\ & [{{{\boldsymbol{f}}^{{\rm{Prop}}}}{(S)}]^{(k)}}\{ {v_i}\} = {\boldsymbol{f}}(S)\{ {v_d}\} - {\boldsymbol{f}}^{(k)}{(S)}\{ {v_d}\} , \\ &\qquad\qquad\qquad\qquad\qquad\qquad\quad\forall {v_i} \in {\boldsymbol{V}}({v_d}) \end{split} $$ (13)

    由于IPR/IWR算法在迭代过程中都对采样残差进行了相应的预处理, 所以相比于ILSR算法, 两种算法的重构精度和迭代效率均有提升. 然而, IWR和IPR重构算法对未采样图信号的迭代残差估计是基于平滑准则, 导致出现过平滑现象[27]. 为了缓解“过平滑”问题, Yang等[21]提出基于局部扩散算子的IGDR重构算法, 如式(14)所示. 其中, $J$为采样节点和未采样节点之间的最大跳数, ${\delta _j}$表示与采样节点集$S$到未采样节点${v_j}$的最短路径相关的指示函数. IGDR算法将采样残差经过图带限滤波后得到的全局未采样残差${\boldsymbol{f}}_{{\rm{Re}}{\rm{ s}}}^G({S^c})$, 与采样残差基于随机游走策略得到的局部未采样残差${\boldsymbol{f}}_{{\rm{Re}} {\rm{s}}}^L({S^c})$相加, 得到最终的未采样残差${{\boldsymbol{f}}_{{\rm{Re}}{\rm{ s}}}}({S^c})$.

    $$ \begin{split} & {{\boldsymbol{f}}^{(k)}_{{\rm{Res}}}}{(S)} = \Phi^{\rm{T}} {(S)}{\boldsymbol{U}}_L^S\Phi (S)[{\boldsymbol{f}}(S) - {\boldsymbol{f}}^{(k)}{(S)}] \\ & {{\boldsymbol{f}}^{(k)}_{{\rm{Res}}}}{({S^c})} = [{\boldsymbol{f}}_{{\rm{Re}} {\rm{s}}}^G{({S^c})]^{(k)}} + [{\boldsymbol{f}}_{{\rm{Re}} {\rm{s}}}^L{({S^c})]^{(k)}} =\\ & \qquad\qquad\qquad{{\boldsymbol{U}}_{\rm{H}}}({S^c}){{\boldsymbol{U}}_{\rm{H}}^{\rm{T}}{(S)}}[{\boldsymbol{f}}(S) - {\boldsymbol{f}}^{(k)}{(S)}]+ \\ & \qquad\qquad\qquad{{\boldsymbol{P}}_S}\sum\limits_{j = 1}^J {{\delta _j}{{\boldsymbol{D}}^{ - 1}}{{\boldsymbol{A}}^j}[{\boldsymbol{f}}(S) - {\boldsymbol{f}}^{(k)}{{(S)}}]} \end{split} $$ (14)

    综上所述, IPR/IWR重构算法是基于图平滑滤波估计残差信号, 而IGDR算法是基于图带限特性的原则而设计的. 两种残差重构方法都是基于重构信号的低频分量相似性而设计的, 对高频分量缺乏相应的分析和处理, 导致迭代效率和重构精度相比于ILSR算法的提升有限.

    根据ILSR算法以及凸集映射原理[18]可知, 在第$k$次迭代中的信号${{\boldsymbol{f}}^{(k)}}$满足图带限特性[28]. 因为${{\boldsymbol{f}}^{(k)}}$满足图带限特性以及图傅里叶变换具有线性特征, 所以可知残差信号${\boldsymbol{f}}_{{\rm{Re}} s}^{(k)} = {\boldsymbol{f}} - {{\boldsymbol{f}}^{(k)}}$同样满足图带限分量的恒等不变性. 由此, 本文设计了一种基于残差谱移位的图信号重构模型, 如式(15)所示.

    $$ \begin{split} &{{\boldsymbol{f}}^{(k)}_{{\rm{Res}}}}{(S)} = \Phi^{\rm{T}} {(S)}{\boldsymbol{U}}_L^S\Phi (S)[{\boldsymbol{f}}(S) - {\boldsymbol{f}}^{(k)}{(S)}] \\ & {{\boldsymbol{f}}^{(k)}_{{\rm{Res}}}}{({S^c})} = \Phi^{\rm{T}} {({S^c})}{{\boldsymbol{f}}^*} \\ &\mathop {\min }\limits_{{{\boldsymbol{f}}^ * }}\;\;{\rm{ }}\left\| {{\boldsymbol{Y}} - {{\boldsymbol{U}}^{\rm{T}}_H}{{({S^c})}}{{\boldsymbol{f}}^ * }} \right\|_2^2 \\ &{\rm{s.t}}\qquad{\rm{ }}{\boldsymbol{Y}} = - {{\boldsymbol{U}}^{\rm{T}}_H}{(S)}\Phi (S)[{\boldsymbol{f}}(S) - {\boldsymbol{f}}^{(k)}{(S)}]{\rm{ }} \end{split} $$ (15)

    基于此重构模型, 本文提出基于残差谱移位的重构算法(Band-limited graph signals reconstruction based graph frequency shifting of residual signals, BGSR-GFS-R), 算法流程如算法2所示.

    算法 2. BGSR-GFS-R算法

    输入. ${{\boldsymbol{U}}_1} = {{\boldsymbol{U}}_H}({S^c}){{\boldsymbol{U}}^{\rm{T}}_H}{({S^c})},$ ${{\boldsymbol{U}}_2} = {{\boldsymbol{U}}_H}({S^c}){{\boldsymbol{U}}^{\rm{T}}_H}{(S)},$ ${{\boldsymbol{H}}_{{\rm{BL}}}} = {[{{\boldsymbol{U}}^{\rm{T}}_{{L}}}{(S)}{\rm{ }}\;\;\;{{\boldsymbol{U}}^{\rm{T}}_{{L}}}{(S^c)}]^{\rm{T}}}[{{\boldsymbol{U}}^{\rm{T}}_{{L}}}{(S)}{\rm{ }}\;\;\;{{\boldsymbol{U}}^{\rm{T}}_{{L}}}{(S^c)}],$ $\Phi (S),$ $\Phi ({S^c}),$ ${\boldsymbol{f}}(S),$ ${{\boldsymbol{P}}_{{S}}},$ $K,$ $M,$ $\sigma $

    输出. ${{\boldsymbol{f}}_{{R}}}({S^c})$.

    步骤 1. 初始化$k = 1,$ ${{\boldsymbol{f}}^{(1)}} = {{\boldsymbol{H}}_{{\rm{BL}}}}{\boldsymbol{f}}(S)$;

    若$k \leq K$, 则:

    步骤 2. ${{\boldsymbol{f}}_{{\rm{Res}}}}(S) = {\boldsymbol{f}}(S) - {{\boldsymbol{P}}_S}{{\boldsymbol{f}}^{(k)}}$;

    步骤 3. 设置零向量${{\boldsymbol{f}}^{(1)}_{{\rm{Res}}}}{({S^c})}$、单位矩阵${{\boldsymbol{H}}^{(1)}}$, 令${{\boldsymbol{G}}^{(1)}} = {{\boldsymbol{U}}_{\rm{2}}}{\boldsymbol{\Phi }}(S){{\boldsymbol{f}}_{{\rm{Res}}}}(S)$, 初始化$m=1$;

    若$m \leq M$, 则:

    步骤 4. ${{\boldsymbol{d}}^{(m)}} = - {{\boldsymbol{H}}^{(m)}}{{\boldsymbol{G}}^{(m)}}$;

    步骤 5. ${\alpha ^{(m)}} = \frac{{{({\boldsymbol{G}}^{(m)})}^{{\rm{ T}}}{{\boldsymbol{d}}^{(m)}}}}{{{({\boldsymbol{d}}^{(m)})}^{{\rm{ T}}}{{\boldsymbol{U}}_1}{{\boldsymbol{d}}^{(m)}}}}$;

    步骤 6. ${{\boldsymbol{f}}^{(m + 1)}_{{\rm{Res}}}}{({S^c})} = {{\boldsymbol{f}}^{(m)}_{{\rm{Res}}}}{({S^c})} + {\alpha ^{(m)}}{{\boldsymbol{d}}^{(m)}}$;

    步骤 7. ${{\boldsymbol{G}}^{(m + 1)}} = {{\boldsymbol{U}}_1}{{\boldsymbol{f}}^{(m + 1)}_{{\rm{Res}}}}{({S^c})} + {{\boldsymbol{G}}^{(1)}}$;

    步骤 8. ${{\boldsymbol{p}}^{(m)}} = {{\boldsymbol{f}}^{(m + 1)}_{{\rm{Res}}}}{({S^c})} - {{\boldsymbol{f}}^{(m)}_{{\rm{Res}}}}{({S^c})}$;

    步骤 9. ${{\boldsymbol{q}}^{(m)}} = {{\boldsymbol{G}}^{(m + 1)}} - {{\boldsymbol{G}}^{(m)}}$;

    步骤 10.

    $$\Delta {{\boldsymbol{H}}^{(m)}} = \frac{{{{\boldsymbol{p}}^{(m)}}{({\boldsymbol{p}}^{(m)})}^{\rm{T}}}}{{{({\boldsymbol{p}}^{(m)})}^{\rm{T}}{{\boldsymbol{q}}^{(m)}}}} - \frac{{{{\boldsymbol{H}}^{(m)}}{{\boldsymbol{q}}^{(m)}}{({\boldsymbol{q}}^{(m)})}^{\rm{T}}{{\boldsymbol{H}}^{(m)}}}}{{{({\boldsymbol{p}}^{(m)})}^{\rm{T}}{{\boldsymbol{H}}^{(m)}}{{\boldsymbol{q}}^{(m)}}}}$$

    步骤 11. ${{\boldsymbol{H}}^{(m + 1)}} = {{\boldsymbol{H}}^{(m)}} + \Delta {{\boldsymbol{H}}^{(m)}}$;

    步骤 12. 若$\big\| {{{\boldsymbol{f}}^{(m + 1)}_{{\rm{Res}}}}{{({S^c})}} - {{\boldsymbol{f}}^{(m)}_{{\rm{Res}}}}{{({S^c})}}} \big\|_2^2$小于门限阈值$\sigma $或达到最大迭代次数$M\,(m > M),$ 则终止迭代, 输出参数${{\boldsymbol{f}}^{(m + 1)}_{{\rm{Res}}}}{({S^c})},$ 否则继续迭代, 跳转至步骤4;

    步骤 13. $[{{\boldsymbol{f}}^ * _{{\rm{Res}}}}{({S^c})}]^{(k)} = \Phi^{\rm{T}} {({S^c})}{{\boldsymbol{f}}^{(m + 1)}_{{\rm{Res}}}}{({S^c})}$;

    步骤 14.

    ${{\boldsymbol{f}}^{(k + 1)}} = {{\boldsymbol{H}}_{{\rm{BL}}}}{{\boldsymbol{f}}^{(k)}} + {{\boldsymbol{H}}_{{\rm{BL}}}}{{\boldsymbol{f}}_{{\rm{Res}}}}(S) + [{{\boldsymbol{f}}^ *_{{\rm{Res}}}}{({S^c}) }]^{(k)}$

    步骤 15. 若$\left\| {{{\boldsymbol{f}}^{(k + 1)}} - {{\boldsymbol{f}}^{(k)}}} \right\|_2^2$小于门限阈值$\sigma $或达到最大迭代次数$K\,(k >K),$ 则终止迭代, 输出参数${{\boldsymbol{f}}^{(k + 1)}},$ 否则, 继续迭代, 跳转至步骤2;

    步骤 16. ${{\boldsymbol{f}}_{{R}}}({S^c}) = ({\boldsymbol{I}} - {{\boldsymbol{P}}_{{S}}}){{\boldsymbol{f}}^{(k + 1)}}$.

    本文提出的BGSR-GFS-R重构算法基于迭代中的采样残差信号和谱移位策略, 估计得到未采样残差信号, 然后将未采样残差信号与经过带限图滤波后的未采样信号相加, 最终得到重构后的未采样信号. 相比于其他基于残差处理的重构算法 (IWR/IPR和IGDR), 本算法对残差信号的处理不依赖于图网络的子图集合. 并且, 由于本算法利用的是残差信号的图带限分量的恒等不变性, 将其建模为最小二乘问题后进行迭代求解, 避免了“过平滑”现象.

    由于残差信号对应的图傅里叶变换矩阵同样为${{\boldsymbol{U}}^{\rm{T}}_{{H}}}{({S^c})},$ 所以要求矩阵${\boldsymbol{U}}_1={\boldsymbol{U}}_H({S^c}){\boldsymbol{U}}^{\rm{T}}_H({S^c})$为满秩矩阵, 即采样节点个数不小于带限信号的带宽.
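    下面给出BGSR-GFS-R外层迭代的一个简化示意性实现 (非作者原始代码): 内层最小二乘此处直接用 np.linalg.lstsq 求解, 以代替算法2中的拟牛顿内循环; 变量名均为举例假设:

```python
import numpy as np

def bgsr_gfs_r(U, B, S, f_S, K=50, sigma=1e-8):
    """BGSR-GFS-R 的简化示意: 外层按式(15)对迭代残差做谱移位估计."""
    N = U.shape[0]
    S = np.asarray(S)
    Sc = np.setdiff1d(np.arange(N), S)
    U_L = U[:, :B]
    H_BL = U_L @ U_L.T                      # 图带限(低通)投影算子 H_BL
    U_H_S, U_H_Sc = U[S, B:], U[Sc, B:]

    f_S_full = np.zeros(N)
    f_S_full[S] = f_S                       # 零填充的采样信号 f(S)
    f = H_BL @ f_S_full                     # 初始化 f^(1) = H_BL f(S)
    for _ in range(K):
        res_S = f_S - f[S]                  # 采样残差 f(S) - P_S f^(k)
        # 残差同样满足图带限分量的恒等不变性: U_H^T(S^c) r(S^c) = -U_H^T(S) r(S)
        res_Sc = np.linalg.lstsq(U_H_Sc.T, -U_H_S.T @ res_S, rcond=None)[0]
        res_S_full = np.zeros(N); res_S_full[S] = res_S
        res_Sc_full = np.zeros(N); res_Sc_full[Sc] = res_Sc
        f_new = H_BL @ f + H_BL @ res_S_full + res_Sc_full   # 对应步骤14
        if np.sum((f_new - f) ** 2) < sigma:
            f = f_new
            break
        f = f_new
    return f[Sc]                            # 重构的未采样信号 f_R(S^c)
```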

    现有的图信号重构算法所针对的图信号往往具备平滑或低频段受限的信号特征, 即各节点的信号值与其邻居节点的信号值差异较小, 在图谱域上能量较为集中在低频区域. 除此以外, 在实际情况中由于物理设备及传输手段的限制, 采集得到的图信号中往往存在着少量的异常节点数据[7]. 将上述数据集基于地理距离建模为图信号后, 本文发现由于其中存在少量节点的信号值与其邻居节点差异较大, 其在图谱域上呈现出类似于分段带限的信号特性, 如图2所示.

    图 2  分段带限图信号
    Fig. 2  Graph separate band-limited signals
    $$ \left[ \begin{array}{c} {{\tilde{\boldsymbol f}}_L} \\ {{\tilde{\boldsymbol f}}_0} \\ {{\tilde{\boldsymbol f}}_H} \end{array} \right] = \left[ \begin{array}{cc} {\boldsymbol{U}}_L^{\rm{T}}(S) & {\boldsymbol{U}}_L^{\rm{T}}({S^c}) \\ {\boldsymbol{U}}_0^{\rm{T}}(S) & {\boldsymbol{U}}_0^{\rm{T}}({S^c}) \\ {\boldsymbol{U}}_H^{\rm{T}}(S) & {\boldsymbol{U}}_H^{\rm{T}}({S^c}) \end{array} \right]\left[ \begin{array}{c} {\boldsymbol{\Phi}}(S){\boldsymbol{f}}(S) \\ {\boldsymbol{\Phi}}({S^c}){\boldsymbol{f}}({S^c}) \end{array} \right] $$ (16)

    针对分段带限图信号的重构问题, 本文上述两种重构算法同样适用. 由${{\tilde{\boldsymbol f}}_0} = {\mathbf{0}}$, 可知分段带限图信号在图频率${\lambda _i} \in ({\omega _1},{\omega _2})$内, 同样满足图带限分量的恒等不变性, 如式(16)所示. 基于上述分析, 本文提出分段带限图信号重构的优化模型, 如式(17)所示.

    $$\begin{split} & \mathop {\min }\limits_{{\boldsymbol{f}}({S^c})} {\rm{ }}\left\| {{\boldsymbol{Y}} - {{\boldsymbol{U}}^{\rm{T}}_0}{{({S^c})}}\Phi ({S^c}){\boldsymbol{f}}({S^c})} \right\|_2^2 \\ &{\rm{ s}}{\rm{.t }}\;\;\;\;\;\;{\boldsymbol{Y}} = - {{\boldsymbol{U}}^{\rm{T}}_0}{(S)}\Phi (S){\boldsymbol{f}}(S) \end{split} $$ (17)

    基于上述模型, 只需更改重构算法BGSR-GFS/BGSR-GFS-R中的部分输入变量, 便可实现分段带限图信号的重构. 具体而言, 在BGSR-GFS中需将输入变量更改为${{\boldsymbol{U}}_1} = {{\boldsymbol{U}}_0}({S^c}){{\boldsymbol{U}}^{\rm{T}}_0}{({S^c})}$和${{\boldsymbol{U}}_2} = {{\boldsymbol{U}}_0}({S^c}){{\boldsymbol{U}}^{\rm{T}}_0}{(S)}$; 在BGSR-GFS-R算法中, 除了同样更新矩阵${{\boldsymbol{U}}_1}$和${{\boldsymbol{U}}_2}$, 还需要将${{\boldsymbol{H}}_{{\rm{BL}}}}$更新为${\boldsymbol{H}}_{{\rm{BL}}}^ *$.

    $${\boldsymbol{H}}_{{\rm{BL}}}^ * = \left[ \begin{array}{cc} {{\boldsymbol{U}}_{{L}}}(S) & {{\boldsymbol{U}}_{{H}}}(S) \\ {{\boldsymbol{U}}_{{L}}}({S^c}) & {{\boldsymbol{U}}_{{H}}}({S^c}) \end{array} \right]\left[ \begin{array}{cc} {\boldsymbol{U}}_{{L}}^{\rm{T}}(S) & {\boldsymbol{U}}_{{L}}^{\rm{T}}({S^c}) \\ {\boldsymbol{U}}_{{H}}^{\rm{T}}(S) & {\boldsymbol{U}}_{{H}}^{\rm{T}}({S^c}) \end{array} \right]$$ (18)
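    上述输入变量的更改可用如下示意性代码片段表示 (其中 band_zero_idx 为零频段$({\omega _1},{\omega _2})$对应的特征向量列下标, 属举例假设):

```python
import numpy as np

def piecewise_band_inputs(U, band_zero_idx, S):
    """分段带限情形下的输入更新 (示意性代码): 用零频段对应的特征向量列 U_0
    替换 U_H 得到新的 U_1、U_2, 并用非零频段的列构造式(18)中的 H_BL^*."""
    N = U.shape[0]
    S = np.asarray(S)
    Sc = np.setdiff1d(np.arange(N), S)
    U_0 = U[:, band_zero_idx]               # \tilde{f}_0 = 0 所对应的列
    U1 = U_0[Sc] @ U_0[Sc].T                # U_1 = U_0(S^c) U_0^T(S^c)
    U2 = U_0[Sc] @ U_0[S].T                 # U_2 = U_0(S^c) U_0^T(S)

    keep = np.setdiff1d(np.arange(N), band_zero_idx)
    U_keep = U[:, keep]                     # 低频段与高频段 (非零频段) 的列
    H_BL_star = U_keep @ U_keep.T           # H_BL^* (在原节点顺序下的投影算子)
    return U1, U2, H_BL_star
```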

    本文将BGSR-GFS和BGSR-GFS-R重构算法与4种重构算法(ILSR、O-PGIR、IPR和IGDR)进行对比. 由于IPR算法性能优于IWR算法,故实验中只对比了IPR算法; 其次, 由于GBSR-IHT和GBSR-GFT是将ILSR算法变换至图谱域上进行重构, 其迭代效率和ILSR算法一致, 故本文未将其加入对比算法. 本文的实验仿真是在3.40 GHz的Intel i7-6700处理器和16 GB RAM的个人计算机上运行, 使用的软件为MATLAB R2019b.

    实验中采用的数据集分别为美国明尼苏达州交通网络(${G_1}$)和美国部分主要城市温度网络(${G_2}$), 如图3所示. ${G_1}$是由2642个节点和6608条边构成的, 节点和边分别表示交通网中的十字路口和实际的州际公路[29]; ${G_2}$中的节点个数为218, 节点表示美国主要城市[30], 本文采用$K$近邻法构建节点之间的边连接$(K = 5).$ 数据集中的带限图信号是由服从高斯分布的随机信号经过带限图滤波后构成的. 数据集${G_1}$的截止频率为0.4077, 数据集${G_2}$的截止频率为0.3698. 迭代阈值$\sigma $设置为$1 \times {10^{ - 8}}$.
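    其中$K$近邻建图过程可参考如下示意性代码 (原文未给出具体实现细节, 此处按式(1)的0/1权重并做对称化处理, 属假设性写法):

```python
import numpy as np

def knn_adjacency(coords, k=5):
    """按 K 近邻法构建 0/1 邻接矩阵并对称化, 对应文中 G_2 的建图方式 (K = 5).
    coords: N×2 的节点坐标 (如城市经纬度)."""
    N = coords.shape[0]
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)             # 排除节点自身
    W = np.zeros((N, N))
    nn = np.argsort(d, axis=1)[:, :k]       # 每个节点的 k 个最近邻
    for i in range(N):
        W[i, nn[i]] = 1.0
    return np.maximum(W, W.T)               # 对称化, 得到无向图
```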

    本文采用的采样策略为贪婪采样[20]和随机采样. 基于贪婪采样策略, ${G_1}$和${G_2}$分别得到的采样节点数为873和33, 如图3所示. 为了公平地比较各算法在不同采样情况下的重构效果, 仿真中随机采样的节点个数与贪婪采样一致. 本文采用重构信号和原始信号之间的相对误差 (Relative error, RE) 评估算法的重构精度, 如式(19)所示. 其中${\boldsymbol{f}}_S^R$和${{\boldsymbol{f}}_S}$分别表示重构信号和原始信号.

    图 3  图信号采样
    Fig. 3  Graph signals sampling
    $${{RE}} = \frac{{\left\| {{{\boldsymbol{f}}_S} - {\boldsymbol{f}}_S^R} \right\|_2^2}}{{\left\| {{{\boldsymbol{f}}_S}} \right\|_2^2}}$$ (19)
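    式(19)的相对误差可用如下示意性函数直接计算:

```python
import numpy as np

def relative_error(f_orig, f_rec):
    """式(19): 重构信号与原始信号之间的相对误差 RE."""
    return np.sum((f_orig - f_rec) ** 2) / np.sum(f_orig ** 2)
```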

    在无噪情况中, 不同算法的重构性能如图4所示, 其中图4(a)和图4(c)表示基于随机采样的${G_1}$和${G_2}$数据集的重构性能, 图4(b)和图4(d)表示基于贪婪采样的${G_1}$和${G_2}$数据集的重构性能. 相比于ILSR和O-PGIR算法, 由于BGSR-GFS算法利用了信号的图高频分量特征, 无论采用随机采样或贪婪采样, 新算法都具有更优的迭代效率和更小的重构误差. 此外, 由于BGSR-GFS-R算法基于残差信号的图高频分量特征进行重构, 该算法能够高效地估计未采样节点的残差信号, 相比于IPR和IGDR重构算法, 其迭代效率和重构精度均有提升.

    图 4  无噪环境下带限图信号重构性能对比
    Fig. 4  Comparison of graph band-limited signals reconstruction performances in noiseless environment

    图4(a)所示, 本文将重构算法应用于基于随机采样的${G_1}$数据集中, BGSR-GFS和BGSR-GFS-R的重构精度分别为$3.75 \times {10^{ - 15}}$$2.07 \times {10^{ - 15}}$, 算法ILSR, OPGIR, IPR和IGDR的重构精度分别为$1.05 \times {10^{ - 7}}$, $7.82 \times {10^{ - 13}}$, $8.13 \times {10^{ - 15}}$$6.51 \times $$ {10^{ - 15}}$. 而基于贪婪采样, BGSR和BGSR-GFS-R的重构精度分别为$3.79 \times {10^{ - 15}}$$2.70 \times {10^{ - 15}}$, ILSR, OPGIR, IPR和IGDR的重构精度分别为$1.47 \times $$ {10^{ - 14}}$, $9.01 \times {10^{ - 15}}$, $7.06 \times {10^{ - 15}}$$6.51 \times {10^{ - 15}}$. 新算法的重构精度提升40 % ~ 70 %.

    图4(b)所示, 本文将重构算法应用于数据集${G_2}$中, 新算法BGSR-GFS的重构精度分别为$1.97 \times {10^{ - 15}}$(随机)和$2.47 \times {10^{ - 15}}$(贪婪), 新算法BGSR-GFS-R的重构精度分别为$5.53 \times {10^{ - 16}}$(随机)和$7.51 \times {10^{ - 16}}$ (贪婪). 在随机采样中, 算法ILSR, OPGIR, IPR和IGDR的重构精度分别为$7.59 \times $$ {10^{ - 6}}$, $6.73 \times {10^{ - 11}}$, $2.76 \times {10^{ - 15}}$$3.42 \times {10^{ - 15}}$. 基于贪婪采样,算法ILSR, OPGIR, IPR和IGDR的重构精度分别为$1.65 \times {10^{ - 9}}$, $4.65 \times {10^{ - 15}}$, $1.43 \times {10^{ - 15}}$$3.40 \times {10^{ - 15}}$. 相比于其他算法, 新算法的重构精度提升约60 %.

    表1表2所示, 相比于ILSR和O-PGIR算法, BGSR-GFS算法的重构效率提升70 %. 相比于ILSR和O-PGIR算法, BGSR-GFS-R算法的重构效率提升75 %.

    表 1  无噪情况下基于随机采样的${G_1}$重构效率
    Table 1  ${G_1}$ reconstruction efficiency of random sampling in noiseless
    算法  迭代次数  运行时间 (s)
    ILSR 220 139.99
    OPGIR 114 108.78
    IPR 96 61.87
    IGDR 33 20.47
    BGSR-GFS 27 5.73
    BGSR-GFS-R 8 8.97
    表 2  无噪情况下基于随机采样的${G_2}$重构效率
    Table 2  ${G_2}$ reconstruction efficiency of random sampling in noiseless
    算法  迭代次数  运行时间 (s)
    ILSR 269 0.1509
    OPGIR 139 0.1291
    IPR 64 0.0405
    IGDR 34 0.0271
    BGSR-GFS 7 0.0065
    BGSR-GFS-R 5 0.0146

    为了对比噪声环境中算法的鲁棒性, 本文在采样信号中分别加入信噪比为$20\;{\rm{dB}}$和$40\;{\rm{dB}}$的随机高斯噪声. 信号重构性能对比如图5所示, 本文提出的重构算法和对比算法的抗噪鲁棒性相同, 然而BGSR-GFS和BGSR-GFS-R的迭代效率更高. 无论是本文算法还是对比算法均没有进行噪声抑制或消除的步骤, 导致无法消除噪声对重构性能的影响.

    图 5  含噪环境下带限图信号重构性能对比
    Fig. 5  Comparison of graph band-limited signals reconstruction performances in noisy environment

    在第3组仿真中, 本文针对分段带限图信号进行重构性能对比. 本文在第1组仿真实验的图信号中加入高频分量, 即随机选取${{Q}}$个连续的高频分量后, 再通过图傅里叶逆变换得到分段带限图信号 (${G_1}$和${G_2}$的${{Q}}$值分别为10和3). 为了确保对比实验的公平性, 本文将对比算法中的低通图滤波器调整为带通图滤波器.

    图6所示, 无论是基于随机采样或贪婪采样, 本文算法都具有良好的重构精度和迭代效率. 由于ILSR和O-PGIR算法都是利用图信号的低频分量相似性原则设计重构算法, 而没有考虑到图信号的高频段分量的差异性, 所以迭代效率十分有限. 算法IPR在ILSR的基础上, 基于相邻节点残差信号等值传递的原则进行迭代过程中增量的估计, 而算法IGDR在IPR的基础上增加了扩散策略, 进一步提高了迭代效率; 两种基于残差法的重构策略实质上都是利用了残差信号低频分量之间的相似性, 同样无法实现高效的信号重构. 与上述4种算法不同的是, 由于本文提出的两种算法同时考虑了图低频相似性和图高频差异性, 通过图谱域移位策略重构分段带限图信号, 具有较高的重构精度和迭代效率.

    图 6  分段带限图信号重构性能对比
    Fig. 6  Comparison of graph separate band-limited signals reconstruction performances

    本文针对带限图信号的重构问题, 提出了基于图带限分量恒等特性的重构模型. 通过将该重构模型转化为最小二乘问题, 本文提出了两种基于图谱域移位的重构算法. 此外, 本文所提出的新算法同样适用于分段带限图信号的重构问题. 最后, 数值仿真表明, 相比于其他重构算法, 本文算法的重构性能更优.

  • 图  1  不同类型模糊图像示例

    Fig.  1  Examples for different kinds of blurred images

    图  2  基于空域/频域的NR-IQA方法分类

    Fig.  2  Classification of spatial/spectral domain-based NR-IQA methods

    图  3  基于学习的NR-IQA方法分类

    Fig.  3  Classification of learning-based NR-IQA methods

    图  4  不同类型NR-IQA方法在不同人工模糊数据集中平均性能评价指标值比较

    Fig.  4  Average performance evaluation result comparison through different types of NR-IQA methods for different artificial blur databases

    图  5  不同类型NR-IQA方法在不同自然模糊数据集中平均性能评价指标值比较

    Fig.  5  Average performance evaluation result comparison through different types of NR-IQA methods for different natural blur databases

    表  1  含有模糊图像的主要图像质量评价数据集

    Table  1  Main image quality assessment databases including blurred images

    数据集  时间  参考图像  模糊图像  模糊类型  主观评价  分值范围
    IVC[28]  2005  4  20  高斯模糊  MOS  模糊−清晰 [1 5]
    LIVE[22]  2006  29  145  高斯模糊  DMOS  清晰−模糊 [0 100]
    A57[30]  2007  3  9  高斯模糊  DMOS  清晰−模糊 [0 1]
    TID2008[26]  2009  25  100  高斯模糊  MOS  模糊−清晰 [0 9]
    CSIQ[25]  2009  30  150  高斯模糊  DMOS  清晰−模糊 [0 1]
    VCL@FER[29]  2012  23  138  高斯模糊  MOS  模糊−清晰 [0 100]
    TID2013[27]  2013  25  125  高斯模糊  MOS  模糊−清晰 [0 9]
    KADID-10k 1[31]  2019  81  405  高斯模糊  MOS  模糊−清晰 [1 5]
    KADID-10k 2[31]  2019  81  405  镜头模糊  MOS  模糊−清晰 [1 5]
    KADID-10k 3[31]  2019  81  405  运动模糊  MOS  模糊−清晰 [1 5]
    MLIVE1[33]  2012  15  225  高斯模糊和高斯白噪声  DMOS  清晰−模糊 [0 100]
    MLIVE2[33]  2012  15  225  高斯模糊和JPEG压缩  DMOS  清晰−模糊 [0 100]
    MDID2013[32]  2013  12  324  高斯模糊、JPEG压缩和白噪声  DMOS  清晰−模糊 [0 1]
    MDID[34]  2017  20  1600  高斯模糊、对比度变化、高斯噪声、JPEG或JPEG2000  MOS  模糊−清晰 [0 8]
    BID[21]  2011  —  586  自然模糊  MOS  模糊−清晰 [0 5]
    CID2013[35]  2013  —  480  自然模糊  MOS  模糊−清晰 [0 100]
    CLIVE[36-37]  2016  —  1162  自然模糊  MOS  模糊−清晰 [0 100]
    KonIQ-10k[38]  2018  —  10073  自然模糊  MOS  模糊−清晰 [1 5]

    表  2  基于空域/频域的不同方法优缺点对比

    Table  2  Advantage and disadvantage comparison for different methods based on spatial/spectral domain

    方法分类  优点  缺点
    边缘信息  概念直观、计算复杂度低  容易因图像中缺少锐利边缘而影响评价结果
    再模糊理论  对图像内容依赖小, 计算复杂度低  准确性依赖 FR-IQA 方法
    奇异值分解  能较好地提取图像结构、边缘、纹理信息  计算复杂度较高
    自由能理论  外部输入信号与其生成模型可解释部分之间的差距与视觉感受的图像质量密切相关  计算复杂度高
    DFT/DCT/小波变换  综合了图像的频域特性和多尺度特征, 准确性和鲁棒性更高  计算复杂度高

    表  3  基于学习的不同方法优缺点对比

    Table  3  Advantage and disadvantage comparison for different methods based on learning

    方法分类  优点  缺点
    SVM  在小样本训练集上能够取得比其他算法更好的效果  评价结果的好坏由提取的特征决定
    NN  具有很好的非线性映射能力  样本较少时, 容易出现过拟合现象, 且计算复杂度随着数据量的增加而增大
    深度学习  可以从大量数据中自动学习图像特征的多层表示  对数据集中数据量要求大
    字典/码本  可以获得图像中的高级特征  字典/码本的大小减小时, 性能显著下降
    MVG  无需图像的 MOS/DMOS 值  模型建立困难, 对数据集中数据量要求较大

    表  4  用于对比的不同NR-IQA方法

    Table  4  Different NR-IQA methods for comparison

    方法类别  方法  特征  模糊/通用
    空域/频域 空域 边缘信息  JNB[43]  计算边缘分块所对应的边缘宽度  模糊
    边缘信息  CPBD[44]  计算模糊检测的累积概率  模糊
    边缘信息  MLV[47]  计算图像的最大局部变化得到反映图像对比度信息的映射图  模糊
    自由能理论  ARISM[63]  每个像素 AR 模型系数的能量差和对比度差  模糊
    边缘信息  BIBLE[49]  图像的梯度和 Tchebichef 矩量  模糊
    边缘信息  Zhan 等[14]  图像中最大梯度及梯度变化量  模糊
    频域 DFT变换  S3[65]  在频域测量幅度谱的斜率, 在空域测量空间变化情况  模糊
    小波变换  LPC-SI[81]  LPC 强度变化作为指标  模糊
    小波变换  BISHARP[77]  计算图像的均方根来获取图像局部对比度信息, 同时利用小波变换中对角线小波系数  模糊
    HVS滤波器  HVS-MaxPol[85]  利用 MaxPol 卷积滤波器分解与图像清晰度相关的有意义特征  模糊
    学习 机器学习 SVM+SVR  BIQI[86]  对图像进行小波变换后, 利用 GGD 对得到的子带系数进行参数化  通用
    SVM+SVR  DIIVINE[87]  从小波子带系数中提取一系列的统计特征  通用
    SVM+SVR  SSEQ[88]  空间−频域熵特征  通用
    SVM+SVR  BLIINDS-II[91]  多尺度下的广义高斯模型形状参数特征、频率变化系数特征、能量子带特征、基于定位模型的特征  通用
    SVR  BRISQUE[96]  GGD 拟合 MSCN 系数作为特征, AGGD 拟合 4 个相邻元素乘积系数作为特征  通用
    SVR  RISE[107]  多尺度图像空间中的梯度值和奇异值特征, 以及多分辨率图像的熵特征  模糊
    SVR  Liu 等[109]  局部模式算子提取图像结构信息, Toggle 算子提取边缘信息  模糊
    SVR  Cai 等[110]  输入图像与其重新模糊版本之间的 Log-Gabor 滤波器响应差异和基于方向选择性的模式差异, 以及输入图像与其 4 个下采样图像之间的自相似性  模糊
    深度学习 CNN  Kang's CNN[116]  对图像分块进行局部对比度归一化  通用
    浅层CNN+GRNN  Yu's CNN[127]  对图像分块进行局部对比度归一化  模糊
    聚类技术+RBM  MSFF[139]  Gabor 滤波器提取不同方向和尺度的原始图像特征, 然后由 RBMs 生成特征描述符  通用
    DNN  MEON[132]  原始图像作为输入  通用
    CNN  DIQaM-NR[131]  使用 CNN 提取失真图像块和参考图像块的特征  通用
    CNN  DIQA[118]  图像归一化后, 通过下采样及上采样得到低频图像  通用
    CNN  SGDNet[133]  使用 DCNN 作为特征提取器获取图像特征  通用
    秩学习  Rank Learning[141]  选取一定比例的图像块集合作为输入, 梯度信息被用来指导图像块选择过程  模糊
    DCNN+SFA  SFA[128]  多个图像块作为输入, 并使用预先训练好的 DCNN 模型提取特征  模糊
    DNN+NSS  NSSADNN[134]  每个图像块归一化后用 CNNs 提取特征, 得到 1024 维向量  通用
    CNN  DB-CNN[123]  用预训练的 S-CNN 及 VGG-16 分别提取合成失真与真实图像的相关特征  通用
    CNN  CGFA-CNN[124]  用 VGG-16 以提取失真图像的相关特征  通用
    字典/码本 聚类算法+码本  CORNIA[145]  未标记图像块中提取局部特征进行 K-means 聚类以构建码本  通用
    聚类算法+码本  QAC[147]  用比例池化策略估计每个分块的局部质量, 通过 QAC 学习不同质量级别上的质心作为码本  通用
    稀疏学习+字典  SPARISH[143]  以图像块的方式表示模糊图像, 并使用稀疏系数计算块能量  模糊
    MVG MVG模型  NIQE[150]  提取 MSCN 系数, 再用 GGD 和 AGGD 拟合得到特征  通用

    表  5  基于深度学习的方法所采用的不同网络结构

    Table  5  Different network structures of deep learning-based methods

    方法  网络结构
    Kang's CNN[116]  包括一个含有最大/最小池化的卷积层, 两个全连接层及一个输出结点
    Yu's CNN[127]  采用单一特征层挖掘图像内在特征, 利用 GRNN 评价图像质量
    MSFF[139]  图像的多个特征作为输入, 通过端到端训练学习特征权重
    MEON[132]  由失真判别网络和质量预测网络两个子网络组成, 并采用 GDN 作为激活函数
    DIQaM-NR[131]  包含 10 个卷积层和 5 个池化层用于特征提取, 以及 2 个全连接层进行回归分析
    DIQA[118]  网络训练分为客观失真部分及与人类视觉系统相关部分两个阶段
    SGDNet[133]  包括视觉显著性预测和图像质量预测的两个子任务
    Rank Learning[141]  结合了 Siamese Mobilenet 及多尺度 patch 提取方法
    SFA[128]  包括 4 个步骤: 图像的多 patch 表示, 预先训练好的 DCNN 模型提取特征, 通过 3 种不同统计结构进行特征聚合, 部分最小二乘回归进行质量预测
    NSSADNN[134]  采用多任务学习方式设计, 包括自然场景统计 (NSS) 特征预测任务和质量分数预测任务
    DB-CNN[123]  两个卷积神经网络分别专注于两种失真图像特征提取, 并采用双线性池化实现质量预测
    CGFA-CNN[124]  采用两阶段策略, 首先基于 VGG-16 网络的子网络 1 识别图像中的失真类型, 而后利用子网络 2 实现失真量化

    表  6  基于空域/频域的不同NR-IQA方法在不同数据集中比较结果

    Table  6  Comparison of different spatial/spectral domain-based NR-IQA methods for different databases

    方法  发表时间  LIVE: PLCC  SROCC  RMSE  MAE  CSIQ: PLCC  SROCC  RMSE  MAE
    JNB[43]  2009  0.843  0.842  11.706  9.241  0.786  0.762  0.180  0.122
    CPBD[44]  2011  0.913  0.943  8.882  6.820  0.874  0.885  0.140  0.111
    S3[65]  2012  0.919  0.963  8.578  7.335  0.894  0.906  0.135  0.110
    LPC-SI[81]  2013  0.907  0.923  9.177  7.275  0.923  0.922  0.111  0.093
    MLV[47]  2014  0.959  0.957  6.171  4.896  0.949  0.925  0.091  0.071
    ARISM[63]  2015  0.962  0.968  5.932  4.512  0.944  0.925  0.095  0.076
    BIBLE[49]  2016  0.963  0.973  5.883  4.605  0.940  0.913  0.098  0.077
    Zhan 等[14]  2018  0.960  0.963  6.078  4.697  0.967  0.950  0.073  0.057
    BISHARP[77]  2018  0.952  0.960  6.694  5.280  0.942  0.927  0.097  0.078
    HVS-MaxPol[85]  2019  0.957  0.960  6.318  5.076  0.943  0.921  0.095  0.077
    方法  发表时间  TID2008: PLCC  SROCC  RMSE  MAE  TID2013: PLCC  SROCC  RMSE  MAE
    JNB[43]  2009  0.661  0.667  0.881  0.673  0.695  0.690  0.898  0.687
    CPBD[44]  2011  0.820  0.841  0.672  0.524  0.854  0.852  0.649  0.526
    S3[65]  2012  0.851  0.842  0.617  0.478  0.879  0.861  0.595  0.480
    LPC-SI[81]  2013  0.861  0.896  0.599  0.478  0.869  0.919  0.621  0.507
    MLV[47]  2014  0.858  0.855  0.602  0.468  0.883  0.879  0.587  0.460
    ARISM[63]  2015  0.843  0.851  0.632  0.492  0.895  0.898  0.558  0.442
    BIBLE[49]  2016  0.893  0.892  0.528  0.413  0.905  0.899  0.531  0.426
    Zhan 等[14]  2018  0.937  0.942  0.410  0.320  0.954  0.961  0.374  0.288
    BISHARP[77]  2018  0.877  0.880  0.564  0.439  0.892  0.896  0.565  0.449
    HVS-MaxPol[85]  2019  0.853  0.851  0.612  0.484  0.877  0.875  0.599  0.484

    表  7  基于学习的不同NR-IQA方法在不同人工模糊数据集中比较结果

    Table  7  Comparison of different learning-based NR-IQA methods for different artificial blur databases

    方法  发表时间  LIVE: PLCC  SROCC  CSIQ: PLCC  SROCC  TID2008: PLCC  SROCC  TID2013: PLCC  SROCC
    BIQI[86]20100.9200.914 0.8460.773 0.7940.799 0.8250.815
    DIIVINE[87]20110.9430.9360.8860.8790.8350.8290.8470.842
    BLIINDS-II[91]20120.9390.9310.8860.8920.8420.8590.8570.862
    BRISQUE[96]20120.9510.9430.9210.9070.8660.8650.8620.861
    CORNIA[145]20120.9680.9690.7810.7140.9320.9320.9040.912
    NIQE[150]20130.9390.9300.9180.8910.8320.8230.8160.807
    QAC[147]20130.9160.9030.8310.8310.8130.8120.8480.847
    SSEQ[88]20140.9610.9480.8710.8700.8580.8520.8630.862
    Kang's CNN[116]20140.9630.9830.7740.7810.8800.8500.9310.922
    SPARISH[143]20160.9600.9600.9390.9140.8960.8960.9020.894
    Yu's CNN[127]20170.9730.9650.9420.9250.9370.9190.9220.914
    RISE[107]20170.9620.9490.9460.9280.9290.9220.9420.934
    MEON[132]20180.9480.9400.9160.9050.8910.880
    DIQaM-NR[131]20180.9720.9600.8930.8850.9150.908
    DIQA[118]20190.9520.9510.8710.8650.9210.918
    SGDNet[133]20190.9460.9390.8660.8600.9280.914
    Rank Learning[141]20190.9690.9540.9790.9530.9590.9490.9650.955
    SFA[128]20190.9720.9630.9460.9370.9540.948
    NSSADNN[134]20190.9710.9810.9230.9300.8570.840
    CGFA-CNN[124]20200.9740.9680.9550.941
    MSFF[139]20200.9540.9620.9250.9280.9210.928
    DB-CNN[123]20200.9560.9350.9690.9470.8570.844
    Liu 等[109]20200.9800.9730.9550.9360.9720.964
    Cai 等[110]20200.9580.9550.9520.9230.9570.941

    表  8  基于学习的不同NR-IQA方法在不同自然模糊数据集中比较结果

    Table  8  Comparison of different learning-based NR-IQA methods for different natural blur databases

    方法  发表时间  BID: PLCC  SROCC  CID2013: PLCC  SROCC  CLIVE: PLCC  SROCC
    BIQI[86]20100.6040.572 0.7770.744 0.5400.519
    DIIVINE[87]20110.5060.4890.4990.4770.5580.509
    BLIINDS-II[91]20120.5580.5300.7310.7010.5070.463
    BRISQUE[96]20120.6120.5900.7140.6820.6450.607
    CORNIA[145]20120.6800.6240.6650.618
    NIQE[150]20130.4710.4690.6930.6330.4780.421
    QAC[147]20130.3210.3180.1870.1620.3180.298
    SSEQ[88]20140.6040.5810.6890.676
    Kang's CNN[116]20140.4980.4820.5230.5260.5220.496
    SPARISH[143]20160.3560.3070.6780.6610.4840.402
    Yu's CNN[127]20170.5600.5570.7150.7040.5010.502
    RISE[107]20170.6020.5840.7930.7690.5550.515
    MEON[132]20180.4820.4700.7030.7010.6930.688
    DIQaM-NR[131]20180.4760.4610.6860.6740.6010.606
    DIQA[118]20190.5060.4920.7200.7080.7040.703
    SGDNet[133]20190.4220.4170.6530.6440.8720.851
    Rank Learning[141]20190.7510.7190.8630.836
    SFA[128]20190.8400.8260.8330.812
    NSSADNN[134]20190.5740.5680.8250.7480.8130.745
    CGFA-CNN[124]20200.8460.837
    DB-CNN[123]20200.4750.4640.6860.6720.8690.851
    Cai 等[110]20200.6330.6030.8800.874
  • [1] Jayageetha J, Vasanthanayaki C. Medical image quality assessment using CSO based deep neural network. Journal of Medical Systems, 2018, 42(11): Article No. 224
    [2] Ma J J, Nakarmi U, Kin C Y S, Sandino C M, Cheng J Y, Syed A B, et al. Diagnostic image quality assessment and classification in medical imaging: Opportunities and challenges. In: Proceedings of the 17th International Symposium on Biomedical Imaging (ISBI). Iowa City, USA: IEEE, 2020. 337−340
    [3] Chen G B, Zhai M T. Quality assessment on remote sensing image based on neural networks. Journal of Visual Communication and Image Representation, 2019, 63: Article No. 102580
    [4] Hombalimath A, Manjula H T, Khanam A, Girish K. Image quality assessment for iris recognition. International Journal of Scientific and Research Publications, 2018, 8(6): 100-103
    [5] Zhai G T, Min X K. Perceptual image quality assessment: A survey. Science China Information Sciences, 2020, 63(11): Article No. 211301
    [6] 王烨茹. 基于数字图像处理的自动对焦方法研究 [博士学位论文], 浙江大学, 中国, 2018.

    Wang Ye-Ru. Research on Auto-focus Methods Based on Digital Imaging Processing [Ph.D. dissertation], Zhejiang University, China, 2018.
    [7] 尤玉虎, 刘通, 刘佳文. 基于图像处理的自动对焦技术综述. 激光与红外, 2013, 43(2): 132-136 doi: 10.3969/j.issn.1001-5078.2013.02.003

    You Yu-Hu, Liu Tong, Liu Jia-Wen. Survey of the auto-focus methods based on image processing. Laser and Infrared, 2013, 43(2): 132-136 doi: 10.3969/j.issn.1001-5078.2013.02.003
    [8] Cannon M. Blind deconvolution of spatially invariant image blurs with phase. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1976, 24(1): 58-63 doi: 10.1109/TASSP.1976.1162770
    [9] Tekalp A M, Kaufman H, Woods J W. Identification of image and blur parameters for the restoration of noncausal blurs. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1986, 34(4): 963-972 doi: 10.1109/TASSP.1986.1164886
    [10] Pavlovic G, Tekalp A M. Maximum likelihood parametric blur identification based on a continuous spatial domain model. IEEE Transactions on Image Processing, 1992, 1(4): 496-504 doi: 10.1109/83.199919
    [11] Kim S K, Park S R, Paik J K. Simultaneous out-of-focus blur estimation and restoration for digital auto-focusing system. IEEE Transactions on Consumer Electronics, 1998, 44(3): 1071-1075 doi: 10.1109/30.713236
    [12] Sada M M, Mahesh G M. Image deblurring techniques-a detail review. International Journal of Scientific Research in Science, Engineering and Technology, 2018, 4(2): 176-188
    [13] Wang R X, Tao D C. Recent progress in image deblurring. arXiv:1409.6838, 2014.
    [14] Zhan Y B, Zhang R. No-reference image sharpness assessment based on maximum gradient and variability of gradients. IEEE Transactions on Multimedia, 2018, 20(7): 1796-1808 doi: 10.1109/TMM.2017.2780770
    [15] Wang X W, Liang X, Zheng J J, Zhou H J. Fast detection and segmentation of partial image blur based on discrete Walsh-Hadamard transform. Signal Processing: Image Communication, 2019, 70: 47-56 doi: 10.1016/j.image.2018.09.007
    [16] Liao L F, Zhang X, Zhao F Q, Zhong T, Pei Y C, Xu X M, et al. Joint image quality assessment and brain extraction of fetal MRI using deep learning. In: Proceedings of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham, Germany: Springer, 2020. 415−424
    [17] Li D Q, Jiang T T. Blur-specific no-reference image quality assessment: A classification and review of representative methods. In: Proceedings of the 2019 International Conference on Sensing and Imaging. Cham, Germany: Springer, 2019. 45−68
    [18] Dharmishtha P, Jaliya U K, Vasava H D. A review: No-reference/blind image quality assessment. International Research Journal of Engineering and Technology, 2017, 4(1): 339-343
    [19] Yang X H, Li F, Liu H T. A survey of DNN methods for blind image quality assessment. IEEE Access, 2019, 7: 123788-123806 doi: 10.1109/ACCESS.2019.2938900
    [20] 王志明. 无参考图像质量评价综述. 自动化学报, 2015, 41(6): 1062-1079

    Wang Zhi-Ming. Review of no-reference image quality assessment. Acta Automatica Sinica, 2015, 41(6): 1062-1079
    [21] Ciancio A, da Costa A L N T T, da Silva E A B, Said A, Samadani R, Obrador P. No-reference blur assessment of digital pictures based on multifeature classifiers. IEEE Transactions on Image Processing, 2011, 20(1): 64-75 doi: 10.1109/TIP.2010.2053549
    [22] Sheikh H R, Sabir M F, Bovik A C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Transactions on Image Processing, 2006, 15(11): 3440-3451 doi: 10.1109/TIP.2006.881959
    [23] Zhu X, Milanfar P. Removing atmospheric turbulence via space-invariant deconvolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(1): 157-170 doi: 10.1109/TPAMI.2012.82
    [24] Franzen R. Kodak Lossless True Color Image Suite [Online], available: http://www.r0k.us/graphics/kodak/, May 1, 1999
    [25] Larson E C, Chandler D M. Most apparent distortion: Full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging, 2010, 19(1): Article No. 011006
    [26] Ponomarenko N N, Lukin V V, Zelensky A, Egiazarian K, Astola J, Carli M, et al. TID2008 - a database for evaluation of full-reference visual quality assessment metrics. Advances of Modern Radioelectronics, 2009, 10: 30-45
    [27] Ponomarenko N, Ieremeiev O, Lukin V, Egiazarian K, Jin L N, Astola J, et al. Color image database TID2013: Peculiarities and preliminary results. In: Proceedings of the 2013 European Workshop on Visual Information Processing (EUVIP). Paris, France: IEEE, 2013. 106−111
    [28] Le Callet P, Autrusseau F. Subjective quality assessment IRCCyN/IVC database [Online], available: http://www.irccyn.ec-nantes.fr/ivcdb/, February 4, 2015
    [29] Zarić A E, Tatalović N, Brajković N, Hlevnjak H, Lončarić M, Dumić E, et al. VCL@FER image quality assessment database. Automatika, 2012, 53(4): 344-354 doi: 10.7305/automatika.53-4.241
    [30] Chandler D M, Hemami S S. VSNR: A wavelet-based visual signal-to-noise ratio for natural images. IEEE Transactions on Image Processing, 2007, 16(9): 2284-2298 doi: 10.1109/TIP.2007.901820
    [31] Lin H H, Hosu V, Saupe D. KADID-10k: A large-scale artificially distorted IQA database. In: Proceedings of the 11th International Conference on Quality of Multimedia Experience (QoMEX). Berlin, Germany: IEEE, 2019. 1−3
    [32] Gu K, Zhai G T, Yang X K, Zhang W J. Hybrid no-reference quality metric for singly and multiply distorted images. IEEE Transactions on Broadcasting, 2014, 60(3): 555-567 doi: 10.1109/TBC.2014.2344471
    [33] Jayaraman D, Mittal A, Moorthy A K, Bovik A C. Objective quality assessment of multiply distorted images. In: Proceedings of the 2012 Conference Record of the 46th Asilomar Conference on Signals, Systems and Computers (ASILOMAR). Pacific Grove, USA: IEEE, 2012. 1693−1697
    [34] Sun W, Zhou F, Liao Q M. MDID: A multiply distorted image database for image quality assessment. Pattern Recognition, 2017, 61: 153-168 doi: 10.1016/j.patcog.2016.07.033
    [35] Virtanen T, Nuutinen M, Vaahteranoksa M, Oittinen P, Häkkinen J. CID2013: A database for evaluating no-reference image quality assessment algorithms. IEEE Transactions on Image Processing, 2015, 24(1): 390-402 doi: 10.1109/TIP.2014.2378061
    [36] Ghadiyaram D, Bovik A C. Massive online crowdsourced study of subjective and objective picture quality. IEEE Transactions on Image Processing, 2016, 25(1): 372-387 doi: 10.1109/TIP.2015.2500021
    [37] Ghadiyaram D, Bovik A C. LIVE in the wild image quality challenge database. [Online], available: http://live.ece.utexas.edu/research/ChallengeDB/index.html, 2015.
    [38] Hosu V, Lin H H, Sziranyi T, Saupe D. KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing, 2020, 29: 4041-4056 doi: 10.1109/TIP.2020.2967829
    [39] Zhu X, Milanfar P. Image reconstruction from videos distorted by atmospheric turbulence. In: Proceedings of the SPIE 7543, Visual Information Processing and Communication. San Jose, USA: SPIE, 2010. 75430S
    [40] Marziliano P, Dufaux F, Winkler S, Ebrahimi T. Perceptual blur and ringing metrics: Application to JPEG2000. Signal Processing: Image Communication, 2004, 19(2): 163-172 doi: 10.1016/j.image.2003.08.003
    [41] 赵巨峰, 冯华君, 徐之海, 李奇. 基于模糊度和噪声水平的图像质量评价方法. 光电子•激光, 2010, 21(7): 1062-1066

    Zhao Ju-Feng, Feng Hua-Jun, Xu Zhi-Hai, Li Qi. Image quality assessment based on blurring and noise level. Journal of Optoelectronics • Laser, 2010, 21(7): 1062-1066
    [42] Zhang F Y, Roysam B. Blind quality metric for multidistortion images based on cartoon and texture decomposition. IEEE Signal Processing Letters, 2016, 23(9): 1265-1269 doi: 10.1109/LSP.2016.2594166
    [43] Ferzli R, Karam L J. A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE Transactions on Image Processing, 2009, 18(4): 717-728 doi: 10.1109/TIP.2008.2011760
    [44] Narvekar N D, Karam L J. A no-reference image blur metric based on the cumulative probability of blur detection (CPBD). IEEE Transactions on Image Processing, 2011, 20(9): 2678-2683 doi: 10.1109/TIP.2011.2131660
    [45] Wu S Q, Lin W S, Xie S L, Lu Z K, Ong E P, Yao S S. Blind blur assessment for vision-based applications. Journal of Visual Communication and Image Representation, 2009, 20(4): 231-241 doi: 10.1016/j.jvcir.2009.03.002
    [46] Ong E P, Lin W S, Lu Z K, Yang X K, Yao S S, Pan F, et al. A no-reference quality metric for measuring image blur. In: Proceedings of the 7th International Symposium on Signal Processing and Its Applications. Paris, France: IEEE, 2003. 469−472
    [47] Bahrami K, Kot A C. A fast approach for no-reference image sharpness assessment based on maximum local variation. IEEE Signal Processing Letters, 2014, 21(6): 751-755 doi: 10.1109/LSP.2014.2314487
    [48] 蒋平, 张建州. 基于局部最大梯度的无参考图像质量评价. 电子与信息学报, 2015, 37(11): 2587-2593

    Jiang Ping, Zhang Jian-Zhou. No-reference image quality assessment based on local maximum gradient. Journal of Electronics & Information Technology, 2015, 37(11): 2587-2593
    [49] Li L D, Lin W S, Wang X S, Yang G B, Bahrami K, Kot A C. No-reference image blur assessment based on discrete orthogonal moments. IEEE Transactions on Cybernetics, 2016, 46(1): 39-50 doi: 10.1109/TCYB.2015.2392129
    [50] Crete F, Dolmiere T, Ladret P, Nicolas M. The blur effect: Perception and estimation with a new no-reference perceptual blur metric. In: Proceedings of the SPIE 6492, Human Vision and Electronic Imaging XII. San Jose, USA: SPIE, 2007. 64920I
    [51] Wang Z, Bovik A C, Sheikh H R, Simoncelli E P. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 2004, 13(4): 600-612 doi: 10.1109/TIP.2003.819861
    [52] 桑庆兵, 苏媛媛, 李朝锋, 吴小俊. 基于梯度结构相似度的无参考模糊图像质量评价. 光电子•激光, 2013, 24(3): 573-577

    Sang Qing-Bing, Su Yuan-Yuan, Li Chao-Feng, Wu Xiao-Jun. No-reference blur image quality assemssment based on gradient similarity. Journal of Optoelectronics • Laser, 2013, 24(3): 573-577
    [53] 邵宇, 孙富春, 李洪波. 基于视觉特性的无参考型遥感图像质量评价方法. 清华大学学报(自然科学版), 2013, 53(4): 550-555

    Shao Yu, Sun Fu-Chun, Li Hong-Bo. No-reference remote sensing image quality assessment method using visual properties. Journal of Tsinghua University (Science & Technology), 2013, 53(4): 550-555
    [54] Wang T, Hu C, Wu S Q, Cui J L, Zhang L Y, Yang Y P, et al. NRFSIM: A no-reference image blur metric based on FSIM and re-blur approach. In: Proceedings of the 2017 IEEE International Conference on Information and Automation (ICIA). Macau, China: IEEE, 2017. 698−703
    [55] Zhang L, Zhang L, Mou X Q, Zhang D. FSIM: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 2011, 20(8): 2378-2386 doi: 10.1109/TIP.2011.2109730
    [56] Bong D B L, Khoo B E. An efficient and training-free blind image blur assessment in the spatial domain. IEICE Transactions on Information and Systems, 2014, E97-D(7): 1864-1871 doi: 10.1587/transinf.E97.D.1864
    [57] 王红玉, 冯筠, 牛维, 卜起荣, 贺小伟. 基于再模糊理论的无参考图像质量评价. 仪器仪表学报, 2016, 37(7): 1647-1655 doi: 10.3969/j.issn.0254-3087.2016.07.026

    Wang Hong-Yu, Feng Jun, Niu Wei, Bu Qi-Rong, He Xiao-Wei. No-reference image quality assessment based on re-blur theory. Chinese Journal of Scientific Instrument, 2016, 37(7): 1647-1655 doi: 10.3969/j.issn.0254-3087.2016.07.026
    [58] 王冠军, 吴志勇, 云海姣, 梁敏华, 杨华. 结合图像二次模糊范围和奇异值分解的无参考模糊图像质量评价. 计算机辅助设计与图形学学报, 2016, 28(4): 653-661 doi: 10.3969/j.issn.1003-9775.2016.04.016

    Wang Guan-Jun, Wu Zhi-Yong, Yun Hai-Jiao, Liang Min-Hua, Yang Hua. No-reference quality assessment for blur image combined with re-blur range and singular value decomposition. Journal of Computer-Aided Design and Computer Graphics, 2016, 28(4): 653-661 doi: 10.3969/j.issn.1003-9775.2016.04.016
    [59] Chetouani A, Mostafaoui G, Beghdadi A. A new free reference image quality index based on perceptual blur estimation. In: Proceedings of the 10th Pacific-Rim Conference on Multimedia. Bangkok, Thailand: Springer, 2009. 1185−1196
    [60] Sang Q B, Qi H X, Wu X J, Li C F, Bovik A C. No-reference image blur index based on singular value curve. Journal of Visual Communication and Image Representation, 2014, 25(7): 1625-1630 doi: 10.1016/j.jvcir.2014.08.002
    [61] Qureshi M A, Deriche M, Beghdadi A. Quantifying blur in colour images using higher order singular values. Electronics Letters, 2016, 52(21): 1755-1757 doi: 10.1049/el.2016.1792
    [62] Zhai G T, Wu X L, Yang X K, Lin W S, Zhang W J. A psychovisual quality metric in free-energy principle. IEEE Transactions on Image Processing, 2012, 21(1): 41-52 doi: 10.1109/TIP.2011.2161092
    [63] Gu K, Zhai G T, Lin W S, Yang X K, Zhang W J. No-reference image sharpness assessment in autoregressive parameter space. IEEE Transactions on Image Processing, 2015, 24(10): 3218-3231 doi: 10.1109/TIP.2015.2439035
    [64] Chetouani A, Beghdadi A, Deriche M. A new reference-free image quality index for blur estimation in the frequency domain. In: Proceedings of the 2009 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). Ajman, United Arab Emirates: IEEE, 2009. 155−159
    [65] Vu C T, Phan T D, Chandler D M. S3: A spectral and spatial measure of local perceived sharpness in natural images. IEEE Transactions on Image Processing, 2012, 21(3): 934-945 doi: 10.1109/TIP.2011.2169974
    [66] 卢彦飞, 张涛, 郑健, 李铭, 章程. 基于局部标准差与显著图的模糊图像质量评价方法. 吉林大学学报(工学版), 2016, 46(4): 1337-1343

    Lu Yan-Fei, Zhang Tao, Zheng Jian, LI Ming, Zhang Cheng. No-reference blurring image quality assessment based on local standard deviation and saliency map. Journal of Jilin University (Engineering and Technology Edition), 2016, 46(4): 1337-1343
    [67] Marichal X, Ma W Y, Zhang H J. Blur determination in the compressed domain using DCT information. In: Proceedings of the 1999 International Conference on Image Processing (Cat. 99CH36348). Kobe, Japan: IEEE, 1999. 386−390
    [68] Caviedes J, Oberti F. A new sharpness metric based on local kurtosis, edge and energy information. Signal Processing: Image Communication, 2004, 19(2): 147-161 doi: 10.1016/j.image.2003.08.002
    [69] 张士杰, 李俊山, 杨亚威, 张仲敏. 湍流退化红外图像降晰函数辨识. 光学 精密工程, 2013, 21(2): 514-521 doi: 10.3788/OPE.20132102.0514

    Zhang Shi-Jie, Li Jun-Shan, Yang Ya-Wei, Zhang Zhong-Min. Blur identification of turbulence-degraded IR images. Optics and Precision Engineering, 2013, 21(2): 514-521 doi: 10.3788/OPE.20132102.0514
    [70] Zhang S Q, Wu T, Xu X H, Cheng Z M, Chang C C. No-reference image blur assessment based on SIFT and DCT. Journal of Information Hiding and Multimedia Signal Processing, 2018, 9(1): 219-231
    [71] Zhang S Q, Li P C, Xu X H, Li L, Chang C C. No-reference image blur assessment based on response function of singular values. Symmetry, 2018, 10(8): Article No. 304
    [72] 卢亚楠, 谢凤英, 周世新, 姜志国, 孟如松. 皮肤镜图像散焦模糊与光照不均混叠时的无参考质量评价. 自动化学报, 2014, 40(3): 480-488

    Lu Ya-Nan, Xie Feng-Ying, Zhou Shi-Xin, Jiang Zhi-Guo, Meng Ru-Song. Non-reference quality assessment of dermoscopy images with defocus blur and uneven illumination distortion. Acta Automatica Sinica, 2014, 40(3): 480-488
    [73] Tong H H, Li M J, Zhang H J, Zhang C S. Blur detection for digital images using wavelet transform. In: Proceedings of the 2004 IEEE International Conference on Multimedia and Expo (ICME). Taipei, China: IEEE, 2004. 17−20
    [74] Ferzli R, Karam L J. No-reference objective wavelet based noise immune image sharpness metric. In: Proceedings of the 2005 IEEE International Conference on Image Processing. Genova, Italy: IEEE, 2005. Article No. I-405
    [75] Kerouh F. A no reference quality metric for measuring image blur in wavelet domain. International Journal of Digital Information and Wireless Communications, 2012, 4(1): 803-812
    [76] Vu P V, Chandler D M. A fast wavelet-based algorithm for global and local image sharpness estimation. IEEE Signal Processing Letters, 2012, 19(7): 423-426 doi: 10.1109/LSP.2012.2199980
    [77] Gvozden G, Grgic S, Grgic M. Blind image sharpness assessment based on local contrast map statistics. Journal of Visual Communication and Image Representation, 2018, 50: 145-158 doi: 10.1016/j.jvcir.2017.11.017
    [78] Wang Z, Simoncelli E P. Local phase coherence and the perception of blur. In: Proceedings of the 16th International Conference on Neural Information Processing Systems. Whistler British Columbia, Canada: MIT Press, 2003. 1435−1442
    [79] Ciancio A, da Costa A L N T, da Silva E A B, Said A, Samadani R, Obrador P. Objective no-reference image blur metric based on local phase coherence. Electronics Letters, 2009, 45(23): 1162-1163 doi: 10.1049/el.2009.1800
    [80] Hassen R, Wang Z, Salama M. No-reference image sharpness assessment based on local phase coherence measurement. In: Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. Dallas, USA: IEEE, 2010. 2434−2437
    [81] Hassen R, Wang Z, Salama M M A. Image sharpness assessment based on local phase coherence. IEEE Transactions on Image Processing, 2013, 22(7): 2798-2810 doi: 10.1109/TIP.2013.2251643
    [82] Do M N, Vetterli M. The contourlet transform: An efficient directional multiresolution image representation. IEEE Transactions on Image Processing, 2005, 14(12): 2091-2106 doi: 10.1109/TIP.2005.859376
    [83] Lou Bin, Shen Hai-Bin, Zhao Wu-Feng, Yan Xiao-Lang. No-reference image quality assessment based on statistical model of natural image. Journal of Zhejiang University (Engineering Science), 2010, 44(2): 248-252 doi: 10.3785/j.issn.1008-973X.2010.02.007 (in Chinese)
    [84] Jiao Shu-Hong, Qi Huan, Lin Wei-Si, Tang Lin, Shen Wei-He. No-reference quality assessment based on the statistics in Contourlet domain. Journal of Jilin University (Engineering and Technology Edition), 2016, 46(2): 639-645 (in Chinese)
    [85] Hosseini M S, Zhang Y Y, Plataniotis K N. Encoding visual sensitivity by MaxPol convolution filters for image sharpness assessment. IEEE Transactions on Image Processing, 2019, 28(9): 4510-4525 doi: 10.1109/TIP.2019.2906582
    [86] Moorthy A K, Bovik A C. A two-step framework for constructing blind image quality indices. IEEE Signal Processing Letters, 2010, 17(5): 513-516 doi: 10.1109/LSP.2010.2043888
    [87] Moorthy A K, Bovik A C. Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Transactions on Image Processing, 2011, 20(12): 3350-3364 doi: 10.1109/TIP.2011.2147325
    [88] Liu L X, Liu B, Huang H, Bovik A C. No-reference image quality assessment based on spatial and spectral entropies. Signal Processing: Image Communication, 2014, 29(8): 856-863 doi: 10.1016/j.image.2014.06.006
    [89] Chen Yong, Shuai Feng, Fan Qiang. A no-reference image quality assessment based on distribution characteristics of natural statistics. Journal of Electronics & Information Technology, 2016, 38(7): 1645-1653 (in Chinese)
    [90] Zhang Y, Chandler D M. Opinion-unaware blind quality assessment of multiply and singly distorted images via distortion parameter estimation. IEEE Transactions on Image Processing, 2018, 27(11): 5433-5448 doi: 10.1109/TIP.2018.2857413
    [91] Saad M A, Bovik A C, Charrier C. Blind image quality assessment: A natural scene statistics approach in the DCT domain. IEEE Transactions on Image Processing, 2012, 21(8): 3339-3352 doi: 10.1109/TIP.2012.2191563
    [92] Saad M A, Bovik A C, Charrier C. A DCT statistics-based blind image quality index. IEEE Signal Processing Letters, 2010, 17(6): 583-586 doi: 10.1109/LSP.2010.2045550
    [93] Liu L X, Dong H P, Huang H, Bovik A C. No-reference image quality assessment in curvelet domain. Signal Processing: Image Communication, 2014, 29(4): 494-505 doi: 10.1016/j.image.2014.02.004
    [94] Zhang Y, Chandler D M. No-reference image quality assessment based on log-derivative statistics of natural scenes. Journal of Electronic Imaging, 2013, 22(4): Article No. 043025
    [95] Li Jun-Feng. No-reference image quality assessment based on natural scene statistics in RGB color space. Acta Automatica Sinica, 2015, 41(9): 1601-1615 (in Chinese)
    [96] Mittal A, Moorthy A K, Bovik A C. No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing, 2012, 21(12): 4695-4708 doi: 10.1109/TIP.2012.2214050
    [97] Tang Yi-Ling, Jiang Shun-Liang, Xu Shao-Ping. An improved BRISQUE algorithm based on non-zero mean generalized Gaussian model and global structural correlation coefficients. Journal of Computer-Aided Design & Computer Graphics, 2018, 30(2): 298-308 (in Chinese)
    [98] Ye P, Doermann D. No-reference image quality assessment using visual codebooks. IEEE Transactions on Image Processing, 2012, 21(7): 3129-3138 doi: 10.1109/TIP.2012.2190086
    [99] Xue W F, Mou X Q, Zhang L, Bovik A C, Feng X C. Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features. IEEE Transactions on Image Processing, 2014, 23(11): 4850-4862 doi: 10.1109/TIP.2014.2355716
    [100] Smola A J, Schölkopf B. A tutorial on support vector regression. Statistics and Computing, 2004, 14(3): 199-222 doi: 10.1023/B:STCO.0000035301.49549.88
    [101] Chen Yong, Wu Ming-Ming, Fang Hao, Liu Huan-Lin. No-reference image quality assessment based on differential excitation. Acta Automatica Sinica, 2020, 46(8): 1727-1737 (in Chinese)
    [102] Li Q H, Lin W S, Xu J T, Fang Y M. Blind image quality assessment using statistical structural and luminance features. IEEE Transactions on Multimedia, 2016, 18(12): 2457-2469 doi: 10.1109/TMM.2016.2601028
    [103] Li C F, Zhang Y, Wu X J, Zheng Y H. A multi-scale learning local phase and amplitude blind image quality assessment for multiply distorted images. IEEE Access, 2018, 6: 64577-64586 doi: 10.1109/ACCESS.2018.2877714
    [104] Gao F, Tao D C, Gao X B, Li X L. Learning to rank for blind image quality assessment. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(10): 2275-2290 doi: 10.1109/TNNLS.2014.2377181
    [105] Sang Qing-Bing, Li Chao-Feng, Wu Xiao-Jun. No-reference blurred image quality assessment based on gray level co-occurrence matrix. Pattern Recognition and Artificial Intelligence, 2013, 26(5): 492-497 doi: 10.3969/j.issn.1003-6059.2013.05.012 (in Chinese)
    [106] Oh T, Park J, Seshadrinathan K, Lee S, Bovik A C. No-reference sharpness assessment of camera-shaken images by analysis of spectral structure. IEEE Transactions on Image Processing, 2014, 23(12): 5428-5439 doi: 10.1109/TIP.2014.2364925
    [107] Li L D, Xia W H, Lin W S, Fang Y M, Wang S Q. No-reference and robust image sharpness evaluation based on multiscale spatial and spectral features. IEEE Transactions on Multimedia, 2017, 19(5): 1030-1040 doi: 10.1109/TMM.2016.2640762
    [108] Li L D, Yan Y, Lu Z L, Wu J J, Gu K, Wang S Q. No-reference quality assessment of deblurred images based on natural scene statistics. IEEE Access, 2017, 5: 2163-2171 doi: 10.1109/ACCESS.2017.2661858
    [109] Liu L X, Gong J C, Huang H, Sang Q B. Blind image blur metric based on orientation-aware local patterns. Signal Processing: Image Communication, 2020, 80: Article No. 115654
    [110] Cai H, Wang M J, Mao W D, Gong M L. No-reference image sharpness assessment based on discrepancy measures of structural degradation. Journal of Visual Communication and Image Representation, 2020, 71: Article No. 102861
    [111] Li Chao-Feng, Tang Guo-Feng, Wu Xiao-Jun, Ju Yi-Wen. No-reference image quality assessment with learning phase congruency feature. Journal of Electronics & Information Technology, 2013, 35(2): 484-488 (in Chinese)
    [112] Li C F, Bovik A C, Wu X J. Blind image quality assessment using a general regression neural network. IEEE Transactions on Neural Networks, 2011, 22(5): 793-799 doi: 10.1109/TNN.2011.2120620
    [113] Liu L X, Hua Y, Zhao Q J, Huang H, Bovik A C. Blind image quality assessment by relative gradient statistics and adaboosting neural network. Signal Processing: Image Communication, 2016, 40: 1-15 doi: 10.1016/j.image.2015.10.005
    [114] Shen Li-Li, Hang Ning. No-reference image quality assessment using joint multiple edge detection. Chinese Journal of Engineering, 2018, 40(8): 996-1004 (in Chinese)
    [115] Liu Y T, Gu K, Wang S Q, Zhao D B, Gao W. Blind quality assessment of camera images based on low-level and high-level statistical features. IEEE Transactions on Multimedia, 2019, 21(1): 135-146 doi: 10.1109/TMM.2018.2849602
    [116] Kang L, Ye P, Li Y, Doermann D. Convolutional neural networks for no-reference image quality assessment. In: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Columbus, USA: IEEE, 2014. 1733−1740
    [117] Kim J, Lee S. Fully deep blind image quality predictor. IEEE Journal of Selected Topics in Signal Processing, 2017, 11(1): 206-220 doi: 10.1109/JSTSP.2016.2639328
    [118] Kim J, Nguyen A D, Lee S. Deep CNN-based blind image quality predictor. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(1): 11-24 doi: 10.1109/TNNLS.2018.2829819
    [119] Guan J W, Yi S, Zeng X Y, Cham W K, Wang X G. Visual importance and distortion guided deep image quality assessment framework. IEEE Transactions on Multimedia, 2017, 19(11): 2505-2520 doi: 10.1109/TMM.2017.2703148
    [120] Bianco S, Celona L, Napoletano P, Schettini R. On the use of deep learning for blind image quality assessment. Signal, Image and Video Processing, 2018, 12(2): 355-362 doi: 10.1007/s11760-017-1166-8
    [121] Pan D, Shi P, Hou M, Ying Z F, Fu S Z, Zhang Y. Blind predicting similar quality map for image quality assessment. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, USA: IEEE, 2018. 6373−6382
    [122] He L H, Zhong Y Z, Lu W, Gao X B. A visual residual perception optimized network for blind image quality assessment. IEEE Access, 2019, 7: 176087-176098 doi: 10.1109/ACCESS.2019.2957292
    [123] Zhang W X, Ma K D, Yan J, Deng D X, Wang Z. Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 30(1): 36-47 doi: 10.1109/TCSVT.2018.2886771
    [124] Cai W P, Fan C E, Zou L, Liu Y F, Ma Y, Wu M Y. Blind image quality assessment based on classification guidance and feature aggregation. Electronics, 2020, 9(11): Article No. 1811
    [125] Li D Q, Jiang T T, Jiang M. Exploiting high-level semantics for no-reference image quality assessment of realistic blur images. In: Proceedings of the 25th ACM International Conference on Multimedia. Mountain View, USA: ACM, 2017. 378−386
    [126] Yu S D, Jiang F, Li L D, Xie Y Q. CNN-GRNN for image sharpness assessment. In: Proceedings of the 2016 Asian Conference on Computer Vision. Taipei, China: Springer, 2016. 50−61
    [127] Yu S D, Wu S B, Wang L, Jiang F, Xie Y Q, Li L D. A shallow convolutional neural network for blind image sharpness assessment. PLoS One, 2017, 12(5): Article No. e0176632
    [128] Li D Q, Jiang T T, Lin W S, Jiang M. Which has better visual quality: The clear blue sky or a blurry animal? IEEE Transactions on Multimedia, 2019, 21(5): 1221-1234 doi: 10.1109/TMM.2018.2875354
    [129] Li Y M, Po L M, Xu X Y, Feng L T, Yuan F, Cheung C H, et al. No-reference image quality assessment with shearlet transform and deep neural networks. Neurocomputing, 2015, 154: 94-109 doi: 10.1016/j.neucom.2014.12.015
    [130] Gao F, Yu J, Zhu S G, Huang Q M, Tian Q. Blind image quality prediction by exploiting multi-level deep representations. Pattern Recognition, 2018, 81: 432-442 doi: 10.1016/j.patcog.2018.04.016
    [131] Bosse S, Maniry D, Müller K R, Wiegand T, Samek W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Transactions on Image Processing, 2018, 27(1): 206-219 doi: 10.1109/TIP.2017.2760518
    [132] Ma K D, Liu W T, Zhang K, Duanmu Z F, Wang Z, Zuo W M. End-to-end blind image quality assessment using deep neural networks. IEEE Transactions on Image Processing, 2018, 27(3): 1202-1213 doi: 10.1109/TIP.2017.2774045
    [133] Yang S, Jiang Q P, Lin W S, Wang Y T. SGDNet: An end-to-end saliency-guided deep neural network for no-reference image quality assessment. In: Proceedings of the 27th ACM International Conference on Multimedia. Nice, France: ACM, 2019. 1383−1391
    [134] Yan B, Bare B, Tan W M. Naturalness-aware deep no-reference image quality assessment. IEEE Transactions on Multimedia, 2019, 21(10): 2603-2615 doi: 10.1109/TMM.2019.2904879
    [135] Yan Q S, Gong D, Zhang Y N. Two-stream convolutional networks for blind image quality assessment. IEEE Transactions on Image Processing, 2019, 28(5): 2200-2211 doi: 10.1109/TIP.2018.2883741
    [136] Lin K Y, Wang G X. Hallucinated-IQA: No-reference image quality assessment via adversarial learning. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, USA: IEEE, 2018. 732−741
    [137] Yang H T, Shi P, Zhong D X, Pan D, Ying Z F. Blind image quality assessment of natural distorted image based on generative adversarial networks. IEEE Access, 2019, 7: 179290-179303 doi: 10.1109/ACCESS.2019.2957235
    [138] Hou W L, Gao X B, Tao D C, Li X L. Blind image quality assessment via deep learning. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(6): 1275-1286 doi: 10.1109/TNNLS.2014.2336852
    [139] He S Y, Liu Z Z. Image quality assessment based on adaptive multiple Skyline query. Signal Processing: Image Communication, 2020, 80: Article No. 115676
    [140] Ma K D, Liu W T, Liu T L, Wang Z, Tao D C. dipIQ: Blind image quality assessment by learning-to-rank discriminable image pairs. IEEE Transactions on Image Processing, 2017, 26(8): 3951-3964 doi: 10.1109/TIP.2017.2708503
    [141] Zhang Y B, Wang H Q, Tan F F, Chen W J, Wu Z R. No-reference image sharpness assessment based on rank learning. In: Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP). Taipei, China: IEEE, 2019. 2359−2363
    [142] Yang J C, Sim K, Jiang B, Lu W. Blind image quality assessment utilising local mean eigenvalues. Electronics Letters, 2018, 54(12): 754-756 doi: 10.1049/el.2018.0958
    [143] Li L D, Wu D, Wu J J, Li H L, Lin W S, Kot A C. Image sharpness assessment by sparse representation. IEEE Transactions on Multimedia, 2016, 18(6): 1085-1097 doi: 10.1109/TMM.2016.2545398
    [144] Lu Q B, Zhou W G, Li H Q. A no-reference image sharpness metric based on structural information using sparse representation. Information Sciences, 2016, 369: 334-346 doi: 10.1016/j.ins.2016.06.042
    [145] Ye P, Kumar J, Kang L, Doermann D. Unsupervised feature learning framework for no-reference image quality assessment. In: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, USA: IEEE, 2012. 1098−1105
    [146] Xu J T, Ye P, Li Q H, Du H Q, Liu Y, Doermann D. Blind image quality assessment based on high order statistics aggregation. IEEE Transactions on Image Processing, 2016, 25(9): 4444-4457 doi: 10.1109/TIP.2016.2585880
    [147] Xue W F, Zhang L, Mou X Q. Learning without human scores for blind image quality assessment. In: Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, USA: IEEE, 2013. 995−1002
    [148] Wu Q B, Li H L, Meng F M, Ngan K N, Luo B, Huang C, et al. Blind image quality assessment based on multichannel feature fusion and label transfer. IEEE Transactions on Circuits and Systems for Video Technology, 2016, 26(3): 425-440 doi: 10.1109/TCSVT.2015.2412773
    [149] Jiang Q P, Shao F, Lin W S, Gu K, Jiang G Y, Sun H F. Optimizing multistage discriminative dictionaries for blind image quality assessment. IEEE Transactions on Multimedia, 2018, 20(8): 2035-2048 doi: 10.1109/TMM.2017.2763321
    [150] Mittal A, Soundararajan R, Bovik A C. Making a "completely blind" image quality analyzer. IEEE Signal Processing Letters, 2013, 20(3): 209-212 doi: 10.1109/LSP.2012.2227726
    [151] Zhang L, Zhang L, Bovik A C. A feature-enriched completely blind image quality evaluator. IEEE Transactions on Image Processing, 2015, 24(8): 2579-2591 doi: 10.1109/TIP.2015.2426416
    [152] Jiao S H, Qi H, Lin W S, Shen W H. Fast and efficient blind image quality index in spatial domain. Electronics Letters, 2013, 49(18): 1137-1138 doi: 10.1049/el.2013.1837
    [153] Abdalmajeed S, Jiao S H. No-reference image quality assessment algorithm based on Weibull statistics of log-derivatives of natural scenes. Electronics Letters, 2014, 50(8): 595-596 doi: 10.1049/el.2013.3585
    [154] Nan Dong, Bi Du-Yan, Zha Yu-Fei, Zhang Ze, Li Quan-He. A no-reference image quality assessment method based on parameter estimation. Journal of Electronics & Information Technology, 2013, 35(9): 2066-2072 (in Chinese)
    [155] Panetta K, Gao C, Agaian S. No reference color image contrast and quality measures. IEEE Transactions on Consumer Electronics, 2013, 59(3): 643-651 doi: 10.1109/TCE.2013.6626251
    [156] Gu J, Meng G F, Redi J A, Xiang S M, Pan C H. Blind image quality assessment via vector regression and object oriented pooling. IEEE Transactions on Multimedia, 2018, 20(5): 1140-1153 doi: 10.1109/TMM.2017.2761993
    [157] Wu Q B, Li H L, Wang Z, Meng F M, Luo B, Li W, et al. Blind image quality assessment based on rank-order regularized regression. IEEE Transactions on Multimedia, 2017, 19(11): 2490-2504 doi: 10.1109/TMM.2017.2700206
    [158] Al-Bandawi H, Deng G. Blind image quality assessment based on Benford’s law. IET Image Processing, 2018, 12(11): 1983-1993 doi: 10.1049/iet-ipr.2018.5385
    [159] Wu Q B, Li H L, Ngan K N, Ma K D. Blind image quality assessment using local consistency aware retriever and uncertainty aware evaluator. IEEE Transactions on Circuits and Systems for Video Technology, 2018, 28(9): 2078-2089 doi: 10.1109/TCSVT.2017.2710419
    [160] Deng C W, Wang S G, Li Z, Huang G B, Lin W S. Content-insensitive blind image blurriness assessment using Weibull statistics and sparse extreme learning machine. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2019, 49(3): 516-527 doi: 10.1109/TSMC.2017.2718180
    [161] Wang Z, Li Q. Information content weighting for perceptual image quality assessment. IEEE Transactions on Image Processing, 2011, 20(5): 1185-1198 doi: 10.1109/TIP.2010.2092435
Publication history
  • Received: 2020-12-17
  • Accepted: 2021-05-12
  • Published online: 2021-06-20
  • Issue date: 2022-03-25
