摘要: 图像的模糊问题影响人们对信息的感知、获取及图像的后续处理. 无参考模糊图像质量评价是该问题的主要研究方向之一. 本文分析了近20年来无参考模糊图像质量评价相关技术的发展. 首先, 结合主要数据集对图像模糊失真进行分类说明; 其次, 对主要的无参考模糊图像质量评价方法进行分类介绍与详细分析; 随后, 介绍了用来比较无参考模糊图像质量评价方法性能优劣的主要评价指标; 接着, 选择典型数据集及评价指标, 并采用常见的无参考模糊图像质量评价方法进行性能比较; 最后, 对无参考模糊图像质量评价的相关技术及发展趋势进行总结与展望.

Abstract: Blurriness distortion affects the perception and acquisition of information as well as subsequent image processing. No-reference blurred image quality assessment is one of the main research directions for this problem. This paper analyzes the development of related techniques over the past 20 years. Firstly, different types of blurriness distortion are described in combination with the main databases. Secondly, the main methods for no-reference blurred image quality assessment are classified and analyzed in detail. Thirdly, performance measures for no-reference blurred image quality assessment are introduced. Then, typical databases, performance measures and methods are selected for performance comparison. Finally, the relevant techniques and development trends of no-reference blurred image quality assessment are summarized and prospected.
传统发电调控框架在保持多区域互联大电网的系统有功平衡、维持系统频率稳定等方面发挥了重要作用. 随着相关研究的不断深入, 传统发电调控框架逐渐发展成为包含三种不同时间尺度问题的调控框架[1-2]: 1)机组组合(Unit commitment, UC)[3-4]; 2)经济调度(Economic dispatch, ED)[5]; 3)自动发电控制(Automatic generation control, AGC)和发电指令调度(Generation command dispatch, GCD)[6-9]. 然而, 传统发电调控框架在以下方面仍有改善空间: 1)在传统发电调控框架中, 较长时间尺度下的调控有可能产生不准确的控制指令; 同时, 不同时间尺度调控之间存在的不协调问题有可能导致反向调节现象. 2)在传统发电调控框架中, UC和ED问题的求解以下一时间段的负荷预测结果为条件, 而实时的AGC和GCD则基于AGC机组特性给出指令; 从长时间尺度的角度来看, AGC和GCD做出的控制结果并非最优. 3)一般情况下, 不同时间尺度下的优化目标均不相同, 因此无论对长期还是短期而言, 仅依据这些优化结果做出的调控指令都不是最优的.
研究者为了解决传统框架中存在的部分问题, 提出了大量集成算法或集成框架.文献[10]提出针对微电网实时调度的AGC和ED集成方法.文献[11]研究了考虑含有AGC仿射索引过程的鲁棒经济调度.文献[12]从优化的角度, 将ED和AGC控制器相结合.然而, 这些算法均不能完整地对传统发电调控框架进行改善.
强化学习(Reinforcement learning, RL), 又称再励学习、评价学习, 既可看作是人工智能领域中一种重要的机器学习方法, 也被认为是属于马尔科夫决策过程(Markov decision process, MDP)和动态优化方法的一个独立分支.互联电网AGC是一个动态多级决策问题, 其控制过程可视为马尔科夫决策过程.文献[13]针对微电网孤岛运行模式下新能源发电强随机性导致的系统频率波动, 提出基于多智能体相关均衡强化学习(Correlated equilibrium Q ($\lambda$), CEQ ($\lambda$))的微电网智能发电控制方法.文献[14]针对非马尔科夫环境下火电占优的互联电网AGC控制策略, 引入随机最优控制中Q($\lambda$)学习的"后向估计"原理, 有效解决火电机组大时滞环节带来的延时回报问题.然而, 这些方法的采用均没有从整体上对传统发电调控框架进行改善.
为了完整地解决传统发电调控框架中存在的问题, 本文提出一种实时经济调度与控制(Real-time economic generation dispatch and control, REG)框架替代传统的发电控制框架. 除此之外, 为适应REG框架, 还提出一种懒惰强化学习(Lazy reinforcement learning, LRL)算法. 由于懒惰强化学习算法需要大量数据进行训练, 因此采用基于人工社会-计算实验-平行执行(Artificial societies-Computational experiments-Parallel execution, ACP)和社会系统的平行系统, 在短时间内产生大量数据以满足所提算法的需要. 文献[15]提出基于ACP的平行系统进行社会计算的理论. 文献[16]提出一种可用于信息和控制的基于信息-物理系统和ACP的分散自治系统. 平行系统或平行时代的理论已经被应用到很多领域, 例如平行管理系统[17]、区块链领域[18]、机器学习[19]和核电站安全可靠性分析[20]等. 在一个实际系统中, 社会目标也被考虑在CPS中, 也可称为信息物理社会融合系统(CPSS)[21]; 同时, CPS的概念中应当加入社会系统, 即"智能电网"或"能源互联网"[22].
因此, 基于REG框架的控制方法是一种适用于互联大电网发电调度和控制的统一时间尺度的调控方法.
虽然采用基于ACP和社会系统的平行系统可以快速获取海量的数据, 但是这些数据中既存在调控效果较好的数据, 也有调控效果较差的数据. 为了解决这一问题, 设计了一种选择算子, 对有利于LRL训练的数据进行筛选保留. 另外, 由于AGC机组存在大量约束限制, 设计了一种松弛算子对优化结果进行约束.
为了对比人工神经网络(Artificial neural network, ANN)和LRL的调控效果, 本文设计了一种基于人工神经网络和松弛算子结合的松弛人工神经网络算法(Relaxed artificial neural network, RANN).本文提出的LRL算法的特性归纳如下:
1) 作为一种统一时间尺度的控制器, 从长远角度来看, LRL可以避免不同时间尺度需要协同调控问题.
2) 为LRL设计了一个强化网络, 可为一个区域的所有AGC机组提供多个输出, 且采用松弛算子满足AGC机组的约束.
3) 懒惰学习的控制策略可以采用从平行系统不断产生的海量数据进行在线更新.这有利于LRL进行训练.
1. 传统发电调控框架概述
如图 1所示, 传统发电调控框架包含UC, ED, AGC和GCD四个过程.
UC负责制定长期(1天)的机组开停和有功出力计划; 然后ED重新制定短期(15分钟)所有已开启的机组的发电指令; 最后AGC和GCD为所有AGC机组再次重新制定实时发电指令.
1.1 模型分析
1.1.1 机组组合模型
UC的目标是在给定时间周期内制定出最优的机组开停和生产出力计划.因此, UC问题是一个随机混合0-1整数规划问题, 可以采用优化算法进行求解.
UC问题的优化目标是使总发电成本最低, UC问题的约束包括:有功平衡约束、热备用约束、有功出力限制约束以及发电机调节比率约束, 其目标函数表达式及约束条件为
$ \begin{align} &\min \sum\limits_{t = 1}^T {\sum\limits_{j = 1}^{{J_i}} {[{F_j}({P_{j, t}}){u_{j, t}} + S{U_{j, t}}(1 - {u_{j, t - 1}}){u_{j, t}}]} }\notag\\ &\, \mathrm{s.t.} \begin{cases} \sum\limits_{j = 1}^{{J_i}} {{P_{j, t}}{u_{j, t}} = P{D_{i, t}}} \\[1mm] \sum\limits_{j = 1}^{{J_i}} {P_j^{\max }{u_{j, t}} \ge P{D_{i, t}} + S{R_{i, t}}} \\[1mm] {u_{j, t}}P_j^{\min } \le {P_{j, t}} \le {u_{j, t}}P_j^{\max }\\[1mm] 0 \le {P_{j, t}} - {P_{j, (t - 1)}} \le P_j^{{\rm{up}}}\\[1mm] 0 \le {P_{j, (t - 1)}} - {P_{j, t}} \le P_j^{{\rm{down}}} \end{cases} \end{align} $ (1)

其中, $T$为给定时间周期内的时间断面的个数, 一般设定为24; $J_i$为第$i$个区域内的发电机组个数; $u_{j, t}$为第$j$个发电机组在第$t$时间断面的状态, 取值为1或0, 分别代表机组开启和关停状态; 总发电成本包括燃料成本$F_j(P_{j, t})$和启动成本$SU_{j, t}$; $P{D_{i, t}}$为第$i$个区域在第$t$时间段内的负荷需求总量; $P_j^{\min }$和$P_j^{\max }$分别为第$i$区域的第$j$个发电机组有功出力的最小值和最大值; $S{R_{i, t}}$为第$i$个区域在第$t$时间段内所需的热备用容量; $P_j^{{\rm{up}}}$和$P_j^{{\rm{down}}}$分别为第$j$台发电机组上调和下调的最大幅度限制; $T_j^{\min\mbox{-}\rm{up}}$为第$j$个发电机组持续开启时间的最小值; $T_j^{\min\mbox{-}\rm{down}}$为第$j$个发电机组持续停机时间的最小值.

燃料成本$F_j(P_{j, t})$, 启动成本$SU_{j, t}$以及机组最小启停时间约束的计算公式如下:

$ {F_j}({P_{j, t}}) = {a_j} + {b_j}{P_{j, t}} + {c_j}P_{j, t}^2 $ (2)

$ \begin{align} &S{U_{j, t}} =\notag\\ &\ \ \ \begin{cases} S{U_{{\rm{H}}, j}}, & T_j^{{\rm{min\mbox{-}down}}} \le T_{j, t}^{{\rm{down}}} \le T_j^{{\rm{min\mbox{-}down}}} + T_j^{{\rm{cold}}}\\ S{U_{{\rm{C}}, j}}, &T_{j, t}^{{\rm{down}}} > T_j^{{\rm{min\mbox{-}down}}} + T_j^{{\rm{cold}}} \end{cases} \end{align} $ (3)

$ \begin{align} \begin{cases} T_{j}^{{\rm{up}}} \geq T_j^{\min\mbox{-}{\rm{up}}}\\ T_{j}^{{\rm{down}}} \geq T_j^{\min\mbox{-}{\rm{down}}} \end{cases} \end{align} $ (4)

其中, $P_{j, t}$为第$j$台发电机组在第$t$个时间断面的有功出力; $a_j$, $b_j$和$c_j$分别为发电成本的常数项、一次项和二次项系数; $T_{j}^{{\rm{up}}}$和$T_{j}^{{\rm{down}}}$分别为第$j$台发电机组开启和关停的累积时间; $T_j^{{\rm{cold}}}$为第$j$台发电机组从完全关停状态进行冷启动所需的时间; $SU_{{\rm{H}}, j}$和$SU_{{\rm{C}}, j}$分别为第$j$台发电机组热启动和冷启动所需的成本.
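为直观起见, 式(2)与式(3)的成本计算可用如下Python草稿示意(函数名与参数命名均为本文示例假设, 并非原文实现; 假定式(4)已保证累计停机时间不小于最小停机时间):

```python
# 式(2): 燃料成本 F_j(P) = a_j + b_j*P + c_j*P^2
def fuel_cost(p, a, b, c):
    return a + b * p + c * p ** 2

# 式(3): 依据累计停机时间 t_down 选取热启动或冷启动成本
def startup_cost(t_down, t_min_down, t_cold, su_hot, su_cold):
    if t_min_down <= t_down <= t_min_down + t_cold:
        return su_hot   # 停机时间较短: 热启动成本
    return su_cold      # 停机时间超过 t_min_down + t_cold: 冷启动成本
```

式(1)的目标函数即是将各时间断面、各机组的上述两项成本按启停状态加权求和.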
1.1.2 经济调度模型
ED采用优化算法从经济角度重新制定发电命令.通常ED的优化目标包括两部分:经济目标和碳排放目标.将两种优化目标进行线性权重结合, 得到最终的ED的模型如下:
$ \begin{align} &\min {F_{{\rm{total}}}} = \sum\limits_{j = 1}^{{J_i}} {(\omega F_j^{\rm{e}}({P_j}) + (1 - \omega )F_j^{\rm{c}}({P_j}))}\notag \\ &\, \mathrm{s.t.}\begin{cases} P{D_i} - \sum\limits_{j = 1}^{{J_i}} {{P_j} = 0} \\ P_j^{\min } \le {P_j} \le P_j^{\max }\\ {P_{j, t}} - {P_{j, t - 1}} \le P_j^{{\rm{up}}}\\ {P_{j, t - 1}} - {P_{j, t}} \le P_j^{{\rm{down}}} \end{cases} \end{align} $ (5)

其中, $PD_i$为第$i$个区域的系统总负荷量, $\omega$为经济目标权重.

经济目标和碳排放目标具体表达如下:

$ \begin{align} F_{{\rm{total}}}^{\rm{e}} = \sum\limits_{j = 1}^{{J_i}} {F_j^{\rm{e}}} ({P_j}) = \sum\limits_{j = 1}^{{J_i}} {({c_j}P_j^2 + {b_j}{P_j} + {a_j})} \end{align} $ (6)

$ \begin{align} F_{{\rm{total}}}^{\rm{c}} = \sum\limits_{j = 1}^{{J_i}} {F_j^{\rm{c}}} ({P_j}) = \sum\limits_{j = 1}^{{J_i}} {({\alpha _j}P_j^2 + {\beta _j}{P_j} + {\gamma _j})} \end{align} $ (7)

式中, $F_j^{\rm{e}}({P_j})$为第$j$台发电机组的发电成本; ${P_j}$为第$j$台发电机组的有功出力; $F_j^{\rm{c}}({P_j})$为第$j$台发电机组的碳排放量; $\gamma _j$, $\beta _j$和$\alpha _j$分别表示第$j$台发电机组关于碳排放的常数项、一次项和二次项系数.
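式(5)~(7)的线性加权目标可用如下Python草稿示意(函数名与系数排列约定为本文示例假设, 并非原文实现):

```python
# 式(5)~(7): 经济目标与碳排放目标的线性加权
# econ 中每项为 (c_j, b_j, a_j), emis 中每项为 (alpha_j, beta_j, gamma_j)
def ed_objective(P, econ, emis, w=0.5):
    total = 0.0
    for p, (c, b, a), (al, be, ga) in zip(P, econ, emis):
        fe = c * p ** 2 + b * p + a       # 式(6): 第j台机组的发电成本
        fc = al * p ** 2 + be * p + ga    # 式(7): 第j台机组的碳排放量
        total += w * fe + (1 - w) * fc    # 式(5)目标函数中的被加项
    return total
```

实际求解时还需在式(5)的有功平衡、出力上下限与爬坡约束下对该目标进行寻优.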
1.1.3 自动发电控制模型
图 2是传统实时控制系统中包含两个区域的电力系统AGC模型. AGC控制器的输入为第$i$个区域的频率误差和区域控制误差(Area control error, ACE) $e_i$, 输出为第$i$个区域的发电命令. AGC模型的控制周期为秒级, 一般设定为4秒或8秒.
1.1.4 发电命令调度模型
GCD的输入为AGC产生的发电指令, 输出为第$i$个区域内所有AGC机组的发电命令$\Delta {P_{i, j}}$. 进而, AGC机组的实际发电指令$P_{i, j}^{{\rm{actual}}}$取ED和GCD发电指令之和, 即$P_{i, j}^{{\rm{actual}}} = {P_{i, j}} + \Delta {P_{i, j}}$. 在实际工程中, GCD的目标采用如式(5)所示的经济目标.
1.2 传统控制算法和优化算法分析
频率控制包含三种调节方式:一次调频、二次调频以及三次调频.一次调频通过调节发电机组在短时间内的有功出力, 进而调节系统频率.但是, 一次调频是一种有差调节方式.为了更好地平衡发电机和负荷之间的有功功率, 电力系统引入了二次调频和三次调频方式.二次调频和三次调频包含了多种算法的集成, 即集成了UC, ED, AGC和GCD.其中, AGC采用的是控制算法, 而UC, ED和GCD均为优化算法.因此, 传统发电调控算法是一种"优化算法+优化算法+控制算法+优化算法"的组合形式.
大量的优化算法被运用到UC, ED和GCD之中. 常用的优化算法有GA[23]、PSO[24]、模拟退火算法[25]、多元优化算法[26]、灰狼优化算法[27]、多目标极值优化算法[28]、混沌多目标机制优化算法[29]等. 同时, 多种控制算法被运用于AGC控制器中, 诸如传统的PID算法、模糊逻辑控制算法[30]、模糊PID[31]、滑动模式控制器[32]、自抗扰控制器[33]、分数阶PID[34]、Q学习[35]、Q($\lambda$)学习[14]、R($\lambda$)学习[36]以及分布式模型预测控制算法[37]等. 表 1展示了频率调节方式和传统发电调控框架之间的关系.
表 1 频率调节方式与传统发电调控框架之间的关系
Table 1 Relationship between regulation processes and conventional generation control framework

传统发电控制 | 调节方式 | 算法类型 | 时间间隔(s) | 输入 | 输出
UC | 三次调频 | 优化算法 | 86 400 | $PD_{i, t}$ | $u_{i, t, j}, P_{j, t}$
ED | 二次调频 | 优化算法 | 900 | $PD_i$ | $P_{i, j}$
AGC | 二次调频 | 控制算法 | 4 | $e_{i}, \Delta f_i$ | $\Delta P_i$
GCD | 二次调频 | 优化算法 | 4 | $\Delta P_i$ | $\Delta P_{i, j}$

在第$i$区域中, UC依据下一天的负荷预测值$PD_{i, t}$制定发电机的启动状态$u_{i, t, j}$以及出力水平$P_{j, t}$, 其中时间周期为一天中的每小时, 即$t =\{ 1, 2, \cdots, 24\}$; ED采用15分钟后的超短期负荷预测值$PD_i$制定有功出力值$P_{i, j}$; AGC控制器计算第$i$个区域的总发电需求量$\Delta P_i$; GCD将总的发电量$\Delta P_i$分配到每个AGC机组$\Delta P_{i, j}$.
2. 基于ACP的懒惰强化学习的实时经济调度与控制
2.1 懒惰强化学习和实时经济调度与控制
为了快速获取准确的发电调度与控制动作, 本文建立了大量的平行发电控制系统. 如图 3所示, 在平行发电系统中, 多重虚拟发电控制系统被用来对真实发电控制系统不断地进行仿真. 当虚拟发电控制系统的控制效果优于实际发电控制系统时, 两者之间会交换发电控制器的重要数据, 即虚拟发电控制系统将重要的控制器参数传递到真实发电控制系统, 而真实发电系统则将更新后的系统模型参数反馈回虚拟发电控制系统.
由于通过平行系统可以获取海量的数据, 如果采用传统学习方法对控制算法学习进行训练将花费大量的时间.因此, 需要采用一种更有效的学习算法对海量数据进行学习.本文针对平行发电控制系统的特点, 提出一种懒惰强化学习算法(LRL).如图 4所示, LRL由懒惰学习、选择算子、强化网络以及松弛算子四部分构成.提出的LRL算法可以设计成为基于REG框架的控制器, 可以替代传统的组合算法(UC, ED, AGC和GCD).因此, 基于REG框架的控制器的输入为频率误差$\Delta {f_i}$和ACE $e_i$, 输出为所有AGC机组的发电命令$\Delta {P_{i, j}}$.
LRL的懒惰学习将对下一个系统状态进行预测.因此, 懒惰学习的输入为频率误差$\Delta {f_i}$和ACE $e_i$.此外, 懒惰学习可以依据电力系统当前采取的动作集${\bf \it {A}}$预测电力系统的下一状态$\Delta {F'_{i, (t + 1)}}$.其中, 初始动作集合${\bf \it{A}}$描述如下:
$ \begin{align} {\bf \it{A}} = \left[ {\begin{array}{*{20}{c}} {{a_{1, 1}}}&{{a_{1, 2}}}& \cdots &{{a_{1, k}}}\\ {{a_{2, 1}}}&{{a_{2, 2}}}& \cdots &{{a_{2, k}}}\\ \vdots & \vdots & \ddots & \vdots \\ {{a_{{J_i}, 1}}}&{{a_{{J_i}, 2}}}& \cdots &{{a_{{J_i}, k}}} \end{array}} \right] \end{align} $ (8)

其中, ${\bf \it{A}}$具有$k$列, 每一列都是一个AGC机组的发电命令动作向量. 对下一状态的预测同样具有$k$列, 且每一列与每一个动作向量的预测相对应. 因此, $\Delta {F'_{i, (t + 1)}}$是依据所有$k$列动作向量预测而组成的$k$列预测矩阵.

采用懒惰学习方法估计未知函数的值与映射$g: {{\bf R}^m} \to {\bf R}$类似. 懒惰学习方法的输入和输出可以从矩阵$\Phi$获取, 描述如下:

$ \begin{align} \{ ({\varphi _1}, {y_1}), ({\varphi _2}, {y_2}), \cdots, ({\varphi _{{N_{{\rm{lazy}}}}}}, {y_{{N_{{\rm{lazy}}}}}})\} \end{align} $ (9)

其中, $\varphi _i$为$N_{\rm{lazy}}\times k$输入矩阵$\Phi$中的输入样本, $i=1, 2, \cdots, N_{\rm{lazy}}$; $y_i$为$N_{\rm{lazy}} \times 1$输出向量$\bf \it{y}$中的分量. 第$q$个查询点的预测值可以由下式计算:

$ \begin{align} \widehat {y}_q = \varphi _q^{\rm{T}}{({{\bf \it{Z}}^{\rm{T}}}{\bf \it{Z}})^{ - 1}}{{\bf \it{Z}}^{\rm{T}}}{\bf \it{v}} \end{align} $ (10)

其中, ${\bf \it{Z}}={\bf \it{W}}\Phi$; ${\bf \it{v}}={\bf \it{W}}{\bf \it{y}}$; ${\bf \it{W}}$是一个对角矩阵, ${\bf \it{W}}_{ii}=\omega_i$, $\omega_i$为从查询点$\varphi _q$到点$\varphi _i$的距离$d(\varphi _i, \varphi _q)$的权重函数. 从而, $({\bf \it{Z}}^{\rm{T}}{\bf \it{Z}}) \beta={\bf \it{Z}}^{\rm{T}} {\bf \it{v}}$可以作为一个局部加权回归模型. 其训练过程的误差校验方法可为留一法交叉校验(Leave-one-out cross-validation, LOOCV), 计算方式为

$ \begin{align} &{\rm{MS}}{{\rm{E}}^{{\rm{CV}}}}({\varphi _q}) =\nonumber\\[1mm] &\qquad \displaystyle\frac{1} {{\sum\limits_i {w_i^2} }}\sum\limits_i {{{\left( {\frac{{{v_i} - z_i^{\rm{T}}{{({{\bf \it{Z}}^{\rm{T}}}{\bf \it{Z}})}^{ - 1}} {{\bf \it{Z}}^{\rm{T}}}{\bf \it{v}}}}{{1 - z_i^{\rm{T}}{{({{\bf \it{Z}}^{\rm{T}}}{\bf \it{Z}})}^{ - 1}}{z_i}}}} \right)}^2}} = \nonumber\\[1mm] &\qquad \displaystyle\frac{1}{{\sum\limits_i {w_i^2} }}\sum\limits_i {{{\left( {{w_i}\frac{{{y_i} - \varphi _i^{\rm{T}}{{({{\bf \it{Z}}^{\rm{T}}}{\bf \it{Z}})}^{ - 1}}{{\bf \it{Z}}^{\rm{T}}} {\bf \it{v}}}}{{1 - z_i^{\rm{T}}{{({{\bf \it{Z}}^{\rm{T}}}{\bf \it{Z}})}^{ - 1}}{z_i}}}} \right)}^2}} = \nonumber\\[1mm] &\qquad \displaystyle\frac{1}{{\sum\limits_i {w_i^2} }}\sum\limits_i {{{\left( {{w_i}{e^{{\rm{CV}}}}(i)} \right)}^2}} \end{align} $ (11)

其中, ${e^{{\rm{CV}}}}(i)$为第$i$个留一误差, 计算方式为

$ \begin{align} e_{n + 1}^{{\rm{CV}}}(i) = \dfrac{{{y_i} - \varphi _i^{\rm{T}}{\beta _{n + 1}}}}{{1 + \varphi _i^{\rm{T}}{{\bf \it{P}}_{n + 1}}{\varphi _i}}} \end{align} $ (12)

其中, ${{\bf \it{P}}_n}$为矩阵${({{\bf \it{Z}}^{\rm{T}}}{\bf \it{Z}})^{ - 1}}$的回归逼近; ${\beta _n}$为$n$邻近的最优最小二乘序列参数; 且在$e_n^{{\rm{CV}}}(i)$中满足$1 \le i\le n$. ${\beta _{n + 1}}$的计算方法如下:

$ \begin{align} &{\beta _{n + 1}} = {\beta _n} + {\gamma _{n + 1}}{e_{n + 1}}\nonumber\\ & {e_{n + 1}} = {y_{n + 1}} - \varphi _{n + 1}^{\rm{T}}{\beta _n}\nonumber\\ & {\gamma _{n + 1}} = {{\bf \it{P}}_{n + 1}}{\varphi _{n + 1}}\nonumber\\ & {{\bf \it{P}}_{n + 1}} = {{\bf \it{P}}_n} - \frac{{{{\bf \it{P}}_n}{\varphi _{n + 1}}\varphi _{n + 1}^{\rm{T}}{{\bf \it{P}}_n}}}{{1 + \varphi _{n + 1}^{\rm{T}}{{\bf \it{P}}_n}{\varphi _{n + 1}}}} \end{align} $ (13)

因此, 针对REG问题, 所提LRL算法中懒惰学习离线学习和在线学习的输入和输出可见表 2.
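式(10)的局部加权回归预测可用如下Python草稿示意(其中高斯核权重为示例假设, 原文仅要求$\omega_i$为距离$d(\varphi_i, \varphi_q)$的权重函数; 函数名亦为示例):

```python
import numpy as np

# 式(10): y_q = phi_q^T (Z^T Z)^{-1} Z^T v, 其中 Z = W*Phi, v = W*y
def lazy_predict(Phi, y, phi_q, bandwidth=1.0):
    d = np.linalg.norm(Phi - phi_q, axis=1)    # 各样本到查询点的距离
    w = np.exp(-(d / bandwidth) ** 2)          # 距离权重 w_i (高斯核, 示例假设)
    Z = Phi * w[:, None]                       # Z = W Phi
    v = w * y                                  # v = W y
    beta = np.linalg.solve(Z.T @ Z, Z.T @ v)   # 求解 (Z^T Z) beta = Z^T v
    return float(phi_q @ beta)
```

实际的LRL实现还需结合式(11)~(13)的留一交叉校验与递推更新在线选取邻近样本, 此处仅示意单次查询的预测.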
表 2 懒惰强化学习输入输出量
Table 2 Inputs and outputs of lazy reinforcement learning

输入输出 | 懒惰学习 | 强化网络 | 懒惰强化学习
输入量 | $\Delta {f_i}, {e_i}, {\bf \it {A}}$ | $\Delta {F'_{i, (t + 1)}}$ | $\Delta {f_i}, {e_i}$
输出量 | $\Delta {f'_{i, (t + 1)}}$ | $\Delta {P_{i, j}}, j = 1, 2, \cdots, {J_i}$ | $\Delta {P_{i, j}}, j = 1, 2, \cdots, {J_i}$

LRL中的选择过程可以从下一状态$\Delta {F'_{i, (t + 1)}}$中选择最优的状态(最小的$| {\Delta {{f'}_{i, (t + 1)}}} |$).
LRL中的强化网络可以计算出总的发电命令$\Delta {P_i}$, 并分配$\Delta {P_{i, j}}$到第$i$个区域里的所有AGC机组上, 其中, $\Delta {P_i}=\sum_{j = 1}^{{J_i}} {\Delta {P_{i, j}}} $.强化网络由强化学习和一个反向传播神经网络(Back propagation neural network, BPNN)组成. Q学习是一种无需模型的控制算法.基于Q学习的控制器可以在线根据环境变化更新其控制策略.此类控制器的输入为状态值和奖励值, 输出为作用于环境的动作量.它们可以依据Q-矩阵$\bf \it{Q}$和概率分布矩阵$\bf \it{P}$, 针对当前的环境状态$s$, 制定应当进行的动作$a$.矩阵$\bf \it{Q}$和$\bf \it{P}$可以由奖励函数随后进行更新.
$ \begin{align} &Q(s, a) \leftarrow Q(s, a) + \alpha (R(s, s', a) \, + \nonumber\\ &\qquad\qquad\ \ \gamma \mathop {\max }\limits_{a \in A} Q(s', a) - Q(s, a)) \end{align} $ (14)

$ \begin{align} &P(s, a) \leftarrow \begin{cases} P(s, a) - \beta (1 - P(s, a)), &s' = s\\ P(s, a)(1 - \beta ), &{\mbox{其他}} \end{cases} \end{align} $ (15)

其中, $\alpha$为学习率; $\gamma$为折扣系数; $\beta$为概率系数; $s$, $s'$分别为当前状态和下一状态; $R(s, s', a)$为奖励函数, 与当前状态$s$和由动作$a$导致的状态有关. 当前状态$s$和下一状态$s'$同属于状态集合$\bf \it{S}$, 即$s \in {\bf \it{S}}$, $s' \in {\bf \it{S}}$; 被选择的动作$a$属于动作集合$\bf \it{A}$, 即$a \in {\bf \it{A}}$. 本文采用结构简单的三层感知器BPNN, 分配到多个机组的输出$y_i^{{\rm{bpnn}}}$的计算公式为

$ \begin{align} y_i^{{\rm{bpnn}}} = f\left(x_i^{{\rm{bpnn}}}\right) = f\left(\sum\limits_{j = 1}^{{n^{{\rm{bpnn}}}}} {\omega _{ji}^{{\rm{bpnn}}}x_j^{{\rm{bpnn}}} + b_i^{{\rm{bpnn}}}} \right) \end{align} $ (16)

其中, $\omega _{ji}^{{\rm{bpnn}}}$为权重值; $b_i^{{\rm{bpnn}}}$为偏置值; ${n^{{\rm{bpnn}}}}$为BP神经网络中隐藏元的个数; $f(z)$为sigmoid函数. 本文采用的sigmoid函数为

$ \begin{align} f(z)=\tanh (z) = \frac{{{\rm e}^z - {\rm e}^{ - z}}}{{{\rm e}^z + {\rm e}^{ - z}}} \end{align} $ (17)

BPNN训练算法为莱文贝格-马夸特方法(Levenberg-Marquardt algorithm).
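式(14)的表格型Q值更新可用如下Python草稿示意($\alpha=0.1$, $\gamma=0.9$取自本文参数设置; 函数与数据结构为示例假设):

```python
# 式(14): Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    best_next = max(Q[(s_next, b)] for b in actions)   # max_{a} Q(s', a)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q[(s, a)]
```

式(15)的概率分布矩阵$\bf \it{P}$按同样方式在每次状态转移后更新, 用于动作选择.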
LRL的松弛算子类似一个操作员对强化网络的输出进行约束控制.因此, 松弛算子的约束可以表达为
$ \begin{align} \Delta {P_{i, j}} \leftarrow \frac{{[\Delta {P_{i, j}}{{u'}_{j, t}}]}}{{\sum\limits_{j = 1}^{{J_i}} {([\Delta {P_{i, j}}{{u'}_{j, t}}])} }}\sum\limits_{j = 1}^{{J_i}} {(\Delta {P_{i, j}})} \end{align} $ (18)

其中, $\left[{\Delta {P_{i, j}}{{u'}_{j, t}}} \right]$为约束函数, 表达式为

$ \begin{align} &\max \left\{ {{P_{j, (t - 1)}} - P_j^{{\rm{down}}}, {{u'}_{j, t}}P_j^{\min }} \right\} \le\notag \\ &\qquad\ \ \Delta {P_{i, j}}{{u'}_{j, t}} \le \min \left\{ {{P_{j, (t - 1)}} + P_j^{{\rm{up}}}, {{u'}_{j, t}}P_j^{\max }} \right\} \end{align} $ (19)

其中, ${u'_{j, t}}$为临时启动状态, 表达式为

$ \begin{align} {u'_{j, t}}=\!\begin{cases} 1, &\!\left[ {\Delta {P_{i, j}}} \right] > 0~\mbox{或}~ 1 < T_{j, (t - 1)}^{{\rm{up}}} < T_j^{{\rm{min\mbox{-}up}}}\\ 0, &\!\left[ {\Delta {P_{i, j}}} \right] = 0~\mbox{或}~1 \le T_{j, (t - 1)}^{{\rm{down}}} < T_j^{{\rm{min\mbox{-}down}}} \end{cases} \end{align} $ (20)

2.2 离线训练过程
传统学习算法会对所有通过平行系统获取的数据进行学习. 然而, 采用这些数据进行学习不一定能够取得比当前真实系统更优的控制效果. 因此, 本文提出的LRL方法会筛选出那些更优的数据进行学习. 即, 当$t$时刻的状态$s_t$优于$t + \Delta t$时刻的状态${s'_{(t + \Delta t), 1}}$, 而劣于$t + \Delta t$时刻的状态${s'_{(t + \Delta t), 2}}$时, 算法将排除从$s_t$到${s'_{(t + \Delta t), 1}}$的变化过程数据, 而保留从$s_t$到${s'_{(t + \Delta t), 2}}$的变化过程数据进行离线训练.
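上述选择算子可用如下Python草稿示意(状态取频率偏差的绝对值, 数值越小越优; 数据结构为本文示例假设):

```python
# 选择算子: 仅保留使状态变优(|Δf'| 减小)的过渡数据用于离线训练
def select_transitions(transitions):
    """transitions: (s_t, s_next, sample) 三元组列表, s 取 |Δf|"""
    return [sample for s_t, s_next, sample in transitions
            if abs(s_next) < abs(s_t)]
```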
针对REG问题, 离线训练的输入与输出如表 2所示. 但在对比状态${s'_{(t + \Delta t), 1}}$和${s'_{(t + \Delta t), 2}}$时, 可将状态设定为预测的区域$i$频率偏差, 即$\Delta {f'_{i, (t + 1)}}$, 也即从$\Delta {F'_{i, (t + 1)}}$中选择最优值对应的输入和输出数据进行训练. 图 5是在平行系统下基于REG框架的懒惰强化学习控制器的运行步骤.
3. 算例结果
本文仿真均是在主频为2.20 GHz, 内存96 GB的AMAX XR-28201GK型服务器上基于MATLAB 9.1 (R2016b)平台实现的. 表 3是仿真中采用的所有算法, 其中各算法的含义见表 4.
表 3 仿真所用的算法
Table 3 Algorithms for this simulation

序号 | UC | ED | AGC | GCD
1 | 模拟退火算法(SAA) | SAA | PID控制 | SAA
2 | 多元优化(MVO) | MVO | 滑模控制器 | MVO
3 | 遗传算法(GA) | GA | 自抗扰控制 | GA
4 | 灰狼算法(GWO) | GWO | 分数阶PID控制 | GWO
5 | 粒子群优化(PSO) | PSO | 模糊逻辑控制器 | PSO
6 | 生物地理优化(BBO) | BBO | Q学习 | BBO
7 | 飞蛾扑火算法(MFO) | MFO | Q($\lambda$)学习 | MFO
8 | 鲸鱼群算法(WOA) | WOA | R($\lambda$)学习 | WOA
9 | — | — | — | 固定比例
10 | 松弛人工神经网络(RANN, 基于REG框架的统一时间尺度算法)
11 | 懒惰强化学习(LRL, 基于REG框架的统一时间尺度算法)

表 4 各对比算法的缩写
Table 4 Abbreviation of compared algorithms

缩写 | 全称 | 意义
UC | Unit commitment | 机组组合
ED | Economic dispatch | 经济调度
AGC | Automatic generation control | 自动发电控制
GCD | Generation command dispatch | 发电指令调度
RL | Reinforcement learning | 强化学习
REG | Real-time economic generation dispatch and control | 实时经济调度与控制
ACP | Artificial societies-computational experiments-parallel execution | 人工社会-计算实验-平行执行
CPS | Cyber-physical system | 信息物理系统
CPSS | Cyber-physical-social systems | 信息物理社会融合系统
LRL | Lazy reinforcement learning | 懒惰强化学习
RANN | Relaxed artificial neural network | 松弛人工神经网络
SAA | Simulated annealing algorithm | 模拟退火算法
MVO | Multi-verse optimizer | 多元优化
GA | Genetic algorithm | 遗传算法
GWO | Gray wolf optimizer | 灰狼算法
PSO | Particle swarm optimization | 粒子群优化
BBO | Biogeography-based optimization | 生物地理优化
MFO | Moth-flame optimization | 飞蛾扑火算法
WOA | Whale optimization algorithm | 鲸鱼群算法
LOOCV | Leave-one-out cross-validation | 留一法交叉校验
BPNN | Back propagation neural network | 反向传播神经网络

组合算法和REG控制器的仿真时间设定为1天(86 400秒). 总共采用了4 608种传统发电调控算法组合($8\times 8 \times 8 \times 9=4\,608$种)和两种基于REG框架的算法进行仿真实验, 总的仿真模拟时间为($8\times 8 \times 8 \times 9+2$)天, 约合12.6301年. 所有传统发电调控算法的参数设置详见附录A.
图 6是IEEE新英格兰10机39节点标准电力系统结构.从图 6可以看出, 仿真实验将该电力系统划分成3个区域.该系统中设置10台发电机, 发电机{30, 37, 39}划分至区域1, 发电机{31, 32, 33, 34, 35}划分至区域2, 剩下的发电机{36, 38}划分至区域3.除此之外, 光伏, 风电以及电动汽车也被纳入仿真模型之中(详细参数见图 7).其中, 电动汽车负荷需求曲线为5种不同车辆用户行为叠加而成的.各个机组参数如表 5和表 6所示.
表 5 机组参数表
Table 5 Parameters of the generators

机组编号 | 30 | 37 | 39 | 31 | 32 | 33 | 34 | 35 | 36 | 38
机组最小连续开机时间 $T_j^{\mathrm{min\mbox{-}up}}$ (h) | 8 | 8 | 5 | 5 | 6 | 3 | 3 | 1 | 1 | 1
机组最小连续关机时间 $T_j^{\mathrm{min\mbox{-}down}}$ (h) | 8 | 8 | 5 | 5 | 6 | 3 | 3 | 1 | 1 | 1
机组最大出力 $P_j^{\max}$ (MW) | 455 | 455 | 130 | 130 | 162 | 80 | 85 | 55 | 55 | 55
机组最小出力 $P_j^{\min}$ (MW) | 150 | 150 | 20 | 20 | 25 | 20 | 25 | 10 | 10 | 10
热启动成本 $SU_{\mathrm{H}, j}$ (t/(MW $\cdot$ h)) | 4 500 | 5 000 | 550 | 560 | 900 | 170 | 260 | 30 | 30 | 30
冷启动成本 $SU_{\mathrm{C}, j}$ (t/(MW $\cdot$ h)) | 9 000 | 10 000 | 1 100 | 1 120 | 1 800 | 340 | 520 | 60 | 60 | 60
冷启动时间 $T_j^{\mathrm{cold}}$ (h) | 5 | 5 | 4 | 4 | 4 | 2 | 2 | 0 | 0 | 0
ED成本系数 $a_j$ | 0.675 | 0.45 | 0.563 | 0.563 | 0.45 | 0.563 | 0.563 | 0.337 | 0.315 | 0.287
ED成本系数 $b_j$ | 360 | 240 | 299 | 299 | 240 | 299 | 299 | 181 | 168 | 145
ED成本系数 $c_j$ | 11 250 | 7 510 | 9 390 | 9 390 | 7 510 | 9 390 | 9 390 | 5 530 | 5 250 | 5 270
ED排放系数 $\alpha _j$ | 3.375 | 1.125 | 1.689 | 1.576 | 1.17 | 1.576 | 1.576 | 0.674 | 0.63 | 0.574
ED排放系数 $\beta _j$ | 1 800 | 600 | 897 | 837 | 624 | 837 | 837 | 362 | 404 | 290
ED排放系数 $\gamma _j$ | 56 250 | 18 770 | 28 170 | 26 290 | 19 530 | 26 290 | 26 290 | 11 060 | 13 800 | 10 540

表 6 机组组合问题参数表
Table 6 Parameters for unit commitment problem

负荷时段(h) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
负荷值 $PD_t$ (MW) | 700 | 750 | 850 | 950 | 1 000 | 1 100 | 1 150 | 1 200 | 1 300 | 1 400 | 1 450 | 1 500
旋转备用 $SR_t$ (MW) | 70 | 75 | 85 | 95 | 100 | 110 | 115 | 120 | 130 | 140 | 145 | 150
负荷时段(h) | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24
负荷值 $PD_t$ (MW) | 1 400 | 1 300 | 1 200 | 1 050 | 1 000 | 1 100 | 1 200 | 1 400 | 1 300 | 1 100 | 900 | 800
旋转备用 $SR_t$ (MW) | 140 | 130 | 120 | 105 | 100 | 110 | 120 | 140 | 130 | 110 | 90 | 80

仿真实验设置发电控制的控制周期为4 s, REG控制器每4 s计算一次. 对于传统组合算法, UC每天进行一次, ED每15分钟优化一次, AGC和GCD在每个控制周期中计算一次. 松弛人工神经网络(RANN)算法由人工神经网络和所提LRL算法中的松弛算子组成, LRL整体的输入和输出分别作为RANN算法的输入和输出, 其松弛算子见式(18)~(20). RANN所选三层感知网络的隐含层神经元个数设定为40. 在所提LRL算法中, 强化学习和懒惰学习的动作集的列数$k$设为121 (该列数一般可选范围较大), 动作值取值范围为$-300$~$300$ MW; 强化学习的学习率$\alpha \in (0, 1]$, 本文选为0.1; 概率选择系数$\beta \in (0, 1]$, 本文设定为0.5; 折扣系数$\gamma \in (0, 1]$, 本文设定为0.9. 学习率选择越大, 学习速度越快, 但会导致精度随之下降.
强化学习系列算法Q学习、Q($\lambda $)学习和R($\lambda $)学习的离线学习时间分别为2.27 h, 2.49 h和2.95 h; 松弛人工神经网络算法的训练时间为15.50 h; 所提LRL算法的离线训练时间为6.60 h. 虽然所提LRL算法较传统强化学习算法在离线训练效率方面不具有优势, 但是其具有最佳的控制效果. 同时, 与统一时间尺度的松弛人工神经网络算法相比, LRL算法的离线训练时间更短且控制效果更优.
表 7 UC算法仿真结果统计
Table 7 Statistics of simulation results obtained by the UC algorithms

算法 | ACE1 (MW) | $\Delta f_1$ (Hz) | ACE2 (MW) | $\Delta f_2$ (Hz) | ACE3 (MW) | $\Delta f_3$ (Hz)
SAA | 573.8904 | 0.038235 | 258.7798 | 0.03752 | 5 527.9746 | 1.3137
MVO | 575.3672 | 0.038274 | 259.9265 | 0.037558 | 5 532.6202 | 1.3154
GA | 603.4391 | 0.041805 | 258.6484 | 0.041041 | 6 052.2806 | 1.4428
GWO | 616.064 | 0.043454 | 257.6107 | 0.042653 | 6 290.0843 | 1.5017
PSO | 575.7172 | 0.038264 | 260.3543 | 0.037555 | 5 535.1644 | 1.3159
BBO | 574.2769 | 0.038213 | 259.349 | 0.037499 | 5 522.5691 | 1.3131
MFO | 569.7159 | 0.037685 | 259.1499 | 0.036984 | 5 441.3487 | 1.2932
WOA | 645.5906 | 0.047207 | 255.8246 | 0.04639 | 6 844.8509 | 1.6369
RANN | 553.4032 | 0.039963 | 224.1748 | 0.039083 | 5 431.2844 | 1.2907
LRL | 441.9225 | 0.010254 | 389.9905 | 0.0095612 | 1 023.1919 | 0.23743

表 8 ED算法仿真结果统计
Table 8 Statistics of simulation results obtained by the ED algorithms

算法 | ACE1 (MW) | $\Delta f_1$ (Hz) | ACE2 (MW) | $\Delta f_2$ (Hz) | ACE3 (MW) | $\Delta f_3$ (Hz)
SAA | 587.8414 | 0.039976 | 258.2767 | 0.039234 | 5 777.5755 | 1.3756
MVO | 588.177 | 0.039978 | 258.5125 | 0.039245 | 5 782.3567 | 1.3768
GA | 589.4091 | 0.040193 | 257.6335 | 0.039479 | 5 818.9809 | 1.3856
GWO | 587.6547 | 0.039959 | 258.0923 | 0.039228 | 5 780.4664 | 1.3763
PSO | 587.858 | 0.039915 | 258.8111 | 0.039182 | 5 771.2924 | 1.3741
BBO | 588.0198 | 0.039924 | 258.9211 | 0.039192 | 5 770.4608 | 1.3739
MFO | 588.1836 | 0.039988 | 258.4948 | 0.03925 | 5 778.844 | 1.3759
WOA | 588.6974 | 0.040103 | 257.7113 | 0.039387 | 5 805.4046 | 1.3823
RANN | 553.4032 | 0.039963 | 224.1748 | 0.039083 | 5 431.2844 | 1.2907
LRL | 441.9225 | 0.010254 | 389.9905 | 0.0095612 | 1 023.1919 | 0.23743

表 9 AGC算法仿真结果统计
Table 9 Statistics of simulation results obtained by the AGC algorithms

算法 | ACE1 (MW) | $\Delta f_1$ (Hz) | ACE2 (MW) | $\Delta f_2$ (Hz) | ACE3 (MW) | $\Delta f_3$ (Hz)
PID控制 | 591.3081 | 0.040435 | 257.518 | 0.039717 | 5 854.0102 | 1.3939
滑动模式控制器 | 590.7335 | 0.040374 | 257.4495 | 0.039656 | 5 844.7291 | 1.3916
自抗扰控制 | 591.3771 | 0.040424 | 257.6773 | 0.039707 | 5 853.0488 | 1.3937
分数阶PID控制 | 591.1007 | 0.040437 | 257.3069 | 0.039715 | 5 852.7478 | 1.3936
模糊逻辑控制 | 591.951 | 0.040504 | 257.6024 | 0.039781 | 5 863.4785 | 1.3963
Q学习 | 591.3603 | 0.040452 | 257.4572 | 0.039727 | 5 855.1339 | 1.3942
Q($\lambda$)学习 | 591.0772 | 0.040419 | 257.4421 | 0.039696 | 5 849.9705 | 1.393
R($\lambda$)学习 | 591.7282 | 0.040494 | 257.469 | 0.03977 | 5 862.7832 | 1.3961
RANN | 553.4032 | 0.039963 | 224.1748 | 0.039083 | 5 431.2844 | 1.2907
LRL | 441.9225 | 0.010254 | 389.9905 | 0.0095612 | 1 023.1919 | 0.23743

表 10 GCD算法仿真结果统计
Table 10 Statistics of simulation results obtained by the GCD algorithms

算法 | ACE1 (MW) | $\Delta f_1$ (Hz) | ACE2 (MW) | $\Delta f_2$ (Hz) | ACE3 (MW) | $\Delta f_3$ (Hz)
SAA | 591.3081 | 0.040435 | 257.518 | 0.039717 | 5 854.0102 | 1.3939
MVO | 590.7335 | 0.040374 | 257.4495 | 0.039656 | 5 844.7291 | 1.3916
GA | 591.3771 | 0.040424 | 257.6773 | 0.039707 | 5 853.0488 | 1.3937
GWO | 591.1007 | 0.040437 | 257.3069 | 0.039715 | 5 852.7478 | 1.3936
PSO | 591.951 | 0.040504 | 257.6024 | 0.039781 | 5 863.4785 | 1.3963
BBO | 591.3603 | 0.040452 | 257.4572 | 0.039727 | 5 855.1339 | 1.3942
MFO | 591.0772 | 0.040419 | 257.4421 | 0.039696 | 5 849.9705 | 1.393
WOA | 591.7282 | 0.040494 | 257.469 | 0.03977 | 5 862.7832 | 1.3961
固定比例 | 509.0391 | 0.028801 | 282.0332 | 0.027609 | 3 973.743 | 0.94347
RANN | 553.4032 | 0.039963 | 224.1748 | 0.039083 | 5 431.2844 | 1.2907
LRL | 441.9225 | 0.010254 | 389.9905 | 0.0095612 | 1 023.1919 | 0.23743

图 8是频率偏差、区域控制误差和仿真计算所用时间的统计结果, 其中所提LRL算法能得到最优的调控效果.
图 9是各个算法频率偏差的统计对比效果, 其中所提LRL算法能在所有区域均获得最小的频率偏差. 图 10是各个算法获得的区域控制误差的统计结果, 可以看出, 所提LRL算法不会导致大量牺牲某个区域的功率来满足其他区域的功率平衡.
图 11和图 12是利用平行系统仿真数据对所提LRL算法训练的收敛曲线图.可以看出, 经过667次的迭代, 能获得最优的收敛结果.
从图 9以及表 7~10可以看出, 与传统组合发电控制算法和松弛人工神经网络相比, 本文提出的LRL方法可以保持系统内的有功平衡, 并且能使电网频率偏差达到最低.因此, LRL能够在多区域大规模互联电网中取得最优的控制效果.
从图 8和图 10可以看出, 在仿真中, LRL可以在最短时间内取得最低的频率偏差和最小的区域控制误差, 表明LRL的懒惰学习可以有效地对电力系统的下一状态进行预测. 因此, LRL可以提供准确的AGC机组动作指令.
在应对多区域大规模互联电网的经济调度和发电控制问题时, REG控制器完全可以取代传统的组合算法方法.
从图 11和图 12可以看出, 由于仿真采用了平行系统, 所需的真实仿真时间大幅降低; 同时, 平行系统的迭代也加速了仿真收敛的过程.
4. 结论
为了解决多区域大规模互联电网经济调度和发电控制中存在的协同问题, 本文提出了一种REG框架.该框架可作为一种传统发电调控框架的替代.然后, 为REG控制器提出了一种基于人工社会-计算实验-平行执行方法的懒惰学习算法.基于REG控制器的LRL算法的特征可以总结如下:
1) 本文提出了一种统一时间尺度的REG控制框架, 并提出一种基于REG控制器的LRL算法.可以有效地对电力系统的下一运行状态进行预测并且输出满足UC问题的约束动作指令, 取得最优的控制效果.
2) LRL中的强化学习网络具有同时产生多个输出的能力. 因此, 基于REG框架的LRL控制器可以不断地为多区域大规模互联电网中的所有AGC机组输出发电命令.
3) 通过搭建平行系统, 使得基于LRL的REG控制器可以用于解决多区域大规模互联电网经济调度和发电控制问题.
附录A
各算法重要参数设置如下:
1) PID控制:比例系数$k_{\mathrm{P}}=-0.006031543250198, $积分系数$k_{\mathrm{I}}=0.00043250;$
2) 滑模控制器:开通/关断点$k_{\mathrm{point}}=\pm 0.1$ Hz, 开通/关断输出$k_{\mathrm{v}}=\pm80$ MW;
3) 自抗扰控制:扩张状态观测器
$ \begin{align*} &A = \left[ {\begin{array}{*{20}{c}} 0&{0.0001}&0&0\\ 0&0&{0.0001}&0\\ 0&0&0&{0.0001}\\ 0&0&0&0 \end{array}} \right]\\ &B = \left[ {\begin{array}{*{20}{c}} 0&0\\ 0&0\\ {0.0001}&{0.0001}\\ 0&0 \end{array}} \right]\\ &C = {\rm diag}\left\{ {\begin{array}{*{20}{c}} {0.1}&{0.1}&{0.1}&{0.1} \end{array}} \right\}\\ &D = {0_{4 \times 2}}\\ &k_1=15.0, \ k_2=5.5, \ k_3=2.0, \ k_4=1 \end{align*} $
4) 分数阶PID控制:比例系数$k_{\mathrm{P}}=-1, $积分系数$k_{\mathrm{I}}$ $=$ $0.43250, $ $\lambda=1.3, $ $\mu=200;$
5) 模糊逻辑控制器: $X$ (输入, $\Delta f$)在[$-$0.2, 0.2] Hz等间隔选取21个区间, $Y$ (输入, $\int \Delta f{\rm d}t$)在[$-$1, 1] Hz等间隔选取21个区间, $Z$ (输出, $\Delta P$)在[$-$150, 150] MW等间隔选取441个区间;
6) Q学习:动作集$A=\{-300, -240, -180, -120$, $-60, 0, 60, 120, 180, 240, 300\}$, 学习率$\alpha=0.1, $概率分布常数$\beta=0.5, $未来奖励折扣系数$\gamma=0.9, $ $\lambda=0.9$;
7) Q($\lambda$)学习: $A=\{-300, -240, -180, -120, -60, 0$, $60, 120, 180, 240, 300\}$, $\alpha=0.1$, $\beta=0.5$, $\gamma=0.9$, $\lambda=0.9$;
8) R($\lambda$)学习: $A=\{-300, -240, -180, -120, -60, 0$, $60, 120, 180, 240, 300\}$, $\alpha=0.1$, $\beta=0.5$, $\gamma=0.9$, $\lambda=0.9$, $R_0$ $=0;$
9) 对于所有用于UC的优化算法:进化代数$N_{\mathrm{g}}=50$, 种群数目$P_{\mathrm{s}}=10$;
10) 对于所有用于ED的优化算法:进化代数$N_{\mathrm{g}}=30$, 种群数目$P_{\mathrm{s}}=10$;
11) 对于所有用于GCD的优化算法:进化代数$N_{\mathrm{g}}=5$, 种群数目$P_{\mathrm{s}}=10$;
12) 固定比例GCD控制: 分配系数${k_j} = {{\Delta P_j^{\max }}}/{{\sum\nolimits_j {\Delta P_j^{\max }} }}$, 各机组指令$\Delta {P_{i, j}} = {k_j}\Delta {P_i}$, $j = 1, 2, \cdots, {J_i}$, $i = 1, 2, 3$.
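附录A第12)条的固定比例分配可用如下Python草稿示意(函数名为示例假设): 总指令按各机组可调容量占比进行分配.

```python
# 固定比例GCD: k_j = ΔP_j^max / Σ ΔP_j^max, 各机组指令 = k_j * ΔP_i
def fixed_ratio_dispatch(dP_total, dP_max):
    s = sum(dP_max)                               # 可调容量之和
    return [dP_total * m / s for m in dP_max]     # 分配结果之和等于 dP_total
```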
表 1 含有模糊图像的主要图像质量评价数据集
Table 1 Main image quality assessment databases including blurred images

数据集 | 时间 | 参考图像 | 模糊图像 | 模糊类型 | 主观评价 | 分值排序 | 分值范围
IVC[28] | 2005 | 4 | 20 | 高斯模糊 | MOS | 模糊−清晰 | [1 5]
LIVE[22] | 2006 | 29 | 145 | 高斯模糊 | DMOS | 清晰−模糊 | [0 100]
A57[30] | 2007 | 3 | 9 | 高斯模糊 | DMOS | 清晰−模糊 | [0 1]
TID2008[26] | 2009 | 25 | 100 | 高斯模糊 | MOS | 模糊−清晰 | [0 9]
CSIQ[25] | 2009 | 30 | 150 | 高斯模糊 | DMOS | 清晰−模糊 | [0 1]
VCL@FER[29] | 2012 | 23 | 138 | 高斯模糊 | MOS | 模糊−清晰 | [0 100]
TID2013[27] | 2013 | 25 | 125 | 高斯模糊 | MOS | 模糊−清晰 | [0 9]
KADID-10k 1[31] | 2019 | 81 | 405 | 高斯模糊 | MOS | 模糊−清晰 | [1 5]
KADID-10k 2[31] | 2019 | 81 | 405 | 镜头模糊 | MOS | 模糊−清晰 | [1 5]
KADID-10k 3[31] | 2019 | 81 | 405 | 运动模糊 | MOS | 模糊−清晰 | [1 5]
MLIVE1[33] | 2012 | 15 | 225 | 高斯模糊和高斯白噪声 | DMOS | 清晰−模糊 | [0 100]
MLIVE2[33] | 2012 | 15 | 225 | 高斯模糊和JPEG压缩 | DMOS | 清晰−模糊 | [0 100]
MDID2013[32] | 2013 | 12 | 324 | 高斯模糊、JPEG压缩和白噪声 | DMOS | 清晰−模糊 | [0 1]
MDID[34] | 2017 | 20 | 1600 | 高斯模糊、对比度变化、高斯噪声、JPEG或JPEG2000 | MOS | 模糊−清晰 | [0 8]
BID[21] | 2011 | — | 586 | 自然模糊 | MOS | 模糊−清晰 | [0 5]
CID2013[35] | 2013 | — | 480 | 自然模糊 | MOS | 模糊−清晰 | [0 100]
CLIVE[36-37] | 2016 | — | 1162 | 自然模糊 | MOS | 模糊−清晰 | [0 100]
KonIQ-10k[38] | 2018 | — | 10073 | 自然模糊 | MOS | 模糊−清晰 | [1 5]

表 2 基于空域/频域的不同方法优缺点对比
Table 2 Advantage and disadvantage comparison for different methods based on spatial/spectral domain

方法分类 | 优点 | 缺点
边缘信息 | 概念直观、计算复杂度低 | 容易因图像中缺少锐利边缘而影响评价结果
再模糊理论 | 对图像内容依赖小, 计算复杂度低 | 准确性依赖 FR-IQA 方法
奇异值分解 | 能较好地提取图像结构、边缘、纹理信息 | 计算复杂度较高
自由能理论 | 外部输入信号与其生成模型可解释部分之间的差距与视觉感受的图像质量密切相关 | 计算复杂度高
DFT/DCT/小波变换 | 综合了图像的频域特性和多尺度特征, 准确性和鲁棒性更高 | 计算复杂度高

表 3 基于学习的不同方法优缺点对比
Table 3 Advantage and disadvantage comparison for different methods based on learning

方法分类 | 优点 | 缺点
SVM | 在小样本训练集上能够取得比其他算法更好的效果 | 评价结果的好坏由提取的特征决定
NN | 具有很好的非线性映射能力 | 样本较少时, 容易出现过拟合现象, 且计算复杂度随着数据量的增加而增大
深度学习 | 可以从大量数据中自动学习图像特征的多层表示 | 对数据集中数据量要求大
字典/码本 | 可以获得图像中的高级特征 | 字典/码本的大小减小时, 性能显著下降
MVG | 无需图像的 MOS/DMOS 值 | 模型建立困难, 对数据集中数据量要求较大

表 4 用于对比的不同NR-IQA方法
Table 4 Different NR-IQA methods for comparison

类别 | 方法类别 | 方法 | 特征 | 模糊/通用
空域 | 边缘信息 | JNB[43] | 计算边缘分块所对应的边缘宽度 | 模糊
空域 | 边缘信息 | CPBD[44] | 计算模糊检测的累积概率 | 模糊
空域 | 边缘信息 | MLV[47] | 计算图像的最大局部变化得到反映图像对比度信息的映射图 | 模糊
空域 | 自由能理论 | ARISM[63] | 每个像素 AR 模型系数的能量差和对比度差 | 模糊
空域 | 边缘信息 | BIBLE[49] | 图像的梯度和 Tchebichef 矩量 | 模糊
空域 | 边缘信息 | Zhan 等[14] | 图像中最大梯度及梯度变化量 | 模糊
频域 | DFT变换 | S3[65] | 在频域测量幅度谱的斜率, 在空域测量空间变化情况 | 模糊
频域 | 小波变换 | LPC-SI[81] | LPC 强度变化作为指标 | 模糊
频域 | 小波变换 | BISHARP[77] | 计算图像的均方根来获取图像局部对比度信息, 同时利用小波变换中对角线小波系数 | 模糊
频域 | HVS滤波器 | HVS-MaxPol[85] | 利用 MaxPol 卷积滤波器分解与图像清晰度相关的有意义特征 | 模糊
学习(机器学习) | SVM+SVR | BIQI[86] | 对图像进行小波变换后, 利用 GGD 对得到的子带系数进行参数化 | 通用
学习(机器学习) | SVM+SVR | DIIVINE[87] | 从小波子带系数中提取一系列的统计特征 | 通用
学习(机器学习) | SVM+SVR | SSEQ[88] | 空间−频域熵特征 | 通用
学习(机器学习) | SVM+SVR | BLIINDS-II[91] | 多尺度下的广义高斯模型形状参数特征、频率变化系数特征、能量子带特征、基于定位模型的特征 | 通用
学习(机器学习) | SVR | BRISQUE[96] | GGD 拟合 MSCN 系数作为特征, AGGD 拟合 4 个相邻元素乘积系数作为特征 | 通用
学习(机器学习) | SVR | RISE[107] | 多尺度图像空间中的梯度值和奇异值特征, 以及多分辨率图像的熵特征 | 模糊
学习(机器学习) | SVR | Liu 等[109] | 局部模式算子提取图像结构信息, Toggle 算子提取边缘信息 | 模糊
学习(机器学习) | SVR | Cai 等[110] | 输入图像与其重新模糊版本之间的 Log-Gabor 滤波器响应差异和基于方向选择性的模式差异, 以及输入图像与其 4 个下采样图像之间的自相似性 | 模糊
学习(深度学习) | CNN | Kang's CNN[116] | 对图像分块进行局部对比度归一化 | 通用
学习(深度学习) | 浅层CNN+GRNN | Yu's CNN[127] | 对图像分块进行局部对比度归一化 | 模糊
学习(深度学习) | 聚类技术+RBM | MSFF[139] | Gabor 滤波器提取不同方向和尺度的原始图像特征, 然后由 RBMs 生成特征描述符 | 通用
学习(深度学习) | DNN | MEON[132] | 原始图像作为输入 | 通用
学习(深度学习) | CNN | DIQaM-NR[131] | 使用 CNN 提取失真图像块和参考图像块的特征 | 通用
学习(深度学习) | CNN | DIQA[118] | 图像归一化后, 通过下采样及上采样得到低频图像 | 通用
学习(深度学习) | CNN | SGDNet[133] | 使用 DCNN 作为特征提取器获取图像特征 | 通用
学习(深度学习) | 秩学习 | Rank Learning[141] | 选取一定比例的图像块集合作为输入, 梯度信息被用来指导图像块选择过程 | 模糊
学习(深度学习) | DCNN+SFA | SFA[128] | 多个图像块作为输入, 并使用预先训练好的 DCNN 模型提取特征 | 模糊
学习(深度学习) | DNN+NSS | NSSADNN[134] | 每个图像块归一化后用 CNNs 提取特征, 得到 1024 维向量 | 通用
学习(深度学习) | CNN | DB-CNN[123] | 用预训练的 S-CNN 及 VGG-16 分别提取合成失真与真实图像的相关特征 | 通用
学习(深度学习) | CNN | CGFA-CNN[124] | 用 VGG-16 以提取失真图像的相关特征 | 通用
字典/码本 | 聚类算法+码本 | CORNIA[145] | 未标记图像块中提取局部特征进行 K-means 聚类以构建码本 | 通用
字典/码本 | 聚类算法+码本 | QAC[147] | 用比例池化策略估计每个分块的局部质量, 通过 QAC 学习不同质量级别上的质心作为码本 | 通用
字典/码本 | 稀疏学习+字典 | SPARISH[143] | 以图像块的方式表示模糊图像, 并使用稀疏系数计算块能量 | 模糊
MVG | MVG模型 | NIQE[150] | 提取 MSCN 系数, 再用 GGD 和 AGGD 拟合得到特征 | 通用

表 5 基于深度学习的方法所采用的不同网络结构
Table 5 Different network structures of deep learning-based methods

方法 | 网络结构
Kang's CNN[116] | 包括一个含有最大/最小池化的卷积层, 两个全连接层及一个输出结点
Yu's CNN[127] | 采用单一特征层挖掘图像内在特征, 利用 GRNN 评价图像质量
MSFF[139] | 图像的多个特征作为输入, 通过端到端训练学习特征权重
MEON[132] | 由失真判别网络和质量预测网络两个子网络组成, 并采用 GDN 作为激活函数
DIQaM-NR[131] | 包含 10 个卷积层和 5 个池化层用于特征提取, 以及 2 个全连接层进行回归分析
DIQA[118] | 网络训练分为客观失真部分及与人类视觉系统相关部分两个阶段
SGDNet[133] | 包括视觉显著性预测和图像质量预测的两个子任务
Rank Learning[141] | 结合了 Siamese Mobilenet 及多尺度 patch 提取方法
SFA[128] | 包括 4 个步骤: 图像的多 patch 表示, 预先训练好的 DCNN 模型提取特征, 通过 3 种不同统计结构进行特征聚合, 部分最小二乘回归进行质量预测
NSSADNN[134] | 采用多任务学习方式设计, 包括自然场景统计 (NSS) 特征预测任务和质量分数预测任务
DB-CNN[123] | 两个卷积神经网络分别专注于两种失真图像特征提取, 并采用双线性池化实现质量预测
CGFA-CNN[124] | 采用两阶段策略, 首先基于 VGG-16 网络的子网络 1 识别图像中的失真类型, 而后利用子网络 2 实现失真量化

表 6 基于空域/频域的不同NR-IQA方法在不同数据集中比较结果
Table 6 Comparison of different spatial/spectral domain-based NR-IQA methods for different databases

方法 | 发表时间 | LIVE (PLCC / SROCC / RMSE / MAE) | CSIQ (PLCC / SROCC / RMSE / MAE)
JNB[43] | 2009 | 0.843 / 0.842 / 11.706 / 9.241 | 0.786 / 0.762 / 0.180 / 0.122
CPBD[44] | 2011 | 0.913 / 0.943 / 8.882 / 6.820 | 0.874 / 0.885 / 0.140 / 0.111
S3[65] | 2012 | 0.919 / 0.963 / 8.578 / 7.335 | 0.894 / 0.906 / 0.135 / 0.110
LPC-SI[81] | 2013 | 0.907 / 0.923 / 9.177 / 7.275 | 0.923 / 0.922 / 0.111 / 0.093
MLV[47] | 2014 | 0.959 / 0.957 / 6.171 / 4.896 | 0.949 / 0.925 / 0.091 / 0.071
ARISM[63] | 2015 | 0.962 / 0.968 / 5.932 / 4.512 | 0.944 / 0.925 / 0.095 / 0.076
BIBLE[49] | 2016 | 0.963 / 0.973 / 5.883 / 4.605 | 0.940 / 0.913 / 0.098 / 0.077
Zhan 等[14] | 2018 | 0.960 / 0.963 / 6.078 / 4.697 | 0.967 / 0.950 / 0.073 / 0.057
BISHARP[77] | 2018 | 0.952 / 0.960 / 6.694 / 5.280 | 0.942 / 0.927 / 0.097 / 0.078
HVS-MaxPol[85] | 2019 | 0.957 / 0.960 / 6.318 / 5.076 | 0.943 / 0.921 / 0.095 / 0.077

方法 | 发表时间 | TID2008 (PLCC / SROCC / RMSE / MAE) | TID2013 (PLCC / SROCC / RMSE / MAE)
JNB[43] | 2009 | 0.661 / 0.667 / 0.881 / 0.673 | 0.695 / 0.690 / 0.898 / 0.687
CPBD[44] | 2011 | 0.820 / 0.841 / 0.672 / 0.524 | 0.854 / 0.852 / 0.649 / 0.526
S3[65] | 2012 | 0.851 / 0.842 / 0.617 / 0.478 | 0.879 / 0.861 / 0.595 / 0.480
LPC-SI[81] | 2013 | 0.861 / 0.896 / 0.599 / 0.478 | 0.869 / 0.919 / 0.621 / 0.507
MLV[47] | 2014 | 0.858 / 0.855 / 0.602 / 0.468 | 0.883 / 0.879 / 0.587 / 0.460
ARISM[63] | 2015 | 0.843 / 0.851 / 0.632 / 0.492 | 0.895 / 0.898 / 0.558 / 0.442
BIBLE[49] | 2016 | 0.893 / 0.892 / 0.528 / 0.413 | 0.905 / 0.899 / 0.531 / 0.426
Zhan 等[14] | 2018 | 0.937 / 0.942 / 0.410 / 0.320 | 0.954 / 0.961 / 0.374 / 0.288
BISHARP[77] | 2018 | 0.877 / 0.880 / 0.564 / 0.439 | 0.892 / 0.896 / 0.565 / 0.449
HVS-MaxPol[85] | 2019 | 0.853 / 0.851 / 0.612 / 0.484 | 0.877 / 0.875 / 0.599 / 0.484

表 7 基于学习的不同NR-IQA方法在不同人工模糊数据集中比较结果
Table 7 Comparison of different learning-based NR-IQA methods for different artificial blur databases

方法 | 发表时间 | LIVE (PLCC / SROCC) | CSIQ (PLCC / SROCC) | TID2008 (PLCC / SROCC) | TID2013 (PLCC / SROCC)
BIQI[86] | 2010 | 0.920 / 0.914 | 0.846 / 0.773 | 0.794 / 0.799 | 0.825 / 0.815
DIIVINE[87] | 2011 | 0.943 / 0.936 | 0.886 / 0.879 | 0.835 / 0.829 | 0.847 / 0.842
BLIINDS-II[91] | 2012 | 0.939 / 0.931 | 0.886 / 0.892 | 0.842 / 0.859 | 0.857 / 0.862
BRISQUE[96] | 2012 | 0.951 / 0.943 | 0.921 / 0.907 | 0.866 / 0.865 | 0.862 / 0.861
CORNIA[145] | 2012 | 0.968 / 0.969 | 0.781 / 0.714 | 0.932 / 0.932 | 0.904 / 0.912
NIQE[150] | 2013 | 0.939 / 0.930 | 0.918 / 0.891 | 0.832 / 0.823 | 0.816 / 0.807
QAC[147] | 2013 | 0.916 / 0.903 | 0.831 / 0.831 | 0.813 / 0.812 | 0.848 / 0.847
SSEQ[88] | 2014 | 0.961 / 0.948 | 0.871 / 0.870 | 0.858 / 0.852 | 0.863 / 0.862
Kang's CNN[116] | 2014 | 0.963 / 0.983 | 0.774 / 0.781 | 0.880 / 0.850 | 0.931 / 0.922
SPARISH[143] | 2016 | 0.960 / 0.960 | 0.939 / 0.914 | 0.896 / 0.896 | 0.902 / 0.894
Yu's CNN[127] | 2017 | 0.973 / 0.965 | 0.942 / 0.925 | 0.937 / 0.919 | 0.922 / 0.914
RISE[107] | 2017 | 0.962 / 0.949 | 0.946 / 0.928 | 0.929 / 0.922 | 0.942 / 0.934
MEON[132] | 2018 | 0.948 / 0.940 | 0.916 / 0.905 | — / — | 0.891 / 0.880
DIQaM-NR[131] | 2018 | 0.972 / 0.960 | 0.893 / 0.885 | — / — | 0.915 / 0.908
DIQA[118] | 2019 | 0.952 / 0.951 | 0.871 / 0.865 | — / — | 0.921 / 0.918
SGDNet[133] | 2019 | 0.946 / 0.939 | 0.866 / 0.860 | — / — | 0.928 / 0.914
Rank Learning[141] | 2019 | 0.969 / 0.954 | 0.979 / 0.953 | 0.959 / 0.949 | 0.965 / 0.955
SFA[128] | 2019 | 0.972 / 0.963 | — / — | 0.946 / 0.937 | 0.954 / 0.948
NSSADNN[134] | 2019 | 0.971 / 0.981 | 0.923 / 0.930 | — / — | 0.857 / 0.840
CGFA-CNN[124] | 2020 | 0.974 / 0.968 | 0.955 / 0.941 | — / — | — / —
MSFF[139] | 2020 | 0.954 / 0.962 | — / — | 0.925 / 0.928 | 0.921 / 0.928
DB-CNN[123] | 2020 | 0.956 / 0.935 | 0.969 / 0.947 | — / — | 0.857 / 0.844
Liu 等[109] | 2020 | 0.980 / 0.973 | 0.955 / 0.936 | — / — | 0.972 / 0.964
Cai 等[110] | 2020 | 0.958 / 0.955 | 0.952 / 0.923 | — / — | 0.957 / 0.941

表 8 基于学习的不同NR-IQA方法在不同自然模糊数据集中比较结果
Table 8 Comparison of different learning-based NR-IQA methods for different natural blur databases
方法 发表
时间BID CID2013 CLIVE PLCC SROCC PLCC SROCC PLCC SROCC BIQI[86] 2010 0.604 0.572 0.777 0.744 0.540 0.519 DIIVINE[87] 2011 0.506 0.489 0.499 0.477 0.558 0.509 BLIINDS-II[91] 2012 0.558 0.530 0.731 0.701 0.507 0.463 BRISQUE[96] 2012 0.612 0.590 0.714 0.682 0.645 0.607 CORNIA[145] 2012 — — 0.680 0.624 0.665 0.618 NIQE[150] 2013 0.471 0.469 0.693 0.633 0.478 0.421 QAC[147] 2013 0.321 0.318 0.187 0.162 0.318 0.298 SSEQ[88] 2014 0.604 0.581 0.689 0.676 — — Kang's CNN[116] 2014 0.498 0.482 0.523 0.526 0.522 0.496 SPARISH[143] 2016 0.356 0.307 0.678 0.661 0.484 0.402 Yu's CNN[127] 2017 0.560 0.557 0.715 0.704 0.501 0.502 RISE[107] 2017 0.602 0.584 0.793 0.769 0.555 0.515 MEON[132] 2018 0.482 0.470 0.703 0.701 0.693 0.688 DIQaM-NR[131] 2018 0.476 0.461 0.686 0.674 0.601 0.606 DIQA[118] 2019 0.506 0.492 0.720 0.708 0.704 0.703 SGDNet[133] 2019 0.422 0.417 0.653 0.644 0.872 0.851 Rank Learning[141] 2019 0.751 0.719 0.863 0.836 — — SFA[128] 2019 0.840 0.826 — — 0.833 0.812 NSSADNN[134] 2019 0.574 0.568 0.825 0.748 0.813 0.745 CGFA-CNN[124] 2020 — — — — 0.846 0.837 DB-CNN[123] 2020 0.475 0.464 0.686 0.672 0.869 0.851 Cai 等[110] 2020 0.633 0.603 0.880 0.874 — — -
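The four measures reported in these tables compare a method's predicted quality scores against the subjective scores (MOS/DMOS) of a database: PLCC measures prediction accuracy, SROCC measures prediction monotonicity, and RMSE/MAE measure absolute prediction error. A minimal sketch of how such table entries are computed (the function name `iqa_measures` is illustrative, not from the surveyed papers):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def iqa_measures(pred, mos):
    """Compute PLCC, SROCC, RMSE and MAE between predicted and subjective scores.

    pred: objective scores produced by an NR-IQA method
    mos:  subjective mean opinion scores from the database
    """
    pred = np.asarray(pred, dtype=float)
    mos = np.asarray(mos, dtype=float)
    plcc, _ = pearsonr(pred, mos)    # linear correlation: prediction accuracy
    srocc, _ = spearmanr(pred, mos)  # rank correlation: prediction monotonicity
    rmse = np.sqrt(np.mean((pred - mos) ** 2))  # root-mean-square error
    mae = np.mean(np.abs(pred - mos))           # mean absolute error
    return plcc, srocc, rmse, mae
```

In published evaluations a nonlinear (e.g. logistic) regression is usually fitted between the objective scores and MOS before PLCC, RMSE and MAE are computed, so that methods with different score ranges are comparable; SROCC is unaffected by any monotonic mapping.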
[1] Jayageetha J, Vasanthanayaki C. Medical image quality assessment using CSO based deep neural network. Journal of Medical Systems, 2018, 42(11): Article No. 224
[2] Ma J J, Nakarmi U, Kin C Y S, Sandino C M, Cheng J Y, Syed A B, et al. Diagnostic image quality assessment and classification in medical imaging: Opportunities and challenges. In: Proceedings of the 17th International Symposium on Biomedical Imaging (ISBI). Iowa City, USA: IEEE, 2020. 337−340
[3] Chen G B, Zhai M T. Quality assessment on remote sensing image based on neural networks. Journal of Visual Communication and Image Representation, 2019, 63: Article No. 102580
[4] Hombalimath A, Manjula H T, Khanam A, Girish K. Image quality assessment for iris recognition. International Journal of Scientific and Research Publications, 2018, 8(6): 100-103
[5] Zhai G T, Min X K. Perceptual image quality assessment: A survey. Science China Information Sciences, 2020, 63(11): Article No. 211301
[6] 王烨茹. 基于数字图像处理的自动对焦方法研究 [博士学位论文], 浙江大学, 中国, 2018. Wang Ye-Ru. Research on Auto-focus Methods Based on Digital Imaging Processing [Ph.D. dissertation], Zhejiang University, China, 2018.
[7] 尤玉虎, 刘通, 刘佳文. 基于图像处理的自动对焦技术综述. 激光与红外, 2013, 43(2): 132-136 doi: 10.3969/j.issn.1001-5078.2013.02.003 You Yu-Hu, Liu Tong, Liu Jia-Wen. Survey of the auto-focus methods based on image processing. Laser and Infrared, 2013, 43(2): 132-136
[8] Cannon M. Blind deconvolution of spatially invariant image blurs with phase. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1976, 24(1): 58-63 doi: 10.1109/TASSP.1976.1162770
[9] Tekalp A M, Kaufman H, Woods J W. Identification of image and blur parameters for the restoration of noncausal blurs. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1986, 34(4): 963-972 doi: 10.1109/TASSP.1986.1164886
[10] Pavlovic G, Tekalp A M. Maximum likelihood parametric blur identification based on a continuous spatial domain model. IEEE Transactions on Image Processing, 1992, 1(4): 496-504 doi: 10.1109/83.199919
[11] Kim S K, Park S R, Paik J K. Simultaneous out-of-focus blur estimation and restoration for digital auto-focusing system. IEEE Transactions on Consumer Electronics, 1998, 44(3): 1071-1075 doi: 10.1109/30.713236
[12] Sada M M, Mahesh G M. Image deblurring techniques - a detail review. International Journal of Scientific Research in Science, Engineering and Technology, 2018, 4(2): 176-188
[13] Wang R X, Tao D C. Recent progress in image deblurring. arXiv:1409.6838, 2014.
[14] Zhan Y B, Zhang R. No-reference image sharpness assessment based on maximum gradient and variability of gradients. IEEE Transactions on Multimedia, 2018, 20(7): 1796-1808 doi: 10.1109/TMM.2017.2780770
[15] Wang X W, Liang X, Zheng J J, Zhou H J. Fast detection and segmentation of partial image blur based on discrete Walsh-Hadamard transform. Signal Processing: Image Communication, 2019, 70: 47-56 doi: 10.1016/j.image.2018.09.007
[16] Liao L F, Zhang X, Zhao F Q, Zhong T, Pei Y C, Xu X M, et al. Joint image quality assessment and brain extraction of fetal MRI using deep learning. In: Proceedings of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer, 2020. 415−424
[17] Li D Q, Jiang T T. Blur-specific no-reference image quality assessment: A classification and review of representative methods. In: Proceedings of the 2019 International Conference on Sensing and Imaging. Cham: Springer, 2019. 45−68
[18] Dharmishtha P, Jaliya U K, Vasava H D. A review: No-reference/blind image quality assessment. International Research Journal of Engineering and Technology, 2017, 4(1): 339-343
[19] Yang X H, Li F, Liu H T. A survey of DNN methods for blind image quality assessment. IEEE Access, 2019, 7: 123788-123806 doi: 10.1109/ACCESS.2019.2938900
[20] 王志明. 无参考图像质量评价综述. 自动化学报, 2015, 41(6): 1062-1079 Wang Zhi-Ming. Review of no-reference image quality assessment. Acta Automatica Sinica, 2015, 41(6): 1062-1079
[21] Ciancio A, da Costa A L N T T, da Silva E A B, Said A, Samadani R, Obrador P. No-reference blur assessment of digital pictures based on multifeature classifiers. IEEE Transactions on Image Processing, 2011, 20(1): 64-75 doi: 10.1109/TIP.2010.2053549
[22] Sheikh H R, Sabir M F, Bovik A C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Transactions on Image Processing, 2006, 15(11): 3440-3451 doi: 10.1109/TIP.2006.881959
[23] Zhu X, Milanfar P. Removing atmospheric turbulence via space-invariant deconvolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(1): 157-170 doi: 10.1109/TPAMI.2012.82
[24] Franzen R. Kodak Lossless True Color Image Suite [Online], available: http://www.r0k.us/graphics/kodak/, May 1, 1999
[25] Larson E C, Chandler D M. Most apparent distortion: Full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging, 2010, 19(1): Article No. 011006
[26] Ponomarenko N N, Lukin V V, Zelensky A, Egiazarian K, Astola J, Carli M, et al. TID2008 - a database for evaluation of full-reference visual quality assessment metrics. Advances of Modern Radioelectronics, 2009, 10: 30-45
[27] Ponomarenko N, Ieremeiev O, Lukin V, Egiazarian K, Jin L N, Astola J, et al. Color image database TID2013: Peculiarities and preliminary results. In: Proceedings of the 2013 European Workshop on Visual Information Processing (EUVIP). Paris, France: IEEE, 2013. 106−111
[28] Le Callet P, Autrusseau F. Subjective quality assessment IRCCyN/IVC database [Online], available: http://www.irccyn.ec-nantes.fr/ivcdb/, February 4, 2015
[29] Zarić A E, Tatalović N, Brajković N, Hlevnjak H, Lončarić M, Dumić E, et al. VCL@FER image quality assessment database. Automatika, 2012, 53(4): 344-354 doi: 10.7305/automatika.53-4.241
[30] Chandler D M, Hemami S S. VSNR: A wavelet-based visual signal-to-noise ratio for natural images. IEEE Transactions on Image Processing, 2007, 16(9): 2284-2298 doi: 10.1109/TIP.2007.901820
[31] Lin H H, Hosu V, Saupe D. KADID-10k: A large-scale artificially distorted IQA database. In: Proceedings of the 11th International Conference on Quality of Multimedia Experience (QoMEX). Berlin, Germany: IEEE, 2019. 1−3
[32] Gu K, Zhai G T, Yang X K, Zhang W J. Hybrid no-reference quality metric for singly and multiply distorted images. IEEE Transactions on Broadcasting, 2014, 60(3): 555-567 doi: 10.1109/TBC.2014.2344471
[33] Jayaraman D, Mittal A, Moorthy A K, Bovik A C. Objective quality assessment of multiply distorted images. In: Proceedings of the 2012 Conference Record of the 46th Asilomar Conference on Signals, Systems and Computers (ASILOMAR). Pacific Grove, USA: IEEE, 2012. 1693−1697
[34] Sun W, Zhou F, Liao Q M. MDID: A multiply distorted image database for image quality assessment. Pattern Recognition, 2017, 61: 153-168 doi: 10.1016/j.patcog.2016.07.033
[35] Virtanen T, Nuutinen M, Vaahteranoksa M, Oittinen P, Häkkinen J. CID2013: A database for evaluating no-reference image quality assessment algorithms. IEEE Transactions on Image Processing, 2015, 24(1): 390-402 doi: 10.1109/TIP.2014.2378061
[36] Ghadiyaram D, Bovik A C. Massive online crowdsourced study of subjective and objective picture quality. IEEE Transactions on Image Processing, 2016, 25(1): 372-387 doi: 10.1109/TIP.2015.2500021
[37] Ghadiyaram D, Bovik A C. LIVE in the wild image quality challenge database [Online], available: http://live.ece.utexas.edu/research/ChallengeDB/index.html, 2015
[38] Hosu V, Lin H H, Sziranyi T, Saupe D. KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing, 2020, 29: 4041-4056 doi: 10.1109/TIP.2020.2967829
[39] Zhu X, Milanfar P. Image reconstruction from videos distorted by atmospheric turbulence. In: Proceedings of the SPIE 7543, Visual Information Processing and Communication. San Jose, USA: SPIE, 2010. 75430S
[40] Marziliano P, Dufaux F, Winkler S, Ebrahimi T. Perceptual blur and ringing metrics: Application to JPEG2000. Signal Processing: Image Communication, 2004, 19(2): 163-172 doi: 10.1016/j.image.2003.08.003
[41] 赵巨峰, 冯华君, 徐之海, 李奇. 基于模糊度和噪声水平的图像质量评价方法. 光电子•激光, 2010, 21(7): 1062-1066 Zhao Ju-Feng, Feng Hua-Jun, Xu Zhi-Hai, Li Qi. Image quality assessment based on blurring and noise level. Journal of Optoelectronics • Laser, 2010, 21(7): 1062-1066
[42] Zhang F Y, Roysam B. Blind quality metric for multidistortion images based on cartoon and texture decomposition. IEEE Signal Processing Letters, 2016, 23(9): 1265-1269 doi: 10.1109/LSP.2016.2594166
[43] Ferzli R, Karam L J. A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE Transactions on Image Processing, 2009, 18(4): 717-728 doi: 10.1109/TIP.2008.2011760
[44] Narvekar N D, Karam L J. A no-reference image blur metric based on the cumulative probability of blur detection (CPBD). IEEE Transactions on Image Processing, 2011, 20(9): 2678-2683 doi: 10.1109/TIP.2011.2131660
[45] Wu S Q, Lin W S, Xie S L, Lu Z K, Ong E P, Yao S S. Blind blur assessment for vision-based applications. Journal of Visual Communication and Image Representation, 2009, 20(4): 231-241 doi: 10.1016/j.jvcir.2009.03.002
[46] Ong E P, Lin W S, Lu Z K, Yang X K, Yao S S, Pan F, et al. A no-reference quality metric for measuring image blur. In: Proceedings of the 7th International Symposium on Signal Processing and Its Applications. Paris, France: IEEE, 2003. 469−472
[47] Bahrami K, Kot A C. A fast approach for no-reference image sharpness assessment based on maximum local variation. IEEE Signal Processing Letters, 2014, 21(6): 751-755 doi: 10.1109/LSP.2014.2314487
[48] 蒋平, 张建州. 基于局部最大梯度的无参考图像质量评价. 电子与信息学报, 2015, 37(11): 2587-2593 Jiang Ping, Zhang Jian-Zhou. No-reference image quality assessment based on local maximum gradient. Journal of Electronics & Information Technology, 2015, 37(11): 2587-2593
[49] Li L D, Lin W S, Wang X S, Yang G B, Bahrami K, Kot A C. No-reference image blur assessment based on discrete orthogonal moments. IEEE Transactions on Cybernetics, 2016, 46(1): 39-50 doi: 10.1109/TCYB.2015.2392129
[50] Crete F, Dolmiere T, Ladret P, Nicolas M. The blur effect: Perception and estimation with a new no-reference perceptual blur metric. In: Proceedings of the SPIE 6492, Human Vision and Electronic Imaging XII. San Jose, USA: SPIE, 2007. 64920I
[51] Wang Z, Bovik A C, Sheikh H R, Simoncelli E P. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 2004, 13(4): 600-612 doi: 10.1109/TIP.2003.819861
[52] 桑庆兵, 苏媛媛, 李朝锋, 吴小俊. 基于梯度结构相似度的无参考模糊图像质量评价. 光电子•激光, 2013, 24(3): 573-577 Sang Qing-Bing, Su Yuan-Yuan, Li Chao-Feng, Wu Xiao-Jun. No-reference blur image quality assessment based on gradient similarity. Journal of Optoelectronics • Laser, 2013, 24(3): 573-577
[53] 邵宇, 孙富春, 李洪波. 基于视觉特性的无参考型遥感图像质量评价方法. 清华大学学报(自然科学版), 2013, 53(4): 550-555 Shao Yu, Sun Fu-Chun, Li Hong-Bo. No-reference remote sensing image quality assessment method using visual properties. Journal of Tsinghua University (Science & Technology), 2013, 53(4): 550-555
[54] Wang T, Hu C, Wu S Q, Cui J L, Zhang L Y, Yang Y P, et al. NRFSIM: A no-reference image blur metric based on FSIM and re-blur approach. In: Proceedings of the 2017 IEEE International Conference on Information and Automation (ICIA). Macau, China: IEEE, 2017. 698−703
[55] Zhang L, Zhang L, Mou X Q, Zhang D. FSIM: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 2011, 20(8): 2378-2386 doi: 10.1109/TIP.2011.2109730
[56] Bong D B L, Khoo B E. An efficient and training-free blind image blur assessment in the spatial domain. IEICE Transactions on Information and Systems, 2014, E97-D(7): 1864-1871 doi: 10.1587/transinf.E97.D.1864
[57] 王红玉, 冯筠, 牛维, 卜起荣, 贺小伟. 基于再模糊理论的无参考图像质量评价. 仪器仪表学报, 2016, 37(7): 1647-1655 doi: 10.3969/j.issn.0254-3087.2016.07.026 Wang Hong-Yu, Feng Jun, Niu Wei, Bu Qi-Rong, He Xiao-Wei. No-reference image quality assessment based on re-blur theory. Chinese Journal of Scientific Instrument, 2016, 37(7): 1647-1655
[58] 王冠军, 吴志勇, 云海姣, 梁敏华, 杨华. 结合图像二次模糊范围和奇异值分解的无参考模糊图像质量评价. 计算机辅助设计与图形学学报, 2016, 28(4): 653-661 doi: 10.3969/j.issn.1003-9775.2016.04.016 Wang Guan-Jun, Wu Zhi-Yong, Yun Hai-Jiao, Liang Min-Hua, Yang Hua. No-reference quality assessment for blur image combined with re-blur range and singular value decomposition. Journal of Computer-Aided Design and Computer Graphics, 2016, 28(4): 653-661
[59] Chetouani A, Mostafaoui G, Beghdadi A. A new free reference image quality index based on perceptual blur estimation. In: Proceedings of the 10th Pacific-Rim Conference on Multimedia. Bangkok, Thailand: Springer, 2009. 1185−1196
[60] Sang Q B, Qi H X, Wu X J, Li C F, Bovik A C. No-reference image blur index based on singular value curve. Journal of Visual Communication and Image Representation, 2014, 25(7): 1625-1630 doi: 10.1016/j.jvcir.2014.08.002
[61] Qureshi M A, Deriche M, Beghdadi A. Quantifying blur in colour images using higher order singular values. Electronics Letters, 2016, 52(21): 1755-1757 doi: 10.1049/el.2016.1792
[62] Zhai G T, Wu X L, Yang X K, Lin W S, Zhang W J. A psychovisual quality metric in free-energy principle. IEEE Transactions on Image Processing, 2012, 21(1): 41-52 doi: 10.1109/TIP.2011.2161092
[63] Gu K, Zhai G T, Lin W S, Yang X K, Zhang W J. No-reference image sharpness assessment in autoregressive parameter space. IEEE Transactions on Image Processing, 2015, 24(10): 3218-3231 doi: 10.1109/TIP.2015.2439035
[64] Chetouani A, Beghdadi A, Deriche M. A new reference-free image quality index for blur estimation in the frequency domain. In: Proceedings of the 2009 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). Ajman, United Arab Emirates: IEEE, 2009. 155−159
[65] Vu C T, Phan T D, Chandler D M. S3: A spectral and spatial measure of local perceived sharpness in natural images. IEEE Transactions on Image Processing, 2012, 21(3): 934-945 doi: 10.1109/TIP.2011.2169974
[66] 卢彦飞, 张涛, 郑健, 李铭, 章程. 基于局部标准差与显著图的模糊图像质量评价方法. 吉林大学学报(工学版), 2016, 46(4): 1337-1343 Lu Yan-Fei, Zhang Tao, Zheng Jian, Li Ming, Zhang Cheng. No-reference blurring image quality assessment based on local standard deviation and saliency map. Journal of Jilin University (Engineering and Technology Edition), 2016, 46(4): 1337-1343
[67] Marichal X, Ma W Y, Zhang H J. Blur determination in the compressed domain using DCT information. In: Proceedings of the 1999 International Conference on Image Processing (Cat. 99CH36348). Kobe, Japan: IEEE, 1999. 386−390
[68] Caviedes J, Oberti F. A new sharpness metric based on local kurtosis, edge and energy information. Signal Processing: Image Communication, 2004, 19(2): 147-161 doi: 10.1016/j.image.2003.08.002
[69] 张士杰, 李俊山, 杨亚威, 张仲敏. 湍流退化红外图像降晰函数辨识. 光学 精密工程, 2013, 21(2): 514-521 doi: 10.3788/OPE.20132102.0514 Zhang Shi-Jie, Li Jun-Shan, Yang Ya-Wei, Zhang Zhong-Min. Blur identification of turbulence-degraded IR images. Optics and Precision Engineering, 2013, 21(2): 514-521
[70] Zhang S Q, Wu T, Xu X H, Cheng Z M, Chang C C. No-reference image blur assessment based on SIFT and DCT. Journal of Information Hiding and Multimedia Signal Processing, 2018, 9(1): 219-231
[71] Zhang S Q, Li P C, Xu X H, Li L, Chang C C. No-reference image blur assessment based on response function of singular values. Symmetry, 2018, 10(8): Article No. 304
[72] 卢亚楠, 谢凤英, 周世新, 姜志国, 孟如松. 皮肤镜图像散焦模糊与光照不均混叠时的无参考质量评价. 自动化学报, 2014, 40(3): 480-488 Lu Ya-Nan, Xie Feng-Ying, Zhou Shi-Xin, Jiang Zhi-Guo, Meng Ru-Song. Non-reference quality assessment of dermoscopy images with defocus blur and uneven illumination distortion. Acta Automatica Sinica, 2014, 40(3): 480-488
[73] Tong H H, Li M J, Zhang H J, Zhang C S. Blur detection for digital images using wavelet transform. In: Proceedings of the 2004 IEEE International Conference on Multimedia and Expo (ICME). Taipei, China: IEEE, 2004. 17−20
[74] Ferzli R, Karam L J. No-reference objective wavelet based noise immune image sharpness metric. In: Proceedings of the 2005 IEEE International Conference on Image Processing. Genova, Italy: IEEE, 2005. Article No. I-405
[75] Kerouh F. A no reference quality metric for measuring image blur in wavelet domain. International Journal of Digital Information and Wireless Communications, 2012, 4(1): 803-812
[76] Vu P V, Chandler D M. A fast wavelet-based algorithm for global and local image sharpness estimation. IEEE Signal Processing Letters, 2012, 19(7): 423-426 doi: 10.1109/LSP.2012.2199980
[77] Gvozden G, Grgic S, Grgic M. Blind image sharpness assessment based on local contrast map statistics. Journal of Visual Communication and Image Representation, 2018, 50: 145-158 doi: 10.1016/j.jvcir.2017.11.017
[78] Wang Z, Simoncelli E P. Local phase coherence and the perception of blur. In: Proceedings of the 16th International Conference on Neural Information Processing Systems. Whistler, British Columbia, Canada: MIT Press, 2003. 1435−1442
[79] Ciancio A, da Costa A L N T, da Silva E A B, Said A, Samadani R, Obrador P. Objective no-reference image blur metric based on local phase coherence. Electronics Letters, 2009, 45(23): 1162-1163 doi: 10.1049/el.2009.1800
[80] Hassen R, Wang Z, Salama M. No-reference image sharpness assessment based on local phase coherence measurement. In: Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. Dallas, USA: IEEE, 2010. 2434−2437
[81] Hassen R, Wang Z, Salama M M A. Image sharpness assessment based on local phase coherence. IEEE Transactions on Image Processing, 2013, 22(7): 2798-2810 doi: 10.1109/TIP.2013.2251643
[82] Do M N, Vetterli M. The contourlet transform: An efficient directional multiresolution image representation. IEEE Transactions on Image Processing, 2005, 14(12): 2091-2106 doi: 10.1109/TIP.2005.859376
[83] 楼斌, 沈海斌, 赵武锋, 严晓浪. 基于自然图像统计的无参考图像质量评价. 浙江大学学报(工学版), 2010, 44(2): 248-252 doi: 10.3785/j.issn.1008-973X.2010.02.007 Lou Bin, Shen Hai-Bin, Zhao Wu-Feng, Yan Xiao-Lang. No-reference image quality assessment based on statistical model of natural image. Journal of Zhejiang University (Engineering Science), 2010, 44(2): 248-252
[84] 焦淑红, 齐欢, 林维斯, 唐琳, 申维和. 基于Contourlet统计特性的无参考图像质量评价. 吉林大学学报(工学版), 2016, 46(2): 639-645 Jiao Shu-Hong, Qi Huan, Lin Wei-Si, Tang Lin, Shen Wei-He. No-reference quality assessment based on the statistics in Contourlet domain. Journal of Jilin University (Engineering and Technology Edition), 2016, 46(2): 639-645
[85] Hosseini M S, Zhang Y Y, Plataniotis K N. Encoding visual sensitivity by MaxPol convolution filters for image sharpness assessment. IEEE Transactions on Image Processing, 2019, 28(9): 4510-4525 doi: 10.1109/TIP.2019.2906582
[86] Moorthy A K, Bovik A C. A two-step framework for constructing blind image quality indices. IEEE Signal Processing Letters, 2010, 17(5): 513-516 doi: 10.1109/LSP.2010.2043888
[87] Moorthy A K, Bovik A C. Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Transactions on Image Processing, 2011, 20(12): 3350-3364 doi: 10.1109/TIP.2011.2147325
[88] Liu L X, Liu B, Huang H, Bovik A C. No-reference image quality assessment based on spatial and spectral entropies. Signal Processing: Image Communication, 2014, 29(8): 856-863 doi: 10.1016/j.image.2014.06.006
[89] 陈勇, 帅锋, 樊强. 基于自然统计特征分布的无参考图像质量评价. 电子与信息学报, 2016, 38(7): 1645-1653 Chen Yong, Shuai Feng, Fan Qiang. A no-reference image quality assessment based on distribution characteristics of natural statistics. Journal of Electronics and Information Technology, 2016, 38(7): 1645-1653
[90] Zhang Y, Chandler D M. Opinion-unaware blind quality assessment of multiply and singly distorted images via distortion parameter estimation. IEEE Transactions on Image Processing, 2018, 27(11): 5433-5448 doi: 10.1109/TIP.2018.2857413
[91] Saad M A, Bovik A C, Charrier C. Blind image quality assessment: A natural scene statistics approach in the DCT domain. IEEE Transactions on Image Processing, 2012, 21(8): 3339-3352 doi: 10.1109/TIP.2012.2191563
[92] Saad M A, Bovik A C, Charrier C. A DCT statistics-based blind image quality index. IEEE Signal Processing Letters, 2010, 17(6): 583-586 doi: 10.1109/LSP.2010.2045550
[93] Liu L X, Dong H P, Huang H, Bovik A C. No-reference image quality assessment in curvelet domain. Signal Processing: Image Communication, 2014, 29(4): 494-505 doi: 10.1016/j.image.2014.02.004
[94] Zhang Y, Chandler D M. No-reference image quality assessment based on log-derivative statistics of natural scenes. Journal of Electronic Imaging, 2013, 22(4): Article No. 043025
[95] 李俊峰. 基于RGB色彩空间自然场景统计的无参考图像质量评价. 自动化学报, 2015, 41(9): 1601-1615 Li Jun-Feng. No-reference image quality assessment based on natural scene statistics in RGB color space. Acta Automatica Sinica, 2015, 41(9): 1601-1615
[96] Mittal A, Moorthy A K, Bovik A C. No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing, 2012, 21(12): 4695-4708 doi: 10.1109/TIP.2012.2214050
[97] 唐祎玲, 江顺亮, 徐少平. 基于非零均值广义高斯模型与全局结构相关性的BRISQUE改进算法. 计算机辅助设计与图形学学报, 2018, 30(2): 298-308 Tang Yi-Ling, Jiang Shun-Liang, Xu Shao-Ping. An improved BRISQUE algorithm based on non-zero mean generalized Gaussian model and global structural correlation coefficients. Journal of Computer-Aided Design & Computer Graphics, 2018, 30(2): 298-308
[98] Ye P, Doermann D. No-reference image quality assessment using visual codebooks. IEEE Transactions on Image Processing, 2012, 21(7): 3129-3138 doi: 10.1109/TIP.2012.2190086
[99] Xue W F, Mou X Q, Zhang L, Bovik A C, Feng X C. Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features. IEEE Transactions on Image Processing, 2014, 23(11): 4850-4862 doi: 10.1109/TIP.2014.2355716
[100] Smola A J, Schölkopf B. A tutorial on support vector regression. Statistics and Computing, 2004, 14(3): 199-222 doi: 10.1023/B:STCO.0000035301.49549.88
[101] 陈勇, 吴明明, 房昊, 刘焕淋. 基于差异激励的无参考图像质量评价. 自动化学报, 2020, 46(8): 1727-1737 Chen Yong, Wu Ming-Ming, Fang Hao, Liu Huan-Lin. No-reference image quality assessment based on differential excitation. Acta Automatica Sinica, 2020, 46(8): 1727-1737
[102] Li Q H, Lin W S, Xu J T, Fang Y M. Blind image quality assessment using statistical structural and luminance features. IEEE Transactions on Multimedia, 2016, 18(12): 2457-2469 doi: 10.1109/TMM.2016.2601028
[103] Li C F, Zhang Y, Wu X J, Zheng Y H. A multi-scale learning local phase and amplitude blind image quality assessment for multiply distorted images. IEEE Access, 2018, 6: 64577-64586 doi: 10.1109/ACCESS.2018.2877714
[104] Gao F, Tao D C, Gao X B, Li X L. Learning to rank for blind image quality assessment. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(10): 2275-2290 doi: 10.1109/TNNLS.2014.2377181
[105] 桑庆兵, 李朝锋, 吴小俊. 基于灰度共生矩阵的无参考模糊图像质量评价方法. 模式识别与人工智能, 2013, 26(5): 492-497 doi: 10.3969/j.issn.1003-6059.2013.05.012 Sang Qing-Bing, Li Chao-Feng, Wu Xiao-Jun. No-reference blurred image quality assessment based on gray level co-occurrence matrix. Pattern Recognition and Artificial Intelligence, 2013, 26(5): 492-497
[106] Oh T, Park J, Seshadrinathan K, Lee S, Bovik A C. No-reference sharpness assessment of camera-shaken images by analysis of spectral structure. IEEE Transactions on Image Processing, 2014, 23(12): 5428-5439 doi: 10.1109/TIP.2014.2364925
[107] Li L D, Xia W H, Lin W S, Fang Y M, Wang S Q. No-reference and robust image sharpness evaluation based on multiscale spatial and spectral features. IEEE Transactions on Multimedia, 2017, 19(5): 1030-1040 doi: 10.1109/TMM.2016.2640762
[108] Li L D, Yan Y, Lu Z L, Wu J J, Gu K, Wang S Q. No-reference quality assessment of deblurred images based on natural scene statistics. IEEE Access, 2017, 5: 2163-2171 doi: 10.1109/ACCESS.2017.2661858
[109] Liu L X, Gong J C, Huang H, Sang Q B. Blind image blur metric based on orientation-aware local patterns. Signal Processing: Image Communication, 2020, 80: Article No. 115654
[110] Cai H, Wang M J, Mao W D, Gong M L. No-reference image sharpness assessment based on discrepancy measures of structural degradation. Journal of Visual Communication and Image Representation, 2020, 71: Article No. 102861
[111] 李朝锋, 唐国凤, 吴小俊, 琚宜文. 学习相位一致特征的无参考图像质量评价. 电子与信息学报, 2013, 35(2): 484-488 Li Chao-Feng, Tang Guo-Feng, Wu Xiao-Jun, Ju Yi-Wen. No-reference image quality assessment with learning phase congruency feature. Journal of Electronics and Information Technology, 2013, 35(2): 484-488
[112] Li C F, Bovik A C, Wu X J. Blind image quality assessment using a general regression neural network. IEEE Transactions on Neural Networks, 2011, 22(5): 793-799 doi: 10.1109/TNN.2011.2120620
[113] Liu L X, Hua Y, Zhao Q J, Huang H, Bovik A C. Blind image quality assessment by relative gradient statistics and adaboosting neural network. Signal Processing: Image Communication, 2016, 40: 1-15 doi: 10.1016/j.image.2015.10.005
[114] 沈丽丽, 杭宁. 联合多种边缘检测算子的无参考质量评价算法. 工程科学学报, 2018, 40(8): 996-1004 Shen Li-Li, Hang Ning. No-reference image quality assessment using joint multiple edge detection. Chinese Journal of Engineering, 2018, 40(8): 996-1004
[115] Liu Y T, Gu K, Wang S Q, Zhao D B, Gao W. Blind quality assessment of camera images based on low-level and high-level statistical features. IEEE Transactions on Multimedia, 2019, 21(1): 135-146 doi: 10.1109/TMM.2018.2849602
[116] Kang L, Ye P, Li Y, Doermann D. Convolutional neural networks for no-reference image quality assessment. In: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Columbus, USA: IEEE, 2014. 1733−1740
[117] Kim J, Lee S. Fully deep blind image quality predictor. IEEE Journal of Selected Topics in Signal Processing, 2017, 11(1): 206-220 doi: 10.1109/JSTSP.2016.2639328
[118] Kim J, Nguyen A D, Lee S. Deep CNN-based blind image quality predictor. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(1): 11-24 doi: 10.1109/TNNLS.2018.2829819
[119] Guan J W, Yi S, Zeng X Y, Cham W K, Wang X G. Visual importance and distortion guided deep image quality assessment framework. IEEE Transactions on Multimedia, 2017, 19(11): 2505-2520 doi: 10.1109/TMM.2017.2703148
[120] Bianco S, Celona L, Napoletano P, Schettini R. On the use of deep learning for blind image quality assessment. Signal, Image and Video Processing, 2018, 12(2): 355-362 doi: 10.1007/s11760-017-1166-8
[121] Pan D, Shi P, Hou M, Ying Z F, Fu S Z, Zhang Y. Blind predicting similar quality map for image quality assessment. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, USA: IEEE, 2018. 6373−6382
[122] He L H, Zhong Y Z, Lu W, Gao X B. A visual residual perception optimized network for blind image quality assessment. IEEE Access, 2019, 7: 176087-176098 doi: 10.1109/ACCESS.2019.2957292
[123] Zhang W X, Ma K D, Yan J, Deng D X, Wang Z. Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 30(1): 36-47 doi: 10.1109/TCSVT.2018.2886771
[124] Cai W P, Fan C E, Zou L, Liu Y F, Ma Y, Wu M Y. Blind image quality assessment based on classification guidance and feature aggregation. Electronics, 2020, 9(11): Article No. 1811
[125] Li D Q, Jiang T T, Jiang M. Exploiting high-level semantics for no-reference image quality assessment of realistic blur images. In: Proceedings of the 25th ACM International Conference on Multimedia. Mountain View, USA: ACM, 2017. 378−386
[126] Yu S D, Jiang F, Li L D, Xie Y Q. CNN-GRNN for image sharpness assessment. In: Proceedings of the 2016 Asian Conference on Computer Vision. Taipei, China: Springer, 2016. 50−61
[127] Yu S D, Wu S B, Wang L, Jiang F, Xie Y Q, Li L D. A shallow convolutional neural network for blind image sharpness assessment. PLoS One, 2017, 12(5): Article No. e0176632
[128] Li D Q, Jiang T T, Lin W S, Jiang M. Which has better visual quality: The clear blue sky or a blurry animal? IEEE Transactions on Multimedia, 2019, 21(5): 1221-1234 doi: 10.1109/TMM.2018.2875354
[129] Li Y M, Po L M, Xu X Y, Feng L T, Yuan F, Cheung C H, et al. No-reference image quality assessment with shearlet transform and deep neural networks. Neurocomputing, 2015, 154: 94-109 doi: 10.1016/j.neucom.2014.12.015
[130] Gao F, Yu J, Zhu S G, Huang Q M, Tian Q. Blind image quality prediction by exploiting multi-level deep representations. Pattern Recognition, 2018, 81: 432-442 doi: 10.1016/j.patcog.2018.04.016
[131] Bosse S, Maniry D, Müller K R, Wiegand T, Samek W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Transactions on Image Processing, 2018, 27(1): 206-219 doi: 10.1109/TIP.2017.2760518
[132] Ma K D, Liu W T, Zhang K, Duanmu Z F, Wang Z, Zuo W M. End-to-end blind image quality assessment using deep neural networks. IEEE Transactions on Image Processing, 2018, 27(3): 1202-1213 doi: 10.1109/TIP.2017.2774045
[133] Yang S, Jiang Q P, Lin W S, Wang Y T. SGDNet: An end-to-end saliency-guided deep neural network for no-reference image quality assessment. In: Proceedings of the 27th ACM International Conference on Multimedia. Nice, France: ACM, 2019. 1383−1391
[134] Yan B, Bare B, Tan W M. Naturalness-aware deep no-reference image quality assessment. IEEE Transactions on Multimedia, 2019, 21(10): 2603-2615 doi: 10.1109/TMM.2019.2904879
[135] Yan Q S, Gong D, Zhang Y N. Two-stream convolutional networks for blind image quality assessment. IEEE Transactions on Image Processing, 2019, 28(5): 2200-2211 doi: 10.1109/TIP.2018.2883741
[136] Lin K Y, Wang G X. Hallucinated-IQA: No-reference image quality assessment via adversarial learning. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, USA: IEEE, 2018. 732−741
[137] Yang H T, Shi P, Zhong D X, Pan D, Ying Z F. Blind image quality assessment of natural distorted image based on generative adversarial networks. IEEE Access, 2019, 7: 179290-179303 doi: 10.1109/ACCESS.2019.2957235
[138] Hou W L, Gao X B, Tao D C, Li X L. Blind image quality assessment via deep learning. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(6): 1275-1286 doi: 10.1109/TNNLS.2014.2336852
[139] He S Y, Liu Z Z. Image quality assessment based on adaptive multiple Skyline query. Signal Processing: Image Communication, 2020, 80: Article No. 115676
[140] Ma K D, Liu W T, Liu T L, Wang Z, Tao D C. dipIQ: Blind image quality assessment by learning-to-rank discriminable image pairs. IEEE Transactions on Image Processing, 2017, 26(8): 3951-3964 doi: 10.1109/TIP.2017.2708503
[141] Zhang Y B, Wang H Q, Tan F F, Chen W J, Wu Z R. No-reference image sharpness assessment based on rank learning. In: Proceedings of the 2019 International Conference on Image Processing (ICIP). Taipei, China: IEEE, 2019. 2359−2363
[142] Yang J C, Sim K, Jiang B, Lu W. Blind image quality assessment utilising local mean eigenvalues. Electronics Letters, 2018, 54(12): 754-756 doi: 10.1049/el.2018.0958
[143] Li L D, Wu D, Wu J J, Li H L, Lin W S, Kot A C. Image sharpness assessment by sparse representation. IEEE Transactions on Multimedia, 2016, 18(6): 1085-1097 doi: 10.1109/TMM.2016.2545398
[144] Lu Q B, Zhou W G, Li H Q. A no-reference image sharpness metric based on structural information using sparse representation. Information Sciences, 2016, 369: 334-346 doi: 10.1016/j.ins.2016.06.042
[145] Ye P, Kumar J, Kang L, Doermann D. Unsupervised feature learning framework for no-reference image quality assessment. In: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, USA: IEEE, 2012. 1098−1105
[146] Xu J T, Ye P, Li Q H, Du H Q, Liu Y, Doermann D. Blind image quality assessment based on high order statistics aggregation. IEEE Transactions on Image Processing, 2016, 25(9): 4444-4457 doi: 10.1109/TIP.2016.2585880
[147] Xue W F, Zhang L, Mou X Q. Learning without human scores for blind image quality assessment. In: Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, USA: IEEE, 2013. 995−1002
[148] Wu Q B, Li H L, Meng F M, Ngan K N, Luo B, Huang C, et al. Blind image quality assessment based on multichannel feature fusion and label transfer. IEEE Transactions on Circuits and Systems for Video Technology, 2016, 26(3): 425-440 doi: 10.1109/TCSVT.2015.2412773
[149] Jiang Q P, Shao F, Lin W S, Gu K, Jiang G Y, Sun H F. Optimizing multistage discriminative dictionaries for blind image quality assessment. IEEE Transactions on Multimedia, 2018, 20(8): 2035-2048 doi: 10.1109/TMM.2017.2763321
[150] Mittal A, Soundararajan R, Bovik A C. Making a "completely blind" image quality analyzer. IEEE Signal Processing Letters, 2013, 20(3): 209-212 doi: 10.1109/LSP.2012.2227726
[151] Zhang L, Zhang L, Bovik A C. A feature-enriched completely blind image quality evaluator. IEEE Transactions on Image Processing, 2015, 24(8): 2579-2591 doi: 10.1109/TIP.2015.2426416
[152] Jiao S H, Qi H, Lin W S, Shen W H. Fast and efficient blind image quality index in spatial domain. Electronics Letters, 2013, 49(18): 1137-1138 doi: 10.1049/el.2013.1837
[153] Abdalmajeed S, Jiao S H. No-reference image quality assessment algorithm based on Weibull statistics of log-derivatives of natural scenes. Electronics Letters, 2014, 50(8): 595-596 doi: 10.1049/el.2013.3585
[154] 南栋, 毕笃彦, 查宇飞, 张泽, 李权合. 基于参数估计的无参考型图像质量评价算法. 电子与信息学报, 2013, 35(9): 2066-2072 Nan Dong, Bi Du-Yan, Zha Yu-Fei, Zhang Ze, Li Quan-He. A no-reference image quality assessment method based on parameter estimation. Journal of Electronics & Information Technology, 2013, 35(9): 2066-2072
[155] Panetta K, Gao C, Agaian S. No reference color image contrast and quality measures. IEEE Transactions on Consumer Electronics, 2013, 59(3): 643-651 doi: 10.1109/TCE.2013.6626251
[156] Gu J, Meng G F, Redi J A, Xiang S M, Pan C H. Blind image quality assessment via vector regression and object oriented pooling. IEEE Transactions on Multimedia, 2018, 20(5): 1140-1153 doi: 10.1109/TMM.2017.2761993
[157] Wu Q B, Li H L, Wang Z, Meng F M, Luo B, Li W, et al. Blind image quality assessment based on rank-order regularized regression. IEEE Transactions on Multimedia, 2017, 19(11): 2490-2504 doi: 10.1109/TMM.2017.2700206
[158] Al-Bandawi H, Deng G. Blind image quality assessment based on Benford's law. IET Image Processing, 2018, 12(11): 1983-1993 doi: 10.1049/iet-ipr.2018.5385
[159] Wu Q B, Li H L, Ngan K N, Ma K D. Blind image quality assessment using local consistency aware retriever and uncertainty aware evaluator.
IEEE Transactions on Circuits and Systems for Video Technology, 2018, 28(9): 2078-2089 doi: 10.1109/TCSVT.2017.2710419 [160] Deng C W, Wang S G, Li Z, Huang G B, Lin W S. Content-insensitive blind image blurriness assessment using Weibull statistics and sparse extreme learning machine. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2019, 49(3): 516-527 doi: 10.1109/TSMC.2017.2718180 [161] Wang Z, Li Q. Information content weighting for perceptual image quality assessment. IEEE Transactions on Image Processing, 2011, 20(5): 1185-1198 doi: 10.1109/TIP.2010.2092435 -