

模糊失真图像无参考质量评价综述

陈健 李诗云 林丽 王猛 李佐勇

引用本文: 陈健, 李诗云, 林丽, 王猛, 李佐勇. 模糊失真图像无参考质量评价综述. 自动化学报, 2022, 48(3): 689−711 doi: 10.16383/j.aas.c201030
Citation: Chen Jian, Li Shi-Yun, Lin Li, Wang Meng, Li Zuo-Yong. A review on no-reference quality assessment for blurred image. Acta Automatica Sinica, 2022, 48(3): 689−711 doi: 10.16383/j.aas.c201030

模糊失真图像无参考质量评价综述

doi: 10.16383/j.aas.c201030
基金项目: 国家自然科学基金(61972187), 福建省自然科学基金(2020J02024, 2018J01637), 福州市科技计划项目(2020-RC-186), 福建省信息处理与智能控制重点实验室(闽江学院)开放课题(MJUKF-IPIC202110)资助
详细信息
    作者简介:

    陈健:福建工程学院电子电气与物理学院副教授. 2015年获得福州大学通信与信息系统专业博士学位. 主要研究方向为计算机视觉, 深度学习, 医学图像处理与分析. 本文通信作者. E-mail: jchen321@126.com

    李诗云:福建工程学院电子电气与物理学院硕士研究生. 主要研究方向为图像处理和机器学习. E-mail: 13997691527@163.com

    林丽:福建工程学院电子电气与物理学院讲师. 2009年获得福州大学信号与信息处理专业硕士学位. 主要研究方向为机器视觉及信号处理. E-mail: linli@fjut.edu.cn

    王猛:福建工程学院电子电气与物理学院硕士研究生. 主要研究方向为计算机视觉. E-mail: wm15720503705@163.com

    李佐勇:闽江学院计算机与控制工程学院教授. 2010年获得南京理工大学计算机应用专业博士学位. 主要研究方向为图像处理, 模式识别及深度学习. E-mail: fzulzytdq@126.com

A Review on No-reference Quality Assessment for Blurred Image

Funds: Supported by National Natural Science Foundation of China (61972187), Natural Science Foundation of Fujian Province (2020J02024, 2018J01637), Fuzhou Science and Technology Project (2020-RC-186), and Open Fund Project of Fujian Provincial Key Laboratory of Information Processing and Intelligent Control (Minjiang University) (MJUKF-IPIC202110)
More Information
    Author Bio:

    CHEN Jian Associate professor at the School of Electronic, Electrical Engineering and Physics, Fujian University of Technology. He received his Ph.D. degree in communication and information system from Fuzhou University in 2015. His research interest covers computer vision, deep learning, and medical image processing and analysis. Corresponding author of this paper

    LI Shi-Yun Master student at the School of Electronic, Electrical Engineering and Physics, Fujian University of Technology. His research interest covers image processing and machine learning

    LIN Li Lecturer at the School of Electronic, Electrical Engineering and Physics, Fujian University of Technology. She received her master degree in signal and information processing from Fuzhou University in 2009. Her research interest covers machine vision and signal processing

    WANG Meng Master student at the School of Electronic, Electrical Engineering and Physics, Fujian University of Technology. His main research interest is computer vision

    LI Zuo-Yong Professor at the College of Computer and Control Engineering, Minjiang University. He received his Ph.D. degree in computer application from Nanjing University of Science and Technology in 2010. His research interest covers image processing, pattern recognition, and deep learning

  • 摘要: 图像的模糊问题影响人们对信息的感知、获取及图像的后续处理. 无参考模糊图像质量评价是该问题的主要研究方向之一. 本文分析了近20年来无参考模糊图像质量评价相关技术的发展. 首先, 本文结合主要数据集对图像模糊失真进行分类说明; 其次, 对主要的无参考模糊图像质量评价方法进行分类介绍与详细分析; 随后, 介绍了用来比较无参考模糊图像质量评价方法性能优劣的主要评价指标; 接着, 选择典型数据集及评价指标, 并采用常见的无参考模糊图像质量评价方法进行性能比较; 最后, 对无参考模糊图像质量评价的相关技术及发展趋势进行总结与展望.
传统发电调控框架在保持多区域互联大电网的系统有功平衡、维持系统频率稳定等方面发挥了重要作用. 随着相关研究的不断深入, 传统发电调控框架逐渐发展成为存在三种不同时间尺度问题的调控框架[1-2]: 1)机组组合(Unit commitment, UC)[3-4]; 2)经济调度(Economic dispatch, ED)[5]; 3)自动发电控制(Automatic generation control, AGC)和发电指令调度(Generation command dispatch, GCD)[6-9]. 然而, 传统发电调控框架在以下方面可以改善: 1)在传统发电调控框架中, 较长时间尺度下的调控有可能产生不准确的控制指令, 同时不同时间尺度调控之间存在的不协调问题有可能导致反向调节现象; 2)在传统发电调控框架中, UC和ED问题的求解以下一时间段负荷预测结果为条件, 而实时AGC和GCD则是基于AGC机组特性所得指令, 从长时间尺度的角度来看, AGC和GCD做出的控制结果并不是最优的; 3)一般情况下, 不同时间尺度下的优化目标均不相同, 因此无论对长期还是短期而言, 仅依据这些优化结果做出的调控指令都不是最优的.

    研究者为了解决传统框架中存在的部分问题, 提出了大量集成算法或集成框架.文献[10]提出针对微电网实时调度的AGC和ED集成方法.文献[11]研究了考虑含有AGC仿射索引过程的鲁棒经济调度.文献[12]从优化的角度, 将ED和AGC控制器相结合.然而, 这些算法均不能完整地对传统发电调控框架进行改善.

    强化学习(Reinforcement learning, RL), 又称再励学习、评价学习, 既可看作是人工智能领域中一种重要的机器学习方法, 也被认为是属于马尔科夫决策过程(Markov decision process, MDP)和动态优化方法的一个独立分支.互联电网AGC是一个动态多级决策问题, 其控制过程可视为马尔科夫决策过程.文献[13]针对微电网孤岛运行模式下新能源发电强随机性导致的系统频率波动, 提出基于多智能体相关均衡强化学习(Correlated equilibrium Q ($\lambda$), CEQ ($\lambda$))的微电网智能发电控制方法.文献[14]针对非马尔科夫环境下火电占优的互联电网AGC控制策略, 引入随机最优控制中Q($\lambda$)学习的"后向估计"原理, 有效解决火电机组大时滞环节带来的延时回报问题.然而, 这些方法的采用均没有从整体上对传统发电调控框架进行改善.

    为了完整地解决传统发电调控框架中存在的问题, 本文提出一种实时经济调度与控制(Real-time economic generation dispatch and control, REG)框架替代传统的发电控制框架.除此之外, 为适应REG框架, 还提出一种懒惰强化学习(Lazy reinforcement learning, LRL)算法.由于懒惰强化学习算法是一种需要大量数据的算法, 所提算法需要大量数据进行训练.因此, 采用基于人工社会-计算实验-平行执行(Artificial societies-Computational experiments-Parallel execution, ACP)和社会系统的平行系统, 在短时间内产生大量数据以适应所提算法的需要.文献[15]提出基于ACP的平行系统进行社会计算的理论.文献[16]提出一种可用于信息和控制的基于信息-物理系统和ACP的分散自治系统.平行系统或平行时代的理论已经被应用到很多领域, 例如, 平行管理系统[17]、区块链领域[18]、机器学习[19]和核电站安全可靠性的分析[20]等.在一个实际系统中, 社会目标也被考虑在CPS中, 也可称为信息物理社会融合系统(CPSS)[21]; 同时, CPS的概念中应当加入社会系统, 即"智能电网"或"能源互联网"[22].

    因此, 基于REG框架的控制方法是一种适用于互联大电网发电调度和控制的统一时间尺度的调控方法.

    虽然采用基于ACP和社会系统的平行系统可以快速获取海量的数据, 但是这些数据中既存在调控效果较好的数据, 也有调控效果较差的数据. 为了解决这一问题, 设计了一种选择算子, 对有利于LRL训练的数据进行筛选保留. 另外, 由于AGC机组存在大量约束限制, 设计了一种松弛算子对优化结果进行限制.

    为了对比人工神经网络(Artificial neural network, ANN)和LRL的调控效果, 本文设计了一种基于人工神经网络和松弛算子结合的松弛人工神经网络算法(Relaxed artificial neural network, RANN).本文提出的LRL算法的特性归纳如下:

    1) 作为一种统一时间尺度的控制器, 从长远角度来看, LRL可以避免不同时间尺度需要协同调控问题.

    2) 为LRL设计了一个强化网络, 可为一个区域的所有AGC机组提供多个输出, 且采用松弛算子满足AGC机组的约束.

    3) 懒惰学习的控制策略可以采用从平行系统不断产生的海量数据进行在线更新.这有利于LRL进行训练.

    如图 1所示, 传统发电调控框架包含UC, ED, AGC和GCD四个过程.

    图 1  传统发电调控框架
    Fig. 1  Framework of conventional generation control

    UC负责制定长期(1天)的机组开停和有功出力计划; 然后ED重新制定短期(15分钟)所有已开启的机组的发电指令; 最后AGC和GCD为所有AGC机组再次重新制定实时发电指令.

    1.1.1   机组组合模型

    UC的目标是在给定时间周期内制定出最优的机组开停和生产出力计划.因此, UC问题是一个随机混合0-1整数规划问题, 可以采用优化算法进行求解.

    UC问题的优化目标是使总发电成本最低, UC问题的约束包括:有功平衡约束、热备用约束、有功出力限制约束以及发电机调节比率约束, 其目标函数表达式及约束条件为

    $ \begin{align} &\min \sum\limits_{t = 1}^T {\sum\limits_{j = 1}^{{J_i}} {[{F_j}({P_{j, t}}){u_{j, t}} + S{U_{j, t}}(1 - {u_{j, t - 1}}){u_{j, t}}]} }\notag\\ &\, \mathrm{s.t.} \begin{cases} \sum\limits_{j = 1}^{{J_i}} {{P_{j, t}}{u_{j, t}} = P{D_{i, t}}} \\[1mm] \sum\limits_{j = 1}^{{J_i}} {P_j^{\max }{u_{j, t}} \ge P{D_{i, t}} + S{R_{i, t}}} \\[1mm] {u_{j, t}}P_j^{\min } \le {P_{j, t}} \le {u_{j, t}}P_j^{\max }\\[1mm] {P_{j, t}} - {P_{j, (t - 1)}} \le P_j^{{\rm{up}}}\\[1mm] {P_{j, (t - 1)}} - {P_{j, t}} \le P_j^{{\rm{down}}} \end{cases} \end{align} $

    (1)

    其中, $T$为给定时间周期内的时间断面的个数, 一般设定为24; $J_i$为第$i$个区域内的发电机组个数; $u_{j, t}$为第$j$个发电机组在第$t$时间断面的状态, $u_{j, t}$取值为1或0, 分别代表机组开启和关停状态; 总发电成本包括燃料成本$F_j(P_{j, t})$和启动成本$SU_{j, t}$; $P{D_{i, t}}$为第$i$个区域内在第$t$时间段内的负荷需求总量; $P_j^{\min }$和$P_j^{\max }$分别为第$i$区域的第$j$个发电机组的有功出力的最小值和最大值; $S{R_{i, t}}$为第$i$个区域内在第$t$时间段内所需的热备用容量; $P_j^{{\rm{up}}}$和$P_j^{{\rm{down}}}$分别为第$j$台发电机组的上调和下调的最大幅度限制; $T_j^{\min\mbox{-}\rm{up}}$为第$j$个发电机组的持续开启时间的最小值; $T_j^{\min\mbox{-}\rm{down}}$为第$j$个发电机组的持续停机时间的最小值.

    燃料成本$F_j(P_{j, t})$, 启动成本$SU_{j, t}$以及约束$u_{j, t}$的计算公式如下:

    $ {F_j}({P_{j, t}}) = {a_j} + {b_j}{P_{j, t}} + {c_j}P_{j, t}^2 $

    (2)

    $ \begin{align} &S{U_{j, t}} =\notag\\ &\ \ \ \begin{cases} S{U_{{\rm{H}}, j}}, & T_j^{{\rm{min\mbox{-}down}}} \le T_{j, t}^{{\rm{down}}} \le T_j^{{\rm{min\mbox{-}down}}} + T_j^{{\rm{cold}}}\\ S{U_{{\rm{C}}, j}}, &T_{j, t}^{{\rm{down}}} > T_j^{{\rm{min\mbox{-}down}}} + T_j^{{\rm{cold}}} \end{cases} \end{align} $

    (3)

    $ \begin{align} \begin{cases} T_{j}^{{\rm{up}}} \geq T_j^{\min\mbox{-}{\rm{up}}}\\ T_{j}^{{\rm{down}}} \geq T_j^{\min\mbox{-}{\rm{down}}} \end{cases} \end{align} $

    (4)

    其中, $P_{j, t}$为第$j$台发电机组在第$t$个时间断面时的有功出力; $a_j$, $b_j$和$c_j$分别是发电成本的常数因子, 一次项因子和二次项因子; $T_{j}^{{\rm{up}}}$和$T_{j}^{{\rm{down}}}$分别为第$j$台发电机组开启和关停的累积时间; $T_j^{{\rm{cold}}}$是第$j$台发电机组从完全关停状态进行冷启动所需的时间; $SU_{H, j}$和$SU_{C, j}$分别为第$j$台发电机组进行热启动和冷启动所需的成本.
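    式(2)和式(3)的计算逻辑可以用如下Python代码示意(其中的成本系数与时间阈值均为假设值, 仅用于说明, 并非本文算例参数):

```python
def fuel_cost(p, a, b, c):
    """式(2): 燃料成本 F_j(P_{j,t}) = a_j + b_j*P + c_j*P^2."""
    return a + b * p + c * p ** 2

def startup_cost(t_down, t_min_down, t_cold, su_hot, su_cold):
    """式(3): 依据机组累计停机时间选择热启动或冷启动成本."""
    if t_min_down <= t_down <= t_min_down + t_cold:
        return su_hot   # 停机时间较短, 热启动
    return su_cold      # 停机时间超过 T_j^{min-down} + T_j^{cold}, 冷启动

# 用法示例(假设系数)
print(fuel_cost(100.0, a=5.0, b=2.0, c=0.01))          # 305.0
print(startup_cost(t_down=3, t_min_down=2, t_cold=4,
                   su_hot=550, su_cold=1100))          # 550
```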

    1.1.2   经济调度模型

    ED采用优化算法从经济角度重新制定发电命令.通常ED的优化目标包括两部分:经济目标和碳排放目标.将两种优化目标进行线性权重结合, 得到最终的ED的模型如下:

    $ \begin{align} &\min {F_{{\rm{total}}}} = \sum\limits_{j = 1}^{{J_i}} {(\omega F_j^{\rm{e}}({P_j}) + (1 - \omega )F_j^{\rm{c}}({P_j}))}\notag \\ &\, \mathrm{s.t.}\begin{cases} P{D_i} - \sum\limits_{j = 1}^{{J_i}} {{P_j} = 0} \\ P_j^{\min } \le {P_j} \le P_j^{\max }\\ {P_{j, t}} - {P_{j, t - 1}} \le P_j^{{\rm{up}}}\\ {P_{j, t - 1}} - {P_{j, t}} \le P_j^{{\rm{down}}} \end{cases} \end{align} $

    (5)

    其中, $PD_i$为第$i$个区域的系统总负荷量, $\omega$为经济目标权重.

    经济目标和碳排放目标具体表达如下:

    $ \begin{align} F_{{\rm{total}}}^{\rm{e}} = \sum\limits_{j = 1}^{{J_i}} {F_j^{\rm{e}}} ({P_j}) = \sum\limits_{j = 1}^{{J_i}} {({c_j}P_j^2 + {b_j}{P_j} + {a_j})} \end{align} $

    (6)

    $ \begin{align} F_{{\rm{total}}}^{\rm{c}} = \sum\limits_{j = 1}^{{J_i}} {F_j^{\rm{c}}} ({P_j}) = \sum\limits_{j = 1}^{{J_i}} {({\alpha _j}P_j^2 + {\beta _j}{P_j} + {\gamma _j})} \end{align} $

    (7)

    式中, $F_j^{\rm{e}}({P_j})$为第$j$台发电机组的发电成本; ${P_j}$为第$j$台发电机组的有功出力; $F_j^{\rm{c}}({P_j})$为第$j$台发电机组的碳排放量; $\gamma _j$, $\beta _j$和$\alpha _j$分别表示第$j$台发电机组关于碳排放的常数因子、一次项因子和二次项因子.
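    式(5) ~ (7)的加权目标可以按如下方式计算(仅演示目标函数的取值, 不含优化求解过程; 各系数均为假设值):

```python
def ed_objective(p, w, econ, emis):
    """式(5): F_total = Σ_j [w*F_j^e(P_j) + (1-w)*F_j^c(P_j)].
    econ[j]=(c_j, b_j, a_j) 为式(6)系数, emis[j]=(α_j, β_j, γ_j) 为式(7)系数."""
    total = 0.0
    for pj, (c, b, a), (al, be, ga) in zip(p, econ, emis):
        fe = c * pj ** 2 + b * pj + a      # 式(6): 发电成本
        fc = al * pj ** 2 + be * pj + ga   # 式(7): 碳排放量
        total += w * fe + (1 - w) * fc
    return total

# 两台机组的示例(假设系数); w=1 时只计经济目标
p = [100.0, 50.0]
econ = [(0.01, 2.0, 5.0), (0.02, 1.0, 3.0)]
emis = [(0.005, 0.5, 1.0), (0.004, 0.8, 2.0)]
print(ed_objective(p, 1.0, econ, emis))   # 305.0 + 103.0 = 408.0
```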

    1.1.3   自动发电控制模型

    图 2是传统实时控制系统中包含两个区域的电力系统AGC模型. AGC控制器的输入为第$i$个区域的频率误差和区域控制误差(Area control error, ACE) $e_i$, 输出为第$i$个区域的发电命令. AGC模型的控制周期为秒级, 一般设定为4秒或8秒.

    图 2  两区电力系统的AGC模型
    Fig. 2  AGC model of two-area power system
    1.1.4   发电命令调度模型

    GCD的输入为AGC产生的发电指令, 输出为第$i$个区域内所有AGC机组的发电命令$\Delta {P_{i, j}}$. 进而, AGC机组的实际发电指令$P_{i, j}^{{\rm{actual}}}$取ED和GCD的发电指令之和, 即$P_{i, j}^{{\rm{actual}}} = {P_{i, j}} + \Delta {P_{i, j}}$. 在实际工程中, GCD的目标采用如式(5)所示的经济目标.

    频率控制包含三种调节方式:一次调频、二次调频以及三次调频.一次调频通过调节发电机组在短时间内的有功出力, 进而调节系统频率.但是, 一次调频是一种有差调节方式.为了更好地平衡发电机和负荷之间的有功功率, 电力系统引入了二次调频和三次调频方式.二次调频和三次调频包含了多种算法的集成, 即集成了UC, ED, AGC和GCD.其中, AGC采用的是控制算法, 而UC, ED和GCD均为优化算法.因此, 传统发电调控算法是一种"优化算法+优化算法+控制算法+优化算法"的组合形式.

    大量的优化算法被运用到UC, ED和GCD之中. 常用的优化算法有: GA[23]、PSO[24]、模拟退火算法[25]、多元优化算法[26]、灰狼优化算法[27]、多目标极值优化算法[28]、混沌多目标机制优化算法[29]等. 同时, 多种控制算法被运用于AGC控制器中, 诸如传统的PID算法、模糊逻辑控制算法[30]、模糊PID[31]、滑动模式控制器[32]、自抗扰控制器[33]、分数阶PID[34]、Q学习[35]、Q ($\lambda$)学习[14]、R ($\lambda$)学习[36]以及分布式模型预测控制算法[37]等. 表 1展示了频率调节方式和传统发电调控框架之间的关系.

    表 1  频率调节方式与传统发电调控框架之间的关系
    Table 1  Relationship between regulation processes and conventional generation control framework
    传统发电控制 | 调节方式 | 算法类型 | 时间间隔(s) | 输入 | 输出
    UC | 三次调频 | 优化算法 | 86 400 | $PD_{i, t}$ | $u_{i, t, j}, P_{j, t}$
    ED | 二次调频 | 优化算法 | 900 | $PD_i$ | $P_{i, j}$
    AGC | 二次调频 | 控制算法 | 4 | $e_{i}, \Delta f_i$ | $\Delta P_i$
    GCD | 二次调频 | 优化算法 | 4 | $\Delta P_i$ | $\Delta P_{i, j}$

    在第$i$区域中, UC依据下一天的负荷预测值$PD_{i, t}$制定发电机的启动状态$u_{i, t, j}$以及出力水平$P_{j, t}$.其中时间周期为一天中的每小时, 即$t =\{ 1, 2$, $\cdots$, $24\}$; ED采用15分钟后的超短期负荷预测值$PD_i$制定有功出力值$P_{i, j}$; AGC控制器计算第$i$个区域的总发电需求量$\Delta P_i$; GCD将总的发电量$\Delta P_i$分配到每个AGC机组$\Delta P_{i, j}$.
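    表 1中各环节的时间尺度差异可以用一段简单的Python代码直观比较(统计一天内各环节的触发次数):

```python
def count_calls(day_seconds=86_400):
    """按表 1 的时间间隔统计一天内 UC、ED、AGC/GCD 各被调用多少次."""
    periods = {"UC": 86_400, "ED": 900, "AGC/GCD": 4}
    return {name: day_seconds // p for name, p in periods.items()}

print(count_calls())  # {'UC': 1, 'ED': 96, 'AGC/GCD': 21600}
```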

    为了快速获取准确的发电调度与控制动作, 本文建立了大量的平行发电控制系统.如图 3所示, 在平行发电系统中, 多重虚拟发电控制系统被用来对真实发电控制系统不断地进行仿真.当虚拟控制发电系统的控制效果优于实际发电控制系统时, 它们之间会交换它们发电控制器的重要数据.即虚拟发电控制系统将重要的控制器参数传递到真实发电控制系统, 而真实发电系统则将更新后的系统模型参数反馈回虚拟发电控制系统.

    图 3  平行发电控制系统
    Fig. 3  Parallel generation control systems

    由于通过平行系统可以获取海量的数据, 如果采用传统学习方法对控制算法学习进行训练将花费大量的时间.因此, 需要采用一种更有效的学习算法对海量数据进行学习.本文针对平行发电控制系统的特点, 提出一种懒惰强化学习算法(LRL).如图 4所示, LRL由懒惰学习、选择算子、强化网络以及松弛算子四部分构成.提出的LRL算法可以设计成为基于REG框架的控制器, 可以替代传统的组合算法(UC, ED, AGC和GCD).因此, 基于REG框架的控制器的输入为频率误差$\Delta {f_i}$和ACE $e_i$, 输出为所有AGC机组的发电命令$\Delta {P_{i, j}}$.

    图 4  基于REG的LRL控制器的流程图
    Fig. 4  Procedures of LRL based REG controller

    LRL的懒惰学习将对下一个系统状态进行预测.因此, 懒惰学习的输入为频率误差$\Delta {f_i}$和ACE $e_i$.此外, 懒惰学习可以依据电力系统当前采取的动作集${\bf \it {A}}$预测电力系统的下一状态$\Delta {F'_{i, (t + 1)}}$.其中, 初始动作集合${\bf \it{A}}$描述如下:

    $ \begin{align} {\bf \it{A}} = \left[ {\begin{array}{*{20}{c}} {{a_{1, 1}}}&{{a_{1, 2}}}& \cdots &{{a_{1, k}}}\\ {{a_{2, 1}}}&{{a_{2, 2}}}& \cdots &{{a_{2, k}}}\\ \vdots & \vdots & \ddots & \vdots \\ {{a_{{J_i}, 1}}}&{{a_{{J_i}, 2}}}& \cdots &{{a_{{J_i}, k}}} \end{array}} \right] \end{align} $

    (8)

    其中, ${\bf \it{A}} $具有$k$列, 每一列都是一个AGC机组的发电命令动作向量.对下一状态的预测同样具有$k$列, 且每一列与每一个动作向量的预测相对应.因此, $\Delta {F'_{i, (t + 1)}}$是一个依据所有$k$列动作向量预测而组成的$k$列预测矩阵.
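    式(8)的动作集矩阵可以按如下方式构造(候选总命令取均匀网格, 并按平均比例分摊到各机组, 这两点均为本示例的假设, 并非本文实际的动作设计):

```python
import numpy as np

def build_action_set(n_units, k, p_min=-300.0, p_max=300.0):
    """构造式(8)中 J_i × k 的动作集矩阵 A: 每列为一个候选的机组发电命令向量."""
    totals = np.linspace(p_min, p_max, k)      # k 个候选总发电命令 (MW)
    share = np.full(n_units, 1.0 / n_units)    # 假设: 各机组平均分摊
    return np.outer(share, totals)             # 形状 (J_i, k)

A = build_action_set(n_units=3, k=121)
print(A.shape)     # (3, 121)
print(A[:, 60])    # 中间一列对应总命令 0 MW: [0. 0. 0.]
```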

    懒惰学习方法用于估计未知映射$g:$ ${{\bf R}^m}$ $ \to {\bf R} $的取值. 懒惰学习方法的输入和输出可以从矩阵$\Phi $获取, 描述如下:

    $ \begin{align} {\rm{\{ (}}{\varphi _1}{\rm{, }}{y_1}{\rm{), (}}{\varphi _2}{\rm{, }}{y_2}{\rm{), }} \cdots {\rm{, (}}{\varphi _{{N_{{\rm{lazy}}}}}}, {y_{{N_{{\rm{lazy}}}}}}{\rm{)\} }} \end{align} $

    (9)

    其中, $\varphi _i$为$N_{\rm{lazy}}\times k$输入矩阵$\Phi$的第$i$行, $i=1, 2, \cdots$, $N_{\rm{lazy}}$; $y_i$为$N_{\rm{lazy}} \times 1$输出向量的第$i$个分量. 第$q$个查询点的预测值可以由下式计算.

    $ \begin{align} \widehat {y}_q = \varphi _q^{\rm{T}}{({{\bf \it{Z}}^{\rm{T}}}{\bf \it{Z}})^{ - 1}}{{\bf \it{Z}}^{\rm{T}}}{\bf \it{v}} \end{align} $

    (10)

    其中, ${{Z}}={ {W\Phi}}$; ${\bf \it{v}}={\bf \it{Wy}}$. ${\bf \it{W}}$是一个对角矩阵, ${\bf \it{W}}_{ii}$ $=\omega_i$, 其中, $\omega_i$为从查询点$\varphi _q$到点$\varphi _i$的距离$d(\varphi _i, \varphi _q)$的权重函数.从而, $({\bf \it{Z}}^{\rm{T}}\bf \it{Z}) \beta={\bf \it{Z}}^{\rm{T}} {\bf \it{v}}$可以作为一个局部加权回归模型.在其训练过程的误差校验方法可为留一法交叉校验(Leave-one-out cross-validation, LOOCV), 计算方式为

    $ \begin{align} &{\rm{MS}}{{\rm{E}}^{{\rm{CV}}}}({\varphi _q}) =\nonumber\\[1mm] &\qquad \displaystyle\frac{1} {{\sum\limits_i {w_i^2} }}\sum\limits_i {{{\left( {\frac{{{v_i} - z_i^{\rm{T}}{{({{\bf \it{Z}}^{\rm{T}}}{\bf \it{Z}})}^{ - 1}} {{\bf \it{Z}}^{\rm{T}}}{\bf \it{v}}}}{{1 - z_i^{\rm{T}}{{({{\bf \it{Z}}^{\rm{T}}}{\bf \it{Z}})}^{ - 1}}{z_i}}}} \right)}^2}} = \nonumber\\[1mm] &\qquad \displaystyle\frac{1}{{\sum\limits_i {w_i^2} }}\sum\limits_i {{{\left( {{w_i}\frac{{{y_i} - \varphi _i^{\rm{T}}{{({{\bf \it{Z}}^{\rm{T}}}{\bf \it{Z}})}^{ - 1}}{{\bf \it{Z}}^{\rm{T}}} {\bf \it{v}}}}{{1 - z_i^{\rm{T}}{{({{\bf \it{Z}}^{\rm{T}}}{\bf \it{Z}})}^{ - 1}}{z_i}}}} \right)}^2}} = \nonumber\\[1mm] &\qquad \displaystyle\frac{1}{{\sum\limits_i {w_i^2} }}\sum\limits_i {{{\left( {{w_i}{e^{{\rm{CV}}}}(i)} \right)}^2}} \end{align} $

    (11)

    其中, ${e^{{\rm{CV}}}}(i)$为第$i$个留一误差, 计算方式为

    $ \begin{align} e_{n + 1}^{{\rm{CV}}}(i) = \dfrac{{{y_i} - \varphi _i^{\rm{T}}{\beta _{n + 1}}}}{{1 + \varphi _i^{\rm{T}}{{\bf \it{P}}_{n + 1}}{\varphi _i}}} \end{align} $

    (12)

    其中, ${{\bf \it{P}}_n}$为矩阵${({{\bf \it{Z}}^{\rm{T}}}{\bf \it{Z}})^{ - 1}}$的回归逼近; ${\beta _n}$为$n$邻近的最优最小二乘序列参数; 且在$e_n^{{\rm{CV}}}(i)$中满足$1$ $\le$ $i\le n$; ${\beta _{n + 1}}$的计算方法如下:

    $ \begin{align} &{\beta _{n + 1}} = {\beta _n} + {\gamma _{n + 1}}{e_{n + 1}}\nonumber\\ & {e_{n + 1}} = {y_{n + 1}} - \varphi _{n + 1}^{\rm{T}}{\beta _n}\nonumber\\ & {\gamma _{n + 1}} = {{\bf \it{P}}_{n + 1}}{\varphi _{n + 1}}\nonumber\\ & {{\bf \it{P}}_{n + 1}} = {{\bf \it{P}}_n} - \frac{{{{\bf \it{P}}_n}{\varphi _{n + 1}}\varphi _{n + 1}^{\rm{T}}{{\bf \it{P}}_n}}}{{1 + \varphi _{n + 1}^{\rm{T}}{{\bf \it{P}}_n}{\varphi _{n + 1}}}} \end{align} $

    (13)
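    式(10)的局部加权回归预测可以用几行代码示意(权重函数取高斯核, 核宽为假设参数; 本文实际训练中还需结合式(11) ~ (13)的留一误差进行校验):

```python
import numpy as np

def lazy_predict(Phi, y, phi_q, h=1.0):
    """式(10): y_q = φ_q^T (Z^T Z)^{-1} Z^T v, 其中 Z = WΦ, v = Wy."""
    d = np.linalg.norm(Phi - phi_q, axis=1)      # 各样本到查询点的距离
    w = np.exp(-d ** 2 / (2.0 * h ** 2))         # 假设: 高斯核权重
    Z = w[:, None] * Phi                         # Z = WΦ
    v = w * y                                    # v = Wy
    beta = np.linalg.solve(Z.T @ Z, Z.T @ v)     # 求解 (Z^T Z)β = Z^T v
    return phi_q @ beta

# 在 y = 2x 的线性数据上查询(第一列为偏置项)
Phi = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.0, 2.0, 4.0, 6.0])
print(lazy_predict(Phi, y, np.array([1.0, 1.5])))  # ≈ 3.0
```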

    因此, 针对REG问题, 所提LRL算法中懒惰学习离线学习和在线学习的输入和输出可见表 2.

    表 2  懒惰强化学习输入输出量
    Table 2  Inputs and outputs of lazy reinforcement learning
    输入输出 | 懒惰学习 | 强化网络 | 懒惰强化学习
    输入量 | $\Delta {f_i}, {e_i}, {\bf \it {A}}$ | $\Delta {F'_{i, (t + 1)}}$ | $\Delta {f_i}, {e_i}$
    输出量 | ${\Delta {f'_{i, (t + 1)}}}$ | $\Delta {P_{i, j}}, j = 1, 2, \cdots, {J_i}$ | $\Delta {P_{i, j}}, j = 1, 2, \cdots, {J_i}$

    LRL中的选择过程可以从下一状态$(\Delta {F'_{i, (t + 1)}})$中选择最优的状态(最小的$| {\Delta {{f'}_{i, (t + 1)}}} |$).

    LRL中的强化网络可以计算出总的发电命令$\Delta {P_i}$, 并分配$\Delta {P_{i, j}}$到第$i$个区域里的所有AGC机组上, 其中, $\Delta {P_i}=\sum_{j = 1}^{{J_i}} {\Delta {P_{i, j}}} $.强化网络由强化学习和一个反向传播神经网络(Back propagation neural network, BPNN)组成. Q学习是一种无需模型的控制算法.基于Q学习的控制器可以在线根据环境变化更新其控制策略.此类控制器的输入为状态值和奖励值, 输出为作用于环境的动作量.它们可以依据Q-矩阵$\bf \it{Q}$和概率分布矩阵$\bf \it{P}$, 针对当前的环境状态$s$, 制定应当进行的动作$a$.矩阵$\bf \it{Q}$和$\bf \it{P}$可以由奖励函数随后进行更新.

    $ \begin{align} &Q(s, a) \leftarrow Q(s, a) + \alpha (R(s, s', a) \, + \nonumber\\ &\qquad\qquad\ \ \gamma \mathop {\max }\limits_{a \in A} Q(s', a) - Q(s, a)) \end{align} $

    (14)

    $ \begin{align} &P(s, a) \leftarrow \begin{cases} P(s, a) - \beta (1 - P(s, a)), &s' = s\\ P(s, a)(1 - \beta ), &{\mbox{其他}} \end{cases} \end{align} $

    (15)

    其中, $\alpha$为学习率; $\gamma$为折扣系数; $\beta$为概率系数; $s$, $s'$分别为当前状态和下一状态; $R(s, s', a)$为奖励函数, 与当前状态$s$和由动作$a$导致的状态有关. 当前状态$s$和下一状态$s'$同属于状态集合$\bf \it{S}$, 即$s \in {\bf \it{S}}$, $s'$ $\in$ ${\bf \it{S}}$. 被选择的动作$a$属于动作集合$\bf \it{A}$, 即$a \in {\bf \it{A}}$. 本文采用结构简单的三层感知器BPNN, 分配到多个机组的输出$y_i^{{\rm{bpnn}}}$的计算公式为

    $ \begin{align} y_i^{{\rm{bpnn}}} = f\left(\sum\limits_{j = 1}^{{n^{{\rm{bpnn}}}}} {\omega _{ji}^{{\rm{bpnn}}}x_j^{{\rm{bpnn}}} + b_i^{{\rm{bpnn}}}} \right) \end{align} $

    (16)

    其中, $\omega _{ji}^{{\rm{bpnn}}}$为权重值; $b_i^{{\rm{bpnn}}}$为补偿值; ${n^{{\rm{bpnn}}}}$为BP神经网络中的隐藏元的个数; $f(z)$为sigmoid函数.本文采用的sigmoid函数为

    $ \begin{align} f(z)=\tanh (z) = \frac{{{\rm e}^z - {\rm e}^{ - z}}}{{{\rm e}^z + {\rm e}^{ - z}}} \end{align} $

    (17)

    BPNN训练算法为莱文贝格-马夸特方法(Levenberg-Marquardt algorithm).
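    强化网络中式(14)的Q值更新与式(16) ~ (17)的BPNN前向计算可以合并示意如下(状态/动作规模与网络权重均为假设值, 仅用于说明):

```python
import numpy as np

def q_update(Q, s, a, s_next, r, alpha=0.1, gamma=0.9):
    """式(14): Q(s,a) ← Q(s,a) + α(R + γ max_{a'} Q(s',a') − Q(s,a))."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

def bpnn_forward(x, W1, b1, W2, b2):
    """式(16) ~ (17): 三层感知器前向计算, 隐层激活函数取 tanh."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

Q = np.zeros((2, 3))                        # 假设: 2 个状态 × 3 个动作
Q = q_update(Q, s=0, a=1, s_next=1, r=1.0)
print(Q[0, 1])                              # 0.1*(1 + 0.9*0 - 0) = 0.1

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(40, 2)), np.zeros(40)   # 40 个隐藏元
W2, b2 = rng.normal(size=(5, 40)), np.zeros(5)    # 假设: 5 台 AGC 机组
out = bpnn_forward(rng.normal(size=2), W1, b1, W2, b2)
print(out.shape)                            # (5,)
```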

    LRL的松弛算子类似一个操作员对强化网络的输出进行约束控制.因此, 松弛算子的约束可以表达为

    $ \begin{align} \Delta {P_{i, j}} \leftarrow \frac{{[\Delta {P_{i, j}}{{u'}_{j, t}}]}}{{\sum\limits_{j = 1}^{{J_i}} {([\Delta {P_{i, j}}{{u'}_{j, t}}])} }}\sum\limits_{j = 1}^{{J_i}} {(\Delta {P_{i, j}})} \end{align} $

    (18)

    其中, $\left[{\Delta {P_{i, j}}{{u'}_{j, t}}} \right]$为约束函数, 表达式为

    $ \begin{align} &\max \left\{ {{P_{j, (t - 1)}} - P_j^{{\rm{down}}}, {{u'}_{j, t}}P_j^{\min }} \right\} \le\notag \\ &\qquad\ \ \Delta {P_{i, j}}{{u'}_{j, t}} \le \min \left\{ {{P_{j, (t - 1)}} + P_j^{{\rm{up}}}, {{u'}_{j, t}}P_j^{\max }} \right\} \end{align} $

    (19)

    其中, ${u'_{j, t}}$为临时启动状态, 表达式为

    $ \begin{align} {u'_{j, t}}=\!\begin{cases} 1, &\!\left[ {\Delta {P_{i, j}}} \right] > 0~\mbox{或}~ 1 < T_{j, (t - 1)}^{{\rm{up}}} < T_j^{{\rm{min\mbox{-}up}}}\\ 0, &\!\left[ {\Delta {P_{i, j}}} \right] = 0~\mbox{或}~1 \le T_{j, (t - 1)}^{{\rm{down}}} < T_j^{{\rm{min\mbox{-}down}}} \end{cases} \end{align} $

    (20)
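    式(18) ~ (20)的松弛算子可以简化示意如下(仅演示"先裁剪到可行区间、再按比例保持总命令不变"的思路, 忽略启停状态${u'_{j, t}}$与持续时间约束):

```python
import numpy as np

def relax(dp, lo, hi):
    """式(18)的简化形式: 将各机组命令裁剪到 [lo, hi] 后按比例重分配总量."""
    clipped = np.clip(dp, lo, hi)            # 式(19): 约束到可行区间
    s = clipped.sum()
    if s == 0:
        return clipped
    return clipped / s * dp.sum()            # 式(18): 保持总发电命令不变

dp = np.array([120.0, 60.0, 20.0])
lo = np.array([0.0, 0.0, 0.0])
hi = np.array([100.0, 80.0, 40.0])
out = relax(dp, lo, hi)
print(out.sum())   # 200.0, 与 dp.sum() 相同
```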

    传统学习算法会对所有通过平行系统获取的数据进行学习. 然而, 采用这些数据进行学习不一定能够取得比当前真实系统更优的控制效果. 因此, 本文提出的LRL方法会筛选出那些更优的数据进行学习. 即, 当$t$时刻的状态$s_t$优于$t + \Delta t$时刻的状态${s'_{(t + \Delta t), 1}}$, 而劣于$t + \Delta t$时刻的状态${s'_{(t + \Delta t), 2}}$时, 算法将排除从$s_t$到${s'_{(t + \Delta t), 1}}$的变化过程数据, 而保留从$s_t$到${s'_{(t + \Delta t), 2}}$的变化过程数据进行离线训练.

    针对REG问题, 离线训练的输入与输出如表 2所示. 在对比状态${s'_{(t + \Delta t), 1}}$和${s'_{(t + \Delta t), 2}}$时, 可将状态设定为预测的区域$i$频率偏差$\Delta {f'_{i, (t + 1)}}$, 即从$\Delta {F'_{i, (t + 1)}}$中选择最优值对应的输入和输出数据进行训练. 图 5是平行系统下基于REG框架的懒惰强化学习控制器的运行步骤.
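    上述选择算子的数据筛选过程可以示意如下(其中 better 为假设的状态优劣比较函数, 这里以预测频率偏差绝对值更小者为优):

```python
def select_transitions(transitions, better):
    """选择算子示意: 仅保留状态得到改善的转移数据 (s_t, s_{t+Δt}) 用于离线训练."""
    return [(s, s2) for s, s2 in transitions if better(s2, s)]

# 状态取 Δf' (Hz), |Δf'| 越小越优
better = lambda a, b: abs(a) < abs(b)
data = [(0.05, 0.08), (0.05, 0.02), (0.03, 0.01)]
print(select_transitions(data, better))  # [(0.05, 0.02), (0.03, 0.01)]
```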

    图 5  平行系统下基于REG控制器的LRL算法的流程图
    Fig. 5  Procedures of LRL based REG controller under parallel systems

    本文仿真均是在主频为2.20 GHz, 内存96 GB的AMAX XR-28201GK型服务器上基于MATLAB 9.1 (R2016b)平台实现的. 表 3是仿真中采用的所有算法, 其中各算法的含义见表 4.

    表 3  仿真所用的算法
    Table 3  Algorithms for this simulation
    序号 | UC | ED | AGC | GCD
    1 | 模拟退火算法(SAA) | SAA | PID控制 | SAA
    2 | 多元优化(MVO) | MVO | 滑模控制器 | MVO
    3 | 遗传算法(GA) | GA | 自抗扰控制 | GA
    4 | 灰狼算法(GWO) | GWO | 分数阶PID控制 | GWO
    5 | 粒子群优化(PSO) | PSO | 模糊逻辑控制器 | PSO
    6 | 生物地理优化(BBO) | BBO | Q学习 | BBO
    7 | 飞蛾扑火算法(MFO) | MFO | Q($\lambda$)学习 | MFO
    8 | 鲸鱼群算法(WOA) | WOA | R($\lambda$)学习 | WOA
    9 | − | − | − | 固定比例
    10 | 松弛人工神经网络(RANN)
    11 | 懒惰强化学习(LRL)
    表 4  各对比算法的缩写
    Table 4  Abbreviation of compared algorithms
    缩写 | 全称 | 意义
    UC | Unit commitment | 机组组合
    ED | Economic dispatch | 经济调度
    AGC | Automatic generation control | 自动发电控制
    GCD | Generation command dispatch | 发电指令调度
    RL | Reinforcement learning | 强化学习
    REG | Real-time economic generation dispatch and control | 实时经济调度与控制
    ACP | Artificial societies-computational experiments-parallel execution | 人工社会-计算实验-平行执行
    CPS | Cyber-physical system | 信息物理系统
    CPSS | Cyber-physical-social systems | 信息物理社会融合系统
    LRL | Lazy reinforcement learning | 懒惰强化学习
    RANN | Relaxed artificial neural network | 松弛人工神经网络
    SAA | Simulated annealing algorithm | 模拟退火算法
    MVO | Multi-verse optimizer | 多元优化
    GA | Genetic algorithm | 遗传算法
    GWO | Gray wolf optimizer | 灰狼算法
    PSO | Particle swarm optimization | 粒子群优化
    BBO | Biogeography-based optimization | 生物地理优化
    MFO | Moth-flame optimization | 飞蛾扑火算法
    WOA | Whale optimization algorithm | 鲸鱼群算法
    LOOCV | Leave-one-out cross-validation | 留一法交叉校验
    BPNN | Back propagation neural network | 反向传播神经网络

    组合算法和REG控制器的仿真时间设定为1天(即86 400秒). 总共采用了4 608种传统发电调控算法($8\times 8 \times 8 \times 9=4 608$种组合)和两种基于REG框架的算法进行仿真实验, 总的仿真模拟时间设置为12.6301年, 即($8\times 8 \times 8 \times 9+2$)天. 所有传统发电调控算法的参数设置详见附录A.

    图 6是IEEE新英格兰10机39节点标准电力系统结构. 从图 6可以看出, 仿真实验将该电力系统划分成3个区域. 该系统中设置10台发电机, 发电机{30, 37, 39}划分至区域1, 发电机{31, 32, 33, 34, 35}划分至区域2, 剩下的发电机{36, 38}划分至区域3. 除此之外, 光伏、风电以及电动汽车也被纳入仿真模型之中(详细参数见图 7), 其中电动汽车负荷需求曲线由5种不同车辆用户行为叠加而成. 各个机组参数如表 5和表 6所示.

    图 6  新英格兰电力系统结构图
    Fig. 6  Structure of New-England power system
    图 7  光伏、电动汽车、风电、负荷曲线
    Fig. 7  Curves of photo-voltaic power (PV), electric vehicle (EV), wind power and load
    表 5  机组参数表
    Table 5  Parameters of the generators
    机组编号 | 30 | 37 | 39 | 31 | 32 | 33 | 34 | 35 | 36 | 38
    机组最小连续开机时间 $T_j^{\mathrm{min\mbox{-}up}}$ (h) | 8 | 8 | 5 | 5 | 6 | 3 | 3 | 1 | 1 | 1
    机组最小连续关机时间 $T_j^{\mathrm{min\mbox{-}down}}$ (h) | 8 | 8 | 5 | 5 | 6 | 3 | 3 | 1 | 1 | 1
    机组最大出力 $P_j^{\max}$ (MW) | 455 | 455 | 130 | 130 | 162 | 80 | 85 | 55 | 55 | 55
    机组最小出力 $P_j^{\min}$ (MW) | 150 | 150 | 20 | 20 | 25 | 20 | 25 | 10 | 10 | 10
    热启动成本 $SU_{\mathrm{H}, j}$ (t/(MW $\cdot$ h)) | 4 500 | 5 000 | 550 | 560 | 900 | 170 | 260 | 30 | 30 | 30
    冷启动成本 $SU_{\mathrm{C}, j}$ (t/(MW $\cdot$ h)) | 9 000 | 10 000 | 1 100 | 1 120 | 1 800 | 340 | 520 | 60 | 60 | 60
    冷启动时间 $T_j^{\mathrm{cold}}$ (h) | 5 | 5 | 4 | 4 | 4 | 2 | 2 | 0 | 0 | 0
    ED成本系数 $a_j$ | 0.675 | 0.45 | 0.563 | 0.563 | 0.45 | 0.563 | 0.563 | 0.337 | 0.315 | 0.287
    ED成本系数 $b_j$ | 360 | 240 | 299 | 299 | 240 | 299 | 299 | 181 | 168 | 145
    ED成本系数 $c_j$ | 11 250 | 7 510 | 9 390 | 9 390 | 7 510 | 9 390 | 9 390 | 5 530 | 5 250 | 5 270
    ED排放系数 $\alpha _j$ | 3.375 | 1.125 | 1.689 | 1.576 | 1.17 | 1.576 | 1.576 | 0.674 | 0.63 | 0.574
    ED排放系数 $\beta _j$ | 1 800 | 600 | 897 | 837 | 624 | 837 | 837 | 362 | 404 | 290
    ED排放系数 $\gamma _j$ | 56 250 | 18 770 | 28 170 | 26 290 | 19 530 | 26 290 | 26 290 | 11 060 | 13 800 | 10 540
    表 6  机组组合问题参数表
    Table 6  Parameters for unit commitment problem
    时段 (h) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
    负荷值 $PD_t$ (MW) | 700 | 750 | 850 | 950 | 1 000 | 1 100 | 1 150 | 1 200 | 1 300 | 1 400 | 1 450 | 1 500
    旋转备用 $SR_t$ (MW) | 70 | 75 | 85 | 95 | 100 | 110 | 115 | 120 | 130 | 140 | 145 | 150
    时段 (h) | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24
    负荷值 $PD_t$ (MW) | 1 400 | 1 300 | 1 200 | 1 050 | 1 000 | 1 100 | 1 200 | 1 400 | 1 300 | 1 100 | 900 | 800
    旋转备用 $SR_t$ (MW) | 140 | 130 | 120 | 105 | 100 | 110 | 120 | 140 | 130 | 110 | 90 | 80

    仿真实验设置发电控制的控制周期为4 s, REG控制器每4 s计算一次. 对于传统组合算法, UC每天进行一次, ED每15分钟优化一次, AGC和GCD在每个控制周期中计算一次. 松弛人工神经网络RANN算法由人工神经网络和所提LRL算法中的松弛算子组成, LRL整体的输入和输出分别作为RANN算法的输入和输出, 其松弛算子见式(18) ~ (20). BPNN选择的三层感知网络的隐含层神经元个数设定为40个. 在所提LRL算法中, 强化学习和懒惰学习的动作集${\bf \it{A}}$的列数$k$设为121 (该列数一般可选范围较大), 动作值范围为$-300$ ~ $300$ MW; 强化学习的学习率$\alpha \in (0, 1]$, 本文选为0.1; 概率选择系数$\beta \in (0, 1]$, 本文设定为0.5; 折扣系数$\gamma \in (0, 1]$, 本文设定为0.9. 学习率选择越大, 学习速度越快, 但精度会随之下降.

    强化学习系列算法Q学习、Q($\lambda $)学习和R($\lambda $)学习算法的离线学习时间分别为2.27 h, 2.49 h和2.95 h; 松弛人工神经网络算法的训练时间为15.50 h; 所提LRL算法的离线训练时间为6.60 h. 虽然所提LRL算法较传统强化学习算法在离线训练效率方面不具有优势, 但是其具有最佳的控制效果. 同时, 与统一时间尺度的松弛人工神经网络算法相比, LRL算法的离线训练时间较短且控制效果更优.

    仿真结果展示在图 8~12表 7~10中.

    表 7  UC算法仿真结果统计
    Table 7  Statistic of simulation results obtained by the UC
    算法 | ACE1 (MW) | $\Delta f_1$ (Hz) | ACE2 (MW) | $\Delta f_2$ (Hz) | ACE3 (MW) | $\Delta f_3$ (Hz)
    SAA | 573.8904 | 0.038235 | 258.7798 | 0.03752 | 5 527.9746 | 1.3137
    MVO | 575.3672 | 0.038274 | 259.9265 | 0.037558 | 5 532.6202 | 1.3154
    GA | 603.4391 | 0.041805 | 258.6484 | 0.041041 | 6 052.2806 | 1.4428
    GWO | 616.064 | 0.043454 | 257.6107 | 0.042653 | 6 290.0843 | 1.5017
    PSO | 575.7172 | 0.038264 | 260.3543 | 0.037555 | 5 535.1644 | 1.3159
    BBO | 574.2769 | 0.038213 | 259.349 | 0.037499 | 5 522.5691 | 1.3131
    MFO | 569.7159 | 0.037685 | 259.1499 | 0.036984 | 5 441.3487 | 1.2932
    WOA | 645.5906 | 0.047207 | 255.8246 | 0.04639 | 6 844.8509 | 1.6369
    RANN | 553.4032 | 0.039963 | 224.1748 | 0.039083 | 5 431.2844 | 1.2907
    LRL | 441.9225 | 0.010254 | 389.9905 | 0.0095612 | 1 023.1919 | 0.23743
    表 8  ED算法仿真结果统计
    Table 8  Statistic of simulation results obtained by the ED algorithms
    算法 | ACE1 (MW) | $\Delta f_1$ (Hz) | ACE2 (MW) | $\Delta f_2$ (Hz) | ACE3 (MW) | $\Delta f_3$ (Hz)
    SAA | 587.8414 | 0.039976 | 258.2767 | 0.039234 | 5 777.5755 | 1.3756
    MVO | 588.177 | 0.039978 | 258.5125 | 0.039245 | 5 782.3567 | 1.3768
    GA | 589.4091 | 0.040193 | 257.6335 | 0.039479 | 5 818.9809 | 1.3856
    GWO | 587.6547 | 0.039959 | 258.0923 | 0.039228 | 5 780.4664 | 1.3763
    PSO | 587.858 | 0.039915 | 258.8111 | 0.039182 | 5 771.2924 | 1.3741
    BBO | 588.0198 | 0.039924 | 258.9211 | 0.039192 | 5 770.4608 | 1.3739
    MFO | 588.1836 | 0.039988 | 258.4948 | 0.03925 | 5 778.844 | 1.3759
    WOA | 588.6974 | 0.040103 | 257.7113 | 0.039387 | 5 805.4046 | 1.3823
    RANN | 553.4032 | 0.039963 | 224.1748 | 0.039083 | 5 431.2844 | 1.2907
    LRL | 441.9225 | 0.010254 | 389.9905 | 0.0095612 | 1 023.1919 | 0.23743
    表 9  AGC算法仿真结果统计
    Table 9  Statistic of simulation results obtained by the AGC algorithms
    算法 | ACE1 (MW) | $\Delta f_1$ (Hz) | ACE2 (MW) | $\Delta f_2$ (Hz) | ACE3 (MW) | $\Delta f_3$ (Hz)
    PID控制 | 591.3081 | 0.040435 | 257.518 | 0.039717 | 5 854.0102 | 1.3939
    滑动模式控制器 | 590.7335 | 0.040374 | 257.4495 | 0.039656 | 5 844.7291 | 1.3916
    自抗扰控制 | 591.3771 | 0.040424 | 257.6773 | 0.039707 | 5 853.0488 | 1.3937
    分数阶PID控制 | 591.1007 | 0.040437 | 257.3069 | 0.039715 | 5 852.7478 | 1.3936
    模糊逻辑控制 | 591.951 | 0.040504 | 257.6024 | 0.039781 | 5 863.4785 | 1.3963
    Q学习 | 591.3603 | 0.040452 | 257.4572 | 0.039727 | 5 855.1339 | 1.3942
    Q($\lambda$)学习 | 591.0772 | 0.040419 | 257.4421 | 0.039696 | 5 849.9705 | 1.393
    R($\lambda$)学习 | 591.7282 | 0.040494 | 257.469 | 0.03977 | 5 862.7832 | 1.3961
    RANN | 553.4032 | 0.039963 | 224.1748 | 0.039083 | 5 431.2844 | 1.2907
    LRL | 441.9225 | 0.010254 | 389.9905 | 0.0095612 | 1 023.1919 | 0.23743
    表 10  GCD算法仿真结果统计
    Table 10  Statistic of simulation results obtained by the GCD algorithms
    算法 | ACE1 (MW) | $\Delta f_1$ (Hz) | ACE2 (MW) | $\Delta f_2$ (Hz) | ACE3 (MW) | $\Delta f_3$ (Hz)
    SAA | 591.3081 | 0.040435 | 257.518 | 0.039717 | 5 854.0102 | 1.3939
    MVO | 590.7335 | 0.040374 | 257.4495 | 0.039656 | 5 844.7291 | 1.3916
    GA | 591.3771 | 0.040424 | 257.6773 | 0.039707 | 5 853.0488 | 1.3937
    GWO | 591.1007 | 0.040437 | 257.3069 | 0.039715 | 5 852.7478 | 1.3936
    PSO | 591.951 | 0.040504 | 257.6024 | 0.039781 | 5 863.4785 | 1.3963
    BBO | 591.3603 | 0.040452 | 257.4572 | 0.039727 | 5 855.1339 | 1.3942
    MFO | 591.0772 | 0.040419 | 257.4421 | 0.039696 | 5 849.9705 | 1.393
    WOA | 591.7282 | 0.040494 | 257.469 | 0.03977 | 5 862.7832 | 1.3961
    固定比例 | 509.0391 | 0.028801 | 282.0332 | 0.027609 | 3 973.743 | 0.94347
    RANN | 553.4032 | 0.039963 | 224.1748 | 0.039083 | 5 431.2844 | 1.2907
    LRL | 441.9225 | 0.010254 | 389.9905 | 0.0095612 | 1 023.1919 | 0.23743
    图 8  仿真统计结果
    Fig. 8  Statistical result
    图 9  仿真统计结果(频率偏差)
    Fig. 9  Statistical result of frequency deviation
    图 10  仿真统计结果(区域控制误差)
    Fig. 10  Statistical result of area control error
    图 11  平行系统频率偏差收敛曲线
    Fig. 11  Convergence curve of frequency deviation obtained by the parallel systems
    图 12  平行系统区域控制误差收敛曲线
    Fig. 12  Convergence curve of area control error obtained by the parallel systems

    图 8是频率偏差、区域控制误差和仿真计算所用时间的统计结果, 其中所提LRL算法能得到最优的调控效果.

    图 9是各个算法频率偏差的统计对比效果, 其中所提LRL算法能在所有区域均获得最小的频率偏差. 图 10是各个算法获得的区域控制误差的统计结果, 可以看出, 所提LRL算法不会导致大量牺牲某个区域的功率来满足其他区域的功率平衡.

    图 11图 12是利用平行系统仿真数据对所提LRL算法训练的收敛曲线图.可以看出, 经过667次的迭代, 能获得最优的收敛结果.

    从图 9以及表 7 ~ 10可以看出, 与传统组合发电控制算法和松弛人工神经网络相比, 本文提出的LRL方法可以保持系统内的有功平衡, 并且能使电网频率偏差达到最低. 因此, LRL能够在多区域大规模互联电网中取得最优的控制效果.

    图 8图 10可以看出, 在仿真中, 由于LRL可以在最短时间内取得最低的频率偏差和最低的控制错误率, LRL的懒惰学习可以有效地对电力系统的下一状态进行预测.因此, LRL可以提供准确的AGC机组动作指令.

    在应对多区域大规模互联电网的经济调度和发电控制问题时, REG控制器完全可以取代传统的组合算法方法.

    图 11图 12可以看出, 由于仿真采用了平行系统, 降低了使用的真实仿真时间, 由于平行系统进行了迭代, 加速了仿真的过程.

    为了解决多区域大规模互联电网经济调度和发电控制中存在的协同问题, 本文提出了一种REG框架.该框架可作为一种传统发电调控框架的替代.然后, 为REG控制器提出了一种基于人工社会-计算实验-平行执行方法的懒惰学习算法.基于REG控制器的LRL算法的特征可以总结如下:

    1) 本文提出了一种统一时间尺度的REG控制框架, 并提出一种基于REG控制器的LRL算法. 该算法可以有效地对电力系统的下一运行状态进行预测, 输出满足UC问题约束的动作指令, 取得最优的控制效果.

    2) LRL中的强化学习网络具有同时产生多个输出的能力. 因此, 基于REG框架的LRL控制器可以不断地为多区域大规模互联电网中的所有AGC机组输出发电命令.

    3) 通过搭建平行系统, 使得基于LRL的REG控制器可以用于解决多区域大规模互联电网经济调度和发电控制问题.

    各算法重要参数设置如下:

    1) PID控制:比例系数$k_{\mathrm{P}}=-0.006031543250198, $积分系数$k_{\mathrm{I}}=0.00043250;$

    2) 滑模控制器:开通/关断点$k_{\mathrm{point}}=\pm 0.1$ Hz, 开通/关断输出$k_{\mathrm{v}}=\pm80$ MW;

    3) 自抗扰控制:扩张状态观测器

    $ \begin{align*} &A = \left[ {\begin{array}{*{20}{c}} 0&{0.0001}&0&0\\ 0&0&{0.0001}&0\\ 0&0&0&{0.0001}\\ 0&0&0&0 \end{array}} \right]\\ &B = \left[ {\begin{array}{*{20}{c}} 0&0\\ 0&0\\ {0.0001}&{0.0001}\\ 0&0 \end{array}} \right]\\ &C = {\rm diag}\left\{ {\begin{array}{*{20}{c}} {0.1}&{0.1}&{0.1}&{0.1} \end{array}} \right\}\\ &D = {0_{4 \times 2}}\\ &k_1=15.0, \ k_2=5.5, \ k_3=2.0, \ k_4=1 \end{align*} $

    4) 分数阶PID控制:比例系数$k_{\mathrm{P}}=-1, $积分系数$k_{\mathrm{I}}$ $=$ $0.43250, $ $\lambda=1.3, $ $\mu=200;$

    5) 模糊逻辑控制器: $X$ (输入, $\Delta f$)在[$-$0.2, 0.2] Hz等间隔选取21个区间, $Y$ (输入, $\int \Delta f{\rm d}t$)在[$-$1, 1] Hz等间隔选取21个区间, $Z$ (输出, $\Delta P$)在[$-$150, 150] MW等间隔选取441个区间;

    6) Q学习:动作集$A=\{-300, -240, -180, -120$, $-60, 0, 60, 120, 180, 240, 300\}$, 学习率$\alpha=0.1, $概率分布常数$\beta=0.5, $未来奖励折扣系数$\gamma=0.9, $ $\lambda=0.9$;

    7) Q($\lambda$)学习: $A=\{-300, -240, -180, -120, -60, 0$, $60, 120, 180, 240, 300\}$, $\alpha=0.1$, $\beta=0.5$, $\gamma=0.9$, $\lambda=0.9$;

    8) R($\lambda$)学习: $A=\{-300, -240, -180, -120, -60, 0$, $60, 120, 180, 240, 300\}$, $\alpha=0.1$, $\beta=0.5$, $\gamma=0.9$, $\lambda=0.9$, $R_0$ $=0;$

    9) 对于所有用于UC的优化算法:进化代数$N_{\mathrm{g}}=50$, 种群数目$P_{\mathrm{s}}=10$;

    10) 对于所有用于ED的优化算法:进化代数$N_{\mathrm{g}}=30$, 种群数目$P_{\mathrm{s}}=10$;

    11) 对于所有用于GCD的优化算法:进化代数$N_{\mathrm{g}}=5$, 种群数目$P_{\mathrm{s}}=10$;

    12) 固定比例GCD控制: ${k_j} = {{\Delta P_j^{\max }}}/ {{\sum {\Delta P_j^{\max }} }}\Delta {P_j}$, $j = 1, 2, \cdots, {J_i}$, $i = 1, 2, 3$.

  • 图  1  不同类型模糊图像示例

    Fig.  1  Examples for different kinds of blurred images

    图  2  基于空域/频域的NR-IQA方法分类

    Fig.  2  Classification of spatial/spectral domain-based NR-IQA methods

    图  3  基于学习的NR-IQA方法分类

    Fig.  3  Classification of learning-based NR-IQA methods

    图  4  不同类型NR-IQA方法在不同人工模糊数据集中平均性能评价指标值比较

    Fig.  4  Average performance evaluation result comparison through different types of NR-IQA methods for different artificial blur databases

    图  5  不同类型NR-IQA方法在不同自然模糊数据集中平均性能评价指标值比较

    Fig.  5  Average performance evaluation result comparison through different types of NR-IQA methods for different natural blur databases

    表  1  含有模糊图像的主要图像质量评价数据集

    Table  1  Main image quality assessment databases including blurred images

    数据集 | 时间 | 参考图像 | 模糊图像 | 模糊类型 | 主观评价 | 分值范围
    IVC[28] | 2005 | 4 | 20 | 高斯模糊 | MOS | 模糊−清晰 [1 5]
    LIVE[22] | 2006 | 29 | 145 | 高斯模糊 | DMOS | 清晰−模糊 [0 100]
    A57[30] | 2007 | 3 | 9 | 高斯模糊 | DMOS | 清晰−模糊 [0 1]
    TID2008[26] | 2009 | 25 | 100 | 高斯模糊 | MOS | 模糊−清晰 [0 9]
    CSIQ[25] | 2009 | 30 | 150 | 高斯模糊 | DMOS | 清晰−模糊 [0 1]
    VCL@FER[29] | 2012 | 23 | 138 | 高斯模糊 | MOS | 模糊−清晰 [0 100]
    TID2013[27] | 2013 | 25 | 125 | 高斯模糊 | MOS | 模糊−清晰 [0 9]
    KADID-10k 1[31] | 2019 | 81 | 405 | 高斯模糊 | MOS | 模糊−清晰 [1 5]
    KADID-10k 2[31] | 2019 | 81 | 405 | 镜头模糊 | MOS | 模糊−清晰 [1 5]
    KADID-10k 3[31] | 2019 | 81 | 405 | 运动模糊 | MOS | 模糊−清晰 [1 5]
    MLIVE1[33] | 2012 | 15 | 225 | 高斯模糊和高斯白噪声 | DMOS | 清晰−模糊 [0 100]
    MLIVE2[33] | 2012 | 15 | 225 | 高斯模糊和JPEG压缩 | DMOS | 清晰−模糊 [0 100]
    MDID2013[32] | 2013 | 12 | 324 | 高斯模糊、JPEG压缩和白噪声 | DMOS | 清晰−模糊 [0 1]
    MDID[34] | 2017 | 20 | 1600 | 高斯模糊、对比度变化、高斯噪声、JPEG或JPEG2000 | MOS | 模糊−清晰 [0 8]
    BID[21] | 2011 | − | 586 | 自然模糊 | MOS | 模糊−清晰 [0 5]
    CID2013[35] | 2013 | − | 480 | 自然模糊 | MOS | 模糊−清晰 [0 100]
    CLIVE[36-37] | 2016 | − | 1162 | 自然模糊 | MOS | 模糊−清晰 [0 100]
    KonIQ-10k [38] | 2018 | − | 10073 | 自然模糊 | MOS | 模糊−清晰 [1 5]

    表  2  基于空域/频域的不同方法优缺点对比

    Table  2  Advantage and disadvantage comparison for different methods based on spatial/spectral domain

    方法分类 | 优点 | 缺点
    边缘信息 | 概念直观、计算复杂度低 | 容易因图像中缺少锐利边缘而影响评价结果
    再模糊理论 | 对图像内容依赖小, 计算复杂度低 | 准确性依赖 FR-IQA 方法
    奇异值分解 | 能较好地提取图像结构、边缘、纹理信息 | 计算复杂度较高
    自由能理论 | 外部输入信号与其生成模型可解释部分之间的差距与视觉感受的图像质量密切相关 | 计算复杂度高
    DFT/DCT/小波变换 | 综合了图像的频域特性和多尺度特征, 准确性和鲁棒性更高 | 计算复杂度高

    表  3  基于学习的不同方法优缺点对比

    Table  3  Advantage and disadvantage comparison for different methods based on learning

    方法分类 | 优点 | 缺点
    SVM | 在小样本训练集上能够取得比其他算法更好的效果 | 评价结果的好坏由提取的特征决定
    NN | 具有很好的非线性映射能力 | 样本较少时, 容易出现过拟合现象, 且计算复杂度随着数据量的增加而增大
    深度学习 | 可以从大量数据中自动学习图像特征的多层表示 | 对数据集中数据量要求大
    字典/码本 | 可以获得图像中的高级特征 | 字典/码本的大小减小时, 性能显著下降
    MVG | 无需图像的 MOS/DMOS 值 | 模型建立困难, 对数据集中数据量要求较大

    表  4  用于对比的不同NR-IQA方法

    Table  4  Different NR-IQA methods for comparison

方法类别 | 方法 | 名称 | 特征 | 模糊/通用
空域 | 边缘信息 | JNB[43] | 计算边缘分块所对应的边缘宽度 | 模糊
空域 | 边缘信息 | CPBD[44] | 计算模糊检测的累积概率 | 模糊
空域 | 边缘信息 | MLV[47] | 计算图像的最大局部变化得到反映图像对比度信息的映射图 | 模糊
空域 | 自由能理论 | ARISM[63] | 每个像素 AR 模型系数的能量差和对比度差 | 模糊
空域 | 边缘信息 | BIBLE[49] | 图像的梯度和 Tchebichef 矩量 | 模糊
空域 | 边缘信息 | Zhan 等[14] | 图像中最大梯度及梯度变化量 | 模糊
频域 | DFT变换 | S3[65] | 在频域测量幅度谱的斜率, 在空域测量空间变化情况 | 模糊
频域 | 小波变换 | LPC-SI[81] | LPC 强度变化作为指标 | 模糊
频域 | 小波变换 | BISHARP[77] | 计算图像的均方根来获取图像局部对比度信息, 同时利用小波变换中对角线小波系数 | 模糊
频域 | HVS滤波器 | HVS-MaxPol[85] | 利用 MaxPol 卷积滤波器分解与图像清晰度相关的有意义特征 | 模糊
学习 (机器学习) | SVM+SVR | BIQI[86] | 对图像进行小波变换后, 利用 GGD 对得到的子带系数进行参数化 | 通用
学习 (机器学习) | SVM+SVR | DIIVINE[87] | 从小波子带系数中提取一系列的统计特征 | 通用
学习 (机器学习) | SVM+SVR | SSEQ[88] | 空间−频域熵特征 | 通用
学习 (机器学习) | SVM+SVR | BLIINDS-II[91] | 多尺度下的广义高斯模型形状参数特征、频率变化系数特征、能量子带特征、基于定位模型的特征 | 通用
学习 (机器学习) | SVR | BRISQUE[96] | GGD 拟合 MSCN 系数作为特征, AGGD 拟合 4 个相邻元素乘积系数作为特征 | 通用
学习 (机器学习) | SVR | RISE[107] | 多尺度图像空间中的梯度值和奇异值特征, 以及多分辨率图像的熵特征 | 模糊
学习 (机器学习) | SVR | Liu 等[109] | 局部模式算子提取图像结构信息, Toggle 算子提取边缘信息 | 模糊
学习 (机器学习) | SVR | Cai 等[110] | 输入图像与其重新模糊版本之间的 Log-Gabor 滤波器响应差异和基于方向选择性的模式差异, 以及输入图像与其 4 个下采样图像之间的自相似性 | 模糊
学习 (深度学习) | CNN | Kang's CNN[116] | 对图像分块进行局部对比度归一化 | 通用
学习 (深度学习) | 浅层CNN+GRNN | Yu's CNN[127] | 对图像分块进行局部对比度归一化 | 模糊
学习 (深度学习) | 聚类技术+RBM | MSFF[139] | Gabor 滤波器提取不同方向和尺度的原始图像特征, 然后由 RBMs 生成特征描述符 | 通用
学习 (深度学习) | DNN | MEON[132] | 原始图像作为输入 | 通用
学习 (深度学习) | CNN | DIQaM-NR[131] | 使用 CNN 提取失真图像块和参考图像块的特征 | 通用
学习 (深度学习) | CNN | DIQA[118] | 图像归一化后, 通过下采样及上采样得到低频图像 | 通用
学习 (深度学习) | CNN | SGDNet[133] | 使用 DCNN 作为特征提取器获取图像特征 | 通用
学习 (深度学习) | 秩学习 | Rank Learning[141] | 选取一定比例的图像块集合作为输入, 梯度信息被用来指导图像块选择过程 | 模糊
学习 (深度学习) | DCNN+SFA | SFA[128] | 多个图像块作为输入, 并使用预先训练好的 DCNN 模型提取特征 | 模糊
学习 (深度学习) | DNN+NSS | NSSADNN[134] | 每个图像块归一化后用 CNNs 提取特征, 得到 1024 维向量 | 通用
学习 (深度学习) | CNN | DB-CNN[123] | 用预训练的 S-CNN 及 VGG-16 分别提取合成失真与真实图像的相关特征 | 通用
学习 (深度学习) | CNN | CGFA-CNN[124] | 用 VGG-16 以提取失真图像的相关特征 | 通用
字典/码本 | 聚类算法+码本 | CORNIA[145] | 未标记图像块中提取局部特征进行 K-means 聚类以构建码本 | 通用
字典/码本 | 聚类算法+码本 | QAC[147] | 用比例池化策略估计每个分块的局部质量, 通过 QAC 学习不同质量级别上的质心作为码本 | 通用
字典/码本 | 稀疏学习+字典 | SPARISH[143] | 以图像块的方式表示模糊图像, 并使用稀疏系数计算块能量 | 模糊
MVG | MVG模型 | NIQE[150] | 提取 MSCN 系数, 再用 GGD 和 AGGD 拟合得到特征 | 通用
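表 4 中多个通用方法 (如 BRISQUE[96]、NIQE[150]) 的特征均以 MSCN (mean subtracted contrast normalized) 系数为起点: 像素减去局部均值后除以局部标准差. 下面给出其计算的示意实现, 其中高斯窗参数与常数 c 为常见取值, 属演示假设:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(img, sigma=7 / 6, c=1.0):
    """计算 MSCN 系数: 局部去均值并除以局部标准差.
    局部均值/方差用高斯加权窗估计; c 为避免除零的常数 (参数为演示假设)."""
    img = img.astype(np.float64)
    mu = gaussian_filter(img, sigma)                   # 局部均值
    var = gaussian_filter(img * img, sigma) - mu * mu  # 局部方差
    sigma_map = np.sqrt(np.maximum(var, 0))            # 截断数值误差造成的负方差
    return (img - mu) / (sigma_map + c)
```

在此基础上, BRISQUE 用 GGD 拟合 MSCN 系数、用 AGGD 拟合相邻系数乘积得到特征向量; NIQE 则用同类特征构建 MVG 模型, 从而无需主观分数.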

    表  5  基于深度学习的方法所采用的不同网络结构

    Table  5  Different network structures of deep learning-based methods

方法 | 网络结构
Kang's CNN[116] | 包括一个含有最大/最小池化的卷积层, 两个全连接层及一个输出结点
Yu's CNN[127] | 采用单一特征层挖掘图像内在特征, 利用 GRNN 评价图像质量
MSFF[139] | 图像的多个特征作为输入, 通过端到端训练学习特征权重
MEON[132] | 由失真判别网络和质量预测网络两个子网络组成, 并采用 GDN 作为激活函数
DIQaM-NR[131] | 包含 10 个卷积层和 5 个池化层用于特征提取, 以及 2 个全连接层进行回归分析
DIQA[118] | 网络训练分为客观失真部分及与人类视觉系统相关部分两个阶段
SGDNet[133] | 包括视觉显著性预测和图像质量预测的两个子任务
Rank Learning[141] | 结合了 Siamese Mobilenet 及多尺度 patch 提取方法
SFA[128] | 包括 4 个步骤: 图像的多 patch 表示, 预先训练好的 DCNN 模型提取特征, 通过 3 种不同统计结构进行特征聚合, 部分最小二乘回归进行质量预测
NSSADNN[134] | 采用多任务学习方式设计, 包括自然场景统计 (NSS) 特征预测任务和质量分数预测任务
DB-CNN[123] | 两个卷积神经网络分别专注于两种失真图像特征提取, 并采用双线性池化实现质量预测
CGFA-CNN[124] | 采用两阶段策略, 首先基于 VGG-16 网络的子网络 1 识别图像中的失真类型, 而后利用子网络 2 实现失真量化

    表  6  基于空域/频域的不同NR-IQA方法在不同数据集中比较结果

    Table  6  Comparison of different spatial/spectral domain-based NR-IQA methods for different databases

方法 | 发表时间 | LIVE (PLCC / SROCC / RMSE / MAE) | CSIQ (PLCC / SROCC / RMSE / MAE)
JNB[43] | 2009 | 0.843 / 0.842 / 11.706 / 9.241 | 0.786 / 0.762 / 0.180 / 0.122
CPBD[44] | 2011 | 0.913 / 0.943 / 8.882 / 6.820 | 0.874 / 0.885 / 0.140 / 0.111
S3[65] | 2012 | 0.919 / 0.963 / 8.578 / 7.335 | 0.894 / 0.906 / 0.135 / 0.110
LPC-SI[81] | 2013 | 0.907 / 0.923 / 9.177 / 7.275 | 0.923 / 0.922 / 0.111 / 0.093
MLV[47] | 2014 | 0.959 / 0.957 / 6.171 / 4.896 | 0.949 / 0.925 / 0.091 / 0.071
ARISM[63] | 2015 | 0.962 / 0.968 / 5.932 / 4.512 | 0.944 / 0.925 / 0.095 / 0.076
BIBLE[49] | 2016 | 0.963 / 0.973 / 5.883 / 4.605 | 0.940 / 0.913 / 0.098 / 0.077
Zhan 等[14] | 2018 | 0.960 / 0.963 / 6.078 / 4.697 | 0.967 / 0.950 / 0.073 / 0.057
BISHARP[77] | 2018 | 0.952 / 0.960 / 6.694 / 5.280 | 0.942 / 0.927 / 0.097 / 0.078
HVS-MaxPol[85] | 2019 | 0.957 / 0.960 / 6.318 / 5.076 | 0.943 / 0.921 / 0.095 / 0.077

方法 | 发表时间 | TID2008 (PLCC / SROCC / RMSE / MAE) | TID2013 (PLCC / SROCC / RMSE / MAE)
JNB[43] | 2009 | 0.661 / 0.667 / 0.881 / 0.673 | 0.695 / 0.690 / 0.898 / 0.687
CPBD[44] | 2011 | 0.820 / 0.841 / 0.672 / 0.524 | 0.854 / 0.852 / 0.649 / 0.526
S3[65] | 2012 | 0.851 / 0.842 / 0.617 / 0.478 | 0.879 / 0.861 / 0.595 / 0.480
LPC-SI[81] | 2013 | 0.861 / 0.896 / 0.599 / 0.478 | 0.869 / 0.919 / 0.621 / 0.507
MLV[47] | 2014 | 0.858 / 0.855 / 0.602 / 0.468 | 0.883 / 0.879 / 0.587 / 0.460
ARISM[63] | 2015 | 0.843 / 0.851 / 0.632 / 0.492 | 0.895 / 0.898 / 0.558 / 0.442
BIBLE[49] | 2016 | 0.893 / 0.892 / 0.528 / 0.413 | 0.905 / 0.899 / 0.531 / 0.426
Zhan 等[14] | 2018 | 0.937 / 0.942 / 0.410 / 0.320 | 0.954 / 0.961 / 0.374 / 0.288
BISHARP[77] | 2018 | 0.877 / 0.880 / 0.564 / 0.439 | 0.892 / 0.896 / 0.565 / 0.449
HVS-MaxPol[85] | 2019 | 0.853 / 0.851 / 0.612 / 0.484 | 0.877 / 0.875 / 0.599 / 0.484
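表 6 ~ 表 8 中的四个指标可按如下方式计算. 需要说明的是, 实际评测中 PLCC、RMSE、MAE 通常在客观分数经 logistic 非线性回归映射到主观分数量纲后再计算, 下面的草图省略了该映射步骤, 仅示意指标本身:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def iqa_metrics(pred, mos):
    """计算 PLCC、SROCC、RMSE、MAE (省略了常用的 logistic 映射步骤)."""
    pred, mos = np.asarray(pred, float), np.asarray(mos, float)
    plcc, _ = pearsonr(pred, mos)    # 线性相关系数
    srocc, _ = spearmanr(pred, mos)  # 秩相关系数, 衡量单调一致性
    rmse = np.sqrt(np.mean((pred - mos) ** 2))
    mae = np.mean(np.abs(pred - mos))
    return plcc, srocc, rmse, mae
```

PLCC/SROCC 越接近 1、RMSE/MAE 越小, 表明客观方法与主观评价越一致.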

    表  7  基于学习的不同NR-IQA方法在不同人工模糊数据集中比较结果

    Table  7  Comparison of different learning-based NR-IQA methods for different artificial blur databases

方法 | 发表时间 | LIVE (PLCC / SROCC) | CSIQ (PLCC / SROCC) | TID2008 (PLCC / SROCC) | TID2013 (PLCC / SROCC)
BIQI[86] | 2010 | 0.920 / 0.914 | 0.846 / 0.773 | 0.794 / 0.799 | 0.825 / 0.815
DIIVINE[87] | 2011 | 0.943 / 0.936 | 0.886 / 0.879 | 0.835 / 0.829 | 0.847 / 0.842
BLIINDS-II[91] | 2012 | 0.939 / 0.931 | 0.886 / 0.892 | 0.842 / 0.859 | 0.857 / 0.862
BRISQUE[96] | 2012 | 0.951 / 0.943 | 0.921 / 0.907 | 0.866 / 0.865 | 0.862 / 0.861
CORNIA[145] | 2012 | 0.968 / 0.969 | 0.781 / 0.714 | 0.932 / 0.932 | 0.904 / 0.912
NIQE[150] | 2013 | 0.939 / 0.930 | 0.918 / 0.891 | 0.832 / 0.823 | 0.816 / 0.807
QAC[147] | 2013 | 0.916 / 0.903 | 0.831 / 0.831 | 0.813 / 0.812 | 0.848 / 0.847
SSEQ[88] | 2014 | 0.961 / 0.948 | 0.871 / 0.870 | 0.858 / 0.852 | 0.863 / 0.862
Kang's CNN[116] | 2014 | 0.963 / 0.983 | 0.774 / 0.781 | 0.880 / 0.850 | 0.931 / 0.922
SPARISH[143] | 2016 | 0.960 / 0.960 | 0.939 / 0.914 | 0.896 / 0.896 | 0.902 / 0.894
Yu's CNN[127] | 2017 | 0.973 / 0.965 | 0.942 / 0.925 | 0.937 / 0.919 | 0.922 / 0.914
RISE[107] | 2017 | 0.962 / 0.949 | 0.946 / 0.928 | 0.929 / 0.922 | 0.942 / 0.934
MEON[132] | 2018 | 0.948 / 0.940 | 0.916 / 0.905 | — / — | 0.891 / 0.880
DIQaM-NR[131] | 2018 | 0.972 / 0.960 | 0.893 / 0.885 | — / — | 0.915 / 0.908
DIQA[118] | 2019 | 0.952 / 0.951 | 0.871 / 0.865 | — / — | 0.921 / 0.918
SGDNet[133] | 2019 | 0.946 / 0.939 | 0.866 / 0.860 | — / — | 0.928 / 0.914
Rank Learning[141] | 2019 | 0.969 / 0.954 | 0.979 / 0.953 | 0.959 / 0.949 | 0.965 / 0.955
SFA[128] | 2019 | 0.972 / 0.963 | 0.946 / 0.937 | — / — | 0.954 / 0.948
NSSADNN[134] | 2019 | 0.971 / 0.981 | 0.923 / 0.930 | — / — | 0.857 / 0.840
CGFA-CNN[124] | 2020 | 0.974 / 0.968 | 0.955 / 0.941 | — / — | — / —
MSFF[139] | 2020 | 0.954 / 0.962 | 0.925 / 0.928 | — / — | 0.921 / 0.928
DB-CNN[123] | 2020 | 0.956 / 0.935 | 0.969 / 0.947 | — / — | 0.857 / 0.844
Liu 等[109] | 2020 | 0.980 / 0.973 | 0.955 / 0.936 | — / — | 0.972 / 0.964
Cai 等[110] | 2020 | 0.958 / 0.955 | 0.952 / 0.923 | — / — | 0.957 / 0.941

    表  8  基于学习的不同NR-IQA方法在不同自然模糊数据集中比较结果

    Table  8  Comparison of different learning-based NR-IQA methods for different natural blur databases

方法 | 发表时间 | BID (PLCC / SROCC) | CID2013 (PLCC / SROCC) | CLIVE (PLCC / SROCC)
BIQI[86] | 2010 | 0.604 / 0.572 | 0.777 / 0.744 | 0.540 / 0.519
DIIVINE[87] | 2011 | 0.506 / 0.489 | 0.499 / 0.477 | 0.558 / 0.509
BLIINDS-II[91] | 2012 | 0.558 / 0.530 | 0.731 / 0.701 | 0.507 / 0.463
BRISQUE[96] | 2012 | 0.612 / 0.590 | 0.714 / 0.682 | 0.645 / 0.607
CORNIA[145] | 2012 | 0.680 / 0.624 | — / — | 0.665 / 0.618
NIQE[150] | 2013 | 0.471 / 0.469 | 0.693 / 0.633 | 0.478 / 0.421
QAC[147] | 2013 | 0.321 / 0.318 | 0.187 / 0.162 | 0.318 / 0.298
SSEQ[88] | 2014 | 0.604 / 0.581 | 0.689 / 0.676 | — / —
Kang's CNN[116] | 2014 | 0.498 / 0.482 | 0.523 / 0.526 | 0.522 / 0.496
SPARISH[143] | 2016 | 0.356 / 0.307 | 0.678 / 0.661 | 0.484 / 0.402
Yu's CNN[127] | 2017 | 0.560 / 0.557 | 0.715 / 0.704 | 0.501 / 0.502
RISE[107] | 2017 | 0.602 / 0.584 | 0.793 / 0.769 | 0.555 / 0.515
MEON[132] | 2018 | 0.482 / 0.470 | 0.703 / 0.701 | 0.693 / 0.688
DIQaM-NR[131] | 2018 | 0.476 / 0.461 | 0.686 / 0.674 | 0.601 / 0.606
DIQA[118] | 2019 | 0.506 / 0.492 | 0.720 / 0.708 | 0.704 / 0.703
SGDNet[133] | 2019 | 0.422 / 0.417 | 0.653 / 0.644 | 0.872 / 0.851
Rank Learning[141] | 2019 | 0.751 / 0.719 | 0.863 / 0.836 | — / —
SFA[128] | 2019 | 0.840 / 0.826 | — / — | 0.833 / 0.812
NSSADNN[134] | 2019 | 0.574 / 0.568 | 0.825 / 0.748 | 0.813 / 0.745
CGFA-CNN[124] | 2020 | — / — | — / — | 0.846 / 0.837
DB-CNN[123] | 2020 | 0.475 / 0.464 | 0.686 / 0.672 | 0.869 / 0.851
Cai 等[110] | 2020 | 0.633 / 0.603 | 0.880 / 0.874 | — / —
    [1] Jayageetha J, Vasanthanayaki C. Medical image quality assessment using CSO based deep neural network. Journal of Medical Systems, 2018, 42(11): Article No. 224
    [2] Ma J J, Nakarmi U, Kin C Y S, Sandino C M, Cheng J Y, Syed A B, et al. Diagnostic image quality assessment and classification in medical imaging: Opportunities and challenges. In: Proceedings of the 17th International Symposium on Biomedical Imaging (ISBI). Iowa City, USA: IEEE, 2020. 337−340
    [3] Chen G B, Zhai M T. Quality assessment on remote sensing image based on neural networks. Journal of Visual Communication and Image Representation, 2019, 63: Article No. 102580
    [4] Hombalimath A, Manjula H T, Khanam A, Girish K. Image quality assessment for iris recognition. International Journal of Scientific and Research Publications, 2018, 8(6): 100-103
    [5] Zhai G T, Min X K. Perceptual image quality assessment: A survey. Science China Information Sciences, 2020, 63(11): Article No. 211301
    [6] 王烨茹. 基于数字图像处理的自动对焦方法研究 [博士学位论文], 浙江大学, 中国, 2018.

    Wang Ye-Ru. Research on Auto-focus Methods Based on Digital Imaging Processing [Ph.D. dissertation], Zhejiang University, China, 2018.
    [7] 尤玉虎, 刘通, 刘佳文. 基于图像处理的自动对焦技术综述. 激光与红外, 2013, 43(2): 132-136 doi: 10.3969/j.issn.1001-5078.2013.02.003

    You Yu-Hu, Liu Tong, Liu Jia-Wen. Survey of the auto-focus methods based on image processing. Laser and Infrared, 2013, 43(2): 132-136 doi: 10.3969/j.issn.1001-5078.2013.02.003
    [8] Cannon M. Blind deconvolution of spatially invariant image blurs with phase. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1976, 24(1): 58-63 doi: 10.1109/TASSP.1976.1162770
    [9] Tekalp A M, Kaufman H, Woods J W. Identification of image and blur parameters for the restoration of noncausal blurs. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1986, 34(4): 963-972 doi: 10.1109/TASSP.1986.1164886
    [10] Pavlovic G, Tekalp A M. Maximum likelihood parametric blur identification based on a continuous spatial domain model. IEEE Transactions on Image Processing, 1992, 1(4): 496-504 doi: 10.1109/83.199919
    [11] Kim S K, Park S R, Paik J K. Simultaneous out-of-focus blur estimation and restoration for digital auto-focusing system. IEEE Transactions on Consumer Electronics, 1998, 44(3): 1071-1075 doi: 10.1109/30.713236
    [12] Sada M M, Mahesh G M. Image deblurring techniques-a detail review. International Journal of Scientific Research in Science, Engineering and Technology, 2018, 4(2): 176-188
    [13] Wang R X, Tao D C. Recent progress in image deblurring. arXiv:1409.6838, 2014.
    [14] Zhan Y B, Zhang R. No-reference image sharpness assessment based on maximum gradient and variability of gradients. IEEE Transactions on Multimedia, 2018, 20(7): 1796-1808 doi: 10.1109/TMM.2017.2780770
    [15] Wang X W, Liang X, Zheng J J, Zhou H J. Fast detection and segmentation of partial image blur based on discrete Walsh-Hadamard transform. Signal Processing: Image Communication, 2019, 70: 47-56 doi: 10.1016/j.image.2018.09.007
    [16] Liao L F, Zhang X, Zhao F Q, Zhong T, Pei Y C, Xu X M, et al. Joint image quality assessment and brain extraction of fetal MRI using deep learning. In: Proceedings of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham, Germany: Springer, 2020. 415−424
    [17] Li D Q, Jiang T T. Blur-specific no-reference image quality assessment: A classification and review of representative methods. In: Proceedings of the 2019 International Conference on Sensing and Imaging. Cham, Germany: Springer, 2019. 45−68
    [18] Dharmishtha P, Jaliya U K, Vasava H D. A review: No-reference/blind image quality assessment. International Research Journal of Engineering and Technology, 2017, 4(1): 339-343
    [19] Yang X H, Li F, Liu H T. A survey of DNN methods for blind image quality assessment. IEEE Access, 2019, 7: 123788-123806 doi: 10.1109/ACCESS.2019.2938900
    [20] 王志明. 无参考图像质量评价综述. 自动化学报, 2015, 41(6): 1062-1079

    Wang Zhi-Ming. Review of no-reference image quality assessment. Acta Automatica Sinica, 2015, 41(6): 1062-1079
    [21] Ciancio A, da Costa A L N T T, da Silva E A B, Said A, Samadani R, Obrador P. No-reference blur assessment of digital pictures based on multifeature classifiers. IEEE Transactions on Image Processing, 2011, 20(1): 64-75 doi: 10.1109/TIP.2010.2053549
    [22] Sheikh H R, Sabir M F, Bovik A C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Transactions on Image Processing, 2006, 15(11): 3440-3451 doi: 10.1109/TIP.2006.881959
    [23] Zhu X, Milanfar P. Removing atmospheric turbulence via space-invariant deconvolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(1): 157-170 doi: 10.1109/TPAMI.2012.82
    [24] Franzen R. Kodak Lossless True Color Image Suite [Online], available: http://www.r0k.us/graphics/kodak/, May 1, 1999
    [25] Larson E C, Chandler D M. Most apparent distortion: Full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging, 2010, 19(1): Article No. 011006
    [26] Ponomarenko N N, Lukin V V, Zelensky A, Egiazarian K, Astola J, Carli M, et al. TID2008 - a database for evaluation of full-reference visual quality assessment metrics. Advances of Modern Radioelectronics, 2009, 10: 30-45
    [27] Ponomarenko N, Ieremeiev O, Lukin V, Egiazarian K, Jin L N, Astola J, et al. Color image database TID2013: Peculiarities and preliminary results. In: Proceedings of the 2013 European Workshop on Visual Information Processing (EUVIP). Paris, France: IEEE, 2013. 106−111
    [28] Le Callet P, Autrusseau F. Subjective quality assessment IRCCyN/IVC database [Online], available: http://www.irccyn.ec-nantes.fr/ivcdb/, February 4, 2015
    [29] Zarić A E, Tatalović N, Brajković N, Hlevnjak H, Lončarić M, Dumić E, et al. VCL@FER image quality assessment database. Automatika, 2012, 53(4): 344-354 doi: 10.7305/automatika.53-4.241
    [30] Chandler D M, Hemami S S. VSNR: A wavelet-based visual signal-to-noise ratio for natural images. IEEE Transactions on Image Processing, 2007, 16(9): 2284-2298 doi: 10.1109/TIP.2007.901820
    [31] Lin H H, Hosu V, Saupe D. KADID-10k: A large-scale artificially distorted IQA database. In: Proceedings of the 11th International Conference on Quality of Multimedia Experience (QoMEX). Berlin, Germany: IEEE, 2019. 1−3
    [32] Gu K, Zhai G T, Yang X K, Zhang W J. Hybrid no-reference quality metric for singly and multiply distorted images. IEEE Transactions on Broadcasting, 2014, 60(3): 555-567 doi: 10.1109/TBC.2014.2344471
    [33] Jayaraman D, Mittal A, Moorthy A K, Bovik A C. Objective quality assessment of multiply distorted images. In: Proceedings of the 2012 Conference Record of the 46th Asilomar Conference on Signals, Systems and Computers (ASILOMAR). Pacific Grove, USA: IEEE, 2012. 1693−1697
    [34] Sun W, Zhou F, Liao Q M. MDID: A multiply distorted image database for image quality assessment. Pattern Recognition, 2017, 61: 153-168 doi: 10.1016/j.patcog.2016.07.033
    [35] Virtanen T, Nuutinen M, Vaahteranoksa M, Oittinen P, Häkkinen J. CID2013: A database for evaluating no-reference image quality assessment algorithms. IEEE Transactions on Image Processing, 2015, 24(1): 390-402 doi: 10.1109/TIP.2014.2378061
    [36] Ghadiyaram D, Bovik A C. Massive online crowdsourced study of subjective and objective picture quality. IEEE Transactions on Image Processing, 2016, 25(1): 372-387 doi: 10.1109/TIP.2015.2500021
    [37] Ghadiyaram D, Bovik A C. LIVE in the wild image quality challenge database [Online], available: http://live.ece.utexas.edu/research/ChallengeDB/index.html, 2015
    [38] Hosu V, Lin H H, Sziranyi T, Saupe D. KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing, 2020, 29: 4041-4056 doi: 10.1109/TIP.2020.2967829
    [39] Zhu X, Milanfar P. Image reconstruction from videos distorted by atmospheric turbulence. In: Proceedings of the SPIE 7543, Visual Information Processing and Communication. San Jose, USA: SPIE, 2010. 75430S
    [40] Marziliano P, Dufaux F, Winkler S, Ebrahimi T. Perceptual blur and ringing metrics: Application to JPEG2000. Signal Processing: Image Communication, 2004, 19(2): 163-172 doi: 10.1016/j.image.2003.08.003
    [41] 赵巨峰, 冯华君, 徐之海, 李奇. 基于模糊度和噪声水平的图像质量评价方法. 光电子•激光, 2010, 21(7): 1062-1066

    Zhao Ju-Feng, Feng Hua-Jun, Xu Zhi-Hai, Li Qi. Image quality assessment based on blurring and noise level. Journal of Optoelectronics • Laser, 2010, 21(7): 1062-1066
    [42] Zhang F Y, Roysam B. Blind quality metric for multidistortion images based on cartoon and texture decomposition. IEEE Signal Processing Letters, 2016, 23(9): 1265-1269 doi: 10.1109/LSP.2016.2594166
    [43] Ferzli R, Karam L J. A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE Transactions on Image Processing, 2009, 18(4): 717-728 doi: 10.1109/TIP.2008.2011760
    [44] Narvekar N D, Karam L J. A no-reference image blur metric based on the cumulative probability of blur detection (CPBD). IEEE Transactions on Image Processing, 2011, 20(9): 2678-2683 doi: 10.1109/TIP.2011.2131660
    [45] Wu S Q, Lin W S, Xie S L, Lu Z K, Ong E P, Yao S S. Blind blur assessment for vision-based applications. Journal of Visual Communication and Image Representation, 2009, 20(4): 231-241 doi: 10.1016/j.jvcir.2009.03.002
    [46] Ong E P, Lin W S, Lu Z K, Yang X K, Yao S S, Pan F, et al. A no-reference quality metric for measuring image blur. In: Proceedings of the 7th International Symposium on Signal Processing and Its Applications. Paris, France: IEEE, 2003. 469−472
    [47] Bahrami K, Kot A C. A fast approach for no-reference image sharpness assessment based on maximum local variation. IEEE Signal Processing Letters, 2014, 21(6): 751-755 doi: 10.1109/LSP.2014.2314487
    [48] 蒋平, 张建州. 基于局部最大梯度的无参考图像质量评价. 电子与信息学报, 2015, 37(11): 2587-2593

    Jiang Ping, Zhang Jian-Zhou. No-reference image quality assessment based on local maximum gradient. Journal of Electronics & Information Technology, 2015, 37(11): 2587-2593
    [49] Li L D, Lin W S, Wang X S, Yang G B, Bahrami K, Kot A C. No-reference image blur assessment based on discrete orthogonal moments. IEEE Transactions on Cybernetics, 2016, 46(1): 39-50 doi: 10.1109/TCYB.2015.2392129
    [50] Crete F, Dolmiere T, Ladret P, Nicolas M. The blur effect: Perception and estimation with a new no-reference perceptual blur metric. In: Proceedings of the SPIE 6492, Human Vision and Electronic Imaging XII. San Jose, USA: SPIE, 2007. 64920I
    [51] Wang Z, Bovik A C, Sheikh H R, Simoncelli E P. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 2004, 13(4): 600-612 doi: 10.1109/TIP.2003.819861
    [52] 桑庆兵, 苏媛媛, 李朝锋, 吴小俊. 基于梯度结构相似度的无参考模糊图像质量评价. 光电子•激光, 2013, 24(3): 573-577

    Sang Qing-Bing, Su Yuan-Yuan, Li Chao-Feng, Wu Xiao-Jun. No-reference blur image quality assemssment based on gradient similarity. Journal of Optoelectronics • Laser, 2013, 24(3): 573-577
    [53] 邵宇, 孙富春, 李洪波. 基于视觉特性的无参考型遥感图像质量评价方法. 清华大学学报(自然科学版), 2013, 53(4): 550-555

    Shao Yu, Sun Fu-Chun, Li Hong-Bo. No-reference remote sensing image quality assessment method using visual properties. Journal of Tsinghua University (Science & Technology), 2013, 53(4): 550-555
    [54] Wang T, Hu C, Wu S Q, Cui J L, Zhang L Y, Yang Y P, et al. NRFSIM: A no-reference image blur metric based on FSIM and re-blur approach. In: Proceedings of the 2017 IEEE International Conference on Information and Automation (ICIA). Macau, China: IEEE, 2017. 698−703
    [55] Zhang L, Zhang L, Mou X Q, Zhang D. FSIM: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 2011, 20(8): 2378-2386 doi: 10.1109/TIP.2011.2109730
    [56] Bong D B L, Khoo B E. An efficient and training-free blind image blur assessment in the spatial domain. IEICE Transactions on Information and Systems, 2014, E97-D(7): 1864-1871 doi: 10.1587/transinf.E97.D.1864
    [57] 王红玉, 冯筠, 牛维, 卜起荣, 贺小伟. 基于再模糊理论的无参考图像质量评价. 仪器仪表学报, 2016, 37(7): 1647-1655 doi: 10.3969/j.issn.0254-3087.2016.07.026

    Wang Hong-Yu, Feng Jun, Niu Wei, Bu Qi-Rong, He Xiao-Wei. No-reference image quality assessment based on re-blur theory. Chinese Journal of Scientific Instrument, 2016, 37(7): 1647-1655 doi: 10.3969/j.issn.0254-3087.2016.07.026
    [58] 王冠军, 吴志勇, 云海姣, 梁敏华, 杨华. 结合图像二次模糊范围和奇异值分解的无参考模糊图像质量评价. 计算机辅助设计与图形学学报, 2016, 28(4): 653-661 doi: 10.3969/j.issn.1003-9775.2016.04.016

    Wang Guan-Jun, Wu Zhi-Yong, Yun Hai-Jiao, Liang Min-Hua, Yang Hua. No-reference quality assessment for blur image combined with re-blur range and singular value decomposition. Journal of Computer-Aided Design and Computer Graphics, 2016, 28(4): 653-661 doi: 10.3969/j.issn.1003-9775.2016.04.016
    [59] Chetouani A, Mostafaoui G, Beghdadi A. A new free reference image quality index based on perceptual blur estimation. In: Proceedings of the 10th Pacific-Rim Conference on Multimedia. Bangkok, Thailand: Springer, 2009. 1185−1196
    [60] Sang Q B, Qi H X, Wu X J, Li C F, Bovik A C. No-reference image blur index based on singular value curve. Journal of Visual Communication and Image Representation, 2014, 25(7): 1625-1630 doi: 10.1016/j.jvcir.2014.08.002
    [61] Qureshi M A, Deriche M, Beghdadi A. Quantifying blur in colour images using higher order singular values. Electronics Letters, 2016, 52(21): 1755-1757 doi: 10.1049/el.2016.1792
    [62] Zhai G T, Wu X L, Yang X K, Lin W S, Zhang W J. A psychovisual quality metric in free-energy principle. IEEE Transactions on Image Processing, 2012, 21(1): 41-52 doi: 10.1109/TIP.2011.2161092
    [63] Gu K, Zhai G T, Lin W S, Yang X K, Zhang W J. No-reference image sharpness assessment in autoregressive parameter space. IEEE Transactions on Image Processing, 2015, 24(10): 3218-3231 doi: 10.1109/TIP.2015.2439035
    [64] Chetouani A, Beghdadi A, Deriche M. A new reference-free image quality index for blur estimation in the frequency domain. In: Proceedings of the 2009 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). Ajman, United Arab Emirates: IEEE, 2009. 155−159
    [65] Vu C T, Phan T D, Chandler D M. S3: A spectral and spatial measure of local perceived sharpness in natural images. IEEE Transactions on Image Processing, 2012, 21(3): 934-945 doi: 10.1109/TIP.2011.2169974
    [66] 卢彦飞, 张涛, 郑健, 李铭, 章程. 基于局部标准差与显著图的模糊图像质量评价方法. 吉林大学学报(工学版), 2016, 46(4): 1337-1343

    Lu Yan-Fei, Zhang Tao, Zheng Jian, Li Ming, Zhang Cheng. No-reference blurring image quality assessment based on local standard deviation and saliency map. Journal of Jilin University (Engineering and Technology Edition), 2016, 46(4): 1337-1343
    [67] Marichal X, Ma W Y, Zhang H J. Blur determination in the compressed domain using DCT information. In: Proceedings of the 1999 International Conference on Image Processing (Cat. 99CH36348). Kobe, Japan: IEEE, 1999. 386−390
    [68] Caviedes J, Oberti F. A new sharpness metric based on local kurtosis, edge and energy information. Signal Processing: Image Communication, 2004, 19(2): 147-161 doi: 10.1016/j.image.2003.08.002
    [69] 张士杰, 李俊山, 杨亚威, 张仲敏. 湍流退化红外图像降晰函数辨识. 光学 精密工程, 2013, 21(2): 514-521 doi: 10.3788/OPE.20132102.0514

    Zhang Shi-Jie, Li Jun-Shan, Yang Ya-Wei, Zhang Zhong-Min. Blur identification of turbulence-degraded IR images. Optics and Precision Engineering, 2013, 21(2): 514-521 doi: 10.3788/OPE.20132102.0514
    [70] Zhang S Q, Wu T, Xu X H, Cheng Z M, Chang C C. No-reference image blur assessment based on SIFT and DCT. Journal of Information Hiding and Multimedia Signal Processing, 2018, 9(1): 219-231
    [71] Zhang S Q, Li P C, Xu X H, Li L, Chang C C. No-reference image blur assessment based on response function of singular values. Symmetry, 2018, 10(8): Article No. 304
    [72] 卢亚楠, 谢凤英, 周世新, 姜志国, 孟如松. 皮肤镜图像散焦模糊与光照不均混叠时的无参考质量评价. 自动化学报, 2014, 40(3): 480-488

    Lu Ya-Nan, Xie Feng-Ying, Zhou Shi-Xin, Jiang Zhi-Guo, Meng Ru-Song. Non-reference quality assessment of dermoscopy images with defocus blur and uneven illumination distortion. Acta Automatica Sinica, 2014, 40(3): 480-488
    [73] Tong H H, Li M J, Zhang H J, Zhang C S. Blur detection for digital images using wavelet transform. In: Proceedings of the 2004 IEEE International Conference on Multimedia and Expo (ICME). Taipei, China: IEEE, 2004. 17−20
    [74] Ferzli R, Karam L J. No-reference objective wavelet based noise immune image sharpness metric. In: Proceedings of the 2005 IEEE International Conference on Image Processing. Genova, Italy: IEEE, 2005. Article No. I-405
    [75] Kerouh F. A no reference quality metric for measuring image blur in wavelet domain. International Journal of Digital Information and Wireless Communications, 2012, 4(1): 803-812
    [76] Vu P V, Chandler D M. A fast wavelet-based algorithm for global and local image sharpness estimation. IEEE Signal Processing Letters, 2012, 19(7): 423-426 doi: 10.1109/LSP.2012.2199980
    [77] Gvozden G, Grgic S, Grgic M. Blind image sharpness assessment based on local contrast map statistics. Journal of Visual Communication and Image Representation, 2018, 50: 145-158 doi: 10.1016/j.jvcir.2017.11.017
    [78] Wang Z, Simoncelli E P. Local phase coherence and the perception of blur. In: Proceedings of the 16th International Conference on Neural Information Processing Systems. Whistler British Columbia, Canada: MIT Press, 2003. 1435−1442
    [79] Ciancio A, da Costa A L N T, da Silva E A B, Said A, Samadani R, Obrador P. Objective no-reference image blur metric based on local phase coherence. Electronics Letters, 2009, 45(23): 1162-1163 doi: 10.1049/el.2009.1800
    [80] Hassen R, Wang Z, Salama M. No-reference image sharpness assessment based on local phase coherence measurement. In: Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. Dallas, USA: IEEE, 2010. 2434−2437
    [81] Hassen R, Wang Z, Salama M M A. Image sharpness assessment based on local phase coherence. IEEE Transactions on Image Processing, 2013, 22(7): 2798-2810 doi: 10.1109/TIP.2013.2251643
    [82] Do M N, Vetterli M. The contourlet transform: An efficient directional multiresolution image representation. IEEE Transactions on Image Processing, 2005, 14(12): 2091-2106 doi: 10.1109/TIP.2005.859376
    [83] 楼斌, 沈海斌, 赵武锋, 严晓浪. 基于自然图像统计的无参考图像质量评价. 浙江大学学报(工学版), 2010, 44(2): 248-252 doi: 10.3785/j.issn.1008-973X.2010.02.007

    Lou Bin, Shen Hai-Bin, Zhao Wu-Feng, Yan Xiao-Lang. No-reference image quality assessment based on statistical model of natural image. Journal of Zhejiang University (Engineering Science), 2010, 44(2): 248-252 doi: 10.3785/j.issn.1008-973X.2010.02.007
    [84] 焦淑红, 齐欢, 林维斯, 唐琳, 申维和. 基于Contourlet统计特性的无参考图像质量评价. 吉林大学学报(工学版), 2016, 46(2): 639-645

    Jiao Shu-Hong, Qi Huan, Lin Wei-Si, Tang Lin, Shen Wei-He. No-reference quality assessment based on the statistics in Contourlet domain. Journal of Jilin University (Engineering and Technology Edition), 2016, 46(2): 639-645
    [85] Hosseini M S, Zhang Y Y, Plataniotis K N. Encoding visual sensitivity by MaxPol convolution filters for image sharpness assessment. IEEE Transactions on Image Processing, 2019, 28(9): 4510-4525 doi: 10.1109/TIP.2019.2906582
    [86] Moorthy A K, Bovik A C. A two-step framework for constructing blind image quality indices. IEEE Signal Processing Letters, 2010, 17(5): 513-516 doi: 10.1109/LSP.2010.2043888
    [87] Moorthy A K, Bovik A C. Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Transactions on Image Processing, 2011, 20(12): 3350-3364 doi: 10.1109/TIP.2011.2147325
    [88] Liu L X, Liu B, Huang H, Bovik A C. No-reference image quality assessment based on spatial and spectral entropies. Signal Processing: Image Communication, 2014, 29(8): 856-863 doi: 10.1016/j.image.2014.06.006
    [89] 陈勇, 帅锋, 樊强. 基于自然统计特征分布的无参考图像质量评价. 电子与信息学报, 2016, 38(7): 1645-1653

    Chen Yong, Shuai Feng, Fan Qiang. A no-reference image quality assessment based on distribution characteristics of natural statistics. Journal of Electronics and Information Technology, 2016, 38(7): 1645-1653
    [90] Zhang Y, Chandler D M. Opinion-unaware blind quality assessment of multiply and singly distorted images via distortion parameter estimation. IEEE Transactions on Image Processing, 2018, 27(11): 5433-5448 doi: 10.1109/TIP.2018.2857413
    [91] Saad M A, Bovik A C, Charrier C. Blind image quality assessment: A natural scene statistics approach in the DCT domain. IEEE Transactions on Image Processing, 2012, 21(8): 3339-3352 doi: 10.1109/TIP.2012.2191563
    [92] Saad M A, Bovik A C, Charrier C. A DCT statistics-based blind image quality index. IEEE Signal Processing Letters, 2010, 17(6): 583-586 doi: 10.1109/LSP.2010.2045550
    [93] Liu L X, Dong H P, Huang H, Bovik A C. No-reference image quality assessment in curvelet domain. Signal Processing: Image Communication, 2014, 29(4): 494-505 doi: 10.1016/j.image.2014.02.004
    [94] Zhang Y, Chandler D M. No-reference image quality assessment based on log-derivative statistics of natural scenes. Journal of Electronic Imaging, 2013, 22(4): Article No. 043025
    [95] 李俊峰. 基于RGB色彩空间自然场景统计的无参考图像质量评价. 自动化学报, 2015, 41(9): 1601-1615

    Li Jun-Feng. No-reference image quality assessment based on natural scene statistics in RGB color space. Acta Automatica Sinica, 2015, 41(9): 1601-1615
    [96] Mittal A, Moorthy A K, Bovik A C. No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing, 2012, 21(12): 4695-4708 doi: 10.1109/TIP.2012.2214050
    [97] 唐祎玲, 江顺亮, 徐少平. 基于非零均值广义高斯模型与全局结构相关性的BRISQUE改进算法. 计算机辅助设计与图形学学报, 2018, 30(2): 298-308

    Tang Yi-Ling, Jiang Shun-Liang, Xu Shao-Ping. An improved BRISQUE algorithm based on non-zero mean generalized Gaussian model and global structural correlation coefficients. Journal of Computer-Aided Design & Computer Graphics, 2018, 30(2): 298-308
    [98] Ye P, Doermann D. No-reference image quality assessment using visual codebooks. IEEE Transactions on Image Processing, 2012, 21(7): 3129-3138 doi: 10.1109/TIP.2012.2190086
    [99] Xue W F, Mou X Q, Zhang L, Bovik A C, Feng X C. Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features. IEEE Transactions on Image Processing, 2014, 23(11): 4850-4862 doi: 10.1109/TIP.2014.2355716
    [100] Smola A J, Schölkopf B. A tutorial on support vector regression. Statistics and Computing, 2004, 14(3): 199-222 doi: 10.1023/B:STCO.0000035301.49549.88
    [101] 陈勇, 吴明明, 房昊, 刘焕淋. 基于差异激励的无参考图像质量评价. 自动化学报, 2020, 46(8): 1727-1737

    Chen Yong, Wu Ming-Ming, Fang Hao, Liu Huan-Lin. No-reference image quality assessment based on differential excitation. Acta Automatica Sinica, 2020, 46(8): 1727-1737
    [102] Li Q H, Lin W S, Xu J T, Fang Y M. Blind image quality assessment using statistical structural and luminance features. IEEE Transactions on Multimedia, 2016, 18(12): 2457-2469 doi: 10.1109/TMM.2016.2601028
    [103] Li C F, Zhang Y, Wu X J, Zheng Y H. A multi-scale learning local phase and amplitude blind image quality assessment for multiply distorted images. IEEE Access, 2018, 6: 64577-64586 doi: 10.1109/ACCESS.2018.2877714
    [104] Gao F, Tao D C, Gao X B, Li X L. Learning to rank for blind image quality assessment. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(10): 2275-2290 doi: 10.1109/TNNLS.2014.2377181
    [105] 桑庆兵, 李朝锋, 吴小俊. 基于灰度共生矩阵的无参考模糊图像质量评价方法. 模式识别与人工智能, 2013, 26(5): 492-497 doi: 10.3969/j.issn.1003-6059.2013.05.012

    Sang Qing-Bing, Li Chao-Feng, Wu Xiao-Jun. No-reference blurred image quality assessment based on gray level co-occurrence matrix. Pattern Recognition and Artificial Intelligence, 2013, 26(5): 492-497 doi: 10.3969/j.issn.1003-6059.2013.05.012
    [106] Oh T, Park J, Seshadrinathan K, Lee S, Bovik A C. No-reference sharpness assessment of camera-shaken images by analysis of spectral structure. IEEE Transactions on Image Processing, 2014, 23(12): 5428-5439 doi: 10.1109/TIP.2014.2364925
    [107] Li L D, Xia W H, Lin W S, Fang Y M, Wang S Q. No-reference and robust image sharpness evaluation based on multiscale spatial and spectral features. IEEE Transactions on Multimedia, 2017, 19(5): 1030-1040 doi: 10.1109/TMM.2016.2640762
    [108] Li L D, Yan Y, Lu Z L, Wu J J, Gu K, Wang S Q. No-reference quality assessment of deblurred images based on natural scene statistics. IEEE Access, 2017, 5: 2163-2171 doi: 10.1109/ACCESS.2017.2661858
    [109] Liu L X, Gong J C, Huang H, Sang Q B. Blind image blur metric based on orientation-aware local patterns. Signal Processing: Image Communication, 2020, 80: Article No. 115654
    [110] Cai H, Wang M J, Mao W D, Gong M L. No-reference image sharpness assessment based on discrepancy measures of structural degradation. Journal of Visual Communication and Image Representation, 2020, 71: Article No. 102861
    [111] 李朝锋, 唐国凤, 吴小俊, 琚宜文. 学习相位一致特征的无参考图像质量评价. 电子与信息学报, 2013, 35(2): 484-488

    Li Chao-Feng, Tang Guo-Feng, Wu Xiao-Jun, Ju Yi-Wen. No-reference image quality assessment with learning phase congruency feature. Journal of Electronics and Information Technology, 2013, 35(2): 484-488
    [112] Li C F, Bovik A C, Wu X J. Blind image quality assessment using a general regression neural network. IEEE Transactions on Neural Networks, 2011, 22(5): 793-799 doi: 10.1109/TNN.2011.2120620
    [113] Liu L X, Hua Y, Zhao Q J, Huang H, Bovik A C. Blind image quality assessment by relative gradient statistics and adaboosting neural network. Signal Processing: Image Communication, 2016, 40: 1-15 doi: 10.1016/j.image.2015.10.005
    [114] Shen Li-Li, Hang Ning. No-reference image quality assessment using joint multiple edge detection. Chinese Journal of Engineering, 2018, 40(8): 996-1004
    [115] Liu Y T, Gu K, Wang S Q, Zhao D B, Gao W. Blind quality assessment of camera images based on low-level and high-level statistical features. IEEE Transactions on Multimedia, 2019, 21(1): 135-146 doi: 10.1109/TMM.2018.2849602
    [116] Kang L, Ye P, Li Y, Doermann D. Convolutional neural networks for no-reference image quality assessment. In: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Columbus, USA: IEEE, 2014. 1733−1740
    [117] Kim J, Lee S. Fully deep blind image quality predictor. IEEE Journal of Selected Topics in Signal Processing, 2017, 11(1): 206-220 doi: 10.1109/JSTSP.2016.2639328
    [118] Kim J, Nguyen A D, Lee S. Deep CNN-based blind image quality predictor. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(1): 11-24 doi: 10.1109/TNNLS.2018.2829819
    [119] Guan J W, Yi S, Zeng X Y, Cham W K, Wang X G. Visual importance and distortion guided deep image quality assessment framework. IEEE Transactions on Multimedia, 2017, 19(11): 2505-2520 doi: 10.1109/TMM.2017.2703148
    [120] Bianco S, Celona L, Napoletano P, Schettini R. On the use of deep learning for blind image quality assessment. Signal, Image and Video Processing, 2018, 12(2): 355-362 doi: 10.1007/s11760-017-1166-8
    [121] Pan D, Shi P, Hou M, Ying Z F, Fu S Z, Zhang Y. Blind predicting similar quality map for image quality assessment. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, USA: IEEE, 2018. 6373−6382
    [122] He L H, Zhong Y Z, Lu W, Gao X B. A visual residual perception optimized network for blind image quality assessment. IEEE Access, 2019, 7: 176087-176098 doi: 10.1109/ACCESS.2019.2957292
    [123] Zhang W X, Ma K D, Yan J, Deng D X, Wang Z. Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 30(1): 36-47 doi: 10.1109/TCSVT.2018.2886771
    [124] Cai W P, Fan C E, Zou L, Liu Y F, Ma Y, Wu M Y. Blind image quality assessment based on classification guidance and feature aggregation. Electronics, 2020, 9(11): Article No. 1811
    [125] Li D Q, Jiang T T, Jiang M. Exploiting high-level semantics for no-reference image quality assessment of realistic blur images. In: Proceedings of the 25th ACM International Conference on Multimedia. Mountain View, USA: ACM, 2017. 378−386
    [126] Yu S D, Jiang F, Li L D, Xie Y Q. CNN-GRNN for image sharpness assessment. In: Proceedings of the 2016 Asian Conference on Computer Vision. Taipei, China: Springer, 2016. 50−61
    [127] Yu S D, Wu S B, Wang L, Jiang F, Xie Y Q, Li L D. A shallow convolutional neural network for blind image sharpness assessment. PLoS One, 2017, 12(5): Article No. e0176632
    [128] Li D Q, Jiang T T, Lin W S, Jiang M. Which has better visual quality: The clear blue sky or a blurry animal?. IEEE Transactions on Multimedia, 2019, 21(5): 1221-1234 doi: 10.1109/TMM.2018.2875354
    [129] Li Y M, Po L M, Xu X Y, Feng L T, Yuan F, Cheung C H, et al. No-reference image quality assessment with shearlet transform and deep neural networks. Neurocomputing, 2015, 154: 94-109 doi: 10.1016/j.neucom.2014.12.015
    [130] Gao F, Yu J, Zhu S G, Huang Q M, Tian Q. Blind image quality prediction by exploiting multi-level deep representations. Pattern Recognition, 2018, 81: 432-442 doi: 10.1016/j.patcog.2018.04.016
    [131] Bosse S, Maniry D, Müller K R, Wiegand T, Samek W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Transactions on Image Processing, 2018, 27(1): 206-219 doi: 10.1109/TIP.2017.2760518
    [132] Ma K D, Liu W T, Zhang K, Duanmu Z F, Wang Z, Zuo W M. End-to-end blind image quality assessment using deep neural networks. IEEE Transactions on Image Processing, 2018, 27(3): 1202-1213 doi: 10.1109/TIP.2017.2774045
    [133] Yang S, Jiang Q P, Lin W S, Wang Y T. SGDNet: An end-to-end saliency-guided deep neural network for no-reference image quality assessment. In: Proceedings of the 27th ACM International Conference on Multimedia. Nice, France: ACM, 2019. 1383−1391
    [134] Yan B, Bare B, Tan W M. Naturalness-aware deep no-reference image quality assessment. IEEE Transactions on Multimedia, 2019, 21(10): 2603-2615 doi: 10.1109/TMM.2019.2904879
    [135] Yan Q S, Gong D, Zhang Y N. Two-stream convolutional networks for blind image quality assessment. IEEE Transactions on Image Processing, 2019, 28(5): 2200-2211 doi: 10.1109/TIP.2018.2883741
    [136] Lin K Y, Wang G X. Hallucinated-IQA: No-reference image quality assessment via adversarial learning. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, USA: IEEE, 2018. 732−741
    [137] Yang H T, Shi P, Zhong D X, Pan D, Ying Z F. Blind image quality assessment of natural distorted image based on generative adversarial networks. IEEE Access, 2019, 7: 179290-179303 doi: 10.1109/ACCESS.2019.2957235
    [138] Hou W L, Gao X B, Tao D C, Li X L. Blind image quality assessment via deep learning. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(6): 1275-1286 doi: 10.1109/TNNLS.2014.2336852
    [139] He S Y, Liu Z Z. Image quality assessment based on adaptive multiple Skyline query. Signal Processing: Image Communication, 2020, 80: Article No. 115676
    [140] Ma K D, Liu W T, Liu T L, Wang Z, Tao D C. dipIQ: Blind image quality assessment by learning-to-rank discriminable image pairs. IEEE Transactions on Image Processing, 2017, 26(8): 3951-3964 doi: 10.1109/TIP.2017.2708503
    [141] Zhang Y B, Wang H Q, Tan F F, Chen W J, Wu Z R. No-reference image sharpness assessment based on rank learning. In: Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP). Taipei, China: IEEE, 2019. 2359−2363
    [142] Yang J C, Sim K, Jiang B, Lu W. Blind image quality assessment utilising local mean eigenvalues. Electronics Letters, 2018, 54(12): 754-756 doi: 10.1049/el.2018.0958
    [143] Li L D, Wu D, Wu J J, Li H L, Lin W S, Kot A C. Image sharpness assessment by sparse representation. IEEE Transactions on Multimedia, 2016, 18(6): 1085-1097 doi: 10.1109/TMM.2016.2545398
    [144] Lu Q B, Zhou W G, Li H Q. A no-reference image sharpness metric based on structural information using sparse representation. Information Sciences, 2016, 369: 334-346 doi: 10.1016/j.ins.2016.06.042
    [145] Ye P, Kumar J, Kang L, Doermann D. Unsupervised feature learning framework for no-reference image quality assessment. In: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, USA: IEEE, 2012. 1098−1105
    [146] Xu J T, Ye P, Li Q H, Du H Q, Liu Y, Doermann D. Blind image quality assessment based on high order statistics aggregation. IEEE Transactions on Image Processing, 2016, 25(9): 4444-4457 doi: 10.1109/TIP.2016.2585880
    [147] Xue W F, Zhang L, Mou X Q. Learning without human scores for blind image quality assessment. In: Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, USA: IEEE, 2013. 995−1002
    [148] Wu Q B, Li H L, Meng F M, Ngan K N, Luo B, Huang C, et al. Blind image quality assessment based on multichannel feature fusion and label transfer. IEEE Transactions on Circuits and Systems for Video Technology, 2016, 26(3): 425-440 doi: 10.1109/TCSVT.2015.2412773
    [149] Jiang Q P, Shao F, Lin W S, Gu K, Jiang G Y, Sun H F. Optimizing multistage discriminative dictionaries for blind image quality assessment. IEEE Transactions on Multimedia, 2018, 20(8): 2035-2048 doi: 10.1109/TMM.2017.2763321
    [150] Mittal A, Soundararajan R, Bovik A C. Making a "completely blind" image quality analyzer. IEEE Signal Processing Letters, 2013, 20(3): 209-212 doi: 10.1109/LSP.2012.2227726
    [151] Zhang L, Zhang L, Bovik A C. A feature-enriched completely blind image quality evaluator. IEEE Transactions on Image Processing, 2015, 24(8): 2579-2591 doi: 10.1109/TIP.2015.2426416
    [152] Jiao S H, Qi H, Lin W S, Shen W H. Fast and efficient blind image quality index in spatial domain. Electronics Letters, 2013, 49(18): 1137-1138 doi: 10.1049/el.2013.1837
    [153] Abdalmajeed S, Jiao S H. No-reference image quality assessment algorithm based on Weibull statistics of log-derivatives of natural scenes. Electronics Letters, 2014, 50(8): 595-596 doi: 10.1049/el.2013.3585
    [154] Nan Dong, Bi Du-Yan, Zha Yu-Fei, Zhang Ze, Li Quan-He. A no-reference image quality assessment method based on parameter estimation. Journal of Electronics & Information Technology, 2013, 35(9): 2066-2072
    [155] Panetta K, Gao C, Agaian S. No reference color image contrast and quality measures. IEEE Transactions on Consumer Electronics, 2013, 59(3): 643-651 doi: 10.1109/TCE.2013.6626251
    [156] Gu J, Meng G F, Redi J A, Xiang S M, Pan C H. Blind image quality assessment via vector regression and object oriented pooling. IEEE Transactions on Multimedia, 2018, 20(5): 1140-1153 doi: 10.1109/TMM.2017.2761993
    [157] Wu Q B, Li H L, Wang Z, Meng F M, Luo B, Li W, et al. Blind image quality assessment based on rank-order regularized regression. IEEE Transactions on Multimedia, 2017, 19(11): 2490-2504 doi: 10.1109/TMM.2017.2700206
    [158] Al-Bandawi H, Deng G. Blind image quality assessment based on Benford’s law. IET Image Processing, 2018, 12(11): 1983-1993 doi: 10.1049/iet-ipr.2018.5385
    [159] Wu Q B, Li H L, Ngan K N, Ma K D. Blind image quality assessment using local consistency aware retriever and uncertainty aware evaluator. IEEE Transactions on Circuits and Systems for Video Technology, 2018, 28(9): 2078-2089 doi: 10.1109/TCSVT.2017.2710419
    [160] Deng C W, Wang S G, Li Z, Huang G B, Lin W S. Content-insensitive blind image blurriness assessment using Weibull statistics and sparse extreme learning machine. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2019, 49(3): 516-527 doi: 10.1109/TSMC.2017.2718180
    [161] Wang Z, Li Q. Information content weighting for perceptual image quality assessment. IEEE Transactions on Image Processing, 2011, 20(5): 1185-1198 doi: 10.1109/TIP.2010.2092435
Publication history
  • Received: 2020-12-17
  • Accepted: 2021-05-12
  • Published online: 2021-06-20
  • Issue date: 2022-03-25
