基于深度学习的视频超分辨率重建算法进展

唐麒 赵耀 刘美琴 姚超

李钊星, 蔡云鹏, 刘茂汉, 王霞, 许斌. 基于预定义时间的舰载机抗干扰着舰控制. 自动化学报, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c240766
引用本文: 唐麒, 赵耀, 刘美琴, 姚超. 基于深度学习的视频超分辨率重建算法进展. 自动化学报, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c240235
Li Zhao-Xing, Cai Yun-Peng, Liu Mao-Han, Wang Xia, Xu Bin. Predefined-time anti interference landing control for carrier-based aircraft. Acta Automatica Sinica, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c240766
Citation: Tang Qi, Zhao Yao, Liu Mei-Qin, Yao Chao. A review of video super-resolution algorithms based on deep learning. Acta Automatica Sinica, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c240235

基于深度学习的视频超分辨率重建算法进展

doi: 10.16383/j.aas.c240235 cstr: 32138.14.j.aas.c240235
基金项目: 中央高校基本科研业务费专项资金(2024JBZX001)和国家自然科学基金(62120106009, 62332017, 62372036)资助
详细信息
    作者简介:

    唐麒:北京交通大学信息科学研究所硕士研究生. 主要研究方向为图像与视频复原. E-mail: qitang@bjtu.edu.cn

    赵耀:北京交通大学信息科学研究所教授. 主要研究方向为图像/视频压缩, 数字媒体内容安全, 媒体内容分析与理解, 人工智能. E-mail: yzhao@bjtu.edu.cn

    刘美琴:北京交通大学信息科学研究所教授. 主要研究方向为多媒体信息处理, 三维视频处理, 视频智能编码. 本文通信作者. E-mail: mqliu@bjtu.edu.cn

    姚超:北京科技大学计算机与通信工程学院副教授. 主要研究方向为图像/视频压缩, 计算机视觉和人机交互. E-mail: yaochao@ustb.edu.cn

A Review of Video Super-resolution Algorithms Based on Deep Learning

Funds: Supported by Fundamental Research Funds for the Central Universities (2024JBZX001), and National Natural Science Foundation of China (62120106009, 62332017, 62372036)
More Information
    Author Bio:

    TANG Qi  Master student at Institute of Information Science, Beijing Jiaotong University. His research interest covers image and video restoration

    ZHAO Yao  Professor at Institute of Information Science, Beijing Jiaotong University. His research interest covers image/video compression, digital media content security, media content analysis and understanding, artificial intelligence

    LIU Mei-Qin  Professor at Institute of Information Science, Beijing Jiaotong University. Her research interest covers multimedia information processing, 3D video processing and video intelligent coding. Corresponding author of this paper

    YAO Chao  Associate Professor at School of Computer and Communication Engineering, University of Science and Technology Beijing. His research interest covers image/video compression, computer vision, and human–computer interaction

  • 摘要: 视频超分辨率重建(Video super-resolution, VSR)是底层计算机视觉任务中的一个重要研究方向, 旨在利用低分辨率视频的帧内和帧间信息, 重建具有更多细节和内容一致的高分辨率视频, 有助于提升下游任务性能和改善用户观感体验. 近年来, 基于深度学习的视频超分辨率重建算法如雨后春笋般涌现, 在帧间对齐、信息传播等方面取得了突破性的进展. 在简述视频超分辨率重建任务的基础上, 梳理了现有的视频超分辨率重建的公共数据集及相关算法; 接着, 重点综述了基于深度学习的视频超分辨率重建算法的创新性工作进展情况; 最后, 总结了视频超分辨率重建算法面临的挑战及未来的发展趋势.
  • 舰载机以航母为起降平台, 是执行海战中侦察、预警、电子干扰与目标攻击等任务的关键力量. 作为实际对抗的前提, 舰载机的安全起降是航母战斗力与生存能力的有效保证, 也是各国航母作战系统中的一项关键技术[1]. 尤其在着舰阶段, 需要在舰尾流扰动、甲板运动及系统自身通道耦合与时延等不利因素影响下, 将飞机精确降落到狭小的甲板上, 这是整个着舰过程中危险程度与事故率最高的阶段. 因此, 舰载机着舰系统对轨迹跟踪的鲁棒性、精确性及快速性有严格要求, 着舰控制依旧存在诸多挑战[2−3].

    考虑舰载机着舰过程中存在舰尾流和甲板运动等外界干扰, 文献[4]对“魔毯”着舰控制技术进行研究, 并进一步分析了不同控制模态的舰尾流抑制能力. 文献[5]采用扰动观测器估计舰尾流扰动, 并获取甲板运动量测信息进行实时补偿, 设计了基于自适应逆最优控制的自动着舰方法. 文献[6]借助非奇异快速终端滑模观测器估计舰尾流扰动, 设计基于反步法的控制策略并考虑执行机构物理约束, 提升着舰姿态的稳定性. 为进一步实现甲板运动的有效估计, 文献[7−9]基于自回归(AR)模型、移动平均(MA)模型和粒子滤波对甲板运动进行预测. 部分研究引入BP[10]和RNN[11]等神经网络模型, 借助深度学习进行甲板运动预估[12]. 但BP神经网络忽视数据的时序性; RNN虽然考虑了时序性, 却容易受到梯度消失和梯度爆炸等影响, 不适用于长相关性数据预测. 而长短期记忆(Long short-term memory, LSTM)神经网络通过添加门结构与单元状态, 有效避免了RNN的缺陷[13], 成为甲板运动预测的有效方法[14−15].

    为提升着舰轨迹跟踪控制性能, 部分学者考虑在控制器设计中引入预设性能或障碍李雅普诺夫函数, 对跟踪误差或着舰姿态进行直接限制, 抑制其幅值及波动. 文献[16]采用时变矢量制导律计算随甲板运动变化的着舰引导指令, 借助性能函数提升着舰控制精度. 文献[17]设计基于反步架构的预设性能着舰控制策略, 将着舰轨迹跟踪误差限制在设置的性能范围内. 部分研究考虑直接升力控制技术, 实现低动压和低速状态下减小飞行轨迹跟踪误差[18]. 文献[19]基于多操纵面分配的综合直接力着舰控制方法, 降低升降舵配平能力需求并减小操纵负担. 文献[20]考虑将用于航迹跟踪与姿态控制的变量进行解耦, 提出基于非线性动态逆控制框架下的直接升力着舰策略, 实现姿态控制与航迹误差的准确修正. 文献[21]在直接升力控制中应用深度强化学习更新参数, 设计基于近端策略优化算法的自动着舰纵向控制器, 提升执行机构的响应速度并降低动态误差.

    上述控制器应用于舰载机着舰控制系统时, 其收敛特性往往为渐近收敛, 稳定时间较长且不利于着舰时的姿态稳定. 而舰载机着舰时为确保成功率, 控制器误差必须在短时间内收敛. 针对该问题, 部分研究采用有限时间控制策略: 文献[22]针对无人机自动着舰系统设计自适应神经网络有限时间滑模控制方法; 文献[23]利用扰动观测器估计外部扰动, 借助有限时间滑模鲁棒控制保证着舰轨迹和姿态跟踪的快速收敛. 然而有限时间控制的收敛时间与系统初值密切相关, 不同初值下的收敛时间不尽相同. 为改进这一缺陷, 部分研究考虑采用固定时间策略: 文献[24]基于固定时间制导律调整着舰轨迹, 设计非奇异快速终端积分滑模控制器提升收敛速度; 文献[25]进一步考虑着舰过程中的状态约束, 采用基于障碍李雅普诺夫函数的固定时间控制方法, 保证位置跟踪误差在固定时间收敛的同时姿态跟踪不超过约束边界. 虽然固定时间控制保证收敛时间是与初值无关的常数, 但该数值通常是系统参数的复杂函数, 难以根据实际工况和任务约束进行调整, 限制了其在工程实际中的应用.

    针对已有研究的局限性, 部分学者提出预定义时间控制[26−27], 该方法引入了时间常数与控制参数之间的显式关系, 其收敛时间不仅与系统初值无关, 而且可以通过控制器参数自由设置. 文献[28−29]分别给出基于预定义时间的反步法控制和自适应滑模控制框架. 文献[30]针对非线性多智能体系统给出了基于预定义时间的自适应复合学习控制方法, 保证了系统内信号和轨迹跟踪误差能够在设定的时间内稳定. 工程中, 典型的应用场景是机械臂末端轨迹控制[31−32]、受油机姿态稳定控制[33]等系统, 但在舰载机着舰控制领域中鲜有报道.

    基于以上分析, 本文建立了由着舰轨迹生成、着舰引导、姿态控制和进近动力补偿等子系统组成的舰载机着舰引导控制系统. 面向甲板运动和舰尾流等复杂扰动影响下的舰载机着舰轨迹跟踪问题, 设计了基于反步架构的预定义时间的自适应鲁棒控制方法. 不同于已有方法未能对收敛时间进行设置, 该方法在控制器设计中引入预定义时间结构项, 通过设定参数限制收敛时间. 考虑甲板运动引起的着舰点位置偏差, 采用LSTM神经网络进行预估并在着舰引导指令中予以修正, 减小轨迹跟踪控制误差. 借助扰动观测器估计舰尾流等引起的未知扰动, 实现系统集总不确定的前馈补偿. 通过数字仿真和半实物仿真进行验证, 仿真结果表明, 在甲板运动和舰尾流等扰动作用下, 所提方法能够实现舰载机着舰轨迹的快速准确跟踪, 飞机姿态在指定时间内收敛, 且跟踪精度更高、稳定性更好.

    考虑如下舰载机动力学模型[34]

    $$ \begin{equation} \left\{ \begin{aligned} &\dot X = V\cos \gamma \cos \chi \\ &\dot Y = V\cos \gamma \sin \chi \\ &\dot Z = - V\sin \gamma \end{aligned} \right. \end{equation} $$ (1)
    $$ \left\{ \begin{aligned} &\dot V = (T\cos \alpha \cos \beta - D - mg\sin \gamma )/m \\ &\dot \chi = [T(\sin \alpha \sin \mu - \cos \alpha \sin \beta \cos \mu )+\\&\qquad L\sin \mu - Y\cos \mu ]/mV\cos \gamma \\& \dot \gamma = [T(\sin \mu \sin \beta \cos \alpha + \cos \mu \sin \alpha )+\\ &\qquad L\cos \mu + Y\sin \mu - mg\cos \gamma ]/mV \end{aligned} \right. $$ (2)
    $$ \left\{ {\begin{aligned} &{\dot p = ({c_1}r + {c_2}p)q + {c_3}l + {c_4}n}\\ &{\dot q = {c_5}pr - {c_6}({p^2} - {r^2}) + {c_7}m}\\ &{\dot r = ({c_8}p - {c_2}r)q + {c_4}l + {c_9}n} \end{aligned}} \right. $$ (3)
    $$ \left\{ \begin{aligned} &\dot \alpha = q - (p\cos \alpha + r\sin \alpha )-\\ &\qquad (\dot \gamma \cos \mu + \dot \chi \sin \mu \cos \gamma )/\cos \beta \\ &\dot \beta = p\sin \alpha - r\cos \alpha - \dot \gamma \sin \mu + \dot \chi \cos \mu \cos \gamma \\ &\dot \mu = \dot \chi (\sin \gamma + \cos \gamma \sin \mu \tan \beta ) + \dot \gamma \cos \alpha \tan \beta +\\ &\qquad (p\cos \alpha + r\sin \alpha )/\cos \beta \end{aligned} \right. $$ (4)

    该模型的控制输入为$ {\boldsymbol{u}} = {\left[ {{\delta _e},\;{\delta _a},\;{\delta _r}} \right]^{\rm{T}}} $, 状态量为$ {\boldsymbol{x}} = {\left[ {\alpha ,\;\beta ,\;\mu ,\;p,\;q,\;r,\;V,\;\chi ,\;\gamma ,\;X,\;Y,\;Z} \right]^{\rm{T}}} $; $ V $, $ \alpha $和$ \beta $分别表示飞行速度、迎角和侧滑角; $ p $, $ q $和$ r $分别表示在机体坐标系下舰载机绕三轴转动的角速率; $ \chi $, $ \gamma $和$ \mu $分别表示航迹方位角、航迹倾斜角和航迹滚转角; $ X $, $ Y $和$ Z $分别表示惯性坐标系下舰载机的三轴位置; $ m $表示舰载机质量, $ g $表示重力加速度常数; $ T $表示发动机推力, $ {c_i}(i = 1,\;\cdots,\;9) $均表示转动惯量系数[24]; $ L $, $ D $和$ Y $分别表示升力、阻力和侧力; $ l $, $ m $和$ n $分别表示滚转力矩、俯仰力矩和偏航力矩, 其表达式分别为

    $$ \begin{split} &\left[ {\begin{array}{*{20}{l}} L\\ D\\ Y \end{array}} \right] = \bar qS\left[ {\begin{array}{*{20}{l}} {({C_{L0}} + {C_{L\alpha }}\alpha )}\\ {({C_{D0}} + {C_{D\alpha }}\alpha + {C_{D{\alpha ^2}}}{\alpha ^2})}\\ {{C_{Y\beta }}\beta } \end{array}} \right]\\& \left[ {\begin{array}{*{20}{l}} l\\ m\\ n \end{array}} \right] = \bar qS\left[ {\begin{array}{*{20}{l}} {b{C_{ltot}}}\\ {\bar c{C_{mtot}}}\\ {b{C_{ntot}}} \end{array}} \right] \end{split} $$

    其中,

    $$ \begin{split} &{C_{ltot}} = {C_{l\beta }}\beta + {C_{lp}}\frac{{bp}}{{2V}} + {C_{lr}}\frac{{br}}{{2V}} + {C_{l{\delta _a}}}{\delta _a} + {C_{l{\delta _r}}}{\delta _r}\\ &{C_{mtot}} = {C_{m0}} + {C_{m\alpha }}\alpha + {C_{mq}}\frac{{cq}}{{2V}} + {C_{m{\delta _e}}}{\delta _e}\\ &{C_{ntot}} = {C_{n\beta }}\beta + {C_{np}}\frac{{bp}}{{2V}} + {C_{nr}}\frac{{br}}{{2V}} + {C_{n{\delta _a}}}{\delta _a} + {C_{n{\delta _r}}}{\delta _r} \end{split} $$

    式中, $ \bar q = 0.5\rho {V^2} $表示动压, $ \rho $表示空气密度, $ S $表示机翼面积, $ \bar c $表示平均气动弦长, $ b $表示机翼展长, $ {\delta _e} $, $ {\delta _a} $和$ {\delta _r} $分别表示升降舵、副翼和方向舵偏角. $ {C_{ij}}\;(i = L,\;D,\;Y,\;l,\;m,\;n$; $j \,= \,0,\;\alpha ,\;\beta ,\;p,\;q,\;r,\; {\delta _e}, {\delta _a},\;{\delta _r}) $均表示气动参数.

    航母在航行中受到海浪无规则波动引起的舰体运动, 造成理想着舰点不断变化, 影响着舰位置精度. 甲板运动可以近似为沿舰体三轴的线运动纵荡$ \Delta {x_s} $、横荡$ \Delta {y_s} $和垂荡$ \Delta {z_s} $, 以及绕舰体三轴的角运动纵摇$ \theta_s $、横摇$ \varphi _s $和艏摇$ \psi_s $. 引入平稳随机过程理论并借助传递函数描述甲板运动, 线运动和角运动传递函数可表示为

    $$ \begin{split}& {G_T}(s) = \frac{{{b_3}{s^2} + {b_2}s + {b_1}}}{{{s^4} + {a_4}{s^3} + {a_3}{s^2} + {a_2}s + {a_1}}}\\& {G_A}(s) = \frac{{{o_3}{s^2} + {o_2}s + {o_1}}}{{{s^4} + {h_4}{s^3} + {h_3}{s^2} + {h_2}s + {h_1}}} \end{split} $$ (5)

    式中, $ a_i $、$ b_j $、$ h_i $和$ {o_j}(i = 1,\;2,\;3,\;4;j = 1,\;2,\;3) $均表示传递函数参数.
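    作为示意, 式(5)的传递函数模型可按能控标准型化为状态空间并用欧拉积分仿真, 以白噪声激励得到一段近似平稳的甲板运动时间序列. 下面给出一个最小 Python 草图, 其中传递函数系数均为假设的示例值, 并非文中辨识结果:

```python
import numpy as np

def simulate_tf(num, den, u, dt):
    """以能控标准型仿真式(5)型传递函数
    G(s) = (b3*s^2 + b2*s + b1) / (s^4 + a4*s^3 + a3*s^2 + a2*s + a1).
    num = [b1, b2, b3], den = [a1, a2, a3, a4], 低次项在前."""
    b1, b2, b3 = num
    a1, a2, a3, a4 = den
    A = np.array([[0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [-a1, -a2, -a3, -a4]], dtype=float)
    B = np.array([0.0, 0.0, 0.0, 1.0])
    C = np.array([b1, b2, b3, 0.0])
    x = np.zeros(4)
    y = np.empty(len(u))
    for k, uk in enumerate(u):
        y[k] = C @ x
        x = x + dt * (A @ x + B * uk)   # 欧拉积分
    return y

# 白噪声驱动, 得到近似平稳的甲板垂荡序列(系数为示例值)
rng = np.random.default_rng(0)
dt = 0.01
u = rng.standard_normal(2000)
y = simulate_tf([0.1, 0.3, 0.05], [1.2, 2.0, 2.5, 1.8], u, dt)
```

    实际使用时应代入由海情辨识得到的参数, 并对线运动与角运动分别建立式(5)中的$ G_T(s) $与$ G_A(s) $.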

    受甲板运动影响, 理想着舰点位置变化为:

    $$ \left\{ \begin{aligned} &{x_c} = {V_s}t\cos ({\psi _s} + {\psi _0}) + \Delta {x_1} + \Delta {x_2}\\ &{y_c} = {V_s}t\sin ({\psi _s} + {\psi _0}) + \Delta {y_1} + \Delta {y_2}\\& {z_c} = \Delta {z_1} + \Delta {z_2} \end{aligned} \right. $$ (6)

    式中, $ V_s $表示航母的前进速度, $ \psi _0 $表示航母的速度方向与斜角甲板中心线之间的夹角, $ \left[ {\Delta {x_1},\;\Delta {y_1},\;\Delta {z_1}} \right] $和$ \left[ {\Delta {x_2},\;\Delta {y_2},\;\Delta {z_2}} \right] $分别表示甲板运动的平动和转动, 具体的表达式为

    $$\left\{ \begin{aligned} &\Delta {x_1} = \Delta {x_s}\cos {\psi _s} - \Delta {y_s}\sin {\psi _s}\\& \Delta {y_1} = \Delta {x_s}\sin {\psi _s} + \Delta {y_s}\cos {\psi _s}\\ &\Delta {z_1} = \Delta {z_s} \end{aligned} \right. $$ (7)
    $$ \left\{ \begin{aligned} \Delta {x_2} = \;&- {L_{TD}}\cos {\psi _s} + {L_{TD}} - {Y_{TD}}\sin {\psi _s}-\\ & {G_{TD}}\sin {\theta _s}\cos {\psi _s}\\ \Delta {y_2} =\;& - {L_{TD}}\sin {\psi _s} + {Y_{TD}}\cos {\psi _s} - {Y_{TD}}+\\ &{G_{TD}}\sin {\varphi _s}\cos {\psi _s}\\ \Delta {z_2} =\;& {L_{TD}}\sin {\theta _s} + {Y_{TD}}\sin {\varphi _s}-\\ & {G_{TD}}\sin {\varphi _s}\cos {\theta _s} + {G_{TD}} \end{aligned} \right. $$ (8)

    式中, $ {L_{TD}} $、$ {Y_{TD}} $和$ {G_{TD}} $均表示理想着舰点与航母舰体重心之间的三轴轴向距离.

    舰载机着舰过程中通常受到舰尾流扰动, 参考标准MIL-F-8785C, 典型舰尾流扰动表达式为

    $$ \begin{equation} \left\{ \begin{aligned}& {u_g} = {u_1} + {u_2} + {u_3} + {u_4}\\& {v_g} = {v_1} + {v_2}\\& {w_g} = {w_1} + {w_2} + {w_3} + {w_4} \end{aligned} \right. \end{equation} $$ (9)

    式中, $ u_g $、$ v_g $和$ w_g $分别表示舰尾流水平分量、横向分量和垂直分量, $ u_i $、$ v_i $和$ {w_i}(i = 1,\;2,\;3,\;4) $分别表示舰尾流扰动的随机大气紊流、舰尾流稳态分量、周期性分量及随机扰动四个组成部分.

    考虑舰载机动力学、舰尾流扰动和甲板运动模型, 本文的控制目标是: 针对着舰过程中复杂风场和甲板运动等多种干扰下的轨迹跟踪控制需求, 借助LSTM神经网络预估甲板运动并将其作为校正信息引入着舰引导; 采用非线性扰动观测器估计风干扰影响并进行补偿; 结合预定义时间控制律设计着舰末端自适应抗干扰控制器, 实现复杂扰动情形下舰载机快速、准确地降落至理想着舰点, 保障着舰成功率. 整个着舰引导控制系统由着舰轨迹生成、着舰引导、着舰姿态控制和进近动力补偿等子系统组成, 如图1所示.

    图 1  着舰引导控制系统结构框图
    Fig. 1  Framework of the proposed landing strategy

    定义舰载机理想着舰轨迹$ {{\boldsymbol{p}}_1} = {[{x_g},\;{y_g},\;{z_g}]^{\rm{T}}} $, 其中$ {x_g} = X $和$ {y_g} = y_c $分别表示舰载机在惯性坐标系下纵轴的位置和理想着舰点的横向位置, $ z_g $表示为

    $$ \begin{equation} {z_g} = \left\{ \begin{aligned} &h,&&{x_c} - x \ge \frac{h}{{\tan {\gamma _\tau }}}\\&({x_c} - x)\tan {\gamma _\tau },&& {\mathrm{else}} \end{aligned} \right. \end{equation} $$ (10)

    式中, $ h $和$ \gamma _\tau $分别表示舰载机相对航母甲板在惯性系下的高度和下滑航迹角, 均为常值. 舰尾流和海浪波动等扰动使得理想着舰点不断变化, 产生侧偏距和高度偏差. 为抵消该偏差, 通常在着舰引导指令中进行补偿. 由于数据传输和系统响应的延迟, 需要超前叠加补偿指令. 本文采用基于LSTM的甲板运动预测方法, 通过甲板运动历史数据采集并训练神经网络对其进行预测, 实现超前补偿.
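    式(10)的下滑高度基准可按如下最小 Python 草图实现, 仅为示意, 函数与变量名均为假设:

```python
import math

def glide_ref(x, x_c, h, gamma_tau):
    """式(10): 距理想着舰点较远时保持平飞高度 h,
    进入下滑段后按下滑航迹角 gamma_tau 线性下降至着舰点."""
    if x_c - x >= h / math.tan(gamma_tau):
        return h
    return (x_c - x) * math.tan(gamma_tau)
```

    例如取$ h = 100 $ m、$ \gamma_\tau = 3.5^\circ $时, 距着舰点足够远处基准高度恒为$ h $, 进入下滑段后随纵向距离线性减小.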

    LSTM包括遗忘门、输入门和输出门. 上述门结构控制信息的流动, 使得记忆单元状态$ {c_t} $通过不同的门进行更新与调整: 遗忘门决定上一拍记忆单元状态$ {c_{t - 1}} $被保留的比例, 输入门负责向$ {c_t} $写入新信息, 输出门基于$ {c_t} $生成当前隐藏状态$ {h_t} $, 使得LSTM能够在长时间跨度上保留与更新甲板运动信息, 如图2所示.

    图 2  LSTM神经网络结构
    Fig. 2  Framework of the LSTM neural network

    遗忘门可表示为

    $$ \begin{equation} {f_t} = \sigma ({W_f}{h_{t - 1}} + {U_f}{x_t} + {b_f}) \end{equation} $$ (11)

    式中, $ {f_t} \in (0,\;1) $为保留的历史信息比例, $ {h_{t - 1}} $和$ {x_t} $分别表示上一拍的隐藏状态和当前拍的甲板运动信息, $ {W_f} $和$ {U_f} $均为权值矩阵, $ {b_f} $表示偏置数值, $ \sigma $表示sigmoid激活函数, 其表达式为

    $$ \begin{equation} \sigma (x) = 1/(1 + {{\rm{e}}^{ - x}}) \end{equation} $$ (12)

    输入门可表示为

    $$ \begin{equation} {\tilde c_t} = \tanh ({W_c}{h_{t - 1}} + {U_c}{x_t} + {b_c}) \end{equation} $$ (13)

    式中, $ {\tilde c_t} $表示待选择的记忆单元状态, $ {W_c} $和$ {U_c} $均为权值矩阵, $ {b_c} $为偏置数值; 输入门的选择系数$ {i_t} $决定$ {\tilde c_t} $写入记忆单元的比例, 其表达式为

    $$ \begin{equation} {i_t} = \sigma ({W_i}{h_{t - 1}} + {U_i}{x_t} + {b_i}) \end{equation} $$ (14)

    式中, $ {W_i} $和$ {U_i} $均为权值矩阵, $ {b_i} $为偏置数值.

    输出门可表示为

    $$ \begin{equation} {o_t} = \sigma ({W_o}{h_{t - 1}} + {U_o}{x_t} + {b_o}) \end{equation} $$ (15)

    式中, $ {W_o} $和$ {U_o} $均为权值矩阵, $ {b_o} $为偏置数值.

    结合遗忘门和输入门的输出信息, 更新当前拍的记忆单元状态$ {c_t} $为

    $$ \begin{equation} {c_t} = {f_t} \cdot {c_{t - 1}} + {i_t} \cdot {\tilde c_t} \end{equation} $$ (16)

    将当前拍的隐藏状态$ {h_t} $作为LSTM单元的输出信息和下一拍的输入量, 其表达式为

    $$ \begin{equation} {h_t} = {o_t} \cdot \tanh ({c_t}) \end{equation} $$ (17)

    通过LSTM输出信息$ h_t $能够获得预估的甲板运动信息$ {x_p} $

    $$ \begin{equation} {x_p} = {W_p} \cdot {h_t} + {b_p} \end{equation} $$ (18)
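    式(11) ~ (18)的单步LSTM计算可整理为如下 numpy 草图, 仅示意门控数据流; 权值随机初始化, 并非文中训练所得的甲板运动预测网络:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))              # 式(12)

def lstm_step(x_t, h_prev, c_prev, W, U, b, W_p, b_p):
    """按式(11)~(18)执行一步LSTM并输出甲板运动预估值 x_p.
    W, U, b 为字典, 键 'f','i','c','o' 对应遗忘门、输入门、候选状态和输出门."""
    f_t = sigmoid(W['f'] @ h_prev + U['f'] @ x_t + b['f'])       # 式(11)
    c_tilde = np.tanh(W['c'] @ h_prev + U['c'] @ x_t + b['c'])   # 式(13) 候选状态
    i_t = sigmoid(W['i'] @ h_prev + U['i'] @ x_t + b['i'])       # 式(14)
    o_t = sigmoid(W['o'] @ h_prev + U['o'] @ x_t + b['o'])       # 式(15)
    c_t = f_t * c_prev + i_t * c_tilde                           # 式(16)
    h_t = o_t * np.tanh(c_t)                                     # 式(17)
    x_p = W_p @ h_t + b_p                                        # 式(18)
    return h_t, c_t, x_p

rng = np.random.default_rng(0)
n_h, n_x = 8, 1                                  # 隐藏维数与输入维数为示例值
W = {k: 0.1 * rng.standard_normal((n_h, n_h)) for k in 'fico'}
U = {k: 0.1 * rng.standard_normal((n_h, n_x)) for k in 'fico'}
b = {k: np.zeros(n_h) for k in 'fico'}
W_p, b_p = 0.1 * rng.standard_normal((1, n_h)), np.zeros(1)
h, c = np.zeros(n_h), np.zeros(n_h)
h, c, x_p = lstm_step(np.array([0.5]), h, c, W, U, b, W_p, b_p)
```

    实际预测时, 按时间顺序将甲板运动历史序列逐拍送入$ x_t $, 并以超前若干拍的真实甲板运动作为训练标签.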

    定义2.1节得到的舰载机期望侧向和纵向着舰轨迹为$ {{\boldsymbol{x}}_{1r}}{ = }{[{y_r},\;{z_r}]^{\rm{T}}} $, 实时位置为$ {{\boldsymbol{x}}_1}{ = }{[y,\;z]^{\rm{T}}} $, 令$ {{\boldsymbol{x}}_{1r}} $通过一阶滤波器, 可得

    $$ \begin{equation} {\kappa _0}{{\dot{\boldsymbol{x}}}_{1\bar c}} + {{\boldsymbol{x}}_{1\bar c}} = {{\boldsymbol{x}}_{1r}} \end{equation} $$ (19)

    式中, $ {{\boldsymbol{x}}_{1\bar c}}(0) = {{\boldsymbol{x}}_{1r}}(0) $, $ {\kappa _0} > 0 $为设计参数.

    定义期望轨迹的跟踪误差为$ {{{\boldsymbol{e}}_{{{{\boldsymbol{x}}}_{\bf{1}}}}}} = {{\boldsymbol{x}}_{1\bar c}} - {{\boldsymbol{x}}_1} $, 设计着舰引导律为

    $$ \begin{equation} {{\dot{\boldsymbol{x}}}_{1d}} = {{\bar{\boldsymbol{K}}}_1}{\mathop{\rm sgn}} ({{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}}) {\left\| {{{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}}} \right\|^{0.5}} + {{\boldsymbol{K}}_1}{{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}} + {{\dot{\boldsymbol{x}}}_{1\bar c}} \end{equation} $$ (20)

    式中, $ {{\boldsymbol{K}}_1} = {\rm{diag}} \{{k_{11}},\;{k_{12}}\} $和$ {{\bar{\boldsymbol{K}}}_1} = {\rm{diag}} \{{\bar k_{11}},\;{\bar k_{12}}\} $为正定矩阵, $ {{\boldsymbol{x}}_{1d}}{ = }{[{y_d},\;{z_d}]^{\rm{T}}} $.
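    式(19)的一阶滤波与式(20)的着舰引导律可离散实现为如下草图, 其中参数均为假设的示例值:

```python
import numpy as np

def filter_step(x1c, x1r, kappa0, dt):
    """式(19)一阶滤波的欧拉离散: kappa0 * x1c_dot + x1c = x1r."""
    x1c_dot = (x1r - x1c) / kappa0
    return x1c + dt * x1c_dot, x1c_dot

def guidance_step(e, x1c_dot, K1, K1_bar):
    """式(20): x1d_dot = K1_bar*sgn(e)*||e||^0.5 + K1@e + x1c_dot."""
    return (K1_bar @ np.sign(e)) * np.linalg.norm(e) ** 0.5 + K1 @ e + x1c_dot
```

    跟踪误差为零时引导律退化为滤波后的参考导数, 非零误差则同时受幂次项与线性项驱动而快速衰减.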

    选择李雅普诺夫函数为

    $$ \begin{equation} {V_1} = 0.5{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}^{\rm{T}}{{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}} + 0.5{\boldsymbol{\varepsilon }}_1^{\rm{T}}{{\boldsymbol{\varepsilon }}_1} \end{equation} $$ (21)

    式中, $ {{\boldsymbol{\varepsilon }}_1} = {{\boldsymbol{x}}_{1\bar c}} - {{\boldsymbol{x}}_{1r}} $, 对$ {V_1} $求导可得

    $$ \begin{split} {{\dot V}_1} =\;& - {\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}^{\rm{T}}({{{\bar{\boldsymbol{K}}}}_1}{\mathop{\rm sgn}} ({{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}}){\left\| {{{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}}} \right\|^{0.5}}{ + }{{\boldsymbol{K}}_1}{{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}}) - \\ &\kappa _0^{ - 1}{\boldsymbol{\varepsilon }}_1^{\rm{T}}{{\boldsymbol{\varepsilon }}_1} - {\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}^{\rm{T}}({{{\dot{\boldsymbol{x}}}}_1} - {{{\dot{\boldsymbol{x}}}}_{1d}}) - {\boldsymbol{\varepsilon }}_1^{\rm{T}}{{{\dot{\boldsymbol{x}}}}_{1r}} \end{split} $$ (22)

    进一步可得

    $$ \begin{split} {{\dot V}_1} \le\;& - {\lambda _{\min }}({{{\bar{\boldsymbol{K}}}}_1}){\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}^{\rm{T}}{\mathop{\rm sgn}} ({{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}}){\left\| {{{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}}} \right\|^{0.5}} - \\ &{\lambda _{\min }}({{\boldsymbol{K}}_1}){\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}^{\rm{T}}{{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}} - \kappa _0^{ - 1}{\boldsymbol{\varepsilon }}_1^{\rm{T}}{{\boldsymbol{\varepsilon }}_1} - \\ & {\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}^{\rm{T}}({{{\dot{\boldsymbol{x}}}}_1} - {{{\dot{\boldsymbol{x}}}}_{1d}}) - {\boldsymbol{\varepsilon }}_1^{\rm{T}}{{{\dot{\boldsymbol{x}}}}_{1r}} \end{split} $$ (23)

    式中, $ {\lambda _{\min }}(*) $为矩阵的最小特征值.

    考虑以下不等式

    $$ \begin{equation} \left\{ \begin{aligned} &-{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}^{\rm{T}}({{{\dot{\boldsymbol{x}}}}_1} - {{{\dot{\boldsymbol{x}}}}_{1d}}) \le 0.5{\sigma _1}{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}^{\rm{T}}{{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}} + \\& \qquad 0.5\sigma _1^{ - 1}{\left\| {{{{\dot{\boldsymbol{x}}}}_1} - {{{\dot{\boldsymbol{x}}}}_{1d}}} \right\|^2}\\& - {\boldsymbol{\varepsilon }}_1^{\rm{T}}{{{\dot{\boldsymbol{x}}}}_{1r}} \le 0.5{\sigma _2}{\boldsymbol{\varepsilon }}_1^{\rm{T}}{{\boldsymbol{\varepsilon }}_1} + 0.5\sigma _2^{ - 1}{\left\| {{{{\dot{\boldsymbol{x}}}}_{1r}}} \right\|^2} \end{aligned} \right. \end{equation} $$ (24)

    式中, $ {\sigma _1} $和$ {\sigma _2} $为正常数.

    代入式(23)可得

    $$ \begin{split} {{\dot V}_1} \le\;& - {\lambda _{\min }}({{{\bar{\boldsymbol{K}}}}_1}){\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}^{\rm{T}}{\mathop{\rm sgn}} ({{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}}){\left\| {{{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}}} \right\|^{0.5}} + \\& 0.5\sigma _2^{ - 1}{\left\| {{{{\dot{\boldsymbol{x}}}}_{1r}}} \right\|^2} - ({\lambda _{\min }}({{\boldsymbol{K}}_1}) - 0.5{\sigma _1}){\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}^{\rm{T}}{{\boldsymbol{e}}_{{{\boldsymbol{x}}_1}}} - \\ & (\kappa _0^{ - 1} - 0.5{\sigma _2}){\boldsymbol{\varepsilon }}_1^{\rm{T}}{{\boldsymbol{\varepsilon }}_1} + 0.5\sigma _1^{ - 1}{\left\| {{{{\dot{\boldsymbol{x}}}}_1} - {{{\dot{\boldsymbol{x}}}}_{1d}}} \right\|^2} \end{split} $$ (25)

    设计参数使得$ \kappa _0^{ - 1} - 0.5{\sigma _2} $和$ {\lambda _{\min }}({{\boldsymbol{K}}_1}) - 0.5{\sigma _1} $均大于零, 式(25) 可进一步表示为

    $$ \begin{equation} {\dot V_1} \le - 2{K_{v1}}{V_1} + {\sigma _{v1}} \end{equation} $$ (26)

    式中, $ {K_{v1}} = \min \left\{ {{\lambda _{\min }}({{\boldsymbol{K}}_1}) - 0.5{\sigma _1},\;\kappa _0^{ - 1} - 0.5{\sigma _2}} \right\} $, $ {\sigma _{v1}} = 0.5\sigma _1^{ - 1}{\left\| {{{{\dot{\boldsymbol{x}}}}_1} - {{{\dot{\boldsymbol{x}}}}_{1d}}} \right\|^2} + 0.5\sigma _2^{ - 1}{\left\| {{{{\dot{\boldsymbol{x}}}}_{1r}}} \right\|^2} $.

    由式(26)可得, 李雅普诺夫函数(21)中的信号有界稳定, 当舰载机前向速度和下滑速度确定时, 可求出期望航迹方位角和航迹倾斜角为

    $$ \begin{equation} \left[ {\begin{array}{*{20}{c}} {{\chi _c}}\\ {{\gamma _c}} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {\arctan ({{\dot y}_d}/\dot X)}\\ { - \arcsin ({{\dot z}_d}/V)} \end{array}} \right] \end{equation} $$ (27)
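    式(27)由期望轨迹导数求期望航迹角的计算可写为如下草图(示意, 变量名为假设):

```python
import math

def desired_angles(yd_dot, zd_dot, X_dot, V):
    """式(27): 期望航迹方位角 chi_c = arctan(yd_dot/X_dot),
    期望航迹倾斜角 gamma_c = -arcsin(zd_dot/V)."""
    chi_c = math.atan(yd_dot / X_dot)
    gamma_c = -math.asin(zd_dot / V)
    return chi_c, gamma_c
```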

    将式(1) ~ (4)中的状态量转换为仿射形式, 定义$ {x_2} = \chi $, $ {{\boldsymbol{x}}_3} = {\left[ {\mu ,\;\theta ,\;\beta } \right]^{\rm{T}}} $, $ {{\boldsymbol{x}}_4} = {\left[ {p,\;q,\;r} \right]^{\rm{T}}} $, 并考虑舰尾流引起的时变扰动$ {d_i}(i = \chi ,\;\mu ,\;\theta ,\;\beta ,\;p,\;q,\;r,\;\alpha ) $, 则舰载机姿态控制和进近动力补偿子系统可表示为

    $$ \begin{equation} \left\{ \begin{aligned} &{{\dot x}_2} = {f_2} + {g_2}\mu + {d_\chi }\\& {{{\dot{\boldsymbol{x}}}}_3} = {{\boldsymbol{f}}_3} + {{\boldsymbol{g}}_3}{{\boldsymbol{x}}_4} + {{\boldsymbol{d}}_3}\\ &{{{\dot{\boldsymbol{x}}}}_4} = {{\boldsymbol{f}}_4} + {{\boldsymbol{g}}_4}{\boldsymbol{u}} + {{\boldsymbol{d}}_4}\\ &\dot \alpha = {f_\alpha } + {g_\alpha }T + {d_\alpha } \end{aligned} \right. \end{equation} $$ (28)

    式中, $ {f}_i $和$ {{\boldsymbol{g}}_i}(i = 2,\;3,\;4,\;\alpha ) $的详细表达式在后续各个子控制系统设计中给出, $ {{\boldsymbol{d}}_3} = {\left[ {{d_\mu },\;{d_\theta },\;{d_\beta }} \right]^{\rm{T}}} $, $ {{\boldsymbol{d}}_4} = {\left[ {{d_p},\;{d_q},\;{d_r}} \right]^{\rm{T}}} $.

    步骤1. 航迹方位角控制: 取姿态控制子系统中关于航迹方位角$ x_2 $的仿射形式表达式为

    $$ \begin{equation} {\dot x_2} = {f_2} + {g_2}\mu + {d_\chi } \end{equation} $$ (29)

    式中, $ {f_2} $和$ {g_2} $分别为

    $$ \begin{split} &{f_2} = [T(\sin \alpha \sin \mu - \cos \alpha \sin \beta \cos \mu ) + \\ &\ \ \ \ \ \ \ L(\sin \mu - \mu ) - Y\cos \mu ]/mV\cos \gamma\\& {g_2} = L/mV\cos \gamma \end{split} $$

    期望航迹方位角信号$ {x_{2c}} = {\chi _c} $通过一阶低通滤波器获取参考值及其一阶导数

    $$ \begin{equation} {\kappa _1}{\dot x_{2d}} + {x_{2d}} = {x_{2c}} \end{equation} $$ (30)

    式中, $ {x_{2d}}(0) = {x_{2c}}(0) $, $ {\kappa _1} > 0 $为设计参数.

    定义航迹方位角误差为$ {e_2} = {x_2} - {x_{2d}} $, 则航迹方位角误差动力学为

    $$ \begin{equation} {\dot e_2} = {f_2} + {g_2}\mu + {d_\chi } - {\dot x_{2d}} \end{equation} $$ (31)

    定义滑模面$ {s_2} $为

    $$ \begin{split} &{s_2} = {e_2} + {\Phi _2}\\ &{\Phi _2} = \frac{\pi }{{2{\eta _3}{T_{c3}}\sqrt {{n_{\chi 1}}{n_{\chi 2}}} }}({n_{\chi 1}}V_{21}^{ - \frac{{{\eta _3}}}{2}} + {n_{\chi 2}}V_{21}^{\frac{{{\eta _3}}}{2}}){{\dot e}_2} \end{split} $$ (32)

    式中, $ {\eta _3} \in (0,\;1) $, $ {n_{\chi 1}} > 0 $, $ {n_{\chi 2}} > 0 $均表示设计参数, $ {T_{c3}} > 0 $为预定义时间常数, 选择航迹方位角误差的李雅普诺夫函数为$ {V_{21}} = 0.5e_2^{\rm{T}}{e_2} $.
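    式(32)中随李雅普诺夫函数取值变化的预定义时间结构增益及滑模面, 可按如下草图实现; 参数取值为示例, 并对误差为零时结构项的奇异情形作了简单保护(此保护为实现层面的假设处理, 原文未涉及):

```python
import math

def predefined_time_gain(V, eta, Tc, n1, n2):
    """式(32)结构增益: pi/(2*eta*Tc*sqrt(n1*n2)) * (n1*V^(-eta/2) + n2*V^(eta/2))."""
    return (math.pi / (2 * eta * Tc * math.sqrt(n1 * n2))
            * (n1 * V ** (-eta / 2) + n2 * V ** (eta / 2)))

def sliding_surface(e2, e2_dot, eta3=0.5, Tc3=2.0, n1=1.0, n2=1.0):
    """式(32): s2 = e2 + Phi2, 其中 V21 = 0.5*e2^2 为误差的李雅普诺夫函数."""
    V21 = 0.5 * e2 * e2
    if V21 == 0.0:                 # 误差为零时 V^(-eta/2) 奇异, 直接返回 e2
        return e2
    return e2 + predefined_time_gain(V21, eta3, Tc3, n1, n2) * e2_dot
```

    其中$ T_{c3} $即可直接设定的预定义收敛时间常数, 这正是该方法区别于固定时间控制之处.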

    对$ s_2 $求导可得

    $$ \begin{equation} {\dot s_2} = {f_2} + {g_2}\mu + {d_\chi } - {\dot \chi _d} + {\dot \Phi _2} \end{equation} $$ (33)

    设计虚拟控制输入$ \mu_c $为

    $$ \begin{split} {\mu _c} =\;& g_2^{ - 1}\Big[ \frac{\pi }{{2{\eta _4}{T_{c4}}\sqrt {{n_{\chi 3}}{n_{\chi 4}}} }}{s_2}({n_{\chi 3}}V_{22}^{ - \frac{{{\eta _4}}}{2}} + {n_{\chi 4}}V_{22}^{\frac{{{\eta _4}}}{2}}) - \\ &{f_2} - {{\overset{\frown} d}_\chi } + {{\dot \chi }_d} - {\dot{ {\overset{\frown} \Phi} }_2}\Big] \\[-1pt]\end{split} $$ (34)

    式中, $ {\eta _4} \in (0,\;1) $, $ {n_{\chi 3}} > 0 $, $ {n_{\chi 4}} > 0 $均表示设计参数, $ {T_{c4}} > 0 $为预定义时间常数, $ {\dot{ {\overset{\frown} \Phi} }_2} $为参考信号$ {\Phi _2} $通过TD跟踪微分器后得到的数值微分信号. $ {{\overset{\frown} d} _\chi } = {\hat d_\chi }{\mathop{\rm sgn}} ({s_2}) $, $ {\hat d_\chi } $表示扰动估计值, 估计误差为$ {\tilde d_\chi } = {d_\chi } - {\hat d_\chi } $, $ {V_{22}} $的表达式为

    $$ \begin{equation} {V_{22}} = 0.5s_2^{\rm{T}}{s_2} + 0.5\tilde d_\chi ^{\rm{T}}{\tilde d_\chi } \end{equation} $$ (35)

    设计扰动观测器为

    $$ \begin{split} &{{\hat d}_\chi } = {K_2}(\chi - {D_2})\\ &{{\dot D}_2} = {{\hat d}_\chi } + {f_2} + {g_2}\mu - \\& \ \ \ \ {\kern 11pt} \frac{{\pi K_2^{ - 1}}}{{2{\eta _4}{T_{c4}}\sqrt {{n_{\chi 3}}{n_{\chi 4}}} }}{{\tilde d}_\chi }({n_{\chi 3}}V_{22}^{ - \frac{{{\eta _4}}}{2}} + {n_{\chi 4}}V_{22}^{\frac{{{\eta _4}}}{2}}) \end{split} $$ (36)

    式中, $ {K_2} > 0 $为设计参数.

    则$ {\tilde d_\chi } $的导数为

    $$ \begin{split} {\dot{ \tilde d}_\chi } =\;& {{\dot d}_\chi } - {K_2}{{\tilde d}_\chi } - \\ & \frac{\pi }{{2{\eta _4}{T_{c4}}\sqrt {{n_{\chi 3}}{n_{\chi 4}}} }}{{\tilde d}_\chi }({n_{\chi 3}}V_{22}^{ - \frac{{{\eta _4}}}{2}} + {n_{\chi 4}}V_{22}^{\frac{{{\eta _4}}}{2}}) \end{split} $$ (37)
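    式(36)扰动观测器的一个最小离散草图如下. 注意式(36)中含估计误差$ \tilde d_\chi $的修正项依赖未知的真实扰动, 无法直接实现, 此处予以省略, 仅示意观测器主体结构; 信号与参数均为假设值:

```python
def dob_step(D2, chi, f2, g2, mu, K2, dt):
    """式(36)主体: d_hat = K2*(chi - D2); D2_dot = d_hat + f2 + g2*mu.
    返回更新后的内部状态 D2 与扰动估计 d_hat."""
    d_hat = K2 * (chi - D2)
    D2 = D2 + dt * (d_hat + f2 + g2 * mu)
    return D2, d_hat

# 对常值扰动 d = 0.3 的简单验证: 真实动力学 chi_dot = f2 + g2*mu + d
chi, D2, d_true = 0.0, 0.0, 0.3
f2, g2, mu, K2, dt = 0.0, 1.0, 0.0, 10.0, 0.001
for _ in range(2000):
    chi += dt * (f2 + g2 * mu + d_true)
    D2, d_hat = dob_step(D2, chi, f2, g2, mu, K2, dt)
```

    在该示例中, 估计值$ \hat d_\chi $以由$ K_2 $决定的速率指数收敛到常值扰动, 与式(37)的误差动力学一致.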

    步骤2. 姿态角控制: 在舰载机着舰过程中, 期望的侧滑角$ {\beta _c} = 0 $, 期望的迎角$ \alpha_r $保持配平迎角. 当$ \beta = 0 $时, 有$ \theta = \alpha + \gamma $, 则期望的俯仰角为$ {\theta _c} = {\alpha _r} + {\gamma _c} $. 取姿态控制子系统中关于航迹滚转角、俯仰角和侧滑角$ {{\boldsymbol{x}}_3} $的仿射形式表达式为

    $$ \begin{equation} {{\dot{\boldsymbol{x}}}_3} = {{\boldsymbol{f}}_3} + {{\boldsymbol{g}}_3}{{\boldsymbol{x}}_4} + {{\boldsymbol{d}}_3} \end{equation} $$ (38)

    式中, $ {{\boldsymbol{f}}_3} $和$ {{\boldsymbol{g}}_3} $分别为

    $$ \begin{split} &{{\boldsymbol{f}}_3} = \left[ {\begin{array}{*{20}{c}} {\dot \chi (\sin \gamma + \cos \gamma \sin \mu \tan \beta ) + \dot \gamma \cos \alpha \tan \beta }\\ { - \dot \chi \cos \gamma \sin \mu - \dot \gamma (\cos \mu + \cos \beta )\sec \beta }\\ { - \dot \gamma \sin \mu + \dot \chi \cos \mu \cos \gamma } \end{array}} \right]\\ &{{\boldsymbol{g}}_3} = \left[ {\begin{array}{*{20}{c}} {\cos \alpha \sec \beta }&0&{\sin \alpha \sec \beta }\\ { - \cos \alpha \tan \beta }&1&{ - \sin \alpha \tan \beta }\\ {\sin \alpha }&0&{ - \cos \alpha } \end{array}} \right] \end{split} $$

    考虑$ {{\boldsymbol{x}}_3} $的期望参考信号为$ {{\boldsymbol{x}}_{3c}} = {\left[ {{\mu _c},\;{\theta _c},\;{\beta _c}} \right]^{\rm{T}}} $, 令其通过一阶低通滤波器可得

    $$ \begin{equation} {{\boldsymbol{\kappa }}_2}{{\dot{\boldsymbol{x}}}_{3d}} + {{\boldsymbol{x}}_{3d}} = {{\boldsymbol{x}}_{3c}} \end{equation} $$ (39)

    式中, $ {{\boldsymbol{x}}_{3d}}({\bf{0}}) = {{\boldsymbol{x}}_{3c}}({\bf{0}}) $, $ {{\boldsymbol{\kappa }}_2} = {\rm{diag}}\{ {{\kappa _{21}},\;{\kappa _{22}},\;{\kappa _{23}}}\} $, $ {\kappa _{2i}}(i = 1,\;2,\;3) > 0 $为设计参数.

    定义姿态角误差为$ {{\boldsymbol{e}}_3} = {{\boldsymbol{x}}_3} - {{\boldsymbol{x}}_{3d}} $, 则姿态角误差动力学为

    $$ \begin{equation} {{\dot{\boldsymbol{e}}}_3} = {{\boldsymbol{f}}_3} + {{\boldsymbol{g}}_3}{{\boldsymbol{x}}_4} + {{\boldsymbol{d}}_3} - {{\dot{\boldsymbol{x}}}_{3d}} \end{equation} $$ (40)

    设计滑模面$ {{\boldsymbol{s}}_3} $为

    $$ \begin{split} &{{\boldsymbol{s}}_3} = {{\boldsymbol{e}}_3} + {{\boldsymbol{\Phi }}_3}\\ &{{\boldsymbol{\Phi }}_3} = \frac{\pi }{{2{\eta _5}{T_{c5}}\sqrt {{n_{\theta 1}}{n_{\theta 2}}} }}({n_{\theta 1}}V_{31}^{ - \frac{{{\eta _5}}}{2}} + {n_{\theta 2}}V_{31}^{\frac{{{\eta _5}}}{2}}){{{\dot{\boldsymbol{e}}}}_3} \end{split} $$ (41)

    式中, $ {\eta _5} \in (0,\;1) $, $ {n_{\theta 1}} > 0 $, $ {n_{\theta 2}} > 0 $均表示设计参数, $ {T_{c5}} > 0 $为预定义时间常数, 选择姿态角误差的李雅普诺夫函数为$ {V_{31}} = 0.5{\boldsymbol{e}}_3^{\rm{T}}{{\boldsymbol{e}}_3} $.

    对$ {{\boldsymbol{s}}_3} $求导可得

    $$ \begin{equation} {{\dot{\boldsymbol{s}}}_3} = {{\boldsymbol{f}}_3} + {{\boldsymbol{g}}_3}{{\boldsymbol{x}}_4} + {{\boldsymbol{d}}_3} - {{\dot{\boldsymbol{x}}}_{3d}} + {{\dot{\boldsymbol{\Phi}}}_3} \end{equation} $$ (42)

    设计虚拟控制输入$ {{\boldsymbol{x}}_{4c}} $为

    $$ \begin{split} {{\boldsymbol{x}}_{4c}} =\;& {\boldsymbol{g}}_3^{ - 1}( - {{\boldsymbol{f}}_3} - {{{\overset{\frown} {\boldsymbol{d}}}}_3} + {{{\dot{\boldsymbol{x}}}}_{3d}} - {{\dot{ {\overset{\frown} {\boldsymbol{\Phi}}} }}_3} - \\ & \frac{\pi }{{2{\eta _6}{T_{c6}}\sqrt {{n_{\theta 3}}{n_{\theta 4}}} }}{{\boldsymbol{s}}_3}({n_{\theta 3}}V_{32}^{ - \frac{{{\eta _6}}}{2}} + {n_{\theta 4}}V_{32}^{\frac{{{\eta _6}}}{2}}) \end{split} $$ (43)

    式中, $ {\eta _6} \in (0,\;1) $, $ {n_{\theta 3}} > 0 $, $ {n_{\theta 4}} > 0 $均表示设计参数, $ {T_{c6}} > 0 $为预定义时间常数, $ {{\dot{\overset{\frown} {\boldsymbol{\Phi}}}_3}} $为参考信号$ {{\boldsymbol{\Phi }}_3} $通过跟踪微分器后得到的数值微分信号. $ {{{\overset{\frown} {\boldsymbol{d}}} }_3} = {[{\hat d_{31}}{\mathop{\rm sgn}} ({s_{31}}),\;{\hat d_{32}}{\mathop{\rm sgn}} ({s_{32}}),\;{\hat d_{33}}{\mathop{\rm sgn}} ({s_{33}})]^{\rm{T}}} $, 此处$ {\hat d_{3i}}(i = 1,\;2,\;3) $和$ {s_{3i}} $分别表示扰动估计值$ {{\hat{\boldsymbol{d}}}_3} = [ {{\hat d}_\mu },\;{{\hat d}_\theta }, {{\hat d}_\beta } ]^{\rm{T}} $和滑模面$ {{\boldsymbol{s}}_3} $的第$ i $个分量, $ {{\tilde{\boldsymbol{d}}}_3} = {{\boldsymbol{d}}_3} - {{\hat{\boldsymbol{d}}}_3} $为扰动观测器估计误差, $ {V_{32}} $的表达式为

    $$ \begin{equation} {V_{32}} = 0.5{\boldsymbol{s}}_3^{\rm{T}}{{\boldsymbol{s}}_3} + 0.5{\tilde{\boldsymbol{d}}}_3^{\rm{T}}{{\tilde{\boldsymbol{d}}}_3} \end{equation} $$ (44)

    设计扰动观测器$ {{\hat{\boldsymbol{d}}}_3} $为

    $$ \begin{split} &{{{\hat{\boldsymbol{d}}}}_3} = {{\boldsymbol{K}}_3}({{\boldsymbol{x}}_3} - {{\boldsymbol{D}}_3})\\& {{{\dot{\boldsymbol{D}}}}_3} = {{{\hat{\boldsymbol{d}}}}_3} + {{\boldsymbol{f}}_3} + {{\boldsymbol{g}}_3}{{\boldsymbol{x}}_4} - \\& \ \ \ \ \ \ \ \ \frac{{\pi {\boldsymbol{K}}_3^{ - 1}}}{{2{\eta _6}{T_{c6}}\sqrt {{n_{\theta 3}}{n_{\theta 4}}} }}{{{\tilde{\boldsymbol{d}}}}_3}({n_{\theta 3}}V_{32}^{ - \frac{{{\eta _6}}}{2}} + {n_{\theta 4}}V_{32}^{\frac{{{\eta _6}}}{2}}) \end{split} $$ (45)

    式中, $ {{\boldsymbol{K}}_3} = {\rm{diag}}\{{k_{31}},\;{k_{32}},\;{k_{33}}\} $为正定矩阵.

    则$ {{\tilde{\boldsymbol{d}}}_3} $的导数为

    $$ \begin{split} {{{\dot{\tilde{\boldsymbol{d}}}}_3}} =\;& {{{\dot{\boldsymbol{d}}}}_3} - {{\boldsymbol{K}}_3}{{{\tilde{\boldsymbol{d}}}}_3} - \\& \frac{\pi }{{2{\eta _6}{T_{c6}}\sqrt {{n_{\theta 3}}{n_{\theta 4}}} }}{{{\tilde{\boldsymbol{d}}}}_3}({n_{\theta 3}}V_{32}^{ - \frac{{{\eta _6}}}{2}} + {n_{\theta 4}}V_{32}^{\frac{{{\eta _6}}}{2}}) \end{split} $$ (46)

    步骤3. 角速率控制: 取姿态控制子系统中关于角速率$ {{\boldsymbol{x}}_4} $的仿射形式表达式为

    $$ \begin{equation} {{\dot{\boldsymbol{x}}}_4} = {{\boldsymbol{f}}_4} + {{\boldsymbol{g}}_4}{\boldsymbol{u}} + {{\boldsymbol{d}}_4} \end{equation} $$ (47)

    式中, $ {{\boldsymbol{f}}_4} $和$ {{\boldsymbol{g}}_4} $分别为

    $$ \begin{split} &{{\boldsymbol{f}}_4} = \left[ {\begin{array}{*{20}{l}} {({c_1}r + {c_2}p)q + {c_3}\bar qSb\left({C_{l\beta }}\beta + {C_{lp}}\dfrac{{bp}}{{2V}} + {C_{lr}}\dfrac{{br}}{{2V}}\right)+}\\ {\quad {c_4}\bar qSb\left({C_{n\beta }}\beta + {C_{np}}\dfrac{{bp}}{{2V}} + {C_{nr}}\dfrac{{br}}{{2V}}\right)}\\ {{c_5}pr - {c_6}({p^2} - {r^2}) + {c_7}\bar qS\bar c\left({C_{m0}} + {C_{m\alpha }}\alpha + {C_{mq}}\dfrac{{\bar cq}}{{2V}}\right)}\\ {({c_8}p - {c_2}r)q + {c_4}\bar qSb\left({C_{l\beta }}\beta + {C_{lp}}\dfrac{{bp}}{{2V}} + {C_{lr}}\dfrac{{br}}{{2V}}\right)+}\\ {\quad {c_9}\bar qSb\left({C_{n\beta }}\beta + {C_{np}}\dfrac{{bp}}{{2V}} + {C_{nr}}\dfrac{{br}}{{2V}}\right)} \end{array}} \right]\\ &{{\boldsymbol{g}}_4} = \bar qS\left[ {\begin{array}{*{20}{c}} 0&{b({c_3}{C_{l{\delta _a}}} + {c_4}{C_{n{\delta _a}}})}&{b({c_3}{C_{l{\delta _r}}} + {c_4}{C_{n{\delta _r}}})}\\ {{c_7}\bar c{C_{m{\delta _e}}}}&0&0\\ 0&{b({c_4}{C_{l{\delta _a}}} + {c_9}{C_{n{\delta _a}}})}&{b({c_4}{C_{l{\delta _r}}} + {c_9}{C_{n{\delta _r}}})} \end{array}} \right] \end{split} $$

    令步骤2中得到的期望角速率信号$ {{\boldsymbol{x}}_{4c}} $通过一阶低通滤波器可得

    $$ \begin{equation} {{\boldsymbol{\kappa }}_3}{{\dot{\boldsymbol{x}}}_{4d}} + {{\boldsymbol{x}}_{4d}} = {{\boldsymbol{x}}_{4c}} \end{equation} $$ (48)

    式中, $ {{\boldsymbol{x}}_{4d}}({\bf{0}}) = {{\boldsymbol{x}}_{4c}}({\bf{0}}) $, $ {{\boldsymbol{\kappa }}_3} = {\rm{diag}}\{ {{\kappa _{31}},\;{\kappa _{32}},\;{\kappa _{33}}} \} $, $ {\kappa _{3i}}(i = 1,\;2,\;3) > 0 $为设计参数.

    定义角速率误差为$ {{\boldsymbol{e}}_4} = {{\boldsymbol{x}}_4} - {{\boldsymbol{x}}_{4d}} $, 则角速率误差动力学为

    $$ \begin{equation} {{\dot{\boldsymbol{e}}}_4} = {{\boldsymbol{f}}_4} + {{\boldsymbol{g}}_4}{\boldsymbol{u}} + {{\boldsymbol{d}}_4} - {{\dot{\boldsymbol{x}}}_{4d}} \end{equation} $$ (49)

    设计滑模面$ {{\boldsymbol{s}}_4} $为

    $$ \begin{split} &{{\boldsymbol{s}}_4} = {{\boldsymbol{e}}_4} + {{\boldsymbol{\Phi }}_4}\\& {{\boldsymbol{\Phi }}_4} = \frac{\pi }{{2{\eta _7}{T_{c7}}\sqrt {{n_{p1}}{n_{p2}}} }}({n_{p1}}V_{41}^{ - \frac{{{\eta _7}}}{2}} + {n_{p2}}V_{41}^{\frac{{{\eta _7}}}{2}}){{{\dot{\boldsymbol{e}}}}_4} \end{split} $$ (50)

    式中, $ {\eta _7} \in (0,\;1) $, $ {n_{p1}} > 0 $, $ {n_{p2}} > 0 $均表示设计参数, $ {T_{c7}} > 0 $为预定义时间常数, 选择角速率误差的李雅普诺夫函数为$ {V_{41}} = 0.5{\boldsymbol{e}}_4^{\rm{T}}{{\boldsymbol{e}}_4} $.

    对$ {{\boldsymbol{s}}_4} $求导可得

    $$ \begin{equation} {{\dot{\boldsymbol{s}}}_4} = {{\boldsymbol{f}}_4} + {{\boldsymbol{g}}_4}{\boldsymbol{u}} + {{\boldsymbol{d}}_4} - {{\dot{\boldsymbol{x}}}_{4d}} + {{\dot{\boldsymbol{\Phi}}}_4} \end{equation} $$ (51)

    设计舵面控制量$ {{\boldsymbol{u}}_c} $为

    $$ \begin{split} {{\boldsymbol{u}}_c} =\;& {\boldsymbol{g}}_4^{ - 1}( - {{\boldsymbol{f}}_4} - {{{\overset{\frown} {\boldsymbol{d}}}}_4} + {{{\dot{\boldsymbol{x}}}}_{4d}} - {{{\dot{ {\overset{\frown} {\boldsymbol{\Phi}}} }}_4}} -\\& \frac{\pi }{{2{\eta _8}{T_{c8}}\sqrt {{n_{p3}}{n_{p4}}} }}{{\boldsymbol{s}}_4}({n_{p3}}V_{42}^{ - \frac{{{\eta _8}}}{2}} + {n_{p4}}V_{42}^{\frac{{{\eta _8}}}{2}}) \end{split} $$ (52)

    式中, $ {\eta _8} \in (0,\;1) $, $ {n_{p3}} > 0 $, $ {n_{p4}} > 0 $均表示设计参数, $ {T_{c8}} > 0 $为预定义时间常数, $ {{\dot{\overset{\frown} {\boldsymbol{\Phi}}}_4}} $为参考信号$ {{\boldsymbol{\Phi }}_4} $通过跟踪微分器后得到的数值微分信号. $ {{{\overset{\frown} {\boldsymbol{d}}} }_4} = {[{\hat d_{41}}{\mathop{\rm sgn}} ({s_{41}}),\;{\hat d_{42}}{\mathop{\rm sgn}} ({s_{42}}),\;{\hat d_{43}}{\mathop{\rm sgn}} ({s_{43}})]^{\rm{T}}} $, 此处$ {\hat d_{4i}}(i = 1,\;2,\;3) $和$ {s_{4i}} $分别表示扰动估计值$ {{\hat{\boldsymbol{d}}}_4} = {[ {{{\hat d}_p},\;{{\hat d}_q},\;{{\hat d}_r}} ]^{\rm{T}}} $和滑模面$ {{\boldsymbol{s}}_4} $的第$ i $个分量; $ {{\tilde{\boldsymbol{d}}}_4} = {{\boldsymbol{d}}_4} - {{\hat{\boldsymbol{d}}}_4} $为扰动观测器估计误差, $ {V_{42}} $的表达式为

    $$ \begin{equation} {V_{42}} = 0.5{\boldsymbol{s}}_4^{\rm{T}}{{\boldsymbol{s}}_4} + 0.5{\tilde{\boldsymbol{d}}}_4^{\rm{T}}{{\tilde{\boldsymbol{d}}}_4} \end{equation} $$ (53)

    设计扰动观测器$ {{\hat{\boldsymbol{d}}}_4} $为

    $$ \begin{split} {{{\hat{\boldsymbol{d}}}}_4} = \;&{{\boldsymbol{K}}_4}({{\boldsymbol{x}}_4} - {{\boldsymbol{D}}_4})\\ {{{\dot{\boldsymbol{D}}}}_4} = \;&{{{\hat{\boldsymbol{d}}}}_4} + {{\boldsymbol{f}}_4} + {{\boldsymbol{g}}_4}{\boldsymbol{u}} - \\ & \frac{{\pi {\boldsymbol{K}}_4^{ - 1}}}{{2{\eta _8}{T_{c8}}\sqrt {{n_{p3}}{n_{p4}}} }}{{{\tilde{\boldsymbol{d}}}}_4}({n_{p3}}V_{42}^{ - \frac{{{\eta _8}}}{2}} + {n_{p4}}V_{42}^{\frac{{{\eta _8}}}{2}}) \end{split} $$ (54)

    式中, $ {{\boldsymbol{K}}_4} = {\rm{diag}}\{{k_{41}},\;{k_{42}},\;{k_{43}}\} $为正定矩阵.

    则$ {{\tilde{\boldsymbol{d}}}_4} $的导数为

    $$ \begin{split} {{{\dot{\tilde{\boldsymbol{d}}}}_4}} =\;& {{{\dot{\boldsymbol{d}}}}_4} - {{\boldsymbol{K}}_4}{{{\tilde{\boldsymbol{d}}}}_4} - \\ & \frac{\pi }{{2{\eta _8}{T_{c8}}\sqrt {{n_{p3}}{n_{p4}}} }}{{{\tilde{\boldsymbol{d}}}}_4}({n_{p3}}V_{42}^{ - \frac{{{\eta _8}}}{2}} + {n_{p4}}V_{42}^{\frac{{{\eta _8}}}{2}}) \end{split} $$ (55)
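    为说明式(52)控制律的计算流程, 下面给出一个数值示意(取对角$ {\boldsymbol{g}}_4 $以便直接求逆; 角速率回路参数取正文数值, 其余状态量均为假设的演示值, 并非原文代码):

```python
import math

# 式(52)的数值计算示意: eta8, Tc8, np3, np4 取正文角速率回路参数,
# 其余各量(g4 对角元、f4、扰动估计、滑模面等)均为假设的演示值.
eta8, Tc8, np3, np4 = 0.4, 2.0, 2.87, 3.22
g4_diag  = [2.0, 1.5, 1.8]              # 假设的对角控制效率矩阵 g4
f4       = [0.10, -0.05, 0.02]
dhat4    = [0.01, 0.00, -0.01]          # 含符号项后的扰动估计
x4d_dot  = [0.0, 0.0, 0.0]
phi4_dot = [0.0, 0.0, 0.0]              # TD 输出的数值微分
s4       = [0.05, -0.03, 0.02]
dtil4    = [0.005, -0.002, 0.001]
# V42 按式(53)计算
V42 = 0.5 * sum(s * s for s in s4) + 0.5 * sum(d * d for d in dtil4)
gain = math.pi / (2 * eta8 * Tc8 * math.sqrt(np3 * np4)) \
       * (np3 * V42 ** (-eta8 / 2) + np4 * V42 ** (eta8 / 2))
u_c = [(-f4[i] - dhat4[i] + x4d_dot[i] - phi4_dot[i] - gain * s4[i]) / g4_diag[i]
       for i in range(3)]
```

    可以看到, 控制量由前馈抵消项与随$ V_{42} $自适应的滑模反馈项两部分构成.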

    舰载机着舰时处于低速低空区域, 迎角、速度与推力之间呈现反区特性, 需通过调整推力保持恒定迎角. 考虑舰尾流造成的时变扰动, 式(3)中$ \alpha $的仿射形式表达式为

    $$ \begin{equation} \dot \alpha = {f_\alpha } + {g_\alpha }{\bar T} + {d_\alpha } \end{equation} $$ (56)

    式中, $ {f_\alpha } $和$ {g_\alpha } $的表达式为

    $$ \begin{split} {f_\alpha } = \;&q - (p\cos \alpha + r\sin \alpha ) + \\ &\frac{{mg\cos \mu \cos \gamma - L}}{{mV\cos \beta }} \\ {g_\alpha } =\;&- \frac{{\sin \alpha }}{{mV\cos \beta }} \end{split} $$

    由于迎角的参考值$ {\alpha _r} $为常数, 其导数为零. 定义迎角误差为$ {e_\alpha } = \alpha - {\alpha _r} $. 设计滑模面$ {s_5} $为

    $$ \begin{split} &{s_5} = {e_\alpha } + {\Phi _\alpha }\\& {\Phi _\alpha } = \frac{\pi }{{2{\eta _9}{T_{c9}}\sqrt {{n_{\alpha 1}}{n_{\alpha 2}}} }}({n_{\alpha 1}}V_{51}^{ - \frac{{{\eta _9}}}{2}} + {n_{\alpha 2}}V_{51}^{\frac{{{\eta _9}}}{2}}){{\dot e}_\alpha } \end{split} $$ (57)

    式中, $ {\eta _9} \in (0,\;1) $, $ {n_{\alpha 1}} > 0 $, $ {n_{\alpha 2}} > 0 $均表示设计参数, $ {T_{c9}} > 0 $为预定义时间常数, 选择迎角误差的李雅普诺夫函数为$ {V_{51}} = 0.5e_\alpha ^{\rm{T}}{e_\alpha } $.

    对$ s_5 $求导可得

    $$ \begin{equation} {\dot s_5} = {f_\alpha } + {g_\alpha }{\bar T} + {d_\alpha } + {\dot \Phi _\alpha } \end{equation} $$ (58)

    设计油门控制量$ {{\bar T}_c} $为

    $$ \begin{split} {{\bar T}_c} =\;& g_\alpha ^{ - 1}[ - {f_\alpha } - {{\overset{\frown} d}_\alpha } - {{\dot{ {\overset{\frown} \Phi} }_\alpha }} - \\ & \frac{{ \pi }}{{2{\eta _{10}}{T_{c10}}\sqrt {{n_{\alpha 3}}{n_{\alpha 4}}} }}{s_5}({n_{\alpha 3}}V_{52}^{ - \frac{{{\eta _{10}}}}{2}} + {n_{\alpha 4}}V_{52}^{\frac{{{\eta _{10}}}}{2}})] \end{split} $$ (59)

    式中, $ {\eta _{10}} \in (0,\;1) $, $ {n_{\alpha 3}} > 0 $, $ {n_{\alpha 4}} > 0 $均表示设计参数, $ {T_{c10}} > 0 $为预定义时间常数, $ {{\dot{ {\overset{\frown} \Phi} }_\alpha }} $为参考信号$ {\Phi _\alpha } $通过TD跟踪微分器后得到的数值微分信号. $ {{\overset{\frown} d} _\alpha } = {\hat d_\alpha }{\mathop{\rm sgn}} ({s_5}) $, $ {\hat d_\alpha } $表示扰动估计值, 估计误差为$ {\tilde d_\alpha } = {d_\alpha } - {\hat d_\alpha } $, $ {V_{52}} $表达式为

    $$ \begin{equation} {V_{52}} = 0.5s_5^{\rm{T}}{s_5} + 0.5\tilde d_\alpha ^{\rm{T}}{{\tilde d}_\alpha } \end{equation} $$ (60)

    设计扰动观测器$ {\hat d_\alpha } $为

    $$ \begin{split} {{\hat d}_\alpha } = \;&{K_5}(\alpha - {D_\alpha })\\ {{\dot D}_\alpha } = \;&{{\hat d}_\alpha } + {f_\alpha } + {g_\alpha }{\bar T} - \\ &\frac{{ \pi K_5^{ - 1}{{\tilde d}_\alpha }}}{{2{\eta _{10}}{T_{c10}}\sqrt {{n_{\alpha 3}}{n_{\alpha 4}}} }}({n_{\alpha 3}}V_{52}^{ - \frac{{{\eta _{10}}}}{2}} + {n_{\alpha 4}}V_{52}^{\frac{{{\eta _{10}}}}{2}}) \end{split} $$ (61)

    式中, $ {K_5} > 0 $为设计参数.

    则$ {\tilde d_\alpha } $的导数为

    $$ \begin{split} {{\dot{ \tilde d}_\alpha }} = \;&{{\dot d}_\alpha } - {K_5}{{\tilde d}_\alpha } - \\ & \frac{{ \pi K_5^{ - 1}{{\tilde d}_\alpha }}}{{2{\eta _{10}}{T_{c10}}\sqrt {{n_{\alpha 3}}{n_{\alpha 4}}} }}({n_{\alpha 3}}V_{52}^{ - \frac{{{\eta _{10}}}}{2}} + {n_{\alpha 4}}V_{52}^{\frac{{{\eta _{10}}}}{2}}) \end{split} $$ (62)
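    为直观展示式(61)型标量扰动观测器的估计过程, 下面给出一个数值示意($ K_5 $、$ \eta_{10} $、$ T_{c10} $、$ n_{\alpha 3} $、$ n_{\alpha 4} $取正文进近动力补偿系统的取值; 对象项与扰动均为假设的演示值, 并对$ V_{52} $的分数次幂加入下限以避免零附近的数值刚性):

```python
import math

# 式(61)型标量扰动观测器的数值示意: 估计迎角通道上的慢变正弦扰动.
# 校正项含 K5^{-1} 因子, 与式(54)的向量形式一致; f, g, T 与 d 为演示值.
h = 1e-3
K5, eta10, Tc10, n3, n4 = 1.6, 0.9, 2.0, 6.52, 3.57
f, g, T = 0.0, -0.02, 5.0                 # 冻结的对象项(仅作演示)
alpha, D = 0.0, 0.0
for k in range(int(10.0 / h)):
    t = k * h
    d = 0.3 * math.sin(0.5 * t)           # 真实扰动
    dhat = K5 * (alpha - D)               # 扰动估计
    dtil = d - dhat
    V = max(0.5 * dtil * dtil, 1e-4)      # 含下限的李雅普诺夫量(示意)
    corr = (math.pi * dtil / (K5 * 2 * eta10 * Tc10 * math.sqrt(n3 * n4))) \
           * (n3 * V ** (-eta10 / 2) + n4 * V ** (eta10 / 2))
    alpha += h * (f + g * T + d)          # 对象: d(alpha)/dt = f + g*T + d
    D += h * (dhat + f + g * T - corr)    # 观测器辅助状态
err = abs(d - K5 * (alpha - D))           # 过渡过程结束后的估计误差
```

    数值结果显示估计误差在过渡过程结束后收敛到很小的邻域内, 与式(62)的误差动态一致.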

    注释1. 为了得到参考信号$ {\Phi _i}(i = 2,\;3,\;4,\;\alpha ) $的数值微分, 考虑借助文献[35]中给出的离散二阶系统形式的TD跟踪微分器

    $$ \begin{align*} \left\{ \begin{aligned} &{z_1}(k + 1) = {z_1}(k) + {z_2}(k)h\\& {z_2}(k + 1) = {z_2}(k) + {u_{TD}}h \end{aligned} \right. \end{align*} $$

    式中, $ {z_2}(k) $为$ {\Phi _i}(k) $的数值微分值, $ h $为采样时间, 控制信号$ {u_{TD}} = {f_{TD}}({z_1}(k) - {\Phi _i}(k),\;{z_2}(k),\;{r_0},\;{h_0}) $. 其中$ {r_0} $和$ {h_0} $分别为速度和滤波因子, 均为可调节参数, $ {f_{TD}}(*) $为快速控制最优综合函数, 其表达式为

    $$ \begin{align*} \left\{ {\begin{aligned} &{{w_T} = {r_0}{h_0},\;{w_d} = {w_T}{h_0}}\\ &{{l_{TD}} = {z_1} + {z_2}{h_0}}\\ &{{a_0} = \sqrt {w_T^2 + 8{r_0}\left| {{l_{TD}}} \right|} }\\ &{{a_{TD}} = \left\{ \begin{aligned} &{z_2} + ({a_0} - {w_T}){\mathop{\rm sgn}} ({l_{TD}})/2&& \left| {{l_{TD}}} \right| > {w_d}\\ &{z_2} + {l_{TD}}/{h_0}&&\left| {{l_{TD}}} \right| \le {w_d} \end{aligned} \right.}\\ &{{f_{TD}} = \left\{ \begin{aligned} &- {r_0}{\mathop{\rm sgn}} ({a_{TD}})&& \left| {{a_{TD}}} \right| > {w_T}\\ &- {r_0}{a_{TD}}/{w_T}&& \left| {{a_{TD}}} \right| \le {w_T} \end{aligned} \right.} \end{aligned}} \right. \end{align*} $$

    在本文中, 采样时间$ h $设置为0.01, 速度因子和滤波因子$ r_0 $和$ h_0 $分别设置为10和0.1.
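    注释1中的离散跟踪微分器可按如下最小实现示意(采用文献[35]中$ f_{TD} $函数的标准形式, 参数取正文数值$ h=0.01 $、$ r_0=10 $、$ h_0=0.1 $; 变量名为示意性假设), 以$ \sin t $为输入检验其数值微分能力:

```python
import math

# 注释1中离散TD跟踪微分器的最小实现示意(标准 fhan 形式).
def fhan(x1, x2, r0, h0):
    w_t = r0 * h0                          # w_T
    w_d = w_t * h0                         # w_d
    l = x1 + h0 * x2                       # l_TD
    a0 = math.sqrt(w_t * w_t + 8.0 * r0 * abs(l))
    a = x2 + 0.5 * (a0 - w_t) * math.copysign(1.0, l) if abs(l) > w_d \
        else x2 + l / h0
    return -r0 * math.copysign(1.0, a) if abs(a) > w_t else -r0 * a / w_t

h, r0, h0 = 0.01, 10.0, 0.1
z1, z2 = 0.0, 0.0
for k in range(1500):                      # 跟踪 Phi(t) = sin(t)
    u_td = fhan(z1 - math.sin(k * h), z2, r0, h0)
    z1, z2 = z1 + h * z2, z2 + h * u_td
err = abs(z2 - math.cos(1500 * h))         # 与真实导数 cos(t) 的偏差
```

    过渡过程结束后, $ z_2 $即为输入信号的数值微分, 仅存在由滤波因子$ h_0 $引起的小幅相位滞后.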

    引理1[27]. 对于定义在$ t \in [{t_0},\;\infty ) $上的动态系统$ {\dot{\boldsymbol{x}}} = f({\boldsymbol{x}}) + {\boldsymbol{d}} $, 其中$ {t_0} \in {{\mathbb{R}}_ + } \cup \{ 0\} $, 若存在一个连续且径向无界的李雅普诺夫函数$ V({\boldsymbol{x}}):{{\mathbb{R}}^n} \to {\mathbb{R}} $, 使得任意解$ {\boldsymbol{x}}(t,\;{{\boldsymbol{x}}_0}) $均满足

    $$ \begin{equation} \dot V({\boldsymbol{x}}) \le - \frac{{{\pi}}}{{{k_T}{T_c}\sqrt {{k_1}{k_2}} }}({k_1}{V^{1 - \frac{{{k_T}}}{2}}} + {k_2}{V^{1 + \frac{{{k_T}}}{2}}}) + \varepsilon \end{equation} $$ (63)

    式中, $ {T_c} > 0 $, $ {k_1} > 0 $, $ {k_2} > 0 $及$ 0 < {k_T} \le 1 $表示设计参数, $ \varepsilon \in [0,\;\infty ) $为有界常数. 则动态系统是预定义时间稳定的, 且收敛时间上界为$ {T_c} $.
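    当$ \varepsilon = 0 $时, 式(63)取等号沿轨迹积分可得收敛时间$ \frac{2T_c}{\pi}\arctan (\sqrt{k_2/k_1}\,V_0^{k_T/2}) < T_c $, 即收敛时间与初值无关且不超过$ T_c $. 下面给出一个数值验证示意(参数取值仅作演示):

```python
import math

# 引理1的数值验证示意(取 epsilon = 0): 按
#   dV/dt = -pi/(kT*Tc*sqrt(k1*k2)) * (k1*V^(1-kT/2) + k2*V^(1+kT/2))
# 作显式欧拉积分, 检验不同初值 V0 下的收敛时间均不超过预定义时间 Tc.
def settle_time(V0, Tc=2.0, k1=1.0, k2=1.0, kT=1.0, h=1e-4):
    c = math.pi / (kT * Tc * math.sqrt(k1 * k2))
    V, t = V0, 0.0
    while V > 1e-8 and t < 2 * Tc:
        V = max(V - h * c * (k1 * V ** (1 - kT / 2) + k2 * V ** (1 + kT / 2)), 0.0)
        t += h
    return t
```

    即使初值相差数个数量级, 收敛时间仍被$ T_c $所界定, 这正是预定义时间稳定与有限时间稳定的本质区别.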

    定理1. 对于舰载机航迹方位角模型(29), 在预定义时间虚拟控制律(34) 作用下, 通过设置系统参数$ {T_c} = {T_{c3}} + {T_{c4}} $, 着舰航迹方位角跟踪误差可在预定义时间$ {T_c} $内收敛到平衡点邻域内.

    证明. 定理1的证明分为两个阶段: 滑模面收敛以及航迹方位角跟踪误差趋近于平衡点邻域.

    1) 首先证明第二阶段. 若滑模面已达到稳定点, 即$ {s_2} = 0 $, 则式(32)可简化为

    $$ \begin{equation} {e_2} = - {\Phi _2} \end{equation} $$ (64)

    对航迹方位角误差李雅普诺夫函数$ {V_{21}} $求导, 并将式(64)代入可得

    $$ \begin{split} {{\dot V}_{21}} = \;&e_2^{\rm{T}}{{\dot e}_2} = - \Phi _2^{\rm{T}}{{\dot e}_2}=\\ & \frac{- \pi }{{{\eta _3}{T_{c3}}\sqrt {{n_{\chi 1}}{n_{\chi 2}}} }}({n_{\chi 1}}V_{21}^{1 - \frac{{{\eta _3}}}{2}} + {n_{\chi 2}}V_{21}^{1 + \frac{{{\eta _3}}}{2}}) \end{split} $$ (65)

    根据引理1, 舰载机着舰航迹方位角误差$ {e_2} $能够在预定义时间$ {T_{c3}} $内稳定至0.

    2) 对$ {V_{22}} $求导, 并将控制律(34)代入可得

    $$ \begin{split} {{\dot V}_{22}} =\;& \tilde d_\chi ^{\rm{T}}{{\dot d}_\chi } - {K_2}\tilde d_\chi ^{\rm{T}}{{\tilde d}_\chi } + s_2^{\rm{T}}{{\hat d}_\chi }{\mathop{\rm sgn}} ({s_2}) - \\ & \frac{ \pi }{{{\eta _4}{T_{c4}}\sqrt {{n_{\chi 3}}{n_{\chi 4}}} }}({n_{\chi 3}}V_{22}^{1 - \frac{{{\eta _4}}}{2}} + {n_{\chi 4}}V_{22}^{1 + \frac{{{\eta _4}}}{2}}) - \\ &s_2^{\rm{T}}{d_\chi } + s_2^{\rm{T}}({{\dot \Phi }_2} - {{\dot{ {\overset{\frown} \Phi} }_2}})\\[-1pt] \end{split} $$ (66)

    考虑不等式: $ \tilde d_\chi ^{\rm{T}}{\dot d_\chi } \le 0.5\tilde d_\chi ^{\rm{T}}{\tilde d_\chi } + 0.5{\| {{{\dot d}_\chi }} \|^2} $, 并根据跟踪微分器收敛理论, 存在正常数$ {\varepsilon _\chi } $使得$ \| s_2^{\rm{T}}({{\dot \Phi }_2} - {{{\dot{\overset{\frown} \Phi}}_2}}) \| \le {\varepsilon _\chi } $, 则式(66) 可进一步写为

    $$ \begin{split} {{\dot V}_{22}} \le\;& - {{\bar K}_2}\tilde d_\chi ^{\rm{T}}{{\tilde d}_\chi } + {\varepsilon _\chi } + 0.5{\left\| {{{\dot d}_\chi }} \right\|^2} - \\ & \frac{\pi }{{{\eta _4}{T_{c4}}\sqrt {{n_{\chi 3}}{n_{\chi 4}}} }}({n_{\chi 3}}V_{22}^{1 - \frac{{{\eta _4}}}{2}} + {n_{\chi 4}}V_{22}^{1 + \frac{{{\eta _4}}}{2}}) \end{split} $$ (67)

    式中, $ {\bar K_2} = {K_2} - 0.5 $.

    通过选择$ {K_2} $的取值使得$ {\bar K_2} > 0 $, 能够得到

    $$ \begin{equation} {\dot V_{22}} \le \frac{{ - \pi }}{{{\eta _4}{T_{c4}}\sqrt {{n_{\chi 3}}{n_{\chi 4}}} }}({n_{\chi 3}}V_{22}^{1 - \frac{{{\eta _4}}}{2}} + {n_{\chi 4}}V_{22}^{1 + \frac{{{\eta _4}}}{2}}) + {\varepsilon _{{V_{22}}}} \end{equation} $$ (68)

    式中, $ {\varepsilon _{{V_{22}}}} = {\varepsilon _\chi } + 0.5{\left\| {{{\dot d}_\chi }} \right\|^2} $.

    根据$ {\varepsilon _{{V_{22}}}} $的有界性, 由引理1可知, 由滑模面(32)与扰动观测器(36)构成的式(35)中的$ {s_2} $和$ {\tilde d_\chi } $将在预定义时间$ {T_{c4}} $内收敛.

    结合步骤1)和2)的证明, 舰载机着舰航迹方位角控制系统有界稳定, 航迹方位角跟踪误差在预定义时间$ {T_c} = {T_{c3}} + {T_{c4}} $内收敛至平衡点邻域内. 姿态角、角速率控制与进近动力补偿系统的稳定性证明与之类似, 不再赘述.  

    定理2. 考虑着舰控制系统(29)、(38)和(47)以及进近动力补偿系统(56), 设置$ {T_c} = \sum\nolimits_{i = 3}^{10} {{T_{ci}}} $. 在预定义时间控制律(34)、(43)、(52)和(59)作用下, 李雅普诺夫函数(69)中的信号一致终值有界, 且对应误差信号在预定义时间$ {T_c} $内收敛.

    证明. 选择李雅普诺夫函数$ {V_6} $为

    $$ \begin{split} {V_6} =\;& \sum\limits_{i = 2}^5 {\frac{1}{2}(e_i^{\rm{T}}{e_i} + } s_i^{\rm{T}}{s_i} + {\tilde d_i^{\rm{T}}}{\tilde d_i})= \\ & \sum\limits_{i = 2}^5 {({V_{i1}} + {V_{i2}})} \end{split} $$ (69)

    对$ {V_6} $求导可得

    $$ \begin{equation} {\dot V_6} = \sum\limits_{i = 2}^5 {{{\dot V}_{i1}}} + \sum\limits_{i = 2}^5 {{{\dot V}_{i2}}} \end{equation} $$ (70)

    由式(65)及姿态角、角速率控制与进近动力补偿系统的稳定性证明第2阶段可知

    $$ \begin{split} \sum\limits_{i = 2}^5 {{{\dot V}_{i1}}} \le\;& \sum\limits_{i = 2}^5 {\frac{{ - \pi ({{\bar n}_{{m_1}}}V_{i1}^{1 - \frac{{{\eta _j}}}{2}} + {{\bar n}_{{m_2}}}V_{i1}^{1 + \frac{{{\eta _j}}}{2}})}}{{{\eta _j}{T_{cj}}\sqrt {{{\bar n}_{{m_1}}}{{\bar n}_{{m_2}}}} }}} \\ &(j = 3,\;5,\;7,\;9) \end{split} $$ (71)

    式中, $ {\bar n_{{m_i}}}(i = 1,\;2) = {\rm{Max}}({n_{\chi i}},\;{n_{\theta i}},\;{n_{pi}},\;{n_{\alpha i}}) $.

    由式(68)及姿态角、角速率控制与进近动力补偿系统的稳定性证明第1阶段可知

    $$ \begin{split} \sum\limits_{i = 2}^5 {{{\dot V}_{i2}}} \le\;& \sum\limits_{i = 2}^5 {\frac{{ - \pi ({{\bar n}_{{m_3}}}V_{i2}^{1 - \frac{{{\eta _j}}}{2}} + {{\bar n}_{{m_4}}}V_{i2}^{1 + \frac{{{\eta _j}}}{2}})}}{{{\eta _j}{T_{cj}}\sqrt {{{\bar n}_{{m_3}}}{{\bar n}_{{m_4}}}} }} + {\varepsilon _M}} \\ &(j = 4,\;6,\;8,\;10) \\[-1pt]\end{split} $$ (72)

    式中, $ {\bar n_{{m_i}}}(i = 3,\;4) = {\rm{Max}}({n_{\chi i}},\;{n_{\theta i}},\;{n_{pi}},\;{n_{\alpha i}}) $, $ {\varepsilon _M} $为有界值, $ {\varepsilon _M} = \sum\nolimits_{i = 2}^5 {{\varepsilon _{{V_{i2}}}}} $.

    取$ \bar \eta = {\rm{Max}}({\eta _i})\;(i = 3,\; \cdots ,\;10) $, 可得

    $$ \begin{equation} {\dot V_6} \le \frac{{ -\pi }}{{\bar \eta {T_c}\sqrt {{{\bar n}_{{{\bar m}_1}}}{{\bar n}_{{{\bar m}_2}}}} }}({\bar n_{{{\bar m}_1}}}V_6^{1 - \frac{{\bar \eta }}{2}} + {\bar n_{{{\bar m}_2}}}V_6^{1 + \frac{{\bar \eta }}{2}}) + {\varepsilon _M} \end{equation} $$ (73)

    式中, $ {\bar n_{{{\bar m}_1}}},\;{\bar n_{{{\bar m}_2}}} = {\rm{Max(}}{\bar n_{{m_i}}})(i = 1,\;2,\;3,\;4) $为参数的最大值和次大值.

    由引理1可知, 李雅普诺夫函数(69)中的信号一致终值有界, 且姿态控制系统与进近动力补偿系统误差在预定义时间内收敛.  

    注释2. 根据李雅普诺夫稳定性定理, 需要选择低通滤波器设计参数$ {\kappa _i}(i = 0,\;1,\;2,\;3) $均大于零或正定; 控制器设计参数$ {n_{ij}}(i = \chi ,\;\theta ,\;p,\;\alpha ;\;j = 1,\;2, 3,\;4) $均大于零、$ {\eta _i}(i = 3,\; \cdots ,\;10) $均为(0, 1)之间的正常数; 设定对应的预定义时间常数$ {T_{ci}}(i = 3, \cdots ,\;10) $并选择扰动观测器调节参数$ {K_i}(i = 2,\;3,\; 4,\;\alpha ) $使得$ {\bar K_i}(i = 2,\;3,\;4,\;\alpha ) > 0 $. 在实际参数整定中, 首先给出时间常数$ {T_{ci}} $, 并初步给出参数$ {\kappa _i} $, $ {n_{ij}} $和$ {\eta _i} $使系统满足初始响应性能, 之后随参数$ {K_i} $一起调整以提高系统的跟踪精度.

    算例飞机为F/A-18A, 其模型参数和执行机构模型在文献[34]和[36]中给出. 航母速度设置为13.89 m/s, 着舰甲板与舰体中线的夹角为$ {9^ \circ } $. 算例飞机的初始状态设置为: 迎角$ {\alpha _0} = {8.2^ \circ } $, 高度$ {h_0} = 183 $ m, 速度$ {V_0} = 70 $ m/s.

    根据文献[6]和[37], 理想着舰点与航母舰体重心之间的三轴轴向距离$ {L_{TD}} $、$ {Y_{TD}} $和$ {G_{TD}} $分别为−90 m, −20 m和−5 m. 甲板运动中的线运动和角运动可采用如下传递函数描述

    $$ \begin{split} &{G_z}(s) = \frac{{1.16{s^2} + 0.0464s}}{{{s^4} + 0.38{s^3} + 0.4977{s^2} + 0.0836s + 0.0484}}\\ &{G_\theta }(s) = \frac{{0.3341{s^2}}}{{{s^4} + 0.604{s^3} + 0.7966{s^2} + 0.2063s + 0.1239}}\\ &{G_\phi }(s) = \frac{{0.2384{s^2}}}{{{s^4} + 0.2088{s^3} + 0.3976{s^2} + 0.0386s + 0.0342}} \end{split} $$ (74)
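    式(74)中的传递函数可写成能控标准型状态空间后数值积分, 以生成甲板运动样本. 下面以垂荡传递函数$ G_z(s) $为例给出一个实现示意(激励信号与积分步长为假设的演示值):

```python
import math

# 式(74)中垂荡传递函数 G_z(s) 的能控标准型实现示意:
#   x1' = x2, x2' = x3, x3' = x4, x4' = -a0*x1-a1*x2-a2*x3-a3*x4 + u
#   y   = b0*x1 + b1*x2 + b2*x3 + b3*x4
num = [0.0, 0.0464, 1.16, 0.0]            # 分子系数 b0..b3 (1.16 s^2 + 0.0464 s)
den = [0.0484, 0.0836, 0.4977, 0.38]      # 分母系数 a0..a3 (首一四次多项式)
x = [0.0, 0.0, 0.0, 0.0]
h, ys = 1e-3, []
for k in range(int(20.0 / h)):
    u = 0.2 * math.sin(0.6 * k * h)       # 示意性激励, 并非实际海况输入
    dx4 = -sum(a * xi for a, xi in zip(den, x)) + u
    x = [x[0] + h * x[1], x[1] + h * x[2], x[2] + h * x[3], x[3] + h * dx4]
    ys.append(sum(b * xi for b, xi in zip(num, x)))
bounded = max(abs(y) for y in ys) < 50.0  # 稳定传递函数, 输出应有界
```

    纵摇$ G_\theta(s) $与横摇$ G_\phi(s) $可按相同方式实现, 只需替换分子、分母系数.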

    使用LSTM神经网络对甲板运动进行预估, 对应的参数设置如下: 数据集为1 000 s的甲板线运动和角运动数据, 前900 s作为训练集, 后100 s作为测试集, 输入、输出维度分别为101和21, LSTM层数和单元数分别为2和100. 超前5 s预测的甲板运动如图3所示. 由图可知, 使用LSTM预估的甲板运动曲线与实际曲线基本吻合, 能够根据历史数据预测甲板运动未来的变化趋势.
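    正文所述输入、输出维度(101与21)对应的训练样本可由滑动窗口构造, 示意如下(函数与变量名为假设, 并非原文实现):

```python
# 甲板运动时序样本的滑动窗口构造示意: 每个样本以过去101个采样点为输入,
# 未来21个点为预测目标(维度取自正文; 函数名为假设).
def make_windows(series, n_in=101, n_out=21):
    X, Y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])                 # 历史窗口
        Y.append(series[i + n_in:i + n_in + n_out])  # 预测窗口
    return X, Y

X, Y = make_windows(list(range(200)))                # 演示序列
```

    将各窗口对送入LSTM网络训练, 即可实现正文所述的超前预测.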

    图 3  甲板运动实际值与预测值 ((a)垂荡; (b)纵摇; (c)横摇)
    Fig. 3  Deck motion estimation and actual value ((a) Heaving; (b) Pitching; (c) Rolling)

    仿真步长设置为0.01 s, 仿真周期为舰载机由初始高度下降至航母甲板高度的全过程. 仿真过程中, 飞机先平飞, 之后沿期望的航迹倾斜角$ {\gamma _r} = - {3.5^ \circ } $和迎角$ {\alpha _r} = {8^ \circ } $下滑着舰. 着舰引导系统参数设置为: $ {\kappa _0} = 3.73 $、$ {{\bar{\boldsymbol{K}}}_1} = {\rm{diag}}\{1.77,\;1.77\} $和$ {{\boldsymbol{K}}_1} = {\rm{diag}}\{3, 3\} $. 航迹方位角控制器参数设置为: $ {\kappa _1} = 4.2 $, $ {\eta _i} = 0.6(i = 3,\;4) $, $ {n_{\chi 1}} = 2 $, $ {n_{\chi 2}} = 3 $, $ {n_{\chi 3}} = 1.1 $, $ {n_{\chi 4}} = 0.44 $, $ {T_{c3}} = 4 $ s, $ {T_{c4}} = 2 $ s和$ {K_2} = 1.8 $. 姿态角控制器参数设置为: $ {n_{\theta 1}} = 5 $, $ {n_{\theta 2}} = 6.2 $, $ {n_{\theta 3}} = 3.1 $, $ {n_{\theta 4}} = 2.7 $, $ {T_{c5}} = 4 $ s, $ {T_{c6}} = 2 $ s, $ {\eta _i} = 0.8(i = 5,\;6) $, $ {{\boldsymbol{K}}_3} = {\rm{diag}}\{3, \;3,\;3\} $, $ {{\boldsymbol{\kappa }}_2} = {\rm{diag}}\{ {1.2,\;1.2,\;1.2} \} $. 角速率控制器参数设置为: $ {{\boldsymbol{\kappa }}_3} = {\rm{diag}}\{ {5.5,\;5.5,\;5.5} \} $, $ {n_{p1}} = 1.3 $, $ {n_{p2}} = 6 $, $ {n_{p3}} = 2.87 $, $ {n_{p4}} = 3.22 $, $ {T_{c7}} = 4 $ s, $ {T_{c8}} = 2 $ s, $ {\eta _i} = 0.4 (i = 7,\;8) $, $ {{\boldsymbol{K}}_4} = {\rm{diag}}\{7,\;7,\;7\} $. 进近动力补偿系统参数设置为: $ {n_{\alpha 1}} = 3.68 $, $ {n_{\alpha 2}} = 5.81 $, $ {n_{\alpha 3}} = 6.52 $, $ {n_{\alpha 4}} = 3.57 $, $ {T_{c9}} = 4 $ s, $ {T_{c10}} = 2 $ s, $ {K_5} = 1.6 $以及$ {\eta _i} = 0.9(i = 9,\;10) $.

    为验证设计方法的有效性, 在仿真中设置如下对比: 本文所提出的基于反步架构的预定义时间控制方法得到的着舰过程曲线记为“PT”, 文献[38]提出的有限时间控制方法得到的着舰过程曲线记为“LT”, 文献[39]提出的非线性动态逆方法得到的着舰过程曲线记为“NDI”. 着舰轨迹如图4所示, 着舰时的高度和侧偏距及偏差如图5 ~ 6所示.

    图 4  舰载机着舰轨迹
    Fig. 4  Landing trajectory of the carrier-based aircraft
    图 5  高度跟踪及其跟踪误差 ((a)指令跟踪; (b)跟踪误差)
    Fig. 5  Altitude tracking and tracking errors ((a) Command tracking; (b) Tracking error)
    图 6  侧偏距跟踪及其跟踪误差 ((a)指令跟踪; (b)跟踪误差)
    Fig. 6  Lateral performance tracking and tracking errors ((a) Command tracking; (b) Tracking error)

    图5 ~ 6显示, 采用三种方法均能使舰载机跟踪期望的着舰轨迹, 但在着舰精度和收敛速度上存在一定差异. 仿真开始时, 舰载机前向距离$ \Delta x = 0 $ m, 当舰载机降落在甲板上时, 相对移动距离为3256 m. 如图5(b)所示, “PT”方法与“LT”、“NDI”方法的最大高度跟踪误差分别为4.63 m、4.87 m和7.26 m; 着舰时的跟踪误差分别为0.05 m、0.53 m和1.74 m. 如图6(b)所示, “PT”方法与“LT”、“NDI”方法的最大侧偏距跟踪误差分别为5.52 m、6.49 m和7.39 m; 着舰时的跟踪误差分别为0.013 m、0.26 m和0.78 m. 本文所提出的“PT”方法能够在机舰相对距离为628 m时收敛并在0.1 m的范围内波动, 能够显著抑制舰尾流和甲板运动扰动的影响.

    图7为舰载机着舰时不同方法的迎角与侧滑角. 由图7(a)可知, 当舰载机从平飞阶段进入下滑阶段时, 其迎角迅速下降并产生一定振荡, 随后返回配平值并保持平稳. 在着舰阶段受舰尾流和甲板运动等扰动影响, 存在较小的幅值波动. “PT”方法与“LT”、“NDI”方法的最大迎角波动值分别为1.75°、2.76°和4.63°. 由图7(b)可知, 舰载机需要不断调整其侧向位置, 在初始时存在较大的侧滑波动, 随后保持在0值附近. “PT”方法与“LT”、“NDI”方法的最大侧滑角波动分别为0.58°、1.37°和1.7°. “PT”方法在飞机前向飞行至639 m时保持在0值附近, 稳定时间为4.84 s, 在设定的$ {T_c} = 6 $ s内. 在着舰过程中迎角和侧滑角保持更加平稳.

    图 7  不同方法的迎角与侧滑角 ((a)迎角; (b)侧滑角)
    Fig. 7  Angle of attack and sideslip of different methods ((a) Angle of attack; (b) Sideslip angle)

    图8为舰载机着舰时不同方法的姿态变化. 在初始阶段, 航迹滚转角迅速增加以减小舰载机与理想着舰轨迹的侧偏距, 航迹方位角同时增加. 侧偏距偏差基本消除时, 航迹滚转角减小至0值附近, 航迹方位角保持稳定. 俯仰角在平飞阶段保持不变, 在下降阶段下降至5.3°附近. 由图8可知, “PT”方法在飞机前向飞行至628 m时保持在0值附近, 稳定时间为4.69 s, 在设定的$ {T_c} = 6 $ s内, 着舰过程中姿态角更加稳定且具有更强的抗干扰性.

    图 8  不同方法的航迹滚转角, 俯仰角与航迹方位角 ((a)航迹滚转角; (b)俯仰角; (c)航迹方位角)
    Fig. 8  Roll, pitch and heading angle of different methods ((a) Roll angle; (b) Pitch angle; (c) Heading angle)

    图9为执行机构偏转曲线. 三种方法着舰过程的升降舵、副翼和方向舵均处于合理范围内, 且本文所提的“PT”方法舵偏更加平缓. 图10和图11为扰动观测器观测值与实际飞行状态扰动对比, 子图(a)、(b)和(c)分别为舰尾流引起的舰载机迎角、侧滑角和航迹滚转角的扰动实际值与观测值. 由图10和图11可知, 在扰动观测器作用下能够实现集总扰动的准确估计, 提升着舰过程的轨迹跟踪精度.

    图 9  执行机构偏转 ((a)升降舵; (b)副翼; (c)方向舵)
    Fig. 9  Actuators deflection ((a) Elevator; (b) Aileron; (c) Rudder)
    图 10  不同状态扰动实际值与干扰观测器观测值对比 ((a)迎角; (b)侧滑角; (c)航迹滚转角)
    Fig. 10  Different states actual disturbance values and disturbance observe ((a) Angle of attack; (b) Sideslip angle; (c) Roll angle)
    图 11  观测误差 ((a)迎角; (b)侧滑角; (c)航迹滚转角)
    Fig. 11  Disturbance observe errors ((a) Angle of attack; (b) Sideslip angle; (c) Roll angle)

    为进一步验证该方法的有效性, 采用如下装置组成半实物仿真环境: 1) IPC-610-L工控计算机, 实现动力学和运动学模型; 2) PX7飞控板, 搭载设计的控制律; 3) 状态显示计算机, 作为上位机显示飞机实景; 4) 网线和串口模块, 实现UDP和RS232串口通信和数值传输. 图12为半实物仿真实验平台架构, 图13是实验设备. 半实物仿真与数字仿真参数设置相同, 考虑实际工况中的信号传递损失和量测噪声, 在机舰相对距离量测中加入均值为0, 方差为$ 5\;{\rm{m}^2} $的高斯白噪声.

    图 12  半实物仿真实验平台
    Fig. 12  Hardware-in-loop simulation platform
    图 13  实验设备
    Fig. 13  Experimental equipment

    图14为半实物仿真实验下高度和侧偏距跟踪误差. 由图可知, 在量测噪声的影响下, 三种方法均能实现着舰轨迹跟踪控制且本文所提的“PT”方法误差最小. 与数字仿真相比, 由于量测噪声的存在, 跟踪误差不断波动, 但仍处于合理范围内. 改变甲板运动和舰尾流的初始相位和振幅, 利用蒙特卡洛模拟进行验证, 三种方法的着舰点分布图如图15所示. 可以看出, 本文所提的“PT”方法着舰点大都处于半径为0.5 m的圆形着舰边界范围内, 散布小于其他两种方法. 可见该方法保证了不同干扰条件下着舰轨迹跟踪误差总体最小, 提高了着舰成功率.
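    图15中按0.5 m圆形着舰边界统计着舰点的过程可示意如下(着舰点为合成的演示数据, 并非仿真结果):

```python
import math, random

# 0.5 m 圆形着舰边界的统计示意: 计算着舰点落入边界内的比例.
def inside_ratio(points, r=0.5):
    return sum(math.hypot(x, y) <= r for x, y in points) / len(points)

random.seed(0)
pts = [(random.gauss(0.0, 0.2), random.gauss(0.0, 0.2)) for _ in range(1000)]
ratio = inside_ratio(pts)                 # 演示数据下的边界内比例
```

    蒙特卡洛验证即在不同初始相位和振幅下重复仿真, 统计各方法着舰点的边界内比例.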

    图 14  半实物仿真实验下高度和侧偏距跟踪偏差 ((a)高度跟踪误差; (b)侧偏距跟踪误差)
    Fig. 14  Height and lateral movement tracking errors during hardware-in-loop simulation ((a) Height tracking errors; (b) Lateral movement tracking errors)
    图 15  着舰跟踪误差
    Fig. 15  Path following error at touchdown

    综上所述, 在舰尾流和甲板运动等扰动作用下, 所提出的基于反步架构的预定义时间控制策略能够在指定时间内跟踪期望的着舰控制指令, 扰动观测器准确估计集总扰动并进行补偿, 实现舰载机着舰轨迹的快速准确跟踪.

    本文针对F/A-18A舰载机模型, 考虑舰尾流和甲板运动等复杂扰动, 进行基于预定义时间的自适应抗干扰着舰控制方法研究, 主要的研究内容总结如下:

    1) 建立舰载机着舰引导控制系统, 将着舰轨迹跟踪任务分解并通过轨迹生成、引导、控制与进近动力补偿等子系统完成.

    2) 考虑甲板运动对理想着舰点的变动影响, 通过LSTM神经网络实现甲板运动预估, 并在相对运动模型解算中予以修正. 借助非线性扰动观测器实现集总扰动估计, 并在控制器设计中进行前馈补偿. 结合反步架构提出一种基于预定义时间的自适应着舰控制策略.

    3) 通过李雅普诺夫定理对系统稳定性进行分析, 证明系统能够在指定时间内收敛. 数字仿真和半实物仿真结果表明所提方法能够在舰尾流和甲板运动等扰动影响下, 消除高度和侧偏距偏差并在指定时间内使得航迹方位角、姿态角和角速率信号保持稳定, 实现快速准确的着舰轨迹跟踪控制.

  • 图  1  视频超分辨率重建数据集REDS (左)和Vimeo-90K (右)示例

    Fig.  1  Examples of video super-resolution datasets from REDS (left) and Vimeo-90K (right)

    图  2  部分VSR模型在REDS数据集的可视化比较结果

    Fig.  2  Visual comparison results of VSR methods on REDS dataset

    图  3  部分VSR模型在Vid4数据集的可视化比较结果

    Fig.  3  Visual comparison results of VSR methods on Vid4 dataset

    图  4  本文的结构图

    Fig.  4  Architecture of the paper

    图  5  基于深度学习的视频超分辨率重建时间脉络图

    Fig.  5  Timeline of video super-resolution based on deep learning

    图  6  VSRNet结构图

    Fig.  6  Architecture of VSRNet

    图  7  VESPCN结构图

    Fig.  7  Architecture of VESPCN

    图  8  SOFVSR结构图

    Fig.  8  Architecture of SOFVSR

    图  9  TOFlow结构图

    Fig.  9  Architecture of TOFlow

    图  10  DUF结构

    Fig.  10  Architecture of DUF

    图  11  FSTRN结构图

    Fig.  11  Architecture of FSTRN

    图  12  TDAN结构图

    Fig.  12  Architecture of TDAN

    图  13  EDVR结构图

    Fig.  13  Architecture of EDVR

    图  14  TGA结构图

    Fig.  14  Architecture of TGA

    图  15  MuCAN结构图

    Fig.  15  Architecture of MuCAN

    图  16  MANA结构图

    Fig.  16  Architecture of MANA

    图  17  IAM结构图

    Fig.  17  Architecture of IAM

    图  18  VSR Transformer结构图

    Fig.  18  Architecture of VSR Transformer

    图  19  VRT结构图

    Fig.  19  Architecture of VRT

    图  20  DRVSR结构图

    Fig.  20  Architecture of DRVSR

    图  21  FRVSR结构图

    Fig.  21  Architecture of FRVSR

    图  22  RBPN结构图

    Fig.  22  Architecture of RBPN

    图  23  RLSP结构图

    Fig.  23  Architecture of RLSP

    图  24  RSDN结构图

    Fig.  24  Architecture of RSDN

    图  25  RRN结构图

    Fig.  25  Architecture of RRN

    图  26  DAP结构图

    Fig.  26  Architecture of DAP

    图  27  ETDM结构图

    Fig.  27  Architecture of ETDM

    图  28  TMP结构图

    Fig.  28  Architecture of TMP

    图  29  BRCN结构图

    Fig.  29  Architecture of BRCN

    图  30  RRCN结构图

    Fig.  30  Architecture of RRCN

    图  31  PFNL结构图和PFRB细节图

    Fig.  31  Architecture of PFNL and Detail of PFRB

    图  32  RISTN结构图

    Fig.  32  Architecture of RISTN

    图  33  LOVSR(左)和GOVSR(右)结构图

    Fig.  33  Architectures of LOVSR (left) and GOVSR (right)

    图  34  BasicVSR(左)和ICONVSR(右)结构图

    Fig.  34  Architectures of BasicVSR (left) and ICONVSR (right)

    图  35  TTVSR结构图

    Fig.  35  Architecture of TTVSR

    图  36  CTVSR结构图

    Fig.  36  Architecture of CTVSR

    图  37  RefVSR结构图

    Fig.  37  Architecture of RefVSR

    图  38  C2-Matching结构图

    Fig.  38  Architecture of C2-Matching

    图  39  RealBasicVSR结构图

    Fig.  39  Architecture of RealBasicVSR

    图  40  FTVSR结构图

    Fig.  40  Architecture of FTVSR

    图  41  BasicVSR++ 结构图

    Fig.  41  Architecture of BasicVSR++

    图  42  PSRT结构图

    Fig.  42  Architecture of PSRT

    图  43  IART结构图

    Fig.  43  Architecture of IART

    图  44  MFPI结构图

    Fig.  44  Architecture of MFPI

    图  45  DFVSR结构图

    Fig.  45  Architecture of DFVSR

    图  46  MIA-VSR结构图

    Fig.  46  Architecture of MIA-VSR

    图  47  RVRT结构图

    Fig.  47  Architecture of RVRT

    图  48  TecoGAN结构图

    Fig.  48  Architecture of TecoGAN

    图  49  StableVSR结构图

    Fig.  49  Architecture of StableVSR

    图  50  MGLD结构图

    Fig.  50  Architecture of MGLD

    图  51  Upscale-A-Video结构图

    Fig.  51  Architecture of Upscale-A-Video

    图  52  不同帧间对齐模式示意图

    Fig.  52  Illustration of different inter-frame alignment

    图  53  基于光流的显式运动对齐

    Fig.  53  Explicit alignment based on optical flow

    图  54  基于可变形卷积的对齐

    Fig.  54  Deformable convolution-based alignment

    图  55  光流引导的可变形对齐和光流引导的可变形注意力

    Fig.  55  Flow-guided deformable alignment and flow-guided deformable attention

    图  56  基于3D卷积的帧间对齐

    Fig.  56  Inter-frame alignment based on 3D convolution

    表  1  基于深度学习的视频超分辨率重建数据集

    Table  1  Datasets of video super-resolution based on deep learning

    | 组别 | 数据集 | 类型 | 视频数量 | 帧数 | 分辨率 | 颜色空间 |
    | --- | --- | --- | --- | --- | --- | --- |
    | 合成数据集 | YUV25[15] | 训练集 | 25 | − | 386$ \times $288 | YUV |
    | | TDTFF[16] (Turbine) | 测试集 | 5 | − | 648$ \times $528 | YUV |
    | | TDTFF[16] (Dancing) | | | − | 950$ \times $530 | |
    | | TDTFF[16] (Treadmill) | | | − | 700$ \times $600 | |
    | | TDTFF[16] (Flag) | | | − | 1000$ \times $580 | |
    | | TDTFF[16] (Fan) | | | − | 990$ \times $740 | |
    | | Vid4[13] (Foliage) | 测试集 | 4 | 49 | 720$ \times $480 | RGB |
    | | Vid4[13] (Walk) | | | 47 | 720$ \times $480 | |
    | | Vid4[13] (Calendar) | | | 41 | 720$ \times $576 | |
    | | Vid4[13] (City) | | | 34 | 704$ \times $576 | |
    | | YUV21[17] | 测试集 | 21 | 100 | 352$ \times $288 | YUV |
    | | Venice[18] | 训练集 | 1 | 1 077 | 3 840$ \times $2 160 | RGB |
    | | Myanmar[19] | 训练集 | 1 | 527 | 3 840$ \times $2 160 | RGB |
    | | CDVL[20] | 训练集 | 100 | 30 | 1 920$ \times $1 080 | RGB |
    | | UVGD[21] | 测试集 | 16 | − | 3 840$ \times $2 160 | YUV |
    | | LMT[22] | 训练集 | 26 | − | 1 920$ \times $1 080 | YCbCr |
    | | SPMCS[23] | 训练集和测试集 | 975 | 31 | 960$ \times $540 | RGB |
    | | MM542[24] | 训练集 | 542 | 32 | 1 280$ \times $720 | RGB |
    | | UDM10[25] | 测试集 | 10 | 32 | 1 272$ \times $720 | RGB |
    | | Vimeo-90K[12] | 训练集和测试集 | 91 701 | 7 | 448$ \times $256 | RGB |
    | | REDS[14] | 训练集和测试集 | 270 | 100 | 1 280$ \times $720 | RGB |
    | | Parkour[26] | 测试集 | 14 | − | 960$ \times $540 | RGB |
    | 真实数据集 | RealVSR[27] | 训练集和测试集 | 500 | 50 | 1 024$ \times $512 | RGB/YCbCr |
    | | VideoLQ[28] | 测试集 | 50 | 100 | 1 024$ \times $512 | RGB |
    | | RealMCVSR[29] | 训练集和测试集 | 161 | − | 1 920$ \times $1 080 | RGB |
    | | MVSR4$ \times $[30] | 训练集和测试集 | 300 | 100 | 1 920$ \times $1 080 | RGB |
    | | DTVIT[31] | 训练集和测试集 | 196 | 100 | 1 920$ \times $1 080 | RGB |
    | | YouHQ[32] | 训练集和测试集 | 38 616 | 32 | 1 920$ \times $1 080 | RGB |
    下载: 导出CSV

    表  2  对双三次插值下采样后的视频进行VSR的性能对比结果

    Table  2  Performance comparison of video super-resolution algorithm with bicubic downsampling

    对比方法 训练帧数 参数量(M) 双三次插值下采样
    REDS (RGB通道) Vimeo-90K-T (Y通道) Vid4 (Y通道)
    Bicubic 26.14/0.7292 31.32/0.8684 23.78/0.6347
    VSRNet[40] 0.27 −/− −/− 22.81/0.6500
    VSRResFeatGAN[41] −/− −/− 24.50/0.7023
    VESPCN[42] −/− −/− 25.35/0.7577
    VSRResNet[41] −/− −/− 25.51/0.7530
    SPMC[23] 2.17 −/− −/− 25.52/0.7600
    3DSRNet[43] −/− −/− 25.71/0.7588
    RRCN[44] −/− −/− 25.86/0.7591
    TOFlow[12] 5/7 1.41 27.98/0.7990 33.08/0.9054 25.89/0.7651
    STARNet[45] 111.61 −/− 30.83/0.9290 −/−
    MEMC-Net[46] −/− 33.47/0.9470 24.37/0.8380
    STMN[47] −/− −/− 25.90/0.7878
    SOFVSR[48] 1.71 −/− −/− 26.01/0.7710
    RISTN[49] 3.67 −/− −/− 26.13/0.7920
    MMCNN[24] 10.58 −/− −/− 26.28/0.7844
    RTVSR[50] 15.00 −/− −/− 26.36/0.7900
    TDAN[51] 1.97 −/− −/− 26.42/0.7890
    D3DNet[52] −/7 2.58 −/− 35.65/0.9330 26.52/0.7990
    FFCVSR[53] −/− −/− 26.97/0.8300
    EVSRNet[54] 27.85/0.8000 −/− −/−
    StableVSR[55] 27.97/0.8000 −/− −/−
    DUF[56] 7/7 5.8 28.63/0.8251 −/− 27.33/0.8319
    PFNL[57] 7/7 3 29.63/0.8502 36.14/0.9363 26.73/0.8029
    DNSTNet[58] −/− 36.86/0.9387 27.21/0.8220
    RBPN[59] 7/7 12.2 30.09/0.8590 37.07/0.9435 27.12/0.8180
    DSMC[60] 11.58 30.29/0.8381 −/− 27.29/0.8403
    Boosted EDVR[31] 30.53/0.8699 −/− −/−
    TMP[61] 3.1 30.67/0.8710 −/− 27.10/0.8167
    MuCAN[62] 5/7 30.88/0.8750 37.32/0.9465 −/−
    MSFFN[63] −/− 37.33/0.9467 27.23/0.8218
    DAP[64] 15/5 30.59/0.8703 −/− −/−
    MultiBoot VSR[65] 60.86 31.00/0.8822 −/− −/−
    SSL-bi[66] 15/14 1.0 31.06/0.8933 36.82/0.9419 27.15/0.8208
    EDVR[67] 5/7 20.6 31.09/0.8800 37.61/0.9489 27.35/0.8264
    RLSP[68] 4.2 −/− 37.39/0.9470 27.15/0.8202
    TGA[69] 5.8 −/− 37.43/0.9480 27.19/0.8213
    KSNet-bi[70] 3.0 31.14/0.8862 37.54/0.9503 27.22/0.8245
    VSR-T[71] 5/7 32.6 31.19/0.8815 37.71/0.9494 27.36/0.8258
    PSRT-sliding[72] 5/− 14.8 31.32/0.8834 −/− −/−
    SeeClear[73] 5/5 229.23 31.32/0.8856 37.64/0.9503 27.80/0.8404
    DPR[74] 6.3 31.38/0.8907 37.11/0.9446 27.19/0.8243
    BasicVSR[75] 15/14 6.3 31.42/0.8909 37.18/0.9450 27.24/0.8251
    Boosted BasicVSR[31] 31.42/0.8917 −/− −/−
    SATeCo[76] 6/6 31.62/0.8932 −/− 27.44/0.8420
    IconVSR[75] 15/14 8.7 31.67/0.8948 37.47/0.9476 27.39/0.8279
    ICNet[77] 18.34 31.71/0.8963 37.72/0.9477 27.43/0.8287
    MSHPFNL[78] 7.77 −/− 36.75/0.9406 27.70/0.8472
    PA[79] 5/7 38.2 32.05/0.8941 −/− 28.02/0.8373
    FTVSR[80] 10.8 31.82/0.8960 −/− −/−
    $ C^2 $-Matching[81] 32.05/0.9010 −/− 28.87/0.8960
    ETDM[82] 8.4 32.15/0.9024 −/− −/−
    BasicVSR++[83] 30/14 7.3 32.39/0.9069 37.79/0.9500 27.79/0.8400
    RTA[84] 5/7 17 31.30/0.8850 37.84/0.9498 27.90/0.8380
    Semantic Lens[85] 5/− 31.42/0.8881 −/− −/−
    TCNet[86] 9.6 31.82/0.9002 37.94/0.9514 27.48/0.8380
    TTVSR[87] 50/− 6.8 32.12/0.9021 −/− −/−
    VRT[88] 16/7 35.6 32.19/0.9006 38.20/0.9530 27.93/0.8425
    CTVSR[89] 16/14 34.5 32.28/0.9047 −/− 28.03/0.8487
    FTVSR++[90] 10.8 32.42/0.9070 −/− −/−
    LGDFNet-BPP[91] 9.0 32.53/0.9007 −/− 27.99/0.8409
    PP-MSVSR-L[92] 7.4 32.53/0.9083 −/− −/−
    CFD-BasicVSR++[127] 30/7 7.5 32.51/0.9083 37.90/0.9504 27.84/0.8406
    RVRT[93] 30/14 10.8 32.75/0.9113 38.15/0.9527 27.99/0.8426
    DFVSR[94] 7.1 32.76/0.9081 38.25/0.9556 27.92/0.8427
    PSRT-recurrent[72] 16/14 13.4 32.72/0.9106 38.27/0.9536 28.07/0.8485
    MFPI[95] −/− 7.3 32.81/0.9106 38.28/0.9534 28.11/0.8481
    EvTexture[96] 15/− 8.9 32.79/0.9174 38.23/0.9544 29.51/0.8909
    MIA-VSR[97] 16/14 16.5 32.78/0.9220 38.22/0.9532 28.20/0.8507
    CFD-PSRT[127] 30/7 13.6 32.83/0.9140 38.33/0.9548 28.18/0.8503
    IART[98] 16/7 13.4 32.90/0.9138 38.14/0.9528 28.26/0.8517
    EvTexture+[96] 15/− 10.1 32.93/0.9195 38.32/0.9558 29.78/0.8983
    下载: 导出CSV

    表  3  对高斯模糊下采样后的视频进行VSR的性能对比结果

    Table  3  Performance comparison of video super-resolution algorithm with blur downsampling

    对比方法 训练帧数 参数量(M) 高斯模糊下采样
    UDM10 (Y通道) Vimeo-90K-T (Y通道) Vid4 (Y通道)
    Bicubic 28.47/0.8253 31.30/0.8687 21.80/0.5246
    BRCN[99] −/− −/− 24.43/0.6334
    ToFNet[12] 5/7 1.41 36.26/0.9438 34.62/0.9212 25.85/0.7659
    TecoGAN[100] 3.00 −/− −/− 25.89/−
    SOFVSR[48] 1.71 −/− −/− 26.19/0.7850
    RRN[101] 3.4 38.96/0.9644 −/− 27.69/0.8488
    TDAN[51] 1.97 −/− −/− 26.86/0.8140
    FRVSR[102] 5.1 −/− −/− 26.69/0.8220
    DUF[56] 7/7 5.8 38.48/0.9605 36.87/0.9447 27.38/0.8329
    RLSP[68] 4.2 38.48/0.9606 36.49/0.9403 27.48/0.8388
    PFNL[57] 7/7 3 38.74/0.9627 −/− 27.16/0.8355
    RBPN[59] 7/7 12.2 38.66/0.9596 37.20/0.9458 27.17/0.8205
    TMP[61] 3.1 −/− 37.33/0.9481 27.61/0.8428
    TGA[69] 5.8 38.74/0.9627 37.59/0.9516 27.63/0.8423
    SSL-bi[66] 15/14 1.0 39.35/0.9665 37.06/0.9458 27.56/0.8431
    RSDN[103] 6.19 −/− 37.23/0.9471 27.02/0.8505
    DAP[64] 15/5 39.50/0.9664 37.25/0.9472 −/−
    SeeClear[73] 5/5 229.23 39.72/0.9675 −/− −/−
    EDVR[67] 5/7 20.6 39.89/0.9686 37.81/0.9523 27.85/0.8503
    DPR[74] 6.3 39.72/0.9684 37.24/0.9461 27.89/0.8539
    BasicVSR[75] 15/14 6.3 39.96/0.9694 37.53/0.9498 27.96/0.8553
    IconVSR[75] 15/14 8.7 40.03/0.9694 37.84/0.9524 28.04/0.8570
    R2D2[104] 8.25 39.53/0.9670 −/− 28.13/0.9244
    FTVSR[80] 10.8 −/− −/− 28.31/0.8600
    FDAN[105] 39.91/0.9686 37.75/0.9522 27.88/0.8508
    PP-MSVSR[92] 1.45 40.06/0.9699 37.54/0.9499 28.13/0.8604
    GOVSR[106] 40.14/0.9713 37.63/0.9503 28.41/0.8724
    ETDM[82] 8.4 40.11/0.9707 −/− 28.81/0.8725
    TTVSR[87] 50/− 6.8 40.41/0.9712 37.92/0.9526 28.40/0.8643
    BasicVSR++[83] 30/14 7.3 40.72/0.9722 38.21/0.9550 29.04/0.8753
    CFD-BasicVSR++[127] 30/7 7.5 40.77/0.9726 38.36/0.9557 29.14/0.8760
    TCNet[86] 9.6 −/− −/− 28.44/0.8730
    VRT[88] 16/7 35.6 41.05/0.9737 38.72/0.9584 29.42/0.8795
    CTVSR[89] 16/14 34.5 41.20/0.9740 38.83/0.9580 29.28/0.8811
    FTVSR++[90] 10.8 −/− −/− 28.80/0.8680
    LGDFNet-BPP[91] 9.0 40.81/0.9756 −/− 29.39/0.8798
    RVRT[93] 30/14 10.8 40.90/0.9729 38.59/0.9576 29.54/0.8810
    DFVSR[94] 7.1 40.97/0.9733 38.51/0.9571 29.56/0.8983
    MFPI[95] −/− 7.3 41.08/0.9741 38.70/0.9579 29.34/0.8781
    下载: 导出CSV

    表  4  真实场景下的VSR性能对比结果

    Table  4  Performance comparison of real-world video super-resolution algorithm

    对比方法 推理帧数 RealVSR MVSR $ 4\times $
    PSNR/SSIM/LPIPS PSNR/SSIM/LPIPS
    RSDN[103] 之前帧 23.91/0.7743/0.224 23.15/0.7533/0.279
    FSTRN[107] 7 23.36/0.7683/0.240 22.66/0.7433/0.315
    TOF[12] 7 23.62/0.7739/0.220 22.80/0.7502/0.279
    TDAN[51] 7 23.71/0.7737/0.229 23.07/0.7492/0.282
    EDVR[67] 7 23.96/0.7781/0.216 23.51/0.7611/0.268
    BasicVSR[75] 所有帧 24.00/0.7801/0.209 23.38/0.7594/0.270
    MANA[108] 所有帧 23.89/0.7781/0.224 23.15/0.7513/0.285
    TTVSR[87] 所有帧 24.08/0.7837/0.213 23.60/0.7686/0.277
    ETDM[82] 所有帧 24.13/0.7896/0.206 23.61/0.7662/0.260
    BasicVSR++[83] 所有帧 24.24/0.7933/0.216 23.70/0.7713/0.263
    RealBasicVSR[28] 所有帧 23.74/0.7676/0.174 23.15/0.7603/0.202
    EAVSR[30] 所有帧 24.20/0.7862/0.208 23.61/0.7618/0.264
    EAVSR+[30] 所有帧 24.41/0.7953/0.212 23.94/0.7726/0.259
    EAVSRGAN+[30] 所有帧 23.99/0.7726/0.170 23.35/0.7611/0.199
    下载: 导出CSV
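    表2 ~ 4中的PSNR遵循标准定义$ {\rm{PSNR}} = 10\lg (255^2/{\rm{MSE}}) $(8位图像), Y通道结果仅在亮度分量上计算. 一个最小计算示意如下:

```python
import math

# PSNR 的最小计算示意: 对齐长的像素序列按 MSE 计算, peak 为像素最大值.
def psnr(ref, out, peak=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(ref, out)) / len(ref)
    return float('inf') if mse == 0 else 10.0 * math.log10(peak * peak / mse)
```

    实际评测中, 通常先将RGB帧转换到YCbCr空间, 再在Y通道上按上述定义计算.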

    表  5  不同帧间对齐方式的性能和参数比较

    Table  5  Performance and parameter comparisons of different inter-frame alignment

    | 对齐方式 | 参数量(M) | 插值方法 | 光流(GT) | 光流(SpyNet) |
    | --- | --- | --- | --- | --- |
    | 显式对齐(光流) | 1.35 | 最近邻插值 | 31.84 | 31.78 |
    | | | 双线性插值 | 31.92 | 31.85 |
    | | | 双三次插值 | 31.93 | 31.89 |
    | 混合对齐(光流引导的可变形卷积) | 1.60 | 双线性插值 | 32.08 | 31.98 |
    | 混合对齐(光流引导的可变形注意力) | 1.56 | 双线性插值 | 32.03 | 31.94 |
    | 混合对齐(光流引导的图像块对齐) | 1.35 | 最近邻插值 | 31.81 | 31.82 |
    | 混合对齐(光流引导的隐式对齐) | 1.36 | 基于注意力的隐式插值 | 32.14 | 32.05 |
    下载: 导出CSV

    表  6  GeForce RTX 3090平台下VSR的性能和推理时间对比结果

    Table  6  Performance and inference time comparisons of VSR algorithm on GeForce RTX 3090 platform

    | 对比方法 | 参数量(M) | 推理时间(ms) | 对齐方式 | REDS (RGB通道, 双三次) | Vimeo-90K-T (Y通道, 双三次) | Vid4 (Y通道, 双三次) | Vimeo-90K-T (Y通道, 高斯模糊) | Vid4 (Y通道, 高斯模糊) | UDM10 (Y通道, 高斯模糊) |
    | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
    | Bicubic | − | <1 | − | 26.23/0.7319 | 31.32/0.8684 | 23.78/0.6374 | 31.30/0.8687 | 21.80/0.5346 | 28.47/0.8253 |
    | TOFlow[12] | 1.41 | 250 | 显式 | 27.96/0.7981 | 33.08/0.9054 | 25.89/0.7651 | 34.62/0.9212 | 25.85/0.7659 | 36.26/0.9438 |
    | DUF[56] | 5.8 | 737.5 | 无需 | 28.63/0.8251 | −/− | 27.33/0.8319 | 36.87/0.9447 | 27.38/0.8329 | 38.48/0.9605 |
    | EDVR[67] | 20.6 | 188.2 | 隐式 | 31.09/0.8800 | 37.61/0.9489 | 27.35/0.8264 | 37.81/0.9523 | 27.85/0.8503 | 39.89/0.9686 |
    | TMP[61] | 3.1 | 31.5 | 隐式 | 30.67/0.8710 | −/− | 27.10/0.8167 | 37.33/0.9481 | 27.61/0.8428 | −/− |
    | BasicVSR[75] | 6.3 | 45.4 | 显式 | 31.42/0.8909 | 37.18/0.9450 | 27.24/0.8251 | 37.53/0.9498 | 27.96/0.8553 | 39.96/0.9694 |
    | ICONVSR[75] | 8.7 | 58.4 | 显式 | 31.67/0.8948 | 37.47/0.9476 | 27.39/0.8279 | 37.84/0.9524 | 28.04/0.8570 | 40.03/0.9694 |
    | TTVSR[87] | 6.8 | 123.3 | 混合 | 32.12/0.9021 | −/− | −/− | 37.92/0.9526 | 28.40/0.8643 | 40.41/0.9712 |
    | VRT[88] | 35.6 | 1679 | 混合 | 32.17/0.9002 | 38.20/0.9530 | 27.93/0.8425 | 38.72/0.9584 | 29.37/0.8792 | 41.04/0.9737 |
    | BasicVSR++[83] | 7.3 | 60.2 | 混合 | 32.39/0.9069 | 37.79/0.9500 | 27.79/0.8400 | 38.21/0.9550 | 29.04/0.8753 | 40.72/0.9722 |
    | PSRT[72] | 13.4 | 1280.2 | 混合 | 32.72/0.9106 | 38.27/0.9536 | 28.07/0.8485 | −/− | −/− | −/− |
    | MIA-VSR[97] | 16.5 | 1194.6 | 无需 | 32.78/0.9220 | 38.22/0.9532 | 28.20/0.8507 | −/− | −/− | −/− |
    下载: 导出CSV
  • [1] Wan Z, Zhang B, Chen D, et al. Bringing old films back to life. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 17694−17703
    [2] Li G, Ji J, Qin M, et al. Towards high-quality and efficient video super-resolution via spatial-temporal data overfitting. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE, 2023. 10259−10269
    [3] Zhu H, Wei Y, Liang X, et al. CTP: Towards vision-language continual pretraining via compatible momentum contrast and topology preservation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Paris, France: IEEE, 2023. 22257−22267
    [4] Jiao S, Wei Y, Wang Y, et al. Learning mask-aware CLIP representations for zero-shot segmentation. In: Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS). New Orleans, USA: 2023. 35631−35653
    [5] Liu C, Sun D. On Bayesian adaptive video super resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(2): 346−360 doi: 10.1109/TPAMI.2013.127
    [6] Ma Z, Liao R, Tao X, et al. Handling motion blur in multi-frame super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, USA: IEEE, 2015. 5224−5232
    [7] Wu Y, Li F, Bai H, et al. Bridging component learning with degradation modelling for blind image super-resolution. IEEE Transactions on Multimedia, DOI: 10.1109/TMM.2022.3216115
    [8] 张帅勇, 刘美琴, 姚超, 林春雨, 赵耀. 分级特征反馈融合的深度图像超分辨率重建. 自动化学报, 2022, 48(4): 992−1003

    Zhang Shuai-Yong, Liu Mei-Qin, Yao Chao, Lin Chun-Yu, Zhao Yao. Hierarchical feature feedback network for depth super-resolution reconstruction. Acta Automatica Sinica, 2022, 48(4): 992−1003
    [9] Charbonnier P, Blanc-Feraud L, Aubert G, et al. Two deterministic half-quadratic regularization algorithms for computed imaging. In: Proceedings of 1st International Conference on Image Processing (ICIP). Austin, USA: IEEE, 1994. 168−172
    [10] Lai W S, Huang J B, Ahuja N, et al. Fast and accurate image super-resolution with deep laplacian pyramid networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 41(11): 2599−2613
    [11] Zha L, Yang Y, Lai Z, et al. A lightweight dense connected approach with attention on single image super-resolution. Electronics, 2021, 10(11): 1234 doi: 10.3390/electronics10111234
    [12] Xue T, Chen B, Wu J, et al. Video enhancement with task-oriented flow. International Journal of Computer Vision, 2019, 127(8): 1106−1125 doi: 10.1007/s11263-018-01144-2
    [13] Liu C, Sun D. A bayesian approach to adaptive video super resolution. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Colorado Springs, USA: IEEE, 2011. 209−216
    [14] Nah S, Baik S, Hong S, et al. Ntire 2019 challenge on video deblurring and super-resolution: Dataset and study. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Long Beach, USA: IEEE, 2019. 1996−2005
    [15] Protter M, Elad M, Takeda H, et al. Generalizing the nonlocal-means to super-resolution reconstruction. IEEE Transactions on Image Processing, 2008, 18(1): 36−51
    [16] Shahar O, Faktor A, Irani M. Space-time super-resolution from a single video. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Colorado Springs, USA: IEEE, 2011. 3353−3360
    [17] Li D, Wang Z. Video superresolution via motion compensation and deep residual learning. IEEE Transactions on Computational Imaging, 2017, 3(4): 749−762 doi: 10.1109/TCI.2017.2671360
    [18] Venice [Online], available: https://www.harmonicinc.com/free-4k-demo-footage/, May 1, 2017
    [19] Myanmar 60p, Harmonic Inc. [Online], available: http://www.harmonicinc.com/resources/videos/4k-video-clip-center, May 1, 2017
    [20] ITS, "Consumer digital video library" [Online], available: https://www.cdvl.org, March 20, 2024
    [21] Mercat A, Viitanen M, Vanne J. UVG dataset: 50/120fps 4K sequences for video codec analysis and development. In: Proceedings of the ACM Multimedia Systems Conference. Istanbul, Turkey: ACM, 2020. 297−302
    [22] Liu D, Wang Z, Fan Y, et al. Robust video super-resolution with learned temporal dynamics. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 2507−2515
    [23] Li D, Wang Z. Video superresolution via motion compensation and deep residual learning. IEEE Transactions on Computational Imaging, 2017, 3(4): 749−762 doi: 10.1109/TCI.2017.2671360
    [24] Wang Z, Yi P, Jiang K, et al. Multi-memory convolutional neural network for video super-resolution. IEEE Transactions on Image Processing, 2018, 28(5): 2530−2544
    [25] Yi P, Wang Z, Jiang K, et al. Progressive fusion video super-resolution network via exploiting non-local spatio-temporal correlations. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea: IEEE, 2019. 3106−3115
    [26] Yu J, Liu J, Bo L, et al. Memory-augmented non-local attention for video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 17834−17843
    [27] Yang X, Xiang W, Zeng H, et al. Real-world video super-resolution: A benchmark dataset and a decomposition based learning scheme. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, Canada: IEEE, 2021. 4781−4790
    [28] Chan K C K, Zhou S, Xu X, et al. Investigating tradeoffs in real-world video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 5962−5971
    [29] Lee J, Lee M, Cho S, et al. Reference-based video super-resolution using multi-camera video triplets. In: Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 17824−17833
    [30] Wang R, Liu X, Zhang Z, et al. Benchmark dataset and effective inter-frame alignment for real-world video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE, 2023. 1168−1177
    [31] Huang Y, Dong H, Pan J, et al. Boosting video super resolution with patch-based temporal redundancy optimization. In: Proceedings of International Conference on Artificial Neural Networks (ICANN). Heraklion, Greece: Springer, 2023. 362−375
    [32] Zhou S, Yang P, Wang J, et al. Upscale-A-Video: Temporal-consistent diffusion model for real-world video super-resolution. arXiv preprint arXiv: 2312.06640, 2023.
    [33] Wang X, Xie L, Dong C, et al. Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In: Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). Montreal, Canada: IEEE, 2021. 1905−1914
    [34] Singh A, Singh J. Survey on single image based super-resolution—implementation challenges and solutions. Multimedia Tools and Applications, 2020, 79(3−5): 1641−1672
    [35] You Z, Li Z, Gu J, et al. Depicting beyond scores: Advancing image quality assessment through multi-modal language models. arXiv preprint arXiv: 2312.08962, 2023.
    [36] You Z, Gu J, Li Z, et al. Descriptive image quality assessment in the wild. arXiv preprint arXiv: 2405.18842, 2024.
    [37] Xie L, Wang X, Zhang H, et al. VFHQ: A high-quality dataset and benchmark for video face super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 657−666
    [38] Zhou F, Sheng W, Lu Z, et al. A database and model for the visual quality assessment of super-resolution videos. IEEE Transactions on Broadcasting, 2024, 70(2): 516−532 doi: 10.1109/TBC.2024.3382949
    [39] Jin J, Zhang X, Fu X, et al. Just noticeable difference for deep machine vision. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 32(6): 3452−3461
    [40] Kappeler A, Yoo S, Dai Q, et al. Video super-resolution with convolutional neural networks. IEEE Transactions on Computational Imaging, 2016, 2(2): 109−122 doi: 10.1109/TCI.2016.2532323
    [41] Lucas A, Lopez-Tapia S, Molina R, et al. Generative adversarial networks and perceptual losses for video super-resolution. IEEE Transactions on Image Processing, 2019, 28(7): 3312−3327 doi: 10.1109/TIP.2019.2895768
    [42] Caballero J, Ledig C, Aitken A, et al. Real-time video super-resolution with spatio-temporal networks and motion compensation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE, 2017. 4778−4787
    [43] Kim S Y, Lim J, Na T, et al. 3DSRNet: video super-resolution using 3D convolutional neural networks. arXiv preprint arXiv: 1812.09079, 2018.
    [44] Li D, Liu Y, Wang Z. Video super-resolution using non-simultaneous fully recurrent convolutional network. IEEE Transactions on Image Processing, 2018, 28(3): 1342−1355
    [45] Haris M, Shakhnarovich G, Ukita N. Space-time-aware multi-resolution video enhancement. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 2859−2868
    [46] Bao W, Lai W S, Zhang X, et al. MEMC-Net: Motion estimation and motion compensation driven neural network for video interpolation and enhancement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 43(3): 933−948
    [47] Zhu X, Li Z, Lou J, et al. Video super-resolution based on a spatio-temporal matching network. Pattern Recognition, 2021, 110: 107619 doi: 10.1016/j.patcog.2020.107619
    [48] Wang L, Guo Y, Liu L, et al. Deep video super-resolution using HR optical flow estimation. IEEE Transactions on Image Processing, 2020, 29: 4323−4336 doi: 10.1109/TIP.2020.2967596
    [49] Zhu X, Li Z, Zhang X Y, et al. Residual invertible spatio-temporal network for video super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Honolulu, USA: AAAI, 2019. 5981−5988
    [50] Bare B, Yan B, Ma C, et al. Real-time video super-resolution via motion convolution kernel estimation. Neurocomputing, 2019, 367: 236−245 doi: 10.1016/j.neucom.2019.07.089
    [51] Tian Y, Zhang Y, Fu Y, et al. TDAN: Temporally-deformable alignment network for video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 3360−3369
    [52] Ying X, Wang L, Wang Y, et al. Deformable 3D convolution for video super-resolution. IEEE Signal Processing Letters, 2020, 27: 1500−1504 doi: 10.1109/LSP.2020.3013518
    [53] Yan B, Lin C, Tan W. Frame and feature-context video super-resolution. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI). Honolulu, USA: AAAI, 2019. 5597−5604
    [54] Liu S, Zheng C, Lu K, et al. Evsrnet: Efficient video super-resolution with neural architecture search. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 2480−2485
    [55] Rota C, Buzzelli M, van de Weijer J. Enhancing perceptual quality in video super-resolution through temporally-consistent detail synthesis using diffusion models. arXiv preprint arXiv: 2311.15908, 2023.
    [56] Jo Y, Oh S W, Kang J, et al. Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, USA: IEEE, 2018. 3224−3232
    [57] Yi P, Wang Z, Jiang K, et al. Progressive fusion video super-resolution network via exploiting non-local spatio-temporal correlations. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea: IEEE, 2019. 3106−3115
    [58] Sun W, Sun J, Zhu Y, et al. Video super-resolution via dense non-local spatial-temporal convolutional network. Neurocomputing, 2020, 403: 1−12 doi: 10.1016/j.neucom.2020.04.039
    [59] Haris M, Shakhnarovich G, Ukita N. Recurrent back-projection network for video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 3897−3906
    [60] Liu H, Zhao P, Ruan Z, et al. Large motion video super-resolution with dual subnet and multi-stage communicated upsampling. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Virtual Event: AAAI, 2021. 2127−2135
    [61] Zhang Z, Li R, Guo S, et al. TMP: Temporal motion propagation for online video super-resolution. arXiv preprint arXiv: 2312.09909, 2023.
    [62] Li W, Tao X, Guo T, et al. Mucan: Multi-correspondence aggregation network for video super-resolution. In: Proceedings of the European Conference on Computer Vision (ECCV). Glasgow, UK: Springer, 2020. 335−351
    [63] Song H, Xu W, Liu D, et al. Multi-stage feature fusion network for video super-resolution. IEEE Transactions on Image Processing, 2021, 30: 2923−2934 doi: 10.1109/TIP.2021.3056868
    [64] Fuoli D, Danelljan M, Timofte R, et al. Fast online video super-resolution with deformable attention pyramid. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). Waikoloa, USA: IEEE, 2023. 1735−1744
    [65] Kalarot R, Porikli F. Multiboot VSR: Multi-stage multi-reference bootstrapping for video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Long Beach, USA: IEEE, 2019. 2060−2069
    [66] Xia B, He J, Zhang Y, et al. Structured sparsity learning for efficient video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE, 2023. 22638−22647
    [67] Wang X, Chan K C K, Yu K, et al. EDVR: Video restoration with enhanced deformable convolutional networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Long Beach, USA: IEEE, 2019. 1954−1963
    [68] Fuoli D, Gu S, Timofte R. Efficient video super-resolution through recurrent latent space propagation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). Seoul, Korea: IEEE, 2019. 3476−3485
    [69] Isobe T, Li S, Jia X, et al. Video super-resolution with temporal group attention. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 8008−8017
    [70] Jin S, Liu M, Yao C, et al. Kernel Dimension Matters: To activate available kernels for real-time video super-resolution. In: Proceedings of the ACM International Conference on Multimedia (ACM MM). Ottawa, Canada: ACM, 2023. 8617−8625
    [71] Cao J, Li Y, Zhang K, et al. Video super-resolution transformer. arXiv preprint arXiv: 2106.06847, 2021.
    [72] Shi S, Gu J, Xie L, et al. Rethinking alignment in video super-resolution transformers. In: Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS). New Orleans, USA: 2022. 36081−36093
    [73] Tang Q, Zhao Y, Liu M, et al. SeeClear: Semantic distillation enhances pixel condensation for video super-resolution. arXiv preprint arXiv: 2410.05799, 2024.
    [74] Huang C, Li J, Chu L, et al. Disentangle propagation and restoration for efficient video recovery. In: Proceedings of the ACM International Conference on Multimedia (ACM MM). Ottawa, Canada: ACM, 2023. 8336−8345
    [75] Chan K C K, Wang X, Yu K, et al. BasicVSR: The search for essential components in video super-resolution and beyond. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 4947−4956
    [76] Chen Z, Long F, Qiu Z, et al. Learning spatial adaptation and temporal coherence in diffusion models for video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2024. 9232−9241
    [77] Leng J, Wang J, Gao X, et al. Icnet: Joint alignment and reconstruction via iterative collaboration for video super-resolution. In: Proceedings of the ACM International Conference on Multimedia (ACM MM). Lisboa, Portugal: ACM, 2022. 6675−6684
    [78] Yi P, Wang Z, Jiang K, et al. A progressive fusion generative adversarial network for realistic and consistent video super-resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 44(5): 2264−2280
    [79] Zhang F, Chen G, Wang H, et al. Multi-scale video super-resolution transformer with polynomial approximation. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 33(9): 4496−4506 doi: 10.1109/TCSVT.2023.3278131
    [80] Qiu Z, Yang H, Fu J, et al. Learning spatiotemporal frequency-transformer for compressed video super-resolution. In: Proceedings of the European Conference on Computer Vision (ECCV). Tel Aviv, Israel: Springer, 2022. 257−273
    [81] Jiang Y, Chan K C K, Wang X, et al. Reference-based image and video super-resolution via C2-matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(7): 8874−8887
    [82] Isobe T, Jia X, Tao X, et al. Look back and forth: Video super-resolution with explicit temporal difference modeling. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 17411−17420
    [83] Chan K C K, Wang X, Yu K, et al. BasicVSR: The search for essential components in video super-resolution and beyond. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 4947−4956
    [84] Zhou K, Li W, Lu L, et al. Revisiting temporal alignment for video restoration. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 6053−6062
    [85] Tang Q, Zhao Y, Liu M, et al. Semantic lens: Instance-centric semantic alignment for video super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Vancouver, Canada: AAAI, 2024. 5154−5161
    [86] Liu M, Jin S, Yao C, et al. Temporal consistency learning of inter-frames for video super-resolution. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 33(4): 1507−1520
    [87] Liu C, Yang H, Fu J, et al. Learning trajectory-aware transformer for video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 5687−5696
    [88] Liang J, Cao J, Fan Y, et al. VRT: A video restoration transformer. IEEE Transactions on Image Processing, 2024, 33: 2171−2182 doi: 10.1109/TIP.2024.3372454
    [89] Tang J, Lu C, Liu Z, et al. CTVSR: Collaborative spatial-temporal transformer for video super-resolution. IEEE Transactions on Circuits and Systems for Video Technology, DOI: 10.1109/TCSVT.2023.3340439
    [90] Qiu Z, Yang H, Fu J, et al. Learning degradation-robust spatiotemporal frequency-transformer for video super-resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(12): 14888−14904 doi: 10.1109/TPAMI.2023.3312166
    [91] Zhang C, Wang X, Xiong R, et al. Local-global dynamic filtering network for video super-resolution. IEEE Transactions on Computational Imaging, 2023, 9: 963−976 doi: 10.1109/TCI.2023.3321980
    [92] Jiang L, Wang N, Dang Q, et al. PP-MSVSR: multi-stage video super-resolution. arXiv preprint arXiv: 2112.02828, 2021.
    [93] Liang J, Fan Y, Xiang X, et al. Recurrent video restoration transformer with guided deformable attention. In: Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS). New Orleans, USA: 2022. 378−393
    [94] Dong S, Lu F, Wu Z, et al. DFVSR: directional frequency video super-resolution via asymmetric and enhancement alignment network. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI). Macao, China: IJCAI, 2023. 681−689
    [95] Li F, Zhang L, Liu Z, et al. Multi-frequency representation enhancement with privilege information for video super-resolution. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Paris, France: IEEE, 2023. 12814−12825
    [96] Kai D, Lu J, Zhang Y, et al. EvTexture: Event-driven texture enhancement for video super-resolution. arXiv preprint arXiv: 2406.13457, 2024.
    [97] Zhou X, Zhang L, Zhao X, et al. Video super-resolution transformer with masked inter&intra-frame attention. arXiv preprint arXiv: 2401.06312, 2024.
    [98] Xu K, Yu Z, Wang X, et al. An implicit alignment for video super-resolution. arXiv preprint arXiv: 2305.00163, 2023.
    [99] Huang Y, Wang W, Wang L. Video super-resolution via bidirectional recurrent convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(4): 1015−1028
    [100] Chu M, Xie Y, Mayer J, et al. Learning temporal coherence via self-supervision for GAN-based video generation. ACM Transactions on Graphics, 2020, 39(4): 75
    [101] Isobe T, Zhu F, Jia X, et al. Revisiting temporal modeling for video super-resolution. arXiv preprint arXiv: 2008.05765, 2020.
    [102] Sajjadi M S M, Vemulapalli R, Brown M. Frame-recurrent video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, USA: IEEE, 2018. 6626−6634
    [103] Isobe T, Jia X, Gu S, et al. Video super-resolution with recurrent structure-detail network. In: Proceedings of the European Conference on Computer Vision (ECCV). Glasgow, UK: Springer, 2020. 645−660
    [104] Baniya A A, Lee T K, Eklund P W, et al. Online video super-resolution using information replenishing unidirectional recurrent model. Neurocomputing, 2023, 546: 126355 doi: 10.1016/j.neucom.2023.126355
    [105] Lin J, Huang Y, Wang L. FDAN: Flow-guided deformable alignment network for video super-resolution. arXiv preprint arXiv: 2105.05640, 2021.
    [106] Yi P, Wang Z, Jiang K, et al. Omniscient video super-resolution. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, Canada: IEEE, 2021. 4429−4438
    [107] Li S, He F, Du B, et al. Fast spatio-temporal residual network for video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 10522−10531
    [108] Yu J, Liu J, Bo L, et al. Memory-augmented non-local attention for video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 17834−17843
    [109] Tao X, Gao H, Liao R, et al. Detail-revealing deep video super-resolution. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 4472−4480
    [110] Yang X, He C, Ma J, et al. Motion-guided latent diffusion for temporally consistent real-world video super-resolution. arXiv preprint arXiv: 2312.00853, 2023.
    [111] Liu H, Ruan Z, Zhao P, et al. Video super-resolution based on deep learning: a comprehensive survey. Artificial Intelligence Review, 2022, 55(8): 5981−6035 doi: 10.1007/s10462-022-10147-y
    [112] Tu Z, Li H, Xie W, et al. Optical flow for video super-resolution: A survey. Artificial Intelligence Review, 2022, 55(8): 6505−6546 doi: 10.1007/s10462-022-10159-8
    [113] Baniya A A, Lee G, Eklund P, et al. A methodical study of deep learning based video super-resolution. Authorea Preprints, DOI: 10.36227/techrxiv.23896986.v1
    [114] 江俊君, 程豪, 李震宇, 刘贤明, 王中元. 深度学习视频超分辨率技术概述. 中国图象图形学报, 2023, 28(7): 1927−1964 doi: 10.11834/jig.220130

    Jiang Jun-Jun, Cheng Hao, Li Zhen-Yu, Liu Xian-Ming, Wang Zhong-Yuan. Deep learning based video-related super-resolution technique: A survey. Journal of Image and Graphics, 2023, 28(7): 1927−1964 doi: 10.11834/jig.220130
    [115] Dong C, Loy C C, He K, et al. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 38(2): 295−307
    [116] Drulea M, Nedevschi S. Total variation regularization of local-global optical flow. In: Proceedings of the International IEEE Conference on Intelligent Transportation Systems (ITSC). Washington, USA: IEEE, 2011. 318−323
    [117] Haris M, Shakhnarovich G, Ukita N. Deep back-projection networks for super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, USA: IEEE, 2018. 1664−1673
    [118] Dai J, Qi H, Xiong Y, et al. Deformable convolutional networks. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 764−773
    [119] Zhu X, Hu H, Lin S, et al. Deformable convnets v2: More deformable, better results. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 9308−9316
    [120] Chan K C K, Wang X, Yu K, et al. Understanding deformable alignment in video super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Virtual Event: AAAI, 2021. 973−981
    [121] Butler D J, Wulff J, Stanley G B, et al. A naturalistic open source movie for optical flow evaluation. In: Proceedings of European Conference on Computer Vision (ECCV). Florence, Italy: Springer, 2012. 611−625
    [122] Lian W, Lian W. Sliding window recurrent network for efficient video super-resolution. In: Proceedings of the European Conference on Computer Vision Workshops (ECCVW). Tel Aviv, Israel: Springer Nature Switzerland, 2022. 591−601
    [123] Xiao J, Jiang X, Zheng N, et al. Online video super-resolution with convolutional kernel bypass grafts. IEEE Transactions on Multimedia, 2023, 25: 8972−8987 doi: 10.1109/TMM.2023.3243615
    [124] Li D, Shi X, Zhang Y, et al. A simple baseline for video restoration with grouped spatial-temporal shift. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE, 2023. 9822−9832
    [125] Geng Z, Liang L, Ding T, et al. Rstt: Real-time spatial temporal transformer for space-time video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 17441−17451
    [126] Lin L, Wang X, Qi Z, et al. Accelerating the training of video super-resolution models. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Washington, USA: AAAI, 2023. 1595−1603
    [127] Li H, Chen X, Dong J, et al. Collaborative feedback discriminative propagation for video super-resolution. arXiv preprint arXiv: 2404.04745, 2024.
    [128] Hu M, Jiang K, Wang Z, et al. Cycmunet+: Cycle-projected mutual learning for spatial-temporal video super-resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(11): 13376−13392
    [129] Xiao Y, Yuan Q, Jiang K, et al. Local-global temporal difference learning for satellite video super-resolution. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(4): 2789−2802 doi: 10.1109/TCSVT.2023.3312321
    [130] Hui Y, Liu Y, Liu Y, et al. VJT: A video transformer on joint tasks of deblurring, low-light enhancement and denoising. arXiv preprint arXiv: 2401.14754, 2024.
    [131] Song Y, Wang M, Yang Z, et al. NegVSR: Augmenting negatives for generalized noise modeling in real-world video super-resolution. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Vancouver, Canada: AAAI, 2024. 10705−10713
    [132] Wang Y, Isobe T, Jia X, et al. Compression-aware video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE, 2023. 2012−2021
    [133] Youk G, Oh J, Kim M. FMA-Net: Flow-guided dynamic filtering and iterative feature refinement with multi-attention for joint video super-resolution and deblurring. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2024. 44−55
    [134] Zhang Y, Yao A. RealViformer: Investigating attention for real-world video super-resolution. arXiv preprint arXiv: 2407.13987, 2024.
    [135] Xiang X, Tian Y, Zhang Y, et al. Zooming slow-mo: Fast and accurate one-stage space-time video super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 3370−3379
    [136] Jeelani M, Cheema N, Illgner-Fehns K, et al. Expanding synthetic real-world degradations for blind video super resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Vancouver, Canada: IEEE, 2023. 1199−1208
    [137] Bai H, Pan J. Self-supervised deep blind video super-resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(7): 4641−4653 doi: 10.1109/TPAMI.2024.3361168
    [138] Pan J, Bai H, Dong J, et al. Deep blind video super-resolution. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, Canada: IEEE, 2021. 4811−4820
    [139] Chen H, Li W, Gu J, et al. Low-res leads the way: Improving generalization for super-resolution by self-supervised learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2024. 25857−25867
    [140] Yuan J, Ma J, Wang B, et al. Content-decoupled contrastive learning-based implicit degradation modeling for blind image super-resolution. arXiv preprint arXiv: 2408.05440, 2024.
    [141] Chen Y H, Chen S C, Lin Y Y, et al. MoTIF: Learning motion trajectories with local implicit neural functions for continuous space-time video super-resolution. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Paris, France: IEEE, 2023. 23131−23141
    [142] Huang C, Li J, Chu L, et al. Arbitrary-scale video super-resolution guided by dynamic context. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Vancouver, Canada: AAAI, 2024. 2294−2302
    [143] Li Z, Liu H, Shang F, et al. SAVSR: Arbitrary-scale video super-resolution via a learned scale-adaptive network. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Vancouver, Canada: AAAI, 2024. 3288−3296
    [144] Huang Z, Huang A, Hu X, et al. Scale-adaptive feature aggregation for efficient space-time video super-resolution. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). Waikoloa, USA: IEEE, 2024. 4228−4239
    [145] Xu Y, Park T, Zhang R, et al. VideoGigaGAN: Towards detail-rich video super-resolution. arXiv preprint arXiv: 2404.12388, 2024.
    [146] He Q, Wang S, Liu T, et al. Enhancing measurement precision for rotor vibration displacement via a progressive video super resolution network. IEEE Transactions on Instrumentation and Measurement, 2024, 73: 1−13
    [147] Chang J, Zhao Z, Jia C, et al. Conceptual compression via deep structure and texture synthesis. IEEE Transactions on Image Processing, 2022, 31: 2809−2823 doi: 10.1109/TIP.2022.3159477
    [148] Chang J, Zhang J, Li J, et al. Semantic-aware visual decomposition for image coding. International Journal of Computer Vision, 2023, 131(9): 2333−2355 doi: 10.1007/s11263-023-01809-7
    [149] Ren B, Li Y, Liang J, et al. Sharing key semantics in transformer makes efficient image restoration. arXiv preprint arXiv: 2405.20008, 2024.
    [150] Wu R, Sun L, Ma Z, et al. One-step effective diffusion network for real-world image super-resolution. arXiv preprint arXiv: 2406.08177, 2024.
    [151] Sun H, Li W, Liu J, et al. Coser: Bridging image and language for cognitive super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2024. 25868−25878
    [152] Wu R, Yang T, Sun L, et al. Seesr: Towards semantics-aware real-world image super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2024. 25456−25467
    [153] Zhang Y, Zhang H, Chai X, et al. MRIR: Integrating multimodal insights for diffusion-based realistic image restoration. arXiv preprint arXiv: 2407.03635, 2024.
    [154] Zhang Y, Zhang H, Chai X, et al. Diff-restorer: Unleashing visual prompts for diffusion-based universal image restoration. arXiv preprint arXiv: 2407.03636, 2024.
    [155] Ouyang H, Wang Q, Xiao Y, et al. Codef: Content deformation fields for temporally consistent video processing. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2024. 8089−8099
    [156] Hu J, Gu J, Yu S, et al. Interpreting low-level vision models with causal effect maps. arXiv preprint arXiv: 2407.19789, 2024.
    [157] Gu J, Dong C. Interpreting super-resolution networks with local attribution maps. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Virtual: IEEE, 2021. 9199−9208
    [158] Cao J, Liang J, Zhang K, et al. Towards interpretable video super-resolution via alternating optimization. In: Proceedings of the European Conference on Computer Vision (ECCV). Tel Aviv, Israel: Springer, 2022. 393−411
Publication history
  • Received: 2024-04-29
  • Accepted: 2024-10-16
  • Published online: 2025-03-06
