Markovian jump systems are a class of hybrid systems that contain continuous-valued state variables and a discrete mode variable. In a Markovian jump system, the mode variable is a continuous-time Markov process with discrete states taking values in a finite set. Markovian jump systems are widely used to model systems subject to abrupt faults or environmental changes, including power systems, aerospace systems, manufacturing systems, and networked control systems [1]. In recent years, Markovian jump systems have become a research focus in control theory; the main topics include stability analysis and control design [2-15], fault detection and fault-tolerant control [3, 16-20], and filtering as well as state and fault estimation [3, 7, 9, 16, 20, 21-24]. Among the studies on estimation for Markovian jump systems, an observer was designed for singular Markovian jump systems in [9]. Using a descriptor-system approach, [16] designed a sliding-mode observer for a class of Markovian jump systems with delays and nonlinear terms, provided estimates of the system states and sensor faults, and applied them to fault-tolerant control. Reference [20] addressed fault-tolerant control for a class of Markovian jump systems with Itô-type stochastic dynamics. For generalized Markovian jump systems whose mode cannot be measured online in real time, [21] studied partially mode-dependent observer and controller design. References [7, 22] considered state estimation and filtering. Reference [23] designed full-order and reduced-order observers to estimate the states of descriptor Markovian jump systems with nonlinear perturbations. Reference [24] discussed fault estimation for Markovian jump systems based on adaptive observers. Among the works above, [7, 21-23] considered Markovian jump systems without actuator or sensor faults, and [24] only considered actuator-fault estimation; although [16, 20] achieved simultaneous estimation of actuator and sensor faults via sliding-mode observers, they require the upper bounds of the faults and of their derivatives to be known in advance. Consequently, studies that simultaneously estimate the states, actuator faults, and sensor faults of Markovian jump systems subject to both actuator and sensor faults are still rare. Moreover, in practical systems time delays are often a source of instability, and the transition probabilities are usually obtained by online estimation and therefore carry some uncertainty; it is thus of great significance to study these problems for systems with time delays and uncertain transition probabilities.
Motivated by the above, this paper designs an adaptive observer for a class of time-delay Markovian jump systems with uncertain transition probabilities so as to estimate actuator and sensor faults simultaneously. The contributions are: 1) under uncertain transition probabilities, simultaneous estimation of actuator and sensor faults is achieved for a class of Markovian jump systems with time delays and parameter uncertainties; 2) the transition probability matrix is assumed to be an estimate subject to uncertainty, which is more practical than [16, 20], where the exact transition probability matrix is required; 3) the design requires no prior information about the actuator or sensor faults (e.g., [16] requires a known bound on the sensor fault), and is therefore less conservative.
1. System Model and Problem Formulation
Consider the following linear time-delay Markovian jump system with parameter uncertainties defined on the probability space $({\rm \Omega},{F},{P})$:
$\left\{\begin{array}{l}\dot{x}(t)=(A({r_t})+\Delta A({r_t}))x(t)+{A_d}({r_t})x(t-\tau)+B({r_t})u(t)+D({r_t}){f_a}(t)\\ y(t)=C({r_t})x(t)+G({r_t}){f_s}(t)\\ x(t)=\phi(t),\quad t\in[-\tau,\ 0]\end{array}\right.$
(1) where $\Omega$ is the sample space, $F$ is a $\sigma$-algebra of subsets of the sample space, and $P$ is the probability measure. ${\pmb x}(t)\in{\bf R}^n$ and ${\pmb u}(t)\in{\bf R}^m$ are the system state and the control input, respectively. ${\pmb f}_a(t)\in{\bf R}^q$ and ${\pmb f}_s(t)\in{\bf R}^w$ are the unknown actuator fault and sensor fault, respectively [16, 20]. $\{r_t\}$ is a continuous-time, discrete-state Markov process taking values in the finite set $S=\{1,\cdots,s\}$, with transition probabilities
${P_r}({r_{t+h}}=j\,|\,{r_t}=i)=\left\{\begin{array}{ll}{\pi_{ij}}h+o(h), & i\ne j\\ 1+{\pi_{ii}}h+o(h), & i=j\end{array}\right.$
where $h>0$, $\lim_{h\to 0}o(h)/h=0$, and $\pi_{ij}$ is the transition rate from mode $i$ at time $t$ to mode $j$ at time $t+h$, with $\pi_{ij}\ge 0$ for $i\ne j$ and $\pi_{ii}=-\sum\nolimits_{j=1,j\ne i}^s\pi_{ij}$. Define $\Pi=\{\pi_{ij}\}$ as the unknown transition probability matrix, which satisfies
$\Pi \subseteq \left\{ \hat{\Pi }+\Delta \Pi :\left| \Delta {{\pi }_{ij}} \right|\le {{\kappa }_{ij}},{{\kappa }_{ij}}\ge 0,i\ne j,i,j\in S \right\}$
where $\hat\Pi=\{\hat\pi_{ij}\}$ is a known constant matrix whose entries $\hat\pi_{ij}\ge 0$, $i\ne j$, $i,j\in S$, are the estimates of $\pi_{ij}$, $\Delta\Pi=\{\Delta\pi_{ij}\}$ represents the uncertainty in the transition rate matrix, and $\Delta\pi_{ij}$ is the estimation error of the rate, taking values in the interval $[-\kappa_{ij},\ \kappa_{ij}]$; for $i\in S$, $\hat\pi_{ii}=-\sum\nolimits_{j=1,j\ne i}^s\hat\pi_{ij}$ and $\Delta\pi_{ii}=-\sum\nolimits_{j=1,j\ne i}^s\Delta\pi_{ij}$. $A(r_t)$, $A_d(r_t)$, $B(r_t)$, $D(r_t)$, $C(r_t)$, and $G(r_t)$ are matrix functions of $r_t$ with appropriate dimensions. $\Delta A(r_t)$ is an unknown matrix representing the parameter uncertainty; it is assumed that $\Delta A(r_t)=M(r_t)F(r_t,t)H(r_t)$, where $M(r_t)$ and $H(r_t)$ are known constant matrices and $F(r_t,t)$ is an unknown time-varying matrix satisfying $F^{\rm T}(r_t,t)F(r_t,t)\le I$. $\tau>0$ is a known delay, the function ${\pmb\phi}(t)$ is the initial state on $[-\tau,\ 0]$, and the initial mode is $r_0$. It is assumed that $G(r_t)$ has full column rank.
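The transition-probability definition above is equivalent to saying that the system dwells in mode $i$ for an exponentially distributed time with rate $-\pi_{ii}$ and then jumps to mode $j\ne i$ with probability $\pi_{ij}/(-\pi_{ii})$. The following illustrative Python sketch (an editorial addition, not part of the original paper) samples such a mode trajectory; for concreteness it uses the estimated generator $\hat\Pi$ of the numerical example in Section 3.1.

```python
import numpy as np

# Sample a mode trajectory r_t of the continuous-time Markov process defined above:
# dwell in mode r for an Exp(-pi_rr) time, then jump to j != r with prob pi_rj/(-pi_rr).
# Modes are 0-indexed here (modes 0 and 1 correspond to modes 1 and 2 in the paper).
def sample_modes(Pi, r0, t_end, seed=0):
    rng = np.random.default_rng(seed)
    t, r, path = 0.0, r0, [(0.0, r0)]
    while True:
        rate = -Pi[r, r]                        # total jump rate out of mode r
        t_next = t + rng.exponential(1.0 / rate)
        if t_next >= t_end:
            break
        probs = Pi[r].copy()
        probs[r] = 0.0
        probs /= rate                           # jump probabilities pi_rj / (-pi_rr)
        r = int(rng.choice(len(Pi), p=probs))
        t = t_next
        path.append((t, r))
    return path

Pi_hat = np.array([[-0.4,  0.4],
                   [ 0.3, -0.3]])               # \hat\Pi from the example in Section 3.1
print(sample_modes(Pi_hat, r0=0, t_end=10.0))
```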
Remark 1. The full-column-rank assumption on $G(r_t)$ is not restrictive; it is used in many works on sensor fault estimation [16, 20, 25]. The assumption $\Delta A(r_t)=M(r_t)F(r_t,t)H(r_t)$ on the parameter uncertainty is also frequently adopted in studies of uncertain systems [26-27].
For notational convenience, for any matrix $\Psi$ we write $\Psi(r_t=i)=\Psi_i$, $i\in S$, and for any variable ${\pmb\chi}(t)$ we write ${\pmb\chi}(t)={\pmb\chi}$.
Definition 1. For a variable ${\pmb\Im}=\{{\pmb\Im}(t)\}\in L_2[0,\ \infty)$, its $L_2$ norm is $\|{\pmb\Im}\|_2=\sqrt{\int_0^\infty{\pmb\Im}^{\rm T}(t){\pmb\Im}(t)\,{\rm d}t}$.
Definition 2 [25]. For $\tau>0$, system (1) is said to be stochastically stable if, for ${\pmb u}\equiv{\bf 0}$, $\Delta A_i\equiv{\bf 0}$, ${\pmb f}_a\equiv{\bf 0}$, ${\pmb f}_s\equiv{\bf 0}$, every initial condition $({\pmb x}_0,r_0)$, and every bounded function ${\pmb\phi}(t)$ defined on $[-\tau,\ 0]$,
$\mathrm{E}\left[\displaystyle\int_0^\infty{\pmb x}^{\rm T}(t){\pmb x}(t)\,{\rm d}t\,\Big|\,{\pmb x}_0,{\pmb\phi}(t),r_0\right]<\infty$
where $\mathrm{E}$ denotes the mathematical expectation.
Definition 3 [28]. Consider a Markovian jump system of the form
\begin{equation}\left\{\begin{array}{l}\dot {\pmb x}(t) = {A_i}{\pmb x}(t) + {B_i}{\pmb u}(t) + {B_{\omega i}}{\pmb \omega} (t)\\{\pmb z}_\omega(t) = {C_i}{\pmb x}(t) + {D_i}{\pmb u}(t) +{D_{\omega i}}{\pmb \omega} (t)\end{array} \right.\end{equation}
(2) where ${\pmb\omega}(t)$ denotes the disturbance. System (2) is said to be stochastically stable with $H_\infty$ disturbance attenuation level $\gamma>0$ if there exists a constant $M({\pmb x}_0,r_0)$ with $M(0,r_0)=0$ such that
$\left[\mathrm{E}\displaystyle\int_0^\infty{\pmb z}_\omega^{\rm T}(t){\pmb z}_\omega(t)\,{\rm d}t\,\Big|\,{\pmb x}_0,r_0\right]^{\frac{1}{2}}\le\gamma\left[\left\|{\pmb\omega}(t)\right\|_2^2+M({\pmb x}_0,r_0)\right]^{\frac{1}{2}}$
Lemma 1. For any scalar $\sigma>0$ and real matrices $\Theta_1$ and $\Theta_2$,
${{\Theta }_1 ^{\rm T}}{\Theta}_2 + {{\Theta}_2 ^{\rm T}}{\Theta }_1 \le {\sigma ^{ - 1}}{{\Theta }_1 ^{\rm T}}{\Theta }_1 + \sigma {{\Theta}_2 ^{\rm T}}{\Theta}_2 $
Lemma 2. Let $U$, $V'$, and $F'(t)$ be real matrices of appropriate dimensions, where $U$ and $V'$ are known and $F'(t)$ is unknown with ${F'^{\rm T}}(t)F'(t)\le I$. Then for any $\varepsilon>0$ the inequality
$UF'(t)V' + {V'^{\rm T}}{F'^{\rm T}}(t){U^{\rm T}}\le \varepsilon U{U^{\rm T}} + {\varepsilon ^{ - 1}}{V'^{\rm T}}V'$
holds.
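For completeness, Lemma 2 follows from a standard completion-of-squares argument (this step is not spelled out in the original and is added here for readability):
$$0\le\left(\sqrt{\varepsilon}\,U^{\rm T}-\frac{1}{\sqrt{\varepsilon}}F'(t)V'\right)^{\rm T}\left(\sqrt{\varepsilon}\,U^{\rm T}-\frac{1}{\sqrt{\varepsilon}}F'(t)V'\right)=\varepsilon UU^{\rm T}+\varepsilon^{-1}V'^{\rm T}F'^{\rm T}(t)F'(t)V'-UF'(t)V'-V'^{\rm T}F'^{\rm T}(t)U^{\rm T}$$
and ${F'^{\rm T}}(t)F'(t)\le I$ gives $V'^{\rm T}F'^{\rm T}(t)F'(t)V'\le V'^{\rm T}V'$, which yields the stated bound.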
Assumption 1. ${\pmb f}_a$ is differentiable and $\dot{\pmb f}_a\in L_2[0,\ \infty)$.
Remark 2. The proposed method applies to any bounded continuous sensor fault and to any actuator fault satisfying Assumption 1, and the design requires no prior information about the faults, such as an upper bound on the fault [16] or on its derivative [20]. In practice, a fault usually remains nearly constant after a transient, so $\dot{\pmb f}_a\in L_2[0,\ \infty)$ holds. Compared with the boundedness assumption on $\dot{\pmb f}_a$ in [29], Assumption 1 is more general.
To estimate the sensor fault and the state simultaneously, define the augmented variable $\bar{\pmb x}=\left[\begin{array}{c}{\pmb x}\\ {\pmb f}_s\end{array}\right]\in{\bf R}^{n+w}$ and, correspondingly, let $\bar A_i=[\,A_i\ \ 0_{n\times w}\,]$, $\bar C_i=[\,C_i\ \ G_i\,]$, and $E=[\,I_n\ \ 0_{n\times w}\,]$. System (1) can then be rewritten as
\begin{equation}\left\{ \begin{array}{l}E\dot {\bar {\pmb x}} = {{\bar A}_i}\bar {\pmb x} + \Delta {A_i}{\pmb x} + {A_{di}}{\pmb x}(t - \tau )+ \\ ~~~~~~~~~~{B_i}{\pmb u} + {D_i}{{\pmb f}_a}\\{\pmb y} = {{\bar C}_i}\bar {\pmb x}\end{array} \right.\end{equation}
(3) System (3) is a descriptor system whose state contains both the original system state and the sensor fault. If an observer can be designed for system (3), estimates of the original state and of the sensor fault are obtained simultaneously.
2. Main Results
For system (3), this section proposes an adaptive observer that estimates the system state, the actuator fault, and the sensor fault simultaneously.
Design the following adaptive observer:
\begin{equation}\left\{ \begin{array}{l}\dot {\pmb z} = {N_i}{\pmb z} + {L_i}{\pmb y} + {T_i}{B_i}{\pmb u}+ \\ ~~~~~~~~{T_i}{D_i}{{\hat {\pmb f}}_a} + {T_i}{A_{di}}\hat {\pmb x}(t - \tau )\\\hat {\bar {\pmb x}} = {\pmb z} + {Q_i}{\pmb y}\\{{\dot {\hat {\pmb f}}}_a} = {\Phi _i}({\pmb y} - \hat {\pmb y})\end{array} \right.\end{equation}
(4) where ${\pmb z}\in{\bf R}^{n+w}$ is the internal observer variable, $\hat{\bar{\pmb x}}=\left[\begin{array}{c}\hat{\pmb x}\\ \hat{\pmb f}_s\end{array}\right]$ is the estimate of $\bar{\pmb x}=\left[\begin{array}{c}{\pmb x}\\ {\pmb f}_s\end{array}\right]$, $\hat{\pmb f}_a$ is the estimate of the actuator fault ${\pmb f}_a$, and $\hat{\pmb x}(t-\tau)$ is the estimate of the delayed state ${\pmb x}(t-\tau)$. $N_i$, $L_i$, $T_i$, $Q_i$, and $\Phi_i$ are matrices of appropriate dimensions to be determined. The main goal of this paper is to find $N_i$, $L_i$, $T_i$, $Q_i$, and $\Phi_i$ such that system (4) estimates the state of system (3) in the $H_\infty$ sense and, at the same time, provides an $H_\infty$ estimate of the actuator fault ${\pmb f}_a$.
Since $G_i$ has full column rank, the matrix $\left[\begin{array}{c}E\\ \bar C_i\end{array}\right]=\left[\begin{array}{cc}I_n & 0_{n\times w}\\ C_i & G_i\end{array}\right]$ also has full column rank. Hence there exist matrices $T_i\in{\bf R}^{(n+w)\times n}$ and $Q_i\in{\bf R}^{(n+w)\times p}$ satisfying
\begin{equation}{T_i}E + {Q_i}{\bar C_i} = {I_{n + w}}\end{equation}
(5) and one particular solution is
$$T_i=\left[\begin{array}{c}E\\ \bar C_i\end{array}\right]^{+}\left[\begin{array}{c}I_n\\ 0_{p\times n}\end{array}\right],\qquad Q_i=\left[\begin{array}{c}E\\ \bar C_i\end{array}\right]^{+}\left[\begin{array}{c}0_{n\times p}\\ I_p\end{array}\right]$$
(6) where "$+$" denotes the Moore-Penrose pseudo-inverse.
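As a quick numerical illustration of (5) and (6) (an editorial sketch, not part of the original paper), the matrices $T_i$ and $Q_i$ can be computed with a standard pseudo-inverse routine and the constraint (5) checked directly; the values of $C_1$ and $G_1$ below are those of the numerical example in Section 3.1.

```python
import numpy as np

# Compute T_i, Q_i from (6) via the Moore-Penrose pseudo-inverse and verify (5):
# T_i E + Q_i \bar{C}_i = I_{n+w}.  Here n = 3, w = 1, p = 2.
n, w, p = 3, 1, 2
C_i = np.array([[1.0, 1.0, 0.0],
                [0.0, 1.0, 0.0]])
G_i = np.array([[0.1], [-0.3]])                     # full column rank
E     = np.hstack([np.eye(n), np.zeros((n, w))])    # E = [I_n  0]
C_bar = np.hstack([C_i, G_i])                       # \bar{C}_i = [C_i  G_i]
S = np.vstack([E, C_bar])                           # stacked (n+p) x (n+w) matrix
S_pinv = np.linalg.pinv(S)
T_i = S_pinv @ np.vstack([np.eye(n), np.zeros((p, n))])
Q_i = S_pinv @ np.vstack([np.zeros((n, p)), np.eye(p)])
print(np.allclose(T_i @ E + Q_i @ C_bar, np.eye(n + w)))   # True
```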
Define the observation error ${\pmb e}=\bar{\pmb x}-\hat{\bar{\pmb x}}$. From the second equation of (4) and from (5),
${\pmb { e}} = \bar {\pmb x} -{\pmb z} - {Q_i}{\bar C_i}\bar {\pmb x} = ({I_{n + w}} -{Q_i}{\bar C_i})\bar {\pmb x} - {\pmb z} = {T_i}E\bar {\pmb x} -{\pmb z}$
Hence the dynamics of the error are
\begin{equation}\begin{array}{l}\dot {\pmb { e}} = {T_i}E\dot {\bar {\pmb x}} - \dot {\pmb z} =\\ ~~~~~~~~{T_i}{{\bar A}_i}\bar {\pmb x} + {T_i}\Delta {A_i}{\pmb x} + {T_i}{A_{di}}{\pmb x}(t - \tau ) + {T_i}{B_i}{\pmb u}+ \\ ~~~~~~~~{T_i}{D_i}{{\pmb f}_a} - {N_i}{\pmb z} - {L_i}{\pmb y}- \\ ~~~~~~~~{T_i}{B_i}{\pmb u} - {T_i}{D_i}{{\hat {\pmb f}}_a} - {T_i}{A_{di}}\hat {\pmb x}(t - \tau ) =\\ ~~~~~~~~ {T_i}{{\bar A}_i}\bar {\pmb x} + {T_i}\Delta {A_i}{\pmb x} + {T_i}{A_{di}}{\pmb x}(t - \tau )+ \\ ~~~~~~~~ {T_i}{B_i}{\pmb u} + {T_i}{D_i}{{\pmb f}_a} - {N_i}{\pmb z} - {L_i}{\pmb y}- \\ ~~~~~~~~{T_i}{B_i}{\pmb u} - {T_i}{D_i}{{\hat {\pmb f}}_a} - {T_i}{A_{di}}\hat {\pmb x}(t - \tau )+ \\ ~~~~~~~~ {N_i}{T_i}E\bar {\pmb x} - {N_i}{T_i}E\bar {\pmb x} =\\ ~~~~~~~~{N_i}{\pmb { e}} + ({T_i}{{\bar A}_i} - {L_i}{{\bar C}_i} - {N_i}{T_i}E)\bar {\pmb x}+ \\ ~~~~~~~~{T_i}{D_i}{{\tilde {\pmb f}}_a} + {T_i}{A_{di}}\tilde {\pmb x}(t - \tau ) + {T_i}\Delta {A_i}{\pmb x}\end{array}\end{equation}
(7) where $\tilde{\pmb f}_a={\pmb f}_a-\hat{\pmb f}_a$ and $\tilde{\pmb x}(t-\tau)={\pmb x}(t-\tau)-\hat{\pmb x}(t-\tau)$. If the matrices $N_i\in{\bf R}^{(n+w)\times(n+w)}$ and $L_i\in{\bf R}^{(n+w)\times p}$ to be determined satisfy
\begin{equation}{T_i}{\bar A_i} - {L_i}{\bar C_i} - {N_i}{T_i}E = 0\end{equation}
(8) then (7) simplifies to
\begin{equation}\dot {\pmb { e}} = {N_i}{\pmb { e}} + {T_i}{D_i}{\tilde {\pmb f}_a} + {T_i}{A_{di}}\tilde {\pmb x}(t - \tau ) + {T_i}\Delta {A_i}{\pmb x}\end{equation}
(9) It is straightforward to verify that one solution of (8) is given by
\begin{equation}{N_i} = {T_i}{\bar A_i} - {K_i}{\bar C_i}\end{equation}
(10) and
\begin{equation}{L_i} = {K_i} + {N_i}{Q_i}\end{equation}
(11) where $K_i$ is an arbitrary matrix of appropriate dimensions. Indeed, substituting (10) and (11) into the left-hand side of (8) and using (5) gives $T_i\bar A_i-K_i\bar C_i-N_i(Q_i\bar C_i+T_iE)=T_i\bar A_i-K_i\bar C_i-N_i=0$.
From the third equation of (4) and Assumption 1,
\begin{equation}{\dot {\tilde{\pmb f}}_a}={\dot {\pmb f}_a} - {\Phi _i}{\bar C_i}{\pmb {e}}\end{equation}
(12) To facilitate the computation of $K_i$ and $\Phi_i$, define the new variable ${\pmb\zeta}=\left[\begin{array}{c}{\pmb e}\\ \tilde{\pmb f}_a\end{array}\right]\in{\bf R}^{n+w+q}$; then the error equations (9) and (12) can be combined as
\begin{equation}\dot {\pmb \zeta} = ({\hat A_i} - {\hat L_i}{\hat C_i}){\pmb \zeta} + {\hat D_i}{\pmb v} + {\hat A_{di}}{\pmb \zeta}(t - \tau ) + \Delta {\hat A_i}{\pmb x}\end{equation}
(13) where ${\hat A_i}=\left[\begin{array}{cc}T_i\bar A_i & T_iD_i\\ 0_{q\times(n+w)} & 0_{q\times q}\end{array}\right]$, ${\hat L_i}=\left[\begin{array}{c}K_i\\ \Phi_i\end{array}\right]$, ${\hat C_i}=\left[\begin{array}{cc}\bar C_i & 0_{p\times q}\end{array}\right]$, ${\hat D_i}=\left[\begin{array}{c}0_{(n+w)\times q}\\ I_q\end{array}\right]$, ${\hat A_{di}}=\left[\begin{array}{cc}T_i\bar A_{di} & 0_{(n+w)\times q}\\ 0_{q\times(n+w)} & 0_{q\times q}\end{array}\right]$, $\Delta\hat A_i=\hat M_i\hat F_i(t)\hat N_i$, $\bar A_{di}=\left[\begin{array}{cc}A_{di} & 0_{n\times w}\end{array}\right]$, $\hat M_i=\left[\begin{array}{cc}T_iM_i & 0_{(n+w)\times 1}\\ 0_{q\times 1} & 0_{q\times 1}\end{array}\right]$, $\hat F_i(t)={\rm diag}\{F_i(t),F_i(t)\}$, $\hat N_i=\left[\begin{array}{c}N_i\\ 0_{1\times n}\end{array}\right]$ (here $N_i$ denotes the uncertainty matrix $H(r_t=i)$ of system (1), the notation also used in the examples of Section 3, not the observer matrix $N_i$ of (4)), and ${\pmb v}=\dot{\pmb f}_a$. From the property of $F_i(t)$ it follows that $\hat F_i(t)$ also satisfies $\hat F_i^{\rm T}(t)\hat F_i(t)\le I$.
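The block structure of (13) translates directly into code. The following sketch (an editorial illustration under the shapes stated earlier: $T_i\in{\bf R}^{(n+w)\times n}$, $\bar A_i,\bar A_{di}\in{\bf R}^{n\times(n+w)}$, $D_i\in{\bf R}^{n\times q}$, $\bar C_i\in{\bf R}^{p\times(n+w)}$) assembles $\hat A_i$, $\hat A_{di}$, $\hat C_i$, and $\hat D_i$:

```python
import numpy as np

# Assemble the augmented matrices of the error system (13) from T_i, \bar{A}_i,
# \bar{A}_{di}, D_i, \bar{C}_i; n, w, q, p are the dimensions used in Section 1.
def augmented_matrices(T, A_bar, A_dbar, D, C_bar, n, w, q, p):
    nw = n + w
    A_hat  = np.block([[T @ A_bar,          T @ D            ],
                       [np.zeros((q, nw)),  np.zeros((q, q)) ]])
    A_dhat = np.block([[T @ A_dbar,         np.zeros((nw, q))],
                       [np.zeros((q, nw)),  np.zeros((q, q)) ]])
    C_hat  = np.hstack([C_bar, np.zeros((p, q))])
    D_hat  = np.vstack([np.zeros((nw, q)), np.eye(q)])
    return A_hat, A_dhat, C_hat, D_hat
```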
From (10), (11), and (13) it can be seen that once a matrix $\hat L_i$ is obtained such that (13) is robustly stochastically stable, observer (4) can be realized. The matrices $K_i$ and $\Phi_i$ are recovered from
\begin{equation}\left\{ \begin{array}{l}{K_i} = [{\begin{array}{*{20}{c}} {{I_{n + w}}}&{{0_{(n + w)× q}}}\end{array}}]{{\hat L}_i}\\{\Phi _i} = [{\begin{array}{*{20}{c}} {{0_{q × (n +w)}}}&{{I_q}}\end{array}}]{{\hat L}_i}\end{array} \right.\end{equation}
(14) The following theorem gives the main result of this paper; it establishes the robust stability of (13) and also provides a method for computing $\hat L_i$.
Theorem 1. For given scalars $\lambda_{ij}>0$, $\mu_{ij}>0$, $\varepsilon_{1i}>0$, and $\varepsilon_{2i}>0$, $i,j\in S$, if there exist symmetric positive definite matrices $P_i\in{\bf R}^{(n+w+q)\times(n+w+q)}$, $R_i\in{\bf R}^{n\times n}$, $X\in{\bf R}^{(n+w+q)\times(n+w+q)}$, $Z\in{\bf R}^{n\times n}$, matrices $Y_i\in{\bf R}^{(n+w+q)\times p}$, and a scalar $\gamma>0$ such that the convex optimization problem (15) below is feasible (where $*$ denotes the symmetric part of the matrix, and for a symmetric matrix $\wp$, $\wp<0$ means that $\wp$ is negative definite), then the error system (13) is robustly stochastically stable with $H_\infty$ disturbance attenuation level $\gamma$, and the gain matrix can be taken as $\hat L_i=P_i^{-1}Y_i$. Here
$$\begin{array}{l}\Gamma_{1i}=P_i\hat A_i-Y_i\hat C_i+\hat A_i^{\rm T}P_i-(Y_i\hat C_i)^{\rm T}+I_{n+w+q}\,+\\ ~~~~~~~~~~\sum\limits_{j=1,j\ne i}^{s}\dfrac{\kappa_{ij}^{2}}{4}\lambda_{ij}I_{n+w+q}+\sum\limits_{j=1}^{s}\hat\pi_{ij}P_j+X\end{array}$$ $$\begin{array}{l}\Gamma_{2i}=R_iA_i+A_i^{\rm T}R_i+Z\,+\\ ~~~~~~~~~~\sum\limits_{j=1}^{s}\hat\pi_{ij}R_j+\sum\limits_{j=1,j\ne i}^{s}\dfrac{\kappa_{ij}^{2}}{4}\mu_{ij}I_n\end{array}$$ $$\bar P_i=\left[\begin{array}{cccccc}P_1-P_i & \cdots & P_{i-1}-P_i & P_{i+1}-P_i & \cdots & P_s-P_i\end{array}\right]$$ $$\bar R_i=\left[\begin{array}{cccccc}R_1-R_i & \cdots & R_{i-1}-R_i & R_{i+1}-R_i & \cdots & R_s-R_i\end{array}\right]$$ $$\lambda_{1i}={\rm diag}\left\{-\lambda_{i1}I_{n+w+q},\ \cdots,\ -\lambda_{i(i-1)}I_{n+w+q},\ -\lambda_{i(i+1)}I_{n+w+q},\ \cdots,\ -\lambda_{is}I_{n+w+q}\right\}$$ $$\lambda_{2i}={\rm diag}\left\{-\mu_{i1}I_n,\ \cdots,\ -\mu_{i(i-1)}I_n,\ -\mu_{i(i+1)}I_n,\ \cdots,\ -\mu_{is}I_n\right\}$$
\begin{equation} \left[{\begin{array}{*{20}{c}}{{\Gamma _{1i}}}&{{P_i}{{\hat A}_{di}}}&0&0&0&0&{{P_i}{{\hat D}_i}}&{{P_i}{{\hat M}_i}}&{{{\bar P}_i}}&0&0&0&0\\ * &{ - X}&0&0&0&0&0&0&0&0&0&0&0\\ *&* &{{\Gamma _{2i}}}&{{R_i}{A_d}_i}&{{R_i}{B_i}}&{{R_i}{D_i}}&0&0&0&{\hat N_i^{\rm T}}&{{R_i}{M_i}}&{N_i^{\rm T}}&{{{\bar R}_i}}\\ *&*&* &{ - Z}&0&0&0&0&0&0&0&0&0\\ *&*&*&* &{ - {\gamma ^2}{I_m}}&0&0&0&0&0&0&0&0\\ *&*&*&*&* &{ - {\gamma ^2}{I_q}}&0&0&0&0&0&0&0\\ *&*&*&*&*&* &{ - {\gamma ^2}{I_q}}&0&0&0&0&0&0\\ *&*&*&*&*&*&* &{ - \varepsilon _{1i}^{ - 1}{I_2}}&0&0&0&0&0\\ *&*&*&*&*&*&*&* &{ {λ _{1i}}}&0&0&0&0\\ *&*&*&*&*&*&*&*&* &{ - {\varepsilon _{1i}}{I_2}}&0&0&0\\ *&*&*&*&*&*&*&*&*&* &{ - \varepsilon _{2i}^{ - 1}{I_1}}&0&0\\ *&*&*&*&*&*&*&*&*&*&* &{ - {\varepsilon _{2i}}{I_1}}&0\\ *&*&*&*&*&*&*&*&*&*&*&* &{ {λ _{2i}}}\end{array}} \right]< 0\end{equation}
(15) Proof. Choose the Lyapunov-Krasovskii functional
$\begin{array}{l}V({\pmb\zeta},{\pmb x},i)=V({\pmb\zeta},i)+\displaystyle\int_{t-\tau}^{t}{\pmb\zeta}^{\rm T}(\theta)X{\pmb\zeta}(\theta)\,{\rm d}\theta\ +\\[2mm] ~~~~~~~~~~~~~~~~~~~~~V({\pmb x},i)+\displaystyle\int_{t-\tau}^{t}{\pmb x}^{\rm T}(\theta)Z{\pmb x}(\theta)\,{\rm d}\theta\end{array}$
where $V({\pmb\zeta},i)={\pmb\zeta}^{\rm T}P_i{\pmb\zeta}$ and $V({\pmb x},i)={\pmb x}^{\rm T}R_i{\pmb x}$. The weak infinitesimal operator of a Lyapunov function along the Markov process is defined as
$$\ell V({\pmb\zeta},i)=\frac{\partial V({\pmb\zeta},i)}{\partial{\pmb\zeta}}\,\dot{\pmb\zeta}+\sum\limits_{j\in S}\pi_{ij}V({\pmb\zeta},j),\qquad \ell V({\pmb x},i)=\frac{\partial V({\pmb x},i)}{\partial{\pmb x}}\,\dot{\pmb x}+\sum\limits_{j\in S}\pi_{ij}V({\pmb x},j)$$
Therefore, for any $i\in S$,
\begin{equation*}\begin{array}{l} \ell V({\pmb\zeta},{\pmb x},i) = {{\pmb \zeta}^{\rm T}}({P_i}({{\hat A}_i} -{{\hat L}_i}{{\hat C}_i})+ \\~~~~~{({{\hat A}_i} -{{\hat L}_i}{{\hat C}_i})^{\rm T}}{P_i}){\pmb \zeta} + 2{{\pmb \zeta} ^{\rm T}}{P_i}{{\hat D}_i}{\pmb v}+ \\ ~~~~~2{{\pmb \zeta} ^{\rm T}}{P_i}\Delta {{\hat A}_i}{\pmb x} + 2{{\pmb \zeta} ^{\rm T}}{P_i}{{\hat A}_{di}}{\pmb \zeta} (t - \tau )+ \\ ~~~~~{{\pmb \zeta} ^{\rm T}}X{\pmb \zeta} - {{\pmb \zeta} ^{\rm T}}(t - \tau )X{\pmb \zeta} (t - \tau )+ \\ ~~~~~{{\pmb x}^{\rm T}}({R_i}{A_i} + A_i^{\rm T}{R_i}){\pmb x} + {{\pmb x}^{\rm T}}Z{\pmb x}+ \\ ~~~~~2{{\pmb x}^{\rm T}}{R_i}{B_i}{\pmb u} + 2{{\pmb x}^{\rm T}}{R_i}{D_i}{{\pmb f}_a}+ \\ ~~~~~2{{\pmb x}^{\rm T}}{R_i}{A_d}_i{\pmb x}(t - \tau ) + 2{{\pmb x}^{\rm T}}{R_i}\Delta {A_i}{\pmb x}- \\~~~~~{{\pmb x}^{\rm T}}(t - \tau )Z{\pmb x}(t - \tau )+ \\~~~~~ {{\pmb \zeta} ^{\rm T}}\sum\limits_{j = 1}^s {{\pi_{ij}}{P_j}{\pmb \zeta} } + {{\pmb x}^{\rm T}}\sum\limits_{j =1}^s {{\pi _{ij}}{R_j}{\pmb x}}\end{array}\end{equation*}
(16) Applying Lemma 2 to the terms $2{\pmb\zeta}^{\rm T}P_i\Delta\hat A_i{\pmb x}$ and $2{\pmb x}^{\rm T}R_i\Delta A_i{\pmb x}$ above gives, for $\varepsilon_{1i}>0$ and $\varepsilon_{2i}>0$, $i\in S$,
\begin{equation*}2{{\pmb \zeta} ^{\rm T}}{P_i}\Delta {\hat A_i}{\pmb x} \le {\varepsilon _{1i}}{{\pmb \zeta} ^{\rm T}}{P_i}{\hat M_i}\hat M_i^{\rm T}{P_i}{\pmb \zeta} + \varepsilon_{1i}^{ - 1}{{\pmb x}^{\rm T}}\hat N_i^{\rm T} {\hat N_i}{ {\pmb x}}\end{equation*}
(17) and
\begin{equation*}2{{\pmb x}^{\rm T}}{R_i}\Delta {A_i}{\pmb x} \le {\varepsilon _{2i}}{{\pmb x}^{\rm T}}{R_i}{M_i}M_i^{\rm T}{R_i}{\pmb x} + \varepsilon _{2i}^{ - 1}{{\pmb x}^{\rm T}}N_i^{\rm T}{N_i}{\pmb x}\end{equation*}
(18) In addition, from the structure of the transition probability matrix,
$\begin{array}{l}\sum\limits_{j = 1}^s {{\pi _{ij}}{P_j}} = \sum\limits_{j = 1}^s {{{\hat \pi }_{ij}}{P_j}} + \sum\limits_{j = 1}^s {\Delta {\pi _{ij}}{P_j}} =\\ ~~~~~~~~~~\sum\limits_{j = 1}^s {{{\hat \pi }_{ij}}{P_j}} + \sum\limits_{j = 1,j \ne i}^s {(\frac{{\Delta {\pi _{ij}}}}{2}({P_j} - {P_i})}+ \\ ~~~~~~~~~~\dfrac{{\Delta {\pi _{ij}}}}{2}({P_j} - {P_i}))\end{array}$
Applying Lemma 1 to the last term in the second line above, with $\Theta_1=\dfrac{\Delta\pi_{ij}}{2}I_{n+w+q}$, $\Theta_2=P_j-P_i$, and $\sigma=\lambda_{ij}^{-1}$, and using $|\Delta\pi_{ij}|\le\kappa_{ij}$, we obtain, for $i,j\in S$,
$$\sum\limits_{j=1}^{s}\pi_{ij}P_j\le\sum\limits_{j=1}^{s}\hat\pi_{ij}P_j+\sum\limits_{j=1,j\ne i}^{s}\left(\frac{\kappa_{ij}^{2}}{4}\lambda_{ij}I_{n+w+q}+\lambda_{ij}^{-1}(P_j-P_i)^{2}\right)$$
(19) where $\kappa_{ij}$, the bound on $\Delta\pi_{ij}$ defined in Section 1, enters because $\Delta\pi_{ij}^2\le\kappa_{ij}^2$, and $\lambda_{ij}>0$ are arbitrary scalars. Similarly, for $i,j\in S$ and $\mu_{ij}>0$,
\begin{equation*}\sum\limits_{j = 1}^s {\pi_{ij}R_j} \le \sum\limits_{j = 1}^s {\hat\pi_{ij}R_j} + \sum\limits_{j = 1,j \ne i}^s {\left(\frac{\kappa_{ij}^2}{4}\mu_{ij}I_n + \mu_{ij}^{-1}(R_j - R_i)^2\right)}\end{equation*}
(20) Substituting (17)-(20) into (16) gives
$\begin{array}{l} \ell V({\pmb \zeta},x,i) \le {{\pmb \zeta} ^{\rm T}}({P_i}({{\hat A}_i} -{{\hat L}_i}{{\hat C}_i}) ~+\\~~~~~~ {({{\hat A}_i} - {{\hat L}_i}{{\hat C}_i})^{\rm T}}{P_i}){\pmb \zeta}+ 2{{\pmb \zeta}^{\rm T}}{P_i}{{\hat D}_i}{\pmb v }~+\\~~~~~~{\varepsilon _{1i}}{{\pmb \zeta}^{\rm T}}{P_i}{{\hat M}_i}\hat M_i^{\rm T}{P_i}{\pmb \zeta} +\varepsilon _{1i}^{ - 1}{{\pmb x}^{\rm T}}\hat N_i^{\rm T}{{\hat N}_i}{\pmb x} ~+\\~~~~~~2{{{\pmb \zeta}}^{\rm T}}{P_i}{{\hat A}_{di}}{{\pmb\zeta}} (t -\tau ) + {{ {\pmb \zeta}} ^{\rm T}}X{\pmb \zeta} ~- \\~~~~~~{{ {\pmb\zeta}} ^{\rm T}}(t - \tau )X{\pmb \zeta} (t - \tau)+ {{\pmb x}^{\rm T}}Z{\pmb x} ~+\\ ~~~~~~ {{\pmb x}^{\rm T}}({R_i}{A_i} + A_i^{\rm T}{R_i}){\pmb x}+ 2{{\pmb x}^{\rm T}}{R_i}{B_i}{\pmb u}~+ \\ ~~~~~~2{{\pmb x}^{\rm T}}{R_i}{D_i}{{\pmb f}_a} + {\varepsilon _{2i}}{{\pmb x}^{\rm T}}{R_i}{M_i}M_i^{\rm T}{R_i}{\pmb x} ~+ \\~~~~~~ \varepsilon _{2i}^{ - 1}{{\pmb x}^{\rm T}}N_i^{\rm T}{N_i}{\pmb x}+ 2{{\pmb x}^{\rm T}}{R_i}{A_d}_i{\pmb x}(t - \tau ) ~- \\~~~~~~{{\pmb x}^{\rm T}}(t - \tau )Z{\pmb x}(t - \tau )+ {{\pmb \zeta} ^{\rm T}}\sum\limits_{j = 1}^s {{{\hat \pi }_{ij}}{P_j}} {\pmb \zeta} ~+\\~~~~~~{{\pmb \zeta} ^{\rm T}}\sum\limits_{j = 1,j \ne i}^s {(\displaystyle\frac{{\kappa _{ij}^2}}{4}{λ _{ij}}{I_{n+w+q}}} + λ _{ij}^{ - 1}{({P_j} - {P_i})^2}){\pmb \zeta} ~+\\~~~~~~{{\pmb x}^{\rm T}}\sum\limits_{j = 1}^s {{{\hat \pi }_{ij}}{R_j}} {\pmb x}~+ \\~~~~~~ {{\pmb x}^{\rm T}}\sum\limits_{j = 1,j \ne i}^s {(\dfrac{{\kappa _{ij}^2}}{4}{\mu _{ij}}{I_n} + } \mu _{ij}^{ - 1}{({R_j} - {R_i})^2}){\pmb x}\end{array}$
Let $W=\ell V({\pmb\zeta},{\pmb x},i)+{\pmb\zeta}^{\rm T}{\pmb\zeta}-\gamma^2{\pmb\varpi}^{\rm T}{\pmb\varpi}$, where ${\pmb\varpi}=\left[\begin{array}{c}{\pmb u}\\ {\pmb f}_a\\ {\pmb v}\end{array}\right]$; then
$\begin{array}{l}W \le {{\pmb \zeta} ^{\rm T}}({P_i}({{\hat A}_i} - {{\hat L}_i}{{\hat C}_i}) + {({{\hat A}_i} - {{\hat L}_i}{{\hat C}_i})^{\rm T}}{P_i}){\pmb \zeta}+\\ ~~~~~~~~2{{\pmb \zeta} ^{\rm T}}{P_i}{{\hat D}_i}{\pmb v} + {\varepsilon _{1i}}{{\pmb \zeta} ^{\rm T}}{P_i}{{\hat M}_i}\hat M_i^{\rm T}{P_i}{\pmb \zeta}+ \\ ~~~~~~~~ \varepsilon _{1i}^{ - 1}{{\pmb x}^{\rm T}}\hat N_i^{\rm T}{{\hat N}_i}{\pmb x} + 2{{\pmb \zeta} ^{\rm T}}{P_i}{{\hat A}_{di}}{\pmb \zeta} (t - \tau )+ \\ ~~~~~~~~ {{\pmb \zeta} ^{\rm T}}X{\pmb \zeta} - {{\pmb \zeta} ^{\rm T}}(t - \tau )X{\pmb \zeta} (t - \tau )+ \\ ~~~~~~~~{{\pmb x}^{\rm T}}({R_i}{A_i} + A_i^{\rm T}{R_i}){\pmb x} + {{\pmb x}^{\rm T}}Z{\pmb x} + 2{{\pmb x}^{\rm T}}{R_i}{B_i}{\pmb u}+ \\\end{array}$$\\ \begin{array}{l} ~~~~~~~~ {\varepsilon _{2i}}{{\pmb x}^{\rm T}}{R_i}{M_i}M_i^{\rm T}{R_i}{\pmb x} + \varepsilon _{2i}^{ - 1}{{\pmb x}^{\rm T}}N_i^{\rm T}{N_i}{\pmb x}+ \\~~~~~~~~2{{\pmb x}^{\rm T}}{R_i}{D_i}{{\pmb f}_a} + 2{{\pmb x}^{\rm T}}{R_i}{A_d}_i{\pmb x}(t - \tau )- \\ ~~~~~~~~{{\pmb x}^{\rm T}}(t - \tau )Z{\pmb x}(t - \tau ) + {{\pmb \zeta} ^{\rm T}}\sum\limits_{j = 1}^s {{{\hat \pi }_{ij}}{P_j}} {\pmb \zeta}+ \\ ~~~~~~~~{{\pmb \zeta} ^{\rm T}}\sum\limits_{j = 1,j \ne i}^s {(\dfrac{{\kappa _{ij}^2}}{4}{λ _{ij}}{I_{n + w + q}} + } λ _{ij}^{ - 1}{({P_j} - {P_i})^2}){\pmb \zeta}+ \\ ~~~~~~~~{{\pmb x}^{\rm T}}\sum\limits_{j = 1}^s {{{\hat \pi }_{ij}}{R_j}} {\pmb x} + {{\pmb x}^{\rm T}}\sum\limits_{j = 1,j \ne i}^s {(\dfrac{{\kappa _{ij}^2}}{4}{\mu _{ij}}{I_n}}+ \\ ~~~~~~~~\mu _{ij}^{ - 1}{({R_j} - {R_i})^2}){\pmb x} + {{\pmb \zeta} ^{\rm T}}{\pmb \zeta} - \gamma {{\pmb \varpi} ^{\rm T}}{\pmb \varpi} = {{\pmb \eta} ^{\rm T}}{\Omega _i}{\pmb \eta}\end{array}$
where
$\begin{align} & {{\Omega }_{i}}=\left[ \begin{matrix} {{r}_{1i}} & {{P}_{i}}{{{\hat{A}}}_{di}} & 0 & 0 & 0 & 0 & {{P}_{i}}{{{\hat{D}}}_{i}} \\ * & -X & 0 & 0 & 0 & 0 & 0 \\ * & * & {{r}_{2i}} & {{R}_{i}}{{A}_{d}}_{i} & {{R}_{i}}{{B}_{i}} & {{R}_{i}}{{D}_{i}} & 0 \\ * & * & * & -Z & 0 & 0 & 0 \\ * & * & * & * & -{{\gamma }^{2}}{{I}_{m}} & 0 & 0 \\ * & * & * & * & * & -{{\gamma }^{2}}{{I}_{q}} & 0 \\ * & * & * & * & * & * & -{{\gamma }^{2}}{{I}_{q}} \\ \end{matrix} \right] \\ & {{r}_{1i}}={{P}_{i}}({{{\hat{A}}}_{i}}-{{{\hat{L}}}_{i}}{{{\hat{C}}}_{i}})+{{({{{\hat{A}}}_{i}}-{{{\hat{L}}}_{i}}{{{\hat{C}}}_{i}})}^{\text{T}}}{{P}_{i}}+{{I}_{n+w+q}}+ \\ & {{\varepsilon }_{1i}}{{P}_{i}}{{{\hat{M}}}_{i}}\hat{M}_{i}^{\text{T}}{{P}_{i}}+X+\sum\limits_{j=1}^{s}{{{{\hat{\pi }}}_{ij}}{{P}_{j}}}+ \\ & \sum\limits_{j=1,j\ne i}^{s}{(\frac{\kappa _{ij}^{2}}{4}{{\lambda }_{ij}}{{I}_{n+w+q}}+}\lambda _{ij}^{-1}{{({{P}_{j}}-{{P}_{i}})}^{2}}) \\ & {{r}_{2i}}={{R}_{i}}{{A}_{i}}+A_{i}^{\text{T}}{{R}_{i}}+Z+\sum\limits_{j=1}^{s}{{{{\hat{\pi }}}_{ij}}{{R}_{j}}}+ \\ & \sum\limits_{j=1,j\ne i}^{s}{(\frac{\kappa _{ij}^{2}}{4}{{\mu }_{ij}}{{I}_{n}}+}\mu _{ij}^{-1}{{({{R}_{j}}-{{R}_{i}})}^{2}})+ \\ & {{\varepsilon }_{2i}}{{R}_{i}}{{M}_{i}}M_{i}^{\text{T}}{{R}_{i}}+\varepsilon _{1i}^{-1}\hat{N}_{i}^{\text{T}}{{{\hat{N}}}_{i}}+\varepsilon _{2i}^{-1}N_{i}^{\text{T}}{{N}_{i}} \\ & \eta ={{\left[ \begin{matrix} \zeta & \zeta (t-\tau ) & x & x(t-\tau ) & \varpi \\ \end{matrix} \right]}^{\text{T}}} \\ \end{align}$
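The Schur complement step invoked below is the standard fact (restated here for readability; it is not spelled out in the original): for symmetric blocks $\mathcal{A}$ and $\mathcal{C}$,
$$\left[\begin{array}{cc}\mathcal{A} & \mathcal{B}\\ \mathcal{B}^{\rm T} & \mathcal{C}\end{array}\right]<0\ \Longleftrightarrow\ \mathcal{C}<0\ \ \text{and}\ \ \mathcal{A}-\mathcal{B}\mathcal{C}^{-1}\mathcal{B}^{\rm T}<0$$
Applied block by block to (15), it moves the quadratic terms such as $\varepsilon_{1i}P_i\hat M_i\hat M_i^{\rm T}P_i$, $\lambda_{ij}^{-1}(P_j-P_i)^2$, $\varepsilon_{2i}R_iM_iM_i^{\rm T}R_i$, and $\mu_{ij}^{-1}(R_j-R_i)^2$ out of the diagonal blocks of $\Omega_i$ and into the off-diagonal blocks of (15), so that the resulting condition is linear in $P_i$, $R_i$, $Y_i$, $X$, and $Z$.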
Noting that $P_i\hat L_i=Y_i$ and taking the Schur complement of (15), we obtain $W<0$, i.e., $\ell V+{\pmb\zeta}^{\rm T}{\pmb\zeta}-\gamma^2{\pmb\varpi}^{\rm T}{\pmb\varpi}<0$. By Dynkin's formula,
$\begin{array}{l}{\rm E}\left\{ {V({\pmb \zeta},{\pmb x},i)} \right\} - {\rm E}\left\{ {V({{\pmb \zeta} _0},{{\pmb x}_0},{r_0})} \right\}+ \\ {\rm E}\displaystyle\int_0^\infty {{{\pmb \zeta}^{\rm T}}(\theta ){\pmb \zeta} (\theta ){\rm d}\theta -{\rm E}} \displaystyle\int_0^\infty {{\gamma ^2}{{\pmb \varpi}^{\rm T}}(\theta ){\pmb \varpi} (\theta ){\rm d}\theta } <0\end{array}$
where ${\pmb\zeta}_0$, ${\pmb x}_0$, and $r_0$ are the corresponding initial values. Therefore,
$\begin{array}{l}{\rm E}\displaystyle\int_0^\infty {{{\pmb \zeta} ^{\rm T}}(\theta ){\pmb \zeta} (\theta ){\rm d}\theta - {\rm E}}\displaystyle\int_0^\infty{{\gamma ^2}{{\pmb \varpi} ^{\rm T}}(\theta ){\pmb \varpi} (\theta ){\rm d}\theta }< \\~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{\mathop{\rm E}\nolimits} V({{\pmb \zeta} _0},{{\pmb x} _0},{r_0})\end{array}$
By Definition 1, this gives
$$\left[\mathrm{E}\int_{0}^{\infty}{\pmb\zeta}^{\rm T}(\theta){\pmb\zeta}(\theta)\,{\rm d}\theta\right]^{\frac{1}{2}}\le\left[\gamma^{2}\left\|{\pmb\varpi}\right\|_{2}^{2}+V({\pmb\zeta}_{0},{\pmb x}_{0},r_{0})\right]^{\frac{1}{2}}$$
(21) Therefore, combining Definitions 2 and 3, system (13) is robustly stochastically stable with disturbance attenuation level $\gamma$.
Remark 3. A Lyapunov-Krasovskii functional is used to handle the stability proof in the presence of the delayed terms: its integral terms bring the delayed terms of the system, together with the state, into the linear matrix inequality, so that Lyapunov stability theory guarantees that the overall state satisfies the requirement of Definition 2.
By Theorem 1, system (4) is a robust observer for system (3) and yields estimates of the system state, the actuator fault, and the sensor fault. The design algorithm is as follows:
1) Compute the matrices $T_i$ and $Q_i$ from (5) and (6);
2) Solve the convex optimization problem (15); if it is feasible, obtain $\hat L_i=P_i^{-1}Y_i$ and compute $K_i$ and $\Phi_i$ from (14);
3) Substitute $K_i$ into (10) and (11) to obtain $N_i$ and $L_i$.
All coefficient matrices of the observer are now available, so observer (4) can be implemented. The system state and sensor fault are recovered from $\hat{\pmb x}=[\,I_n\ \ 0_{n\times w}\,]\hat{\bar{\pmb x}}$ and $\hat{\pmb f}_s=[\,0_{w\times n}\ \ I_w\,]\hat{\bar{\pmb x}}$, and $\hat{\pmb f}_a$ is obtained by the online adaptation law in the third equation of (4). A schematic numerical sketch of steps 1)-3) is given below.
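The following sketch (an editorial addition) shows the bookkeeping of steps 1)-3) once the LMI problem (15) has been solved by an SDP solver (e.g., CVXPY or YALMIP) so that numerical $P_i$ and $Y_i$ are available; it implements only (6), (14), (10), and (11), not the LMI itself.

```python
import numpy as np

# Reconstruct the observer gains of (4) from a solved instance of (15).
# P, Y come from the SDP solver; E, C_bar, A_bar and n, w, q, p are as in Section 1.
def observer_gains(P, Y, E, C_bar, A_bar, n, w, q, p):
    S = np.vstack([E, C_bar])
    S_pinv = np.linalg.pinv(S)
    T = S_pinv @ np.vstack([np.eye(n), np.zeros((p, n))])    # (6)
    Q = S_pinv @ np.vstack([np.zeros((n, p)), np.eye(p)])    # (6)
    L_hat = np.linalg.solve(P, Y)                            # \hat L_i = P_i^{-1} Y_i
    K   = L_hat[:n + w, :]                                   # (14), upper block row
    Phi = L_hat[n + w:, :]                                   # (14), lower block row
    N = T @ A_bar - K @ C_bar                                # (10)
    L = K + N @ Q                                            # (11)
    return T, Q, N, L, Phi
```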
Remark 4 [13]. The observer (4) is mode-dependent: when the system jumps to a mode, the observer switches to that mode accordingly. Moreover, the observer depends on the transition rates $\pi_{ij}$, which allows it to handle the effects of the jumps. Observer (4) therefore keeps estimating the system state, the actuator fault, and the sensor fault under mode switching.
Remark 5. The sensor-fault estimation idea used here is similar to that of [16] in that both rely on a descriptor-system formulation, but the observer design techniques differ. Reference [16] brings the sensor fault into a descriptor system, designs a sliding-mode observer for it, suppresses the sensor fault through the sliding-mode control law, and then estimates the augmented state (which contains the sensor fault). In contrast, this paper designs an adaptive observer for the descriptor system; rather than suppressing the sensor fault, the observer estimates the system state and the sensor fault and, in addition, adapts the actuator-fault estimate online. Compared with [16], the present design has the following advantages: 1) it handles a class of time-delay Markovian jump systems with parameter uncertainties and uncertain transition probabilities, whereas [16] assumes that the transition probabilities are exactly known, which is restrictive; 2) it estimates the state, the actuator fault, and the sensor fault simultaneously, whereas [16] does not estimate actuator faults; 3) because [16] estimates the state and suppresses the sensor fault with a sliding-mode observer, the upper bound of the sensor fault must be known in advance, whereas the present design needs no such bound.
3. Simulation Studies
3.1 Numerical example
To verify the effectiveness of the proposed method, consider a two-mode numerical time-delay Markovian jump system of the form (1) with the following parameters:
$${A_1}= \left[{\begin{array}{*{20}{c}}{ - 5}&0&1\\0&{ - 7.5}&0\\2&0&{ - 5}\end{array}} \right],{A_2} = \left[{\begin{array}{*{20}{c}}{ - 6}&0&{1.1}\\0&{ - 8}&0\\0&0&{ - 5}\end{array}} \right] $$ $${A_{d1}} = \left[{\begin{array}{*{20}{c}}{0.2}&0&{0.1}\\{0.1}&0&0\\0&{0.1}&0\end{array}} \right],{B_1} = \left[{\begin{array}{*{20}{c}}1\\0\\1\end{array}} \right]$$ $${A_{d2}} = \left[{\begin{array}{*{20}{c}}{0.1}&0&{0.05}\\{0.05}&0&0\\0&{0.05}&0\end{array}} \right],{B_2} = \left[{\begin{array}{*{20}{c}}{0.5}\\0\\{0.5}\end{array}} \right]$$ $${D_1} = \left[{\begin{array}{*{20}{c}}{0.2}\\{0.1}\\{0.1}\end{array}} \right],{D_2} = \left[{\begin{array}{*{20}{c}}{0.3}\\{0.05}\\{0.1}\end{array}} \right]$$ $${C_1} = {C_2} = \left[{\begin{array}{*{20}{c}}1&1&0\\0&1&0\end{array}} \right]$$$${G_1} = {G_2} = \left[{\begin{array}{*{20}{c}}{0.1}\\{ - 0.3}\end{array}} \right],{M_1} = {M_2} = \left[{\begin{array}{*{20}{c}}{0.1}\\{0.2}\\{0.1}\end{array}} \right]$$ $${N_1} = {N_2} = \left[{\begin{array}{*{20}{c}}{0.1}&{0.2}&{0.2}\end{array}} \right]$$$${F_1}(t) = {F_2}(t) = \sin (t)$$
The estimated transition probability matrix is $\hat\Pi=\left[\begin{array}{cc}-0.4 & 0.4\\ 0.3 & -0.3\end{array}\right]$, with $\kappa_{12}=\kappa_{21}=1$, $\lambda_{12}=\lambda_{21}=\varepsilon_{11}=\varepsilon_{12}=\varepsilon_{21}=\varepsilon_{22}=\mu_{12}=\mu_{21}=1$, and the delay is $\tau=3\,{\rm s}$. The actuator fault is set to $f_a=\sin(5t)+{\rm e}^{-2t}+2\cos(t)$ and the sensor fault to $f_s=\sin(t)+2\cos(5t)$. The Markov process has two modes, $S=\{1,2\}$.
In the simulation the initial conditions are ${\pmb x}_0=[\,3\ \ -2\ \ 2\,]^{\rm T}$, ${\pmb z}_0=[\,0\ \ 0\ \ 2\,]^{\rm T}$, $r_0=1$, and ${\pmb\phi}(t)=[\,1\ \ 0\ \ 0\,]^{\rm T}$ for $t\in[-3,\ 0]$. The state estimates are shown in Figs. 1-3, the actuator-fault estimate in Fig. 4, the sensor-fault estimate in Fig. 5, and the switching signal of the Markovian jump system in Fig. 6. Figs. 1-5 show that the proposed method estimates the state, the actuator fault, and the sensor fault accurately, which demonstrates its feasibility.
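For readers who wish to set up such a simulation, the sketch below (an editorial illustration, not the code used to produce the figures) propagates observer (4) by forward Euler for a single fixed mode; the gain matrices are assumed to have been obtained from Theorem 1, the delayed estimate is taken from a history buffer, and the pre-history of the estimate is taken as zero.

```python
import numpy as np

# Forward-Euler propagation of observer (4) for one fixed mode (mode switching omitted).
# N, L, T, Q, Phi are the designed gains; B, D, A_d, C_bar are the plant matrices;
# y_fun(t) and u_fun(t) return the measured output and the input at time t.
def simulate_observer(N, L, T, Q, Phi, B, D, A_d, C_bar,
                      y_fun, u_fun, tau, dt, t_end, z0, fa0):
    steps = int(t_end / dt)
    d = int(round(tau / dt))                      # delay expressed in samples
    z = np.asarray(z0, dtype=float).copy()
    fa_hat = np.asarray(fa0, dtype=float).copy()
    xbar_hist = np.zeros((steps + 1, z.size))     # history of \hat{\bar x} for the delay term
    out = []
    for k in range(steps):
        t = k * dt
        y, u = y_fun(t), u_fun(t)
        xbar_hat = z + Q @ y                      # second equation of (4)
        xbar_hist[k] = xbar_hat
        xbar_del = xbar_hist[k - d] if k >= d else np.zeros(z.size)
        x_del = xbar_del[:A_d.shape[1]]           # delayed state estimate \hat x(t - tau)
        y_hat = C_bar @ xbar_hat
        z_dot  = N @ z + L @ y + T @ (B @ u) + T @ (D @ fa_hat) + T @ (A_d @ x_del)
        fa_dot = Phi @ (y - y_hat)                # third equation of (4)
        z, fa_hat = z + dt * z_dot, fa_hat + dt * fa_dot
        out.append((t, xbar_hat, fa_hat.copy()))
    return out
```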
3.2 Practical example
To further verify the proposed design and its practicality, a practical example is simulated next. Consider a linearized model of the F-404 aircraft engine, in which the matrix $A$ is
$A(t)=\left[ \begin{matrix} -1.46&0&2.428 \\ 0.1643+0.5\beta (t)&-0.4+\beta (t)&-0.3788 \\ 0.3107&0&-2.23 \\ \end{matrix} \right]$
$\beta(t)$ is an uncertain model parameter, assumed to be governed by a Markov process with $N=2$ modes:
$\beta (t) = \left\{ \begin{array}{l} - 1,\; \; \; \; r(t) = 1\\ - 2,\; \; \; \; r(t) = 2\end{array} \right.$
The remaining matrices are set as follows:
$${B_1} = \left[{\begin{array}{*{20}{c}}0\\1\\{0.3}\end{array}} \right],{B_2} = \left[{\begin{array}{*{20}{c}}{ - 1}\\{0.2}\\{ - 2}\end{array}} \right],{D_1} = \left[{\begin{array}{*{20}{c}}0\\{ - 0.1}\\0\end{array}} \right]$$ $${D_2} = \left[{\begin{array}{*{20}{c}}{ - 0.1}\\0\\{ - 0.3}\end{array}} \right],{C_1} = {C_2} = \left[{\begin{array}{*{20}{c}}1&0&0\\1&0&1\end{array}} \right]$$ $${G_1} = {G_2} = \left[{\begin{array}{*{20}{c}}{ - 1}\\1\end{array}} \right]$$ $${A_{d1}} = \left[{\begin{array}{*{20}{c}}{0.1}&0&{0.1}\\{0.1}&0&0\\0&{0.1}&{0.2}\end{array}} \right] $$ $${A_{d2}} = \left[{\begin{array}{*{20}{c}}{0.1}&0&{0.05}\\{0.03}&0&0\\0&{0.05}&{0.1}\end{array}} \right]$$ $${M_1} = {M_2} = \left[{\begin{array}{*{20}{c}}{0.1}\\0\\{0.3}\end{array}} \right],{F_1}(t) = {F_2}(t) = \sin (t)$$ $${N_1} = {N_2} = \left[{\begin{array}{*{20}{c}}{0.1}&{0.3}&{0.1}\end{array}} \right]$$
The estimated transition probability matrix is $\hat\Pi=\left[\begin{array}{cc}-3 & 3\\ 4 & -4\end{array}\right]$, and the remaining parameters are chosen as in Example 1.
From these parameters it is easy to see that ${\rm rank}(D_1)\ne{\rm rank}(C_1D_1)=0$, which violates the matching condition of the fault-reconstruction approach based on a sliding-mode observer and the equivalent output injection signal [30]; hence the conventional sliding-mode-observer-based method is not applicable to this system. Moreover, since the system considered here is stochastic, its output is also stochastic, so the algebraic fault-reconstruction method [31], which involves derivatives of the output, is not feasible either. Compared with the design of [16], which requires the sensor fault to be bounded with a known bound, and that of [20], which additionally requires its first derivative to be bounded with a known bound, the present design only requires $\dot{\pmb f}_a\in L_2[0,\ \infty)$, so it applies to a wider range of practical situations.
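As a quick check of the rank claim above (an editorial two-liner), with the given $D_1$ and $C_1$:

```python
import numpy as np

# rank(D_1) = 1 but rank(C_1 D_1) = 0, so the matching condition required by the
# sliding-mode-observer fault reconstruction approach [30] fails for this model.
D1 = np.array([[0.0], [-0.1], [0.0]])
C1 = np.array([[1.0, 0.0, 0.0],
               [1.0, 0.0, 1.0]])
print(np.linalg.matrix_rank(D1), np.linalg.matrix_rank(C1 @ D1))   # prints: 1 0
```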
To demonstrate the advantages of the proposed method, the actuator and sensor faults are chosen as $f_a=0.3\sin(t)+0.5\cos(3t)$ and $f_s=\sin(2t)$, respectively.
In the simulation the initial conditions are ${\pmb x}_0=[\,1\ \ 1\ \ 1\,]^{\rm T}$, ${\pmb z}_0=[\,1\ \ 2\ \ 0.3\ \ 0\,]^{\rm T}$, $r_0=1$, and ${\pmb\phi}(t)=[\,1\ \ 0\ \ 0\,]^{\rm T}$ for $t\in[-3,\ 0]$. The state estimates are shown in Figs. 7-9, the actuator- and sensor-fault estimates in Figs. 10 and 11, and the Markov switching signal in Fig. 12. Figs. 7-11 show that the proposed method estimates the state, the actuator fault, and the sensor fault accurately, and, as discussed in Remark 5, it has advantages over the methods of [16, 20].
4. Conclusion
This paper has studied the simultaneous estimation of actuator and sensor faults for Markovian jump systems with parameter uncertainties and time delays under an uncertain transition probability matrix. A descriptor system is first constructed, and an adaptive observer is then designed for it so that the actuator and sensor faults can be estimated simultaneously. Sufficient conditions for the design are given in terms of linear matrix inequalities. Simulation studies demonstrate the feasibility of the method.