虹膜呈现攻击检测综述

王财勇 刘星雨 房美玲 赵光哲 何召锋 孙哲南

林圣琳, 李伟, 杨明, 马萍. 考虑相关性的多元输出仿真模型验证方法. 自动化学报, 2019, 45(9): 1666-1678. doi: 10.16383/j.aas.c180456
引用本文: 王财勇, 刘星雨, 房美玲, 赵光哲, 何召锋, 孙哲南. 虹膜呈现攻击检测综述. 自动化学报, 2024, 50(2): 241−281 doi: 10.16383/j.aas.c230109
LIN Sheng-Lin, LI Wei, YANG Ming, MA Ping. Multivariate Validation Method Under Correlation for Simulation Model. ACTA AUTOMATICA SINICA, 2019, 45(9): 1666-1678. doi: 10.16383/j.aas.c180456
Citation: Wang Cai-Yong, Liu Xing-Yu, Fang Mei-Ling, Zhao Guang-Zhe, He Zhao-Feng, Sun Zhe-Nan. A survey on iris presentation attack detection. Acta Automatica Sinica, 2024, 50(2): 241−281 doi: 10.16383/j.aas.c230109

虹膜呈现攻击检测综述

doi: 10.16383/j.aas.c230109
基金项目: 国家自然科学基金(62106015, 62176025, 62276263), 北京市自然科学基金(4242018), 北京市科技新星计划(20230484444), 北京市科协青年人才托举工程(BYESS2023130), 北京建筑大学“建大英才”培养工程(JDYC20220819)资助
    作者简介:

    王财勇:北京建筑大学电气与信息工程学院讲师. 2020年获得中国科学院自动化研究所博士学位. 主要研究方向为生物特征识别, 计算机视觉与模式识别. E-mail: wangcaiyong@bucea.edu.cn

    刘星雨:北京建筑大学电气与信息工程学院硕士研究生. 2020年获得浙江师范大学学士学位. 主要研究方向为生物特征识别. E-mail: liuxingyu@stu.bucea.edu.cn

    房美玲:德国达姆施塔特弗劳恩霍夫计算机图形研究所研究员. 2023年获得德国达姆施塔特工业大学博士学位. 主要研究方向为机器学习, 计算机视觉, 生物特征识别. E-mail: meiling.fang@igd.fraunhofer.de

    赵光哲:北京建筑大学电气与信息工程学院教授. 2012年获得日本名古屋大学博士学位. 主要研究方向为计算机视觉与图像处理, 模式识别, 人工智能. E-mail: zhaoguangzhe@bucea.edu.cn

    何召锋:北京邮电大学人工智能学院教授. 2010年获得中国科学院自动化研究所博士学位. 主要研究方向为生物特征识别, 视觉计算, 智能博弈决策, AI+IC协同优化. E-mail: zhaofenghe@bupt.edu.cn

    孙哲南:中国科学院自动化研究所研究员, 中国科学院大学人工智能学院教授. 2006年获得中国科学院自动化研究所博士学位. 主要研究方向为生物特征识别, 模式识别, 计算机视觉. 本文通信作者. E-mail: znsun@nlpr.ia.ac.cn

  • 中图分类号: Y

A Survey on Iris Presentation Attack Detection

Funds: Supported by National Natural Science Foundation of China (62106015, 62176025, 62276263), Beijing Natural Science Foundation (4242018), Beijing Nova Program (20230484444), Young Elite Scientist Sponsorship Program by BAST (BYESS2023130), and Pyramid Talent Training Project of BUCEA (JDYC20220819)
    Author Bio:

    WANG Cai-Yong Lecturer at School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture. He received his Ph.D. degree from the Institute of Automation, Chinese Academy of Sciences in 2020. His research interest covers biometrics, computer vision, and pattern recognition

    LIU Xing-Yu Master student at School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture. She received her bachelor degree from Zhejiang Normal University in 2020. Her main research interest is biometrics

    FANG Mei-Ling Researcher at Fraunhofer Institute for Computer Graphics Research IGD, Darmstadt, Germany. She received her Ph.D. degree from Technical University of Darmstadt, Germany in 2023. Her research interest covers machine learning, computer vision, and biometrics

    ZHAO Guang-Zhe Professor at School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture. He received his Ph.D. degree from Nagoya University, Japan in 2012. His research interest covers computer vision, image processing, pattern recognition, and artificial intelligence

    HE Zhao-Feng Professor at School of Artificial Intelligence, Beijing University of Posts and Telecommunications. He received his Ph.D. degree from the Institute of Automation, Chinese Academy of Sciences in 2010. His research interest covers biometrics, visual computing, intelligent game decision-making, and AI+IC collaborative optimization

    SUN Zhe-Nan Professor at Institute of Automation, Chinese Academy of Sciences, and also at the School of Artificial Intelligence, University of Chinese Academy of Sciences. He received his Ph.D. degree from the Institute of Automation, Chinese Academy of Sciences in 2006. His research interest covers biometrics, pattern recognition, and computer vision. Corresponding author of this paper

  • 摘要: 虹膜识别技术因唯一性、稳定性、非接触性、准确性等特性广泛应用于各类现实场景中. 然而, 现有的许多虹膜识别系统在认证过程中仍然容易遭受各种攻击的干扰, 导致安全性方面可能存在风险隐患. 在不同的攻击类型中, 呈现攻击(Presentation attacks, PAs)由于出现在早期的虹膜图像获取阶段, 且形式变化多端, 因而虹膜呈现攻击检测(Iris presentation attack detection, IPAD)成为虹膜识别技术中首先需要解决的安全问题之一, 得到了学术界和产业界的广泛重视. 本综述是目前已知第一篇虹膜呈现攻击检测领域的中文综述, 旨在帮助研究人员快速、全面地了解该领域的相关知识以及发展动态. 总体来说, 本文对虹膜呈现攻击检测的难点、术语和攻击类型、主流方法、公共数据集、比赛及可解释性等方面进行全面归纳. 具体而言, 首先介绍虹膜呈现攻击检测的背景、虹膜识别系统现存的安全漏洞与呈现攻击的目的. 其次, 按照是否使用额外硬件设备将检测方法分为基于硬件与基于软件的方法两大类, 并在基于软件的方法中按照特征提取的方式作出进一步归纳和分析. 此外, 还整理了开源方法、可申请的公开数据集以及概括了历届相关比赛. 最后, 对虹膜呈现攻击检测未来可能的发展方向进行了展望.
    目前建模与仿真技术已成为人们认识和改造现实世界的重要手段.由于仿真是一种基于模型的活动, 仿真模型是否可信成为用户十分关注的问题.验证是仿真模型可信度评估的重要步骤[1], 包含概念模型验证和结果验证.仿真结果验证最直接而有效的方法是, 在相同输入条件下度量仿真输出与参考输出数据之间的一致性程度.然而, 针对复杂系统建立的仿真模型往往具有不确定性、多元异类(动、静态)输出, 且各输出变量间可能存在相关性, 此时, 若仍采用传统仿真结果验证方法将导致验证结果不准确.因此, 考虑相关性及不确定性的多元输出仿真结果验证是需要重点研究的问题.

    由于仿真模型和参考系统的输入及模型参数通常含有不确定性, 加之仿真模型运行和实际实验过程中引入的不确定性因素和误差, 导致仿真模型和参考系统的输出为随机变量或不确定的时间序列[2].考虑不确定性的影响, 静态输出结果验证方法的研究多数集中在概率框架, 形成了以参数估计[3-5]、假设检验[6-7]、贝叶斯因子[8-10]、证据距离[11]、概率分布差异法[12-13]为代表的5种解决方案.其中, Oberkampf等针对参考数据稀疏的情况, 采用插值和回归分析的方法估计参考输出的均值和标准差, 并与仿真输出的相应统计量进行对比, 得到置信区间形式的验证结果[5]; 同时, 假设检验和贝叶斯因子方法在静态仿真输出结果验证中的应用日趋完善, Jiang等将贝叶斯区间假设检验应用于模型的分等级评估中[14]; 考虑到固有和认知不确定性的影响, 文献[11]采用证据理论对动静态输出进行描述, 并引入证据距离度量仿真和参考输出的一致性; Ferson等提出了概率分布差异与u-pooling相结合的方法, 用于处理稀疏参考数据情况下的单输出仿真结果验证问题[12], 该方法以其原理简单、可操作性强等优点得到了广泛应用.
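文献[12]的概率分布差异思想可以概括为: 在同一坐标轴上比较仿真与参考输出的经验CDF, 以两条曲线之间的面积作为差异度量. 下面给出一个基于 NumPy 的极简示意(仅为说明原理, 并非文献[12]的原始实现):

```python
import numpy as np

def area_metric(sim, ref):
    """两组样本经验CDF之间的面积差异, 取值范围 [0, +inf), 0 表示两分布一致."""
    sim = np.sort(np.asarray(sim, dtype=float))
    ref = np.sort(np.asarray(ref, dtype=float))
    # 在两组样本点的并集上, 逐段累加 |F_s(x) - F_r(x)| 与区间宽度的乘积
    grid = np.union1d(sim, ref)
    f_s = np.searchsorted(sim, grid, side="right") / sim.size
    f_r = np.searchsorted(ref, grid, side="right") / ref.size
    widths = np.diff(grid)
    return float(np.sum(np.abs(f_s[:-1] - f_r[:-1]) * widths))

rng = np.random.default_rng(0)
d_same = area_metric(rng.normal(0, 1, 2000), rng.normal(0, 1, 2000))
d_diff = area_metric(rng.normal(0, 1, 2000), rng.normal(1, 1, 2000))
```

当两组样本来自同一分布时, 面积差异接近 0; 对形状相同、仅均值平移的两个分布, 面积差异近似等于均值之差.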

    考虑不确定性的同时, 复杂仿真模型可能存在多个输出变量的情况, 且各变量间可能存在函数或相关关系.在单变量静态输出结果验证方法的基础上, 针对多元输出仿真结果验证方法的研究取得了一定的进展.例如, Rebba等最先提出了假设检验、贝叶斯因子与协方差相结合的多元输出结果验证方法, 并引入了非正态验证数据转化为正态数据的方法以满足假设检验的条件[15]; Jiang等将区间贝叶斯因子方法进行推广, 将其应用至多元静态输出结果验证问题中[16]; Zhan等提出了基于概率主成分分析(Probabilistic principal component analysis, PPCA)与贝叶斯因子相结合的方法, 用于解决带有不确定性和相关性的多元动态输出结果验证问题[17].同时, Li等将概率积分变换(Probability integral transformation, PIT)与概率分布差异法相结合, 将多变量累积概率分布转化为单变量概率分布的形式, 采用概率分布差异法计算仿真和参考输出累积概率分布的差异[18]; Zhao等分别计算仿真和参考输出与相应总体分布的马氏距离, 进而得到仿真和参考输出马氏距离的累积概率分布, 并应用概率分布差异法计算两者的差异[19].

    从单变量验证到多变量验证, 各变量间的关系是研究的重点, 现有验证方法存在验证结果不够准确和全面的问题.利用传统单变量验证(或结合多种预处理方法)对各变量进行分别验证再综合, 一是对带有相关性的多个验证结果进行加权综合导致最终验证结果不够准确; 二是未考虑多变量间相关性的验证将导致验证结果不全面.此外, 对复杂仿真模型进行结果验证, 其输出变量间的关系不够明确.此时需首先明确输出变量间的关系(独立/函数/相关关系)再进行验证, 现有的多变量验证方法仅适用于变量间关系已知的情况.同时, 多变量验证方法均利用协方差矩阵度量变量间的相关性, 这对非线性等其他相关关系将不再适用, 导致多变量间相关性度量不够准确.

    为解决上述问题, 提出基于变量选择和概率分布差异相结合的多元输出仿真结果验证方法, 对具有不确定性的多元异类输出进行联合验证.第1节对多元输出结果验证问题进行描述与分析, 指出现有方法存在的问题; 第2节给出多元静、动态输出的相关性分析及变量选择方法; 第3节提出基于数据特征提取和联合概率分布差异的多元输出仿真结果验证方法; 第4节给出应用实例与对比实验结果; 第5节给出本文结论.

    用$ {S} $表示系统, $ {{S}_{{s}}} $和$ {{S}_{{r}}} $分别表示仿真模型和参考系统, 用$ {{\pmb{A}}_{{s}}} = \{ {\pmb{a}}_{{s1}}, {\pmb{a}}_{{s2}}, \cdots, {\pmb{a}}_{{s}{p}} \} $和$ {{\pmb{A}}_{{r}}} = \{ {\pmb{a}}_{{r1}}, {\pmb{a}}_{{r2}}, \cdots, {\pmb{a}}_{{r}{p}} \} $分别表示仿真模型和参考系统的输入变量集, $ {p} $为输入变量个数, $ {\pmb{Y}}\!_{{s}} = \{ {\pmb{y}}_{{s1}}, {\pmb{y}}_{{s2}}, \cdots, {\pmb{y}}_{{s}{m}} \} $和$ {\pmb{Y}}\!_{{r}} = \{ {\pmb{y}}_{{r1}}, {\pmb{y}}_{{r2}}, \cdots, {\pmb{y}}_{{r}{m}} \} $分别表示仿真模型和参考系统的输出变量集, m为输出变量个数, 多元异类输出集$ {\pmb{Y}}\!_{{s}} $、$ {\pmb{Y}}\!_{{r}} $中的静态输出表示为随机变量, 动态输出表示为多个时间序列集合的形式.假设$ {\pmb{y}}_{{s}{i}} $、$ {\pmb{y}}_{{s}{j}} $分别为仿真模型的某一动态和静态输出, 则有

    $ \begin{align} & {{{\pmb{y}}}_{{s}i}} = \left[ \begin{array}{*{35}{l}} y_{{s}i}^{1}({{t}_{1}}) & y_{{s}i}^{1}({{t}_{2}}) & \cdots & y_{{s}i}^{1}({{t}_{N}}) \\ y_{{s}i}^{2}({{t}_{1}}) & y_{{s}i}^{2}({{t}_{2}}) & \cdots & y_{{s}i}^{2}({{t}_{N}}) \\ \ \vdots & \ \vdots & \ddots & \ \vdots \\ y_{{s}i}^{n}({{t}_{1}}) & y_{{s}i}^{n}({{t}_{2}}) & \cdots & y_{{s}i}^{n}({{t}_{N}}) \\ \end{array} \right] \\ & \ \ \ \ \ \ \ \ \ {{{\pmb{y}}}_{{s}j}} = {{\left[ y_{{s}j}^{1}, y_{{s}j}^{2}, \cdots , y_{{s}j}^{n} \right]}^{\rm T}} \end{align} $

    (1)

    式中, $ i, j\in [1, m] $, 且$ i\ne j $; $ \it N $为时间序列的长度; $ {{t}_{1}}, {{t}_{2}}, \cdots , {{t}_{N}} $表示时间序列的时刻点; 考虑不确定性的影响, 需要进行多次仿真和实际实验, $ \it n $为重复实验次数.用$ C\left( {{{\pmb{Y}}}\!_{{s}}}, {{{\pmb{Y}}}\!_{{r}}} \right) $表示在$ {{{\pmb{A}}}_{{s}}} = {{{\pmb{A}}}_{{r}}} $时, $ {{{\pmb{Y}}}\!_{{s}}} $相对于$ {{{\pmb{Y}}}\!_{{r}}} $的一致性程度, 且$ C\left( {{{\pmb{Y}}}\!_{{s}}}, {{{\pmb{Y}}}\!_{{r}}} \right)\in \left( 0, 1 \right] $.当$ {{{\pmb{Y}}}\!_{{s}}} $与$ {{{\pmb{Y}}}\!_{{r}}} $完全一致, 则有$ C\left( {{{\pmb{Y}}}\!_{{s}}}, {{{\pmb{Y}}}\!_{{r}}} \right) = 1 $; 当$ {{{\pmb{Y}}}\!_{{s}}} $相对于$ {{{\pmb{Y}}}\!_{{r}}} $的一致性程度越差, 表示仿真模型越不可信, 则有$ C\left( {{{\pmb{Y}}}\!_{{s}}}, {{{\pmb{Y}}}\!_{{r}}} \right)\to 0 $[1].

    假设$ {\pmb{Y}}\!_{{sJ}}^{{}}\in {{\bf {R}}^{{{n}_{{s}}}\times m}} $与$ {\pmb{Y}}\!_{{rJ}}^{{}}\in {{\bf {R}}^{{{n}_{{r}}}\times m}} $为多元仿真模型和参考系统的静态输出变量, $ {{n}_{{s}}} $、$ {{n}_{{r}}} $表示仿真和实际实验的重复运行次数.针对带有相关性的多元静态输出结果验证方法主要有:

    1) 基于假设检验和马氏距离相结合的方法.文献[16]给出基于似然比检验和马氏距离相结合的验证方法, 得到最终一致性结果.

    2) 基于主成分分析的方法[17].对$ {\pmb{Y}}\!_{{sJ}}^{{}} $和$ {\pmb{Y}}\!_{{rJ}}^{{}} $进行降维, 去除变量间相关性, 得到新的输出变量$ {\pmb{Y}}\!_{{sJ}}^{{new}} = \left[ y_{{sJ}1}^{{new}}, y_{{sJ2}}^{{new}}, \cdots, y_{{sJ}\eta }^{{new}} \right] $和$ {\pmb{Y}}\!_{{rJ}}^{{new}} = \left[ y_{{rJ}1}^{{new}}, y_{{rJ2}}^{{new}}, \cdots, y_{{rJ}\eta }^{{new}} \right] $, $ \eta \le m $为主成分的个数, 而后采用现有静态输出验证方法对若干主成分进行逐一验证并综合得到最终验证结果.

    3) 基于概率分布差异的方法[18].分别计算m维$ {\pmb{Y}}\!_{{sJ}}^{{}} $和$ {\pmb{Y}}\!_{{rJ}}^{{}} $的联合累积分布函数(Cumulative distribution function, CDF)并作差, 获得仿真和参考输出数据的差异, 得到$ \left[ 0, +\infty \right) $的误差度量结果.

    针对带有相关性的多元动态输出结果验证问题, 常用方法为基于数据特征和主成分分析相结合的方法[17].首先分别提取动态输出数据的特征矩阵, 而后采用基于主成分分析的多元静态输出验证方法获得最终验证结果.针对上述多元输出仿真结果验证方法进行分析, 存在以下问题需要进一步研究:

    1) 复杂仿真模型常存在多元输出变量间的相关或独立关系未知的情况, 目前方法均是在变量关系已知的前提下进行研究, 存在一定局限性;

    2) 利用主成分分析获取的多元输出变量的主成分是线性变换后的结果, 被提取主成分所代表的变量含义不够明确, 同时对多元输出变量进行降维将导致验证信息丢失, 使验证结果不够准确和全面;

    3) 采用协方差矩阵度量变量相关性, 需假设变量样本服从正态分布, 且仅能描述多元输出变量间的线性关系, 无法度量变量间非线性等其他相关关系, 进而导致变量间相关性度量不准确;

    4) 基于联合概率分布差异法可直接度量多元静态输出变量间的差异, 需要已知变量间的独立或相关关系, 同时, 处理多元动态输出存在局限, 得到的差异度量结果无法刻画仿真模型的可信度.

    为解决上述问题, 可采用基于变量选择和概率分布差异相结合的多元输出仿真结果验证方法, 考虑不确定性的影响, 对选取到具有相关性的多变量进行联合验证.首先, 引入变量选择方法分别对$ {{{\pmb{Y}}}\!_{{s}}} $、$ {{{\pmb{Y}}}\!_{{r}}} $进行相关性分析, 提取相关变量子集(又称相关变量组, 子集中各变量是相关的, 各子集中变量数的和为输出变量总数), 进而得到多个独立的变量子集; 同时, 提取相同变量子集中多变量的数据特征, 对于静态输出选取数据本身作为变量特征, 对于动态输出选取距离、形状以及频谱特征; 而后计算变量子集中多个变量关于某特征的联合CDF差异, 并将其转化为可信度; 最后将多个变量子集关于若干数据特征的一致性与多个动态输出均值曲线的一致性进行综合得到仿真模型可信度.

    为明确复杂仿真模型中多元输出变量间的独立或相关关系, 引入数据挖掘领域的相应方法对多变量进行相关性分析, 进而提取相关变量子集.本文仅考虑同种类型(静态或动态)输出变量间的相关性, 利用分形维数和互信息方法分别对静、动态输出变量进行相关性分析.

    对随机变量的相关性分析集中于Pearson相关系数, 它仅能度量变量的线性关系, 并对变量间强相关性较敏感, 其结果受奇异值的影响较大, 无法适应具有非线性、不确定性以及非正态分布的数据集.其他一些相关系数, 如Kendall系数、Spearman系数等虽可以描述非线性相关关系, 但却不能完整地刻画变量间的相关性结构.此外, 数据挖掘领域常用的变量选择方法, 如奇异值分解法(Singular value decomposition, SVD)、主成分分析法(Principal component analysis, PCA)、基于神经网络的方法(Neural networks, NN)、基于k-近邻的方法(K-nearest neighbor, KNN)、基于决策树的方法(Decision tree, DT)、基于贝叶斯网络的方法(Bayesian network, BN)以及基于分形维数的方法(Fractal dimension, FD)等, 具有不同的特点, 对其进行对比分析如表 1所示.

    表 1  常用变量选择方法对比
    Table 1  Comparison of general variable selection methods
    变量选择方法 | 是否为原变量集的子集 | 是否支持非线性相关关系 | 个体决策所占比例 | 是否需要训练样本集 | 运行速度与变量个数的关系
    SVD | 否 | 否 | - | 否 | 线性增长
    PCA | 否 | 否 | - | 否 | 线性增长
    KNN | 是 | 是 | - | 是 | 指数增长
    DT | 是 | 是 | - | 是 | 指数增长
    BN | 是 | 是 | - | 是 | 指数增长
    FD | 是 | 是 | - | 否 | 线性增长
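上文提到Pearson系数仅能度量线性关系, 而Spearman等秩相关可以捕捉单调的非线性关系, 但对非单调关系两者都会漏检. 下面用人为构造的数据作一个简单示意(基于 NumPy, 仅为说明该局限):

```python
import numpy as np

def pearson(x, y):
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    # Spearman 系数即秩变换后的 Pearson 系数
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return pearson(rx, ry)

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 5000)
y_mono = np.exp(x)    # 非线性但单调: Pearson 偏低, Spearman 接近 1
y_sym = x ** 2        # 非线性且非单调: 两者都接近 0, 相关性被漏检
r_p, r_s = pearson(x, y_mono), spearman(x, y_mono)
r_p2, r_s2 = pearson(x, y_sym), spearman(x, y_sym)
```

这也说明了正文引入分形维数等能度量"任意类型"相关关系的方法的动机.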

    由表 1可知, SVD和PCA方法得到的变量子集失去了其原有的含义, 且只能对具有线性相关性的变量集进行分析; 而基于机器学习的方法需要训练样本集作为支撑, 其运行速度受到变量个数的影响较大, 导致变量个数较多时运行速度较慢; 基于分形维数的方法不仅能够度量线性相关性, 还能度量非线性等其他相关关系, 具有不需要训练样本集和运行速度快等优点.因此, 本文引入基于分形维数[20]的方法对多元输出变量进行分析, 提取相关变量子集.假设$ {\pmb{Y}}\!_{{sJ}}^{{}} $、$ {\pmb{Y}}\!_{{rJ}}^{{}} $为仿真和参考多元静态输出变量集, 以$ y_{{sJ}}^{i} $, $ i\in \left[ 1, m \right] $为例给出$ {\pmb{Y}}\!_{{sJ}}^{{}} $的相关变量子集提取步骤如下.

    步骤1. 根据自相似性原理计算$ y_{{sJ}}^{i} $的局部固有维度$ pD\left( \cdot \right) $:

    $ \begin{equation} pD(y_{{sJ}}^{i})\equiv \frac{\partial \log \left( \sum\limits_{i}{C_{a, i}^{2}} \right)}{\partial \log \left( a \right)}, \ \ \ \ a\in \left[ {{a}_{1}}, {{a}_{2}} \right] \end{equation} $

    (2)

    式中, $ a $表示将$ y_{{sJ}}^{i} $划分成$ {{2}^{\upsilon }} $个相等大小区间的长度, $ \upsilon $为划分深度, $ {{C}_{a, i}} $表示$ y_{{sJ}}^{i} $中落入第i个区间的样本个数;

    步骤2. 设$ c = 1 $, 移除$ {\pmb{Y}}\!_{{sJ}} $中$ pD(y_{{sJ}}^{i})<\xi $的变量$ y_{{sJ}}^{i} $, $ \xi $为预定义的固有维度阈值, 排除的变量为独立变量, 并按照$ pD\left(\cdot \right) $大小将$ y_{{sJ}}^{i} $进行降序排列, 形成新变量集$ {\pmb{Y}}\!_{{sJ}}^{\prime} $, 其变量个数为$ {m}' $;

    步骤3. 计算$ pD\left( \left\{ y_{{sJ}}^{1} \right\} \right), pD\left( \left\{ y_{{sJ}}^{1}, y_{{sJ}}^{2} \right\} \right), \cdots $, 直到$ \left| pD\left( \left\{ y_{{sJ}}^{1}, \cdots , y_{{sJ}}^{k} \right\} \right)-pD\left( \left\{ y_{{sJ}}^{1}, \cdots , y_{{sJ}}^{k-1} \right\} \right) \right|<\xi \cdot pD\left( y_{{sJ}}^{k} \right) $, $ k = 1, 2, \cdots, {m}' $;

    步骤4. 若$ k = {m}' $且$ \left| pD\left( \left\{ y_{{sJ}}^{1}, \cdots , y_{{sJ}}^{k} \right\} \right)-pD\left( \left\{ y_{{sJ}}^{1}, \cdots , y_{{sJ}}^{k-1} \right\} \right) \right|\ge \xi \cdot pD\left( y_{{sJ}}^{k} \right) $, 则算法结束;

    步骤5. 设相关性变量超集$ \xi S{{G}_{c}} = \left\{ y_{{sJ}}^{1}, \cdots , y_{{sJ}}^{k} \right\} $, 并提取$ \xi S{{G}_{c}} $中的相关变量子集$ \xi {{G}_{c}} $和相关变量基$ \xi {{B}_{c}} $, 具体算法见文献[20], 并设循环变量$ j = k+1 $;

    步骤6. 若$ \left| pD\left( \xi {{B}_{c}}\bigcup \left\{ y_{{sJ}}^{j} \right\} \right)-pD\left( \xi {{B}_{c}} \right) \right|<\xi \cdot pD\left( y_{{sJ}}^{j} \right) $, 则执行下一步, 否则转至步骤8;

    步骤7. 对于$ \xi {{B}_{c}} $中的每个变量$ y_{{sJ}}^{b} $, 若$ \left| pD\left( \xi {{B}_{c}}\bigcup \left\{ y_{{sJ}}^{j} \right\} \right)-pD\left( \left( \xi {{B}_{c}}-\left\{ y_{{sJ}}^{b} \right\} \right)\bigcup \left\{ y_{{sJ}}^{j} \right\} \right) \right|<\xi \cdot pD\left( y_{{sJ}}^{b} \right) $和$ \left| pD\left( \xi {{B}_{c}}\bigcup \left\{ y_{{sJ}}^{b} \right\} \right)-pD\left( \left( \xi {{B}_{c}}-\left\{ y_{{sJ}}^{b} \right\} \right)\bigcup \left\{ y_{{sJ}}^{j} \right\} \right) \right|\ge \xi \cdot pD\left( y_{{sJ}}^{j} \right) $同时成立, 则将$ y_{{sJ}}^{j} $加入$ \xi {{G}_{c}} $;

    步骤8. 执行$ j\leftarrow j+1 $, 若$ j>{m}' $, 则转至下一步, 否则转至步骤6;

    步骤9. 移除$ {\pmb{Y}}_{{sJ}}^{\prime} $中$ \xi {{G}_{c}}-\xi {{B}_{c}} $的变量, 并输出相关变量子集$ \xi {{G}_{c}} $和相关变量基$ \xi {{B}_{c}} $;

    步骤10. 执行$ c\leftarrow c+1 $, 并转至步骤3.
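上述步骤的核心是局部固有维数 $ pD\left( \cdot \right) $ 的盒计数估计(式(2)): 将归一化数据按不同深度划分为等大小的盒子, 对 log-log 曲线作最小二乘拟合取斜率. 下面给出一个简化的关联分形维数估计示意(基于 NumPy; 划分深度范围为假设取值, 并非文献[20]的完整算法). 完全相关的二维数据固有维数约为 1, 独立数据约为 2:

```python
import numpy as np

def correlation_dimension(data, depths=range(2, 6)):
    """基于盒计数的关联分形维数简化估计.

    对归一化数据按深度 v 划分边长 a=1/2^v 的盒子, 统计每盒样本数 C,
    取 log(sum(C^2)) 对 log(a) 的最小二乘斜率作为固有维数估计.
    """
    data = np.asarray(data, dtype=float)
    if data.ndim == 1:
        data = data[:, None]
    lo, hi = data.min(axis=0), data.max(axis=0)
    data = (data - lo) / np.where(hi > lo, hi - lo, 1.0)  # 归一化到 [0, 1]
    logs_a, logs_s = [], []
    for v in depths:
        a = 1.0 / (2 ** v)                                # 盒子边长
        cells = np.minimum((data / a).astype(int), 2 ** v - 1)
        _, counts = np.unique(cells, axis=0, return_counts=True)
        logs_a.append(np.log(a))
        logs_s.append(np.log(np.sum(counts.astype(float) ** 2)))
    return float(np.polyfit(logs_a, logs_s, 1)[0])

rng = np.random.default_rng(2)
x = rng.uniform(size=20000)
d_line = correlation_dimension(np.column_stack([x, 2 * x + 1]))  # 完全相关
d_plane = correlation_dimension(rng.uniform(size=(20000, 2)))    # 相互独立
```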

    通过上述步骤提取$ {\pmb{Y}}\!_{{sJ}} $和$ {\pmb{Y}}\!_{{rJ}} $的相关变量子集$ {\pmb{G}}_{{sJ}}^{i} $、$ {\pmb{G}}_{{rJ}}^{j} $如下.

    $ \begin{equation} \left\{ \begin{array}{*{35}{l}} {\pmb{G}}_{{sJ}}^{i} = \left[ y_{{sJ}}^{i1}, y_{{sJ}}^{i2}, \cdots , y_{{sJ}}^{i{{m}_{{s}i}}} \right], i = 1, 2, \cdots , {{\beta }_{{s}}} \\ {\pmb{G}}_{{rJ}}^{j} = \left[ y_{{rJ}}^{j1}, y_{{rJ}}^{j2}, \cdots , y_{{rJ}}^{j{{m}_{{r}j}}} \right], j = 1, 2, \cdots , {{\beta }_{{r}}} \\ \end{array} \right. \end{equation} $

    (3)

    式中, $ {{\beta }_{{s}}} $、$ {{\beta }_{{r}}} $分别为提取$ {\pmb{Y}}\!_{{sJ}}^{{}} $和$ {\pmb{Y}}\!_{{rJ}}^{{}} $相关变量子集的个数, $ {{m}_{{s}i}} $、$ {{m}_{{r}j}} $分别为$ {\pmb{G}}_{{sJ}}^{i} $、$ {\pmb{G}}_{{rJ}}^{j} $中变量的个数, 且有$ {{m}_{{s}1}}+{{m}_{{s}2}}+\cdots +{{m}_{{s}{{\beta }_{{s}}}}} = {{m}_{{r}1}}+{{m}_{{r}2}}+\cdots +{{m}_{{r}{{\beta }_{{r}}}}} = m $.

    与随机变量不同, 多元动态输出变量与时间有关, 其相关性分析与变量选择需从时间序列整体的角度进行分析.一些传统的随机变量相关性分析方法对于多元动态变量同样适用, 例如Pearson系数、Kendall系数、Spearman系数等, 但无法用于动态输出变量具有多个样本(时间序列)的情况.此外, 一些统计学分析方法, 如Granger因果关系分析[21]、典型相关分析[22]、Copula分析[23]、灰色关联分析[24]以及互信息分析[25]等同样能够用于多变量的相关性分析. Granger因果关系分析只能定性地分析变量间的因果关系, 而无法得到定量的结果; 典型相关分析对观测值的顺序不会做出响应, 因此无法解决时间序列问题; Copula分析需要建立在对边缘分布的合理假设之上, 使其应用受到限制; 灰色关联分析仅从形状相关性的角度对时间序列进行分析, 其相关性度量不够全面.

    基于互信息的相关性分析方法能够度量动态输出变量间任意类型的关系, 互信息以信息熵为理论基础, 它能够度量变量取值的不确定性程度, 进而描述变量的信息含量大小[26], 通常用于多种类型时间序列的特征提取和结构化预测[27].然而, 互信息同样存在不能完整刻画变量集相关性结构的缺点, 因此本文引入类可分性和变量可分性提取多元动态输出的相关变量子集[28].假设$ {\pmb{Y}}\!_{{sD}}^{{}} $、$ {\pmb{Y}}\!_{{rD}}^{{}} $为仿真和参考多元动态输出变量集, 同样以$ {\pmb{Y}}\!_{{sD}}^{{}} $为例, 给出变量选择步骤如下.

    步骤1. 计算$ {\pmb{Y}}\!_{{sD}}^{{}} $的$ m\times m $维互信息矩阵, 具体算法见文献[26];

    步骤2. 分别计算每一维变量的类间离散度$ {{\Omega }_{{b}i}} $和类内离散度$ {{\Omega }_{{w}i}} $:

    $ \begin{equation} \left\{\begin{array}{*{35}{l}} {{\Omega }_{{b}i}} = \sum\limits_{i = 1}^{{{C}_{{sam}}}}{{{q}_{i}}\left( {{\mu }_{i}}-\mu \right){{\left( {{\mu }_{i}}-\mu \right)}^{\rm T}}} \\ {{\Omega }_{{w}i}} = \sum\limits_{i = 1}^{{{C}_{{sam}}}}{\sum\limits_{j = 1}^{{{q}_{i}}}{\left( {{\mu }_{i}}-y_{{sD}}^{j} \right){{\left( {{\mu }_{i}}-y_{{sD}}^{j} \right)}^{\rm T}}}} \\ \end{array} \right. \end{equation} $

    (4)

    式中, $ {{C}_{{sam}}} $为样本类别总数, $ {{q}_{i}} $为属于第i类的样本个数, $ \mu = ({{1}}/{{{n}_{{s}}}})\;\sum\nolimits_{i = 1}^{{{n}_{{s}}}}{y_{{sD}}^{i}} $, $ {{\mu }_{i}} = ({{1}}/{{{q}_{i}}})\;\sum\nolimits_{i = 1}^{{{q}_{i}}}{y_{{sD}}^{i}} $.按照每个变量的类可分离性大小, 进行变量排序:

    $ \begin{equation} {{J}_{i}} = \frac{{{\Omega }_{{b}i}}}{{{\Omega }_{{w}i}}}, \quad i = 1, 2, \cdots, m \end{equation} $

    (5)

    步骤3.  取$ {{J}_{i}} $值最大的变量为变量子集$ {\pmb{G}}_{{sD}}^{i} $的第一个变量;

    步骤4.  选择使下式最大的变量为$ {\pmb{G}}_{{sD}}^{i} $的下一个变量:

    $ \begin{equation} \left\{\begin{array}{*{35}{l}} {{J}_{i}} = \frac{{{\Omega }_{{b}i}}+{{\Omega }_{{f}i}}}{{{\Omega }_{{w}i}}} \\ {{\Omega }_{{f}i}} = \frac{1}{\left| {\pmb{G}}_{{sD}}^{i} \right|}\sum\limits_{k = 1}^{\left| {\pmb{G}}_{{sD}}^{i} \right|}{\sum\limits_{o = 1}^{{{C}_{{sam}}}}{{{q}_{ko}}\left( {{\mu }_{o}}-{{\mu }_{ko}} \right)\cdot }} \\ \ \ \ \ \ \ \ \ {{\left( {{\mu }_{o}}-{{\mu }_{ko}} \right)}^{\rm T}} \\ \end{array} \right. \end{equation} $

    (6)

    式中, $ \left| {\pmb{G}}_{{sD}}^{i} \right| $为子集$ {\pmb{G}}_{{sD}}^{i} $的变量个数, $ {{\mu }_{ko}} $为子集$ {\pmb{G}}_{{sD}}^{i} $中属于第$ o $类的第$ k $个变量的均值;

    步骤5. 当$ \left| {\pmb{G}}_{{sD}}^{i} \right| = \varepsilon $, 则算法终止, 其中, $ \varepsilon $为预设值, 否则转至步骤4.通过上述步骤得到相关变量子集$ {\pmb{G}}_{{sD}}^{i} $、$ {\pmb{G}}_{{rD}}^{j} $如下:

    $ \begin{equation} \left\{\begin{array}{*{35}{l}} {\pmb{G}}_{{sD}}^{i} = \left[ y_{{sD}}^{i1}(t), y_{{sD}}^{i2}(t), \cdots , y_{{sD}}^{i{{m}_{{s}i}}}(t) \right] \\ {\pmb{G}}_{{rD}}^{j} = \left[ y_{{rD}}^{j1}(t), y_{{rD}}^{j2}(t), \cdots , y_{{rD}}^{j{{m}_{{r}j}}}(t) \right] \\ \end{array} \right. \end{equation} $

    (7)

    式中, $ i = 1, 2, \cdots , {{\beta }_{{s}}} $, $ j = 1, 2, \cdots , {{\beta }_{{r}}} $. $ {{\beta }_{{s}}} $、$ {{\beta }_{{r}}} $的含义与式(3)相同.
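步骤1中的互信息矩阵可用直方图法近似估计: $ I(X;Y) = \sum p(x,y)\log \frac{p(x,y)}{p(x)p(y)} $. 下面是一个基于 NumPy 的极简示意(分箱数为假设参数, 并非文献[26]的原始算法):

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """基于二维直方图的互信息估计(单位: nat)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                           # 联合频率 -> 联合概率
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # 边缘概率
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

def mi_matrix(Y):
    """输出变量集的 m×m 互信息矩阵(对应步骤1)."""
    m = Y.shape[1]
    M = np.zeros((m, m))
    for i in range(m):
        for j in range(i, m):
            M[i, j] = M[j, i] = mutual_info(Y[:, i], Y[:, j])
    return M

rng = np.random.default_rng(3)
a = rng.normal(size=4000)
Y = np.column_stack([a,
                     np.sin(a) + 0.1 * rng.normal(size=4000),  # 非线性相关
                     rng.normal(size=4000)])                   # 独立变量
M = mi_matrix(Y)
```

与Pearson系数不同, 互信息对 $ \sin $ 这类非线性关系同样给出明显非零的度量值.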

    考虑不确定性的影响, 若对每一时刻的多元动态输出变量进行分析势必导致维数爆炸.为此提出基于特征的验证方法, 首先提取用户关注的输出数据特征, 而后计算每个特征下多变量联合概率分布的差异, 并将其转化为可信度结果, 最后综合多个验证结果得到模型可信度.

    针对于静态输出, 选取数据本身作为变量特征.假设$ {{{\pmb{Y}}}\!_{{s}}}\in {{\bf {R}}^{{{n}_{{s}}}\times m}} $与$ {{{\pmb{Y}}}\!_{{r}}}\in {{\bf {R}}^{{{n}_{{r}}}\times m}} $为多元仿真和参考静态输出变量, 其数据特征描述为

    $ \begin{align} \begin{cases} {{\pmb{e}}_{{s}ij}} = {{\left[ {{y}_{{s}ij}} \right]}_{{{n}_{{s}}}\times m}}, i = 1, 2, \cdots , {{n}_{{s}}}\!\!\!\!\\ {{\pmb{e}}_{{r}ij}} = {{\left[ {{y}_{{r}ij}} \right]}_{{{n}_{{r}}}\times m}}, i = 1, 2, \cdots , {{n}_{{r}}}\!\!\!\!\\ \end{cases}, \\j = 1, 2, \cdots, m \end{align} $

    (8)

    对于动态输出$ {{y}_{{s}ij}}(t) $、$ {{y}_{{r}ik}}(t) $, $ i = 1, 2, \cdots , m $, $ j = 1, 2, \cdots , {{n}_{{s}}} $, $ k = 1, 2, \cdots , {{n}_{{r}}} $则选取$ {{n}_{{s}}} $、$ {{n}_{{r}}} $次系统运行得到的输出均值曲线$ {{\bar{y}}_{{s}i}}(t) $、$ {{\bar{y}}_{{r}i}}(t) $作为基准, 与每次实验得到的输出曲线进行对比, 求取相应的均值曲线的第$ l $个特征$ e_{{s}ij}^{l} $、$ e_{{r}ik}^{l} $:

    $ \begin{equation} \left\{ \begin{split} & e_{{s}ij}^{l} = {{\Phi }_{l}}\left( {{{\bar{y}}}_{{s}i}}(t), {{y}_{{s}ij}}(t) \right) \\ & e_{{r}ik}^{l} = {{\Phi }_{l}}\left( {{{\bar{y}}}_{{r}i}}(t), {{y}_{{r}ik}}(t) \right) \\ \end{split} \right., \quad l = 1, 2, \cdots , {{L}_{i}} \end{equation} $

    (9)

    式中, $ {{L}_{i}} $为第i个输出的特征数, $ {{\Phi }_{l}}\left( \cdot \right) $为第l个特征度量模型.

    提取动态输出特征前, 需要先对动态输出进行归类[1].以第j个动态输出的第i次实现$ {{y}_{ij}}\left( t \right) $为例, 其对应的时间变化序列为$ \left[ {{t}_{1}}, {{t}_{2}}, \cdots , {{t}_{N}} \right] $.则定义$ {{y}_{ij}}\left( t \right) $随时间变化的频率为:

    $ \begin{equation} {{F}_{ij}} = \frac{\sum\limits_{k = 1}^{N-1}{\left| \frac{\Delta {{y}_{ij}}\left( {{t}_{k}} \right)}{\Delta {{t}_{k}}} \right|}}{\left| {{{\bar{y}}}_{ij}} \right|} \end{equation} $

    (10)

    式中, $ {{F}_{ij}}\ge 0 $为$ {{y}_{ij}}\left( t \right) $的变化频率; $ \Delta {{y}_{ij}}\left( {{t}_{k}} \right) = {{y}_{ij}}\left( {{t}_{k+1}} \right)-{{y}_{ij}}\left( {{t}_{k}} \right) $; $ \Delta {{t}_{k}} = {{t}_{k+1}}-{{t}_{k}} $; $ \left| {{{\bar{y}}}_{ij}} \right| = {\sum\nolimits_{k = 1}^{N}{\left| {{y}_{ij}}\left( {{t}_{k}} \right) \right|}}/{N}\;\ne 0 $.给定$ {{F}_{0}} $为判断时间序列变化快慢的临界值, 若$ {{F}_{ij}}\ge {{F}_{0}} $, 则认为$ {{y}_{ij}}\left( t \right) $为速变数据, 否则为缓变数据.
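式(10)的快慢数据判别可直接按定义实现: 对差分斜率绝对值求和并用幅值均值归一化. 下面给出一个示意(基于 NumPy; 阈值 $ {{F}_{0}} $ 取 1000 为假设值, 实际应结合采样点数与领域知识确定):

```python
import numpy as np

def change_frequency(y, t):
    """按式(10)计算时间序列的变化频率 F_ij."""
    y, t = np.asarray(y, float), np.asarray(t, float)
    num = np.sum(np.abs(np.diff(y) / np.diff(t)))  # sum |Δy/Δt|
    denom = np.mean(np.abs(y))                     # 幅值均值 |ȳ|
    return num / denom

t = np.linspace(0.0, 10.0, 1001)
slow = 1.0 + 0.1 * t               # 缓变数据: 慢斜坡
fast = np.sin(20 * t) + 2.0        # 速变数据: 高频振荡
F0 = 1000.0                        # 假设的快慢判别阈值
is_fast = change_frequency(fast, t) >= F0
is_slow = change_frequency(slow, t) < F0
```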

    为刻画不确定性对系统输出的影响, 从距离和形状两方面提取缓变数据的特征.在前期工作[29]的基础上, 给出第j个仿真输出的第i次实现$ {{y}_{{s}ij}}\left( t\right) $与其均值曲线$ {{\bar{y}}_{{s}j}}(t) $的距离和形状差异$ e_{{sd}}^{ij} $、$ e_{{sc}}^{ij} $的度量公式如下.

    $ \begin{equation} \left\{ \begin{split} & e_{{sd}}^{ij} = \frac{1}{N}\sqrt{\sum\limits_{t = {{t}_{1}}}^{{{t}_ {N}}}{z^{2}{{\left( t \right)}}}} \\ & e_{{sc}}^{ij} = \frac{1}{N}\sqrt{\sum\limits_{t = {{t}_{1}}}^{{{t}_{N}}}{{{\left( z\left( t \right)-\bar{z} \right)}^{2}}}} \\ \end{split} \right. \end{equation} $

    (11)

    式中, $ z\left( t \right) = y_{{s}ij}^{{}}\left( t \right)-\bar{y}_{{s}j}^{{}}\left( t \right) $, $ t = {{t}_{1}}, {{t}_{2}}, \cdots , {{t}_{N}} $, $ \bar{z} = {\sum\nolimits_{t = {{t}_{1}}}^{{{t}_{N}}}{z\left( t \right)}}/{N}\; $.另外, 选取谱密度特征度量速变数据$ {{y}_{{s}ij}}\left( t \right) $与相应均值曲线$ {{\bar{y}}_{{s}j}}(t) $的差异$ e_{{sh}}^{ij} $, 定义如下.

    $ \begin{equation} e_{{sh}}^{ij} = 1-\frac{\gamma }{{M}} \end{equation} $

    (12)

    式中, $ e_{{sh}}^{ij} $表示速变数据$ {{{\pmb{y}}}_{{s}ij}} $与$ {{\bar{{\pmb{y}}}}}_{{s}j} $的谱密度差异; M表示$ {{y}_{{s}ij}}\left( t \right) $和$ {{\bar{y}}_{{s}j}}(t) $转换至频域中的点数; $ \gamma $表示通过相容性检验的点数.根据式(2)$ \sim $(7)得到$ {{{\pmb{Y}}}\!_{{s}}} $的第i个相关变量子集$ {{{\pmb{G}}}_{{s}i}} $、$ {{{\pmb{Y}}}\!_{{r}}} $的第j个相关变量子集$ {{{\pmb{G}}}_{{r}j}} $关于第l个特征的差异度量矩阵分别为

    $ \begin{align} & {\pmb{E}}_{{s}i}^{l} = \left[ \begin{array}{*{35}{l}} e_{{s}i1}^{l1} & e_{{s}i2}^{l1} & \cdots & e_{{s}i{{m}_{{s}i}}}^{l1} \\ e_{{s}i1}^{l2} & e_{{s}i2}^{l2} & \cdots & e_{{s}i{{m}_{{s}i}}}^{l2} \\ \ \vdots & \ \vdots & \ddots & \ \vdots \\ e_{{s}i1}^{l{{n}_{{s}}}} & e_{{s}i2}^{l{{n}_{{s}}}} & \cdots & e_{{s}i{{m}_{{s}i}}}^{l{{n}_{{s}}}} \\ \end{array} \right] \\ & {\pmb{E}}_{{r}j}^{l} = \left[ \begin{array}{*{35}{l}} e_{{r}j1}^{l1} & e_{{r}j2}^{l1} & \cdots & e_{{r}j{{m}_{{r}j}}}^{l1} \\ e_{{r}j1}^{l2} & e_{{r}j2}^{l2} & \cdots & e_{{r}j{{m}_{{r}j}}}^{l2} \\ \ \vdots & \ \vdots & \ddots & \ \vdots \\ e_{{r}j1}^{l{{n}_{{r}}}} & e_{{r}j2}^{l{{n}_{{r}}}} & \cdots & e_{{r}j{{m}_{{r}j}}}^{l{{n}_{{r}}}} \\ \end{array} \right]\\ \end{align} $

    (13)

    式中, $ i = 1, 2, \cdots , {{\beta }_{{s}}} $, $ j = 1, 2, \cdots , {{\beta }_{{r}}} $.若$ {{{\pmb{G}}}_{{s}i}} $和$ {{{\pmb{G}}}_{{r}j}} $均为静态输出变量子集, 则$ {{L}_{i}} = 1 $; 若$ {{{\pmb{G}}}_{{s}i}} $和$ {{{\pmb{G}}}_{{r}j}} $均为缓变输出变量子集, 则$ {{L}_{i}} = 2 $; 若$ {{{\pmb{G}}}_{{s}i}} $和$ {{{\pmb{G}}}_{{r}j}} $均为速变输出变量子集, 则$ {{L}_{i}} = 1 $.需要说明的是, 在某些特殊仿真应用中, 除了上述特征外, 通常还需关注数据本身的一些特征, 例如控制系统阶跃响应中的上升时间、超调量以及稳态误差, 位置数据中的变化趋势, 测量数据中的噪声等.在进行实际验证中, 特征矩阵包含两部分内容, 一部分为上文给出的数据特征, 另一部分为根据具体领域知识确定的数据特征.
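式(11)的距离与形状特征可按定义直接计算. 下面的示意(基于 NumPy, 曲线为人为构造)说明两个特征的区别: 对仅整体平移的曲线, 距离特征非零而形状特征为零.

```python
import numpy as np

def distance_shape_features(y_run, y_mean):
    """按式(11)计算单次实现相对均值曲线的距离特征 e_sd 与形状特征 e_sc."""
    z = np.asarray(y_run, float) - np.asarray(y_mean, float)  # z(t)
    n = z.size
    e_d = np.sqrt(np.sum(z ** 2)) / n                # 距离差异
    e_c = np.sqrt(np.sum((z - z.mean()) ** 2)) / n   # 形状差异(去均值)
    return e_d, e_c

t = np.linspace(0, 1, 101)
y_mean = np.sin(2 * np.pi * t)
y_offset = y_mean + 0.5            # 仅平移: 距离特征大, 形状特征为 0
e_d, e_c = distance_shape_features(y_offset, y_mean)
```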

    以$ {\pmb{E}}_{{s}i}^{l} $为例进行分析, 用$ \upsilon $维随机变量$ {{x}_{1}}, {{x}_{2}}, \cdots , {{x}_{\upsilon }} $代替其列向量$ \left[ {\pmb{e}}_{{s}i1}^{l}, {\pmb{e}}_{{s}i2}^{l}, \cdots , {\pmb{e}}_{{s}i{{m}_{{s}i}}}^{l} \right] $, $ \upsilon = {{m}_{{s}i}} $, 采用多维随机变量概率分布定义$ {{x}_{1}}, {{x}_{2}}, \cdots , {{x}_{\upsilon }} $的联合CDF:

    $ \begin{align} & F\left( {{x}_{1}}, {{x}_{2}}, \cdots , {{x}_{\upsilon }} \right) = P\Big\{ \left( {{X}_{1}}\le {{x}_{1}} \right) \cap \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left( {{X}_{2}}\le {{x}_{2}} \right)\cap \cdots \cap \left( {{X}_{\upsilon }}\le {{x}_{\upsilon }} \right) \Big\} \end{align} $

    (14)

    将$ \upsilon $维空间划分为等尺寸的$ {{\rho }^{\upsilon }} $个区域, 遍历$ \upsilon $维变量的$ \rho $个取值区间, 若$ {{x}_{1}}<X_{1}^{0} $, $ {{x}_{2}}<X_{2}^{0}, \cdots, {{x}_{\upsilon }}<X_{\upsilon }^{0} $, 则$ F\left( {{x}_{1}}, {{x}_{2}}, \cdots, {{x}_{\upsilon }} \right) = 0 $; 若$ {{x}_{1}}<X_{1}^{k} $, $ {{x}_{2}}<X_{2}^{k}, \cdots, {{x}_{\upsilon }}<X_{\upsilon }^{k} $, 则$ F\left({{x}_{1}}, {{x}_{2}}, \cdots , {{x}_{\upsilon }} \right) = {k}/{{{\rho }^{\upsilon}}} $等.如果变量集$ {{x}_{1}}, {{x}_{2}}, \cdots , {{x}_{\upsilon }} $在第k个区间内的样本量为1, 则F在$ {{x}_{1}}, {{x}_{2}}, \cdots , {{x}_{\upsilon }} $点的跳跃度为$ {1}/{{{\rho }^{\upsilon }}} $, 如果变量集$ {{x}_{1}}, {{x}_{2}}, \cdots , {{x}_{\upsilon }} $在第k个区间内有$ \varepsilon $个样本, 则F在$ {{x}_{1}}, {{x}_{2}}, \cdots , {{x}_{\upsilon }} $点的跳跃度是$ {\varepsilon }/{{{\rho }^{\upsilon }}} $.给出$ {\pmb{E}}_{{s}i}^{l} $和$ {\pmb{E}}_{{r}j}^{l} $联合CDF间的差异如下.

    $ \begin{align} & D\left( {{F}_{{s}}}, {{F}_{{r}}} \right) = \int{\int{\cdots }}\int{\left| {{F}_{{s}}}\left( {{x}_{1}}, {{x}_{2}}, \cdots , {{x}_{\upsilon }} \right) \right.}- \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left. {{F}_{{r}}}\left( {{x}_{1}}, {{x}_{2}}, \cdots , {{x}_{\upsilon }} \right) \right|\textrm{d}{{x}_{1}}\textrm{d}{{x}_{2}}\cdots \textrm{d}{{x}_{\upsilon }} \end{align} $

    (15)
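上文所述的经验联合CDF也可以不划分网格而由样本直接构造: $ F({\pmb{x}}) $ 即所有坐标分量均不超过 $ {\pmb{x}} $ 的样本所占比例. 下面给出一个基于 NumPy 的示意(样本为人为生成的二维均匀分布):

```python
import numpy as np

def empirical_joint_cdf(samples):
    """返回多维样本的经验联合CDF函数 F(x) = P{X1<=x1, ..., Xv<=xv}."""
    samples = np.asarray(samples, float)

    def F(x):
        # 统计每个样本的所有坐标是否同时不超过 x, 取比例
        return float(np.mean(np.all(samples <= np.asarray(x, float), axis=1)))

    return F

rng = np.random.default_rng(4)
S = rng.uniform(size=(5000, 2))    # 二维均匀样本
F = empirical_joint_cdf(S)
v_mid = F([0.5, 0.5])              # 理论值 0.25
v_hi = F([1.0, 1.0])               # 必为 1
```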

    为计算联合CDF的差异$ D\left( {{F}_{{r}}}, {{F}_{{s}}} \right) $, 可将上式改写为下面积分之差的形式:

    $ \begin{equation} D = \int{{{F}_{{s}}}\left( {\pmb{x}} \right)\textrm{d}{\pmb{x}}}-\int{{{F}_{{r}}}\left( {\pmb{x}} \right)\textrm{d}{\pmb{x}}} = {{I}_{{s}}}-{{I}_{{r}}} \end{equation} $

    (16)

    式中$ {\pmb{x}} = \left[ {{x}_{1}}, {{x}_{2}}, \cdots , {{x}_{\upsilon }} \right] $.假设分别用$ {{\hat{I}}_{{s}}} $和$ {{\hat{I}}_{{r}}} $估计$ {{I}_{{s}}} $和$ {{I}_{{r}}} $, 并用$ \hat{D} = {{\hat{I}}_{{s}}}-{{\hat{I}}_{{r}}} $估计$ D $, 则$ \hat{D} $的方差为:

    $ \begin{equation} {Var}\left( {\hat{D}} \right) = {Var}\left( {{{\hat{I}}}_{{s}}} \right)+{Var}\left( {{{\hat{I}}}_{{r}}} \right)-2{Cov}\left( {{{\hat{I}}}_{{s}}}, {{{\hat{I}}}_{{r}}} \right) \end{equation} $

    (17)

    显然, 在$ {Var}\left( {{{\hat{I}}}_{{s}}} \right) $和$ {Var}\left( {{{\hat{I}}}_{{r}}} \right) $确定后, $ {{\hat{I}}_{{s}}} $和$ {{\hat{I}}_{{r}}} $的正相关度越高, 则$ \hat{D} $的方差越小.本文采用重要性抽样法估计$ {{I}_{{s}}} $和$ {{I}_{{r}}} $, 即将$ D $改写为

    $ \begin{equation} D = \int{{{H}_{{s}}}\left( {\pmb{x}} \right){{g}_{{s}}}\left( {\pmb{x}} \right)}\textrm{d}{\pmb{x}}-\int{{{H}_{{r}}}\left( {\pmb{x}} \right){{g}_{{r}}}\left( {\pmb{x}} \right)}\textrm{d}{\pmb{x}} \end{equation} $

    (18)

    其中, $ {\pmb{x}} = {{x}_{1}}, {{x}_{2}}, \cdots , {{x}_{\upsilon }} $, $ {{g}_{{s}}}\left( {\pmb{x}} \right) $、$ {{g}_{{r}}}\left( {\pmb{x}} \right) $是两个密度函数, $ {{H}_{{s}}}\left( {\pmb{x}} \right) = {{{F}_{{s}}}\left( {\pmb{x}} \right)}/{{{g}_{{s}}}\left( {\pmb{x}} \right)} $, $ {{H}_{{r}}}\left( {\pmb{x}} \right) = {{{F}_{{r}}}\left( {\pmb{x}} \right)}/{{{g}_{{r}}}\left( {\pmb{x}} \right)} $.首先, 由$ {{g}_{{s}}}\left( {\pmb{x}} \right) $、$ {{g}_{{r}}}\left( {\pmb{x}} \right) $各产生P个相互独立的$ \upsilon $维随机数$ {{{\pmb{T}}}_{{s}1}}, \cdots , {{{\pmb{T}}}_{{s}P}} $和$ {{{\pmb{T}}}_{{r}1}}, \cdots , {{{\pmb{T}}}_{{r}P}} $, 并计算

    $ \begin{equation} \hat{D} = \frac{1}{P}\sum\limits_{k = 1}^{P}{\left( {{H}_{{s}}}\left( {{{\pmb{T}}}_{{s}k}} \right)-{{H}_{{r}}}\left( {{{\pmb{T}}}_{{r}k}} \right) \right)} \end{equation} $

    (19)

    采用逆变换方法由同一个$ \upsilon $维联合均匀分布$ U\left( 0, 1 \right) $产生$ {{{\pmb{T}}}_{{s}1}}, \cdots , {{{\pmb{T}}}_{{s}P}} $和$ {{{\pmb{T}}}_{{r}1}}, \cdots , {{{\pmb{T}}}_{{r}P}} $, 能够保证两组随机数具有较高的正相关程度, 进而使$ {Var}( {\hat{D}} ) $较小, 对$ \hat{D} $的估计值趋于稳定.
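用同一组均匀随机数(即公共随机数)驱动两个积分估计, 可使 $ {{\hat{I}}_{{s}}} $ 与 $ {{\hat{I}}_{{r}}} $ 正相关, 从而减小 $ \hat{D} $ 的方差. 下面用一个一维小算例示意该方差缩减效应(取 $ F_{{s}}(x)=x $、$ F_{{r}}(x)=x^2 $ 为假设算例, 并非原文数据):

```python
import numpy as np

rng = np.random.default_rng(5)
F_s = lambda x: x          # 假设的仿真输出CDF(一维, 便于演示)
F_r = lambda x: x ** 2     # 假设的参考输出CDF
true_D = 0.5 - 1.0 / 3.0   # 真值: ∫F_s - ∫F_r = 1/6

def estimate(common, P=200):
    """蒙特卡洛估计 D; common=True 时两项积分用同一组随机数."""
    u_s = rng.uniform(size=P)
    u_r = u_s if common else rng.uniform(size=P)
    return np.mean(F_s(u_s)) - np.mean(F_r(u_r))

# 各重复 400 次, 比较两种方案估计量的离散程度
est_common = np.array([estimate(True) for _ in range(400)])
est_indep = np.array([estimate(False) for _ in range(400)])
```

两种方案均无偏, 但公共随机数方案的样本标准差明显更小, 与正文"保证两组随机数具有较高的正相关程度"的结论一致.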

    需要说明的是, $ D\left( {{F}_{{s}}}, {{F}_{{r}}} \right) $仅是仿真和参考输出特征的联合CDF的差异(如图 1所示), 其取值范围为$ \left[ 0, \infty \right) $, 无法直接给出仿真和参考输出的一致性程度(即取值为$ \left[ 0, 1 \right] $的相对值).因此, 提出将$ D\left( {{F}_{{s}}}, {{F}_{{r}}} \right) $向可信度$ C\left( {{F}_{{s}}}, {{F}_{{r}}} \right) $转化的公式如下.

    图 1  参考与仿真输出的CDF对比
    Fig. 1  Comparing CDF curves of reference and simulation output

    $ \begin{equation} C\left( {{F}_{{s}}}, {{F}_{{r}}} \right) = \frac{\prod\limits_{i = 1}^{\upsilon }{\left( X_{i}^{\max }-X_{i}^{\min } \right)}-D\left( {{F}_{{s}}}, {{F}_{{r}}} \right)}{\prod\limits_{i = 1}^{\upsilon }{\left( X_{i}^{\max }-X_{i}^{\min } \right)}} \end{equation} $

    (20)

    式中, $ \prod\nolimits_{i = 1}^{\upsilon }{\left( X_{i}^{\max }-X_{i}^{\min } \right)} $表示$ \upsilon $维样本空间所占区域的大小; $ X_{i}^{\min } = \min \left( X_{{s}i}^{\min }, X_{{r}i}^{\min } \right) $, $ X_{i}^{\max } = \max \left( X_{{s}i}^{\max }, X_{{r}i}^{\max } \right) $表示第i维变量的样本极值.显然, $ C\left( {{F}_{{s}}}, {{F}_{{r}}} \right) $满足如下性质[13], 进而能够用于度量仿真模型可信度.

    性质1. 非负性: $ C\left( {{F}_{{s}}}, {{F}_{{r}}} \right)\ge 0 $;

    性质2. 交换性: $ C\left( {{F}_{{s}}}, {{F}_{{r}}} \right) = C\left( {{F}_{{r}}}, {{F}_{{s}}} \right) $;

    性质3. 有界性: $ C\left( {{F}_{{s}}}, {{F}_{{r}}} \right)\in \left[ 0, 1 \right] $;

    性质4. 同一性: $ C\left( {{F}_{{s}}}, {{F}_{{r}}} \right) = 1 $, 当且仅当$ {{F}_{{s}}} = {{F}_{{r}}} $.
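式(20)的转化只需样本空间体积与差异 $ D $ 两个量, 可直接按公式实现. 下面是一个示意(各维变量的取值范围为假设值):

```python
import numpy as np

def credibility(D, ranges):
    """按式(20)将联合CDF差异 D 转化为可信度 C.

    ranges: 每一维变量的 (最小值, 最大值), 其乘积即样本空间体积.
    """
    vol = float(np.prod([hi - lo for lo, hi in ranges]))
    return (vol - D) / vol

c_perfect = credibility(0.0, [(0, 2), (0, 3)])   # D=0 时 C=1(同一性)
c_partial = credibility(1.5, [(0, 2), (0, 3)])   # 体积为 6, C=(6-1.5)/6
```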

    基于前文所述方法, 给出考虑相关性的多元输出仿真结果验证流程如图 2所示.

    图 2  考虑相关性的多元输出仿真结果验证方法流程
    Fig. 2  Procedures of multivariate simulation result validation under correlation

    1) 考虑不确定因素的影响, 分别进行$ {{n}_{{s}}} $次仿真运行和$ {{n}_{{r}}} $次实际试验, 获取多元仿真和参考输出$ {{{\pmb{Y}}}\!_{{s}}} = \left\{ {{{\pmb{Y}}}\!_{{s}1}}, {{{\pmb{Y}}}\!_{{s}2}}, \cdots, {{{\pmb{Y}}}\!_{{s}m}} \right\} $, $ {{{\pmb{Y}}}\!_{{r}}} = \left\{ {{{\pmb{Y}}}\!_{{r}1}}, {{{\pmb{Y}}}\!_{{r}2}}, \cdots, {{{\pmb{Y}}}\!_{{r}m}} \right\} $;

    2) 利用多元输出变量选择方法提取$ {{{\pmb{Y}}}\!_{{s}}} $、$ {{{\pmb{Y}}}\!_{{r}}} $的相关变量子集$ {{{\pmb{G}}}_{{s}i}}, i = 1, \cdots, {{\beta }_{{s}}} $, $ {{{\pmb{G}}}_{{r}j}}, j = 1, \cdots , {{\beta }_{{r}}} $;

    3) 若$ {{\beta }_{{s}}} = {{\beta }_{{r}}} $且$ {{{\pmb{G}}}_{{s}i}} = {{{\pmb{G}}}_{{r}j}} $, 则依据式(8)$ \sim $(13)提取$ {{{\pmb{G}}}_{{s}i}} $、$ {{{\pmb{G}}}_{{r}j}} $中各变量的数据特征$ {\pmb{e}}_{{s}ik}^{l} $、$ {\pmb{e}}_{{r}ik}^{l} $; 反之, 若$ {{\beta }_{{s}}}\ne {{\beta }_{{r}}} $或存在$ {{{\pmb{G}}}_{{s}i}}\ne {{{\pmb{G}}}_{{r}j}} $的相关变量子集, 则认为该仿真模型不可信, 即C = 0, 算法结束;

    4) 依据式(14)分别计算数据特征变量集$ e_{{s}i1}^{l}, e_{{s}i2}^{l}, \cdots , e_{{s}i{{m}_{{s}i}}}^{l} $和$ e_{{r}j1}^{l}, e_{{r}j2}^{l}, \cdots , e_{{r}j{{m}_{{r}j}}}^{l} $的联合CDF: $ {{F}_{{s}il}} $、$ {{F}_{{r}jl}} $;

    5) 依据式(15)$ \sim $(19)计算特征变量集的联合CDF: $ {{F}_{{s}il}} $、$ {{F}_{{r}jl}} $的差异$ D_{i}^{l}\left( {{F}_{{s}il}}, {{F}_{{r}jl}} \right) $;

    6) 依据式(20)将$ D_{i}^{l}\left( {{F}_{{s}il}}, {{F}_{{r}jl}} \right) $转化为可信度结果$ C_{i}^{l}\left( {{F}_{{s}il}}, {{F}_{{r}jl}} \right) $;

    7) 通过2)可知, $ {{\beta }_{{s}}} $个相关变量子集之间是相互独立的, 且用户关注的多个数据特征(包括位置、形状、频谱)间也可认为是独立的, 进而可采用加权方法综合多个可信度结果$ C_{i}^{l}\left( {{F}_{{s}il}}, {{F}_{{r}jl}} \right) $, $ l = 1, \cdots , {{L}_{i}} $; $ i = 1, \cdots , {{\beta }_{{s}}} $; $ j = 1, \cdots , {{\beta }_{{r}}} $. 图 2中“Integrate($ \cdot $)”表示加权综合算子.同时第$ \sigma $个动态输出的均值曲线$ {{\bar{y}}_{{s}\sigma }} $、$ {{\bar{y}}_{{r}\sigma }} $可认为是对系统输出的一次抽样, 不考虑不确定性影响时的多元输出数据是近似独立的, 进而综合得到最终验证结果如下所示.

    $ \begin{align} & C\left( {{{\pmb{Y}}}\!_{{s}}}, {{{\pmb{Y}}}\!_{{r}}} \right) = w_{1}^{-}\cdot \sum\limits_{i = 1}^{{{\beta }_{{s}}}}{{{w}_{i}}\cdot \left( \sum\limits_{l = 1}^{{{L}_{i}}}{{{w}_{l}}C_{i}^{l}\left( {{F}_{{s}il}}, {{F}_{{r}jl}} \right)} \right)}+ \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ w_{2}^{-}\cdot \sum\limits_{\zeta = 1}^{{{L}_{\sigma }}}{{{w}_{\zeta }}C_{\sigma }^{\zeta }\left( {{{\bar{y}}}_{{s}\sigma }}, {{{\bar{y}}}_{{r}\sigma }} \right)} \end{align} $

    (21)

    其中, $ \sigma = 1, \cdots , {{m}_{{dynamic}}} $表示第$ \sigma $个动态输出变量, $ \zeta = 1, \cdots , {{L}_{\sigma }} $表示动态输出均值曲线的第$ \zeta $个特征, $ {{w}_{l}} $、$ {{w}_{\zeta }} $代表第l、$ \zeta $个数据特征的可信度结果权重, $ {{w}_{i}} $代表第i个相关变量子集的一致性分析结果权重, $ w_{1}^{-} $、$ w_{2}^{-} $代表相关变量子集和动态输出均值曲线一致性的权重.
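式(21)的加权综合可以按"子集内特征加权、子集间加权、再与均值曲线一致性综合"三层实现. 下面给出一个示意(所有可信度取值与权重均为假设值, 仅演示综合过程):

```python
import numpy as np

def aggregate(C_sub, w_sub, w_feat, C_curve, w_curve, w1, w2):
    """式(21)的加权综合示意.

    C_sub:   各相关变量子集在各特征下的可信度数组列表.
    w_feat:  与 C_sub 对应的特征权重数组列表(各自归一化).
    w_sub:   子集权重; C_curve/w_curve: 均值曲线特征可信度及其权重.
    w1, w2:  相关变量子集项与均值曲线项的综合权重.
    """
    inner = sum(ws * float(np.dot(wf, c))
                for ws, wf, c in zip(w_sub, w_feat, C_sub))
    outer = float(np.dot(w_curve, C_curve))
    return w1 * inner + w2 * outer

C_sub = [np.array([0.9, 0.8]), np.array([0.7])]  # 两个子集的特征可信度
w_feat = [np.array([0.5, 0.5]), np.array([1.0])]
w_sub = [0.6, 0.4]
C_curve = np.array([0.8, 0.9])                   # 均值曲线两个特征的可信度
w_curve = np.array([0.5, 0.5])
C_total = aggregate(C_sub, w_sub, w_feat, C_curve, w_curve, 0.7, 0.3)
```

当各层权重均归一化且各可信度在 $ \left[ 0, 1 \right] $ 内时, 综合结果也必落在 $ \left[ 0, 1 \right] $ 内.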

    为验证本文方法的有效性, 针对文献[2]中给出的某飞行器纵向平面内末制导阶段的仿真模型进行结果验证.该模型包括飞行器制导模型和目标运动模型. 图 3给出纵向平面内弹目相对运动几何关系.目标以恒定速度$ {{v}_{{T}}} $沿$ x $轴向右行驶.假设飞行器无动力飞行且航向已对准目标, 忽略地球自转, 给出以时间为自变量的飞行器纵向质心运动方程

    图 3  纵向平面内弹目相对运动几何关系
    Fig. 3  Geometrical relationship of relative missile-target movement in longitudinal plane

    $ \begin{equation} \left\{ \begin{split} & \dot{v} = -\frac{D}{M}-g\sin \theta \\ & \dot{\theta } = \frac{L}{Mv}-\frac{g\cos \theta }{v} \\ & \dot{h} = v\sin \theta \\ & \dot{d} = v\cos \theta \\ \end{split} \right. \end{equation} $

    (22)

    式中, v为速度, $ \theta $为弹道倾角, h为高度, d为水平距离.阻力$ D = 0.5\rho {{v}^{2}}S{{C}_{{D}}}\left( Ma, \alpha \right) $, 升力$ L = 0.5\rho {{v}^{2}}S{{C}_{{L}}}\left( Ma, \alpha \right) $. $ {{C}_{{D}}} $与$ {{C}_{{L}}} $分别为阻力系数与升力系数. $ \alpha $为攻角, 马赫数$ Ma = v/{{v}_{{s}}} $, S为参考面积, M为质量, $ {{\alpha }_{{M}}} $为法向加速度, $ \lambda $为视线角.声速$ {{v}_{s}} $与大气密度$ \rho $根据1976年美国标准大气计算.相应的制导律设计可见文献[30].根据上述信息建立该飞行器纵向平面内末制导仿真模型.
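式(22)的质心运动方程可用数值积分求解. 下面给出一个欧拉积分的极简示意(气动系数、质量、大气密度等均为假设常数, 并非原文飞行器数据, 也未包含制导律与攻角控制):

```python
import numpy as np

# 假设的常数参数, 仅用于演示式(22)的数值积分
g, M, S, rho = 9.8, 600.0, 0.3, 0.4   # 重力加速度、质量、参考面积、大气密度
C_D, C_L = 0.3, 0.5                   # 假设为常值的阻力/升力系数

def step(state, dt):
    """对状态 [v, theta, h, d] 作一步欧拉积分."""
    v, theta, h, d = state
    q = 0.5 * rho * v ** 2 * S        # 动压
    D, L = q * C_D, q * C_L           # 阻力与升力
    dv = -D / M - g * np.sin(theta)
    dtheta = L / (M * v) - g * np.cos(theta) / v
    return np.array([v + dv * dt,
                     theta + dtheta * dt,
                     h + v * np.sin(theta) * dt,
                     d + v * np.cos(theta) * dt])

state = np.array([900.0, -0.3, 10000.0, 0.0])  # 初始 v, theta(rad), h, d
for _ in range(1000):                          # 以 0.01 s 步长积分 10 s
    state = step(state, 0.01)
```

在该假设参数下, 飞行器速度因阻力递减, 高度下降, 水平距离增大, 与下滑弹道的直觉一致.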

    利用此仿真模型精确研究此飞行器在末制导阶段的特性, 需要考虑其受到的不确定性因素.飞行器升力与阻力均存在不确定性, 引入升力系数扰动$ {{C}_{{LC}}} $与阻力系数扰动$ {{C}_{{DC}}} $模拟升力系数与阻力系数的不确定性, 因此有升力$ L = 0.5\rho {{v}^{2}}S{{C}_{{LC}}}{{C}_{{L}}}\left( Ma, \alpha \right) $, 阻力$ D = 0.5\rho {{v}^{2}}S{{C}_{{DC}}}{{C}_{{D}}}\left( Ma, \alpha \right) $, 分别采用不同分布对$ {{C}_{{LC}}} $和$ {{C}_{{DC}}} $进行描述.同时, 大气密度会影响升力和阻力, 由于每次飞行环境不同, 需考虑其不确定性的影响, 采用大气密度系数$ {{C}_{{ }\!\!\rho\!\!{ }}} $表示.此外, 飞行器进入末制导阶段时的初始视线角$ {{\lambda }_{0}} $与弹道倾角$ {{\theta }_{0}} $亦具有不确定性.选取仿真模型和参考系统的不确定参数如表 2所示.

    表 2  飞行器末制导过程的不确定参数取值
    Table 2  Uncertainty parameters values in the terminal guidance process of flight vehicle
    变量名 仿真模型参数分布 参考系统参数分布
    大气密度系数${{C}_{{ }\!\!\rho\!\!{ }}}$ $N\left( 0, 0.033 \right)$ $N\left( 0, 0.033 \right)$
    升力系数扰动${{C}_{{LC}}}$ $N(0, 0.05)$ $N(0.02, 0.07)$
    阻力系数扰动${{C}_{{DC}}}$ $N(0, 0.033)$ $N(0.02, 0.033)$
    初始弹道倾角${{{\theta }_{0}}}~/{{\rm rad}}$ $N\left( 0.17, 0.09 \right)$ $N\left( 0.26, 0.07 \right)$
    初始视线角${{{\lambda }_{0}}}~/{{\rm rad}}$ $N\left( 0.17, 0.09 \right)$ $N\left( 0.17, 0.09 \right)$

    选取用户关注的多元输出变量如表 3所示: 静态输出变量为飞行器的最终落点位置坐标$ \left( {{x}_{{f}}}, {{z}_{{f}}} \right) $和目标终点位置坐标$ \left( {{x}_{{Tf}}}, {{z}_{{Tf}}} \right) $; 待验证的动态输出变量为弹道倾角$ \theta $、攻角$ \alpha $、视线角$ \lambda $、弹目相对距离$ {{D}_{{MT}}} $和目标速度$ {{v}_{{T}}} $. 利用拉丁超立方抽样法对模型不确定性参数进行抽样, 给定初始样本数为1 000, 运行仿真模型共得到1 000组输出; 再将模型参数改为参考系统的参数分布(见表 2), 同样采用拉丁超立方抽样获得1 000组数据作为参考输出. 系统输出数据的包络线及散点图如图 4$ \sim $10所示, 其中目标速度$ {{v}_{{T}}} $、目标终点位置$ {{z}_{{Tf}}} $为恒定值, 未在图中标出.
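拉丁超立方抽样的基本思路是将累积概率区间$[0, 1]$等分为$n$层、每层随机取一点再打乱次序, 以保证样本在参数空间分层均匀. 下面给出针对一维正态参数的最简实现示意, 以表 2中仿真模型初始弹道倾角的分布$N(0.17, 0.09)$为例(此处将第2个参数视为标准差, 属演示假设):

```python
import random
from statistics import NormalDist, mean, stdev

def latin_hypercube_normal(n, mu, sigma, seed=0):
    # 将[0, 1]分为n层, 每层内随机取一点并打乱次序,
    # 再经正态分布逆CDF映射为参数样本
    rng = random.Random(seed)
    u = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(u)
    dist = NormalDist(mu, sigma)
    eps = 1e-12  # inv_cdf要求概率严格位于(0, 1)
    return [dist.inv_cdf(min(max(p, eps), 1.0 - eps)) for p in u]

samples = latin_hypercube_normal(1000, 0.17, 0.09)
```

由于分层保证了对累积概率区间的均匀覆盖, 样本均值与标准差会紧密贴合给定分布, 优于同样本量的简单随机抽样.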

    表 3  待验证的模型输出
    Table 3  Model outputs to be validated
    变量类型 变量名
    动态 弹道倾角${\theta }~$(rad)
    动态 攻角${\alpha }$ (rad)
    动态 视线角${\lambda }~$(rad)
    动态 弹目相对距离${{{D}_{{MT}}}}~$(m)
    动态 目标速度${{v}_{{T}}}~$(m/s)
    静态 飞行器落点X坐标${{{x}_{{f}}}}~$(m)
    静态 飞行器落点Z坐标${{{z}_{{f}}}}~$(m)
    静态 目标终点位置X坐标${{{x}_{{Tf}}}}~$(m)
    静态 目标终点位置Z坐标${{{z}_{{Tf}}}}~$(m)
    图 4  弹道倾角输出包络线
    Fig. 4  Envelope lines of the flight path angle
    图 5  攻角输出包络线
    Fig. 5  Envelope lines of angle of attack
    图 6  视线角输出包络线
    Fig. 6  Envelope lines of the line-of-sight angle
    图 7  弹目相对距离输出包络线
    Fig. 7  Envelope lines of the missile-target relative distance
    图 8  飞行器落点X坐标输出散点图
    Fig. 8  Scatter diagram of X-direction drop point coordinates of the flight vehicle
    图 9  目标终点位置X坐标输出散点图
    Fig. 9  Scatter diagram of X-direction terminal point coordinates of the target vehicle
    图 10  飞行器落点Z坐标输出散点图
    Fig. 10  Scatter diagram of Z-direction drop point coordinates of the flight vehicle

    利用本文方法对该飞行器末制导仿真输出进行验证. 首先利用多元输出变量选择方法分别对仿真和参考的动、静态输出变量进行相关性分析及变量选择, 相关性分析结果如表 4所示. 分析可知, 仿真和参考输出具有相同的变量子集划分: 动态输出变量$ \theta $、$ \alpha $、$ \lambda $、$ {{D}_{{MT}}} $具有相关性, 归为变量子集Ⅰ; $ {{v}_{{T}}} $为定值(不随时间改变)且与变量子集Ⅰ相互独立, 单独构成变量子集Ⅱ. 静态输出$ {{x}_{{f}}} $、$ {{x}_{{Tf}}} $具有相关性, 经验证两者满足线性关系(如图 11所示), 构成变量子集Ⅰ; $ {{z}_{{f}}} $与$ {{x}_{{f}}} $相互独立(如图 12所示), 单独构成变量子集Ⅱ; $ {{z}_{{Tf}}} $为定值0, 构成变量子集Ⅲ. 上述分析结果验证了变量选择方法的有效性.
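变量相关性分析及子集划分的过程可用如下简化示意: 对输出样本两两计算Pearson相关系数, 将相关系数绝对值超过阈值的变量归入同一相关变量子集. 其中阈值0.8与分组方式均为演示假设, 具体判定准则以正文方法为准:

```python
import math
import random

def pearson(a, b):
    # 两个等长样本序列的Pearson相关系数
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def group_by_correlation(samples, names, thresh=0.8):
    # 用并查集将|r| >= thresh的变量合并为同一相关变量子集
    parent = list(range(len(names)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(pearson(samples[i], samples[j])) >= thresh:
                parent[find(j)] = find(i)
    groups = {}
    for i, name in enumerate(names):
        groups.setdefault(find(i), []).append(name)
    return list(groups.values())

# 构造与x_f线性相关的x_Tf以及独立噪声z_f(模拟图11、图12的情形)
rng = random.Random(7)
xf = [float(i) for i in range(200)]
xTf = [2.0 * v + 1.0 for v in xf]
zf = [rng.random() for _ in xf]
groups = group_by_correlation([xf, xTf, zf], ["xf", "xTf", "zf"])
```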

    表 4  多元输出变量选择结果
    Table 4  Variables selection results of multiple outputs
    输出类型 变量子集Ⅰ 变量子集Ⅱ 变量子集Ⅲ
    动态 $\theta $, $\alpha$, $\lambda$, ${{D}_{{MT}}}$ ${{v}_{{T}}}$ -
    静态 ${{x}_{{f}}}$, ${{x}_{{Tf}}}$ ${{z}_{{f}}}$ ${{z}_{{Tf}}}$
    图 11  飞行器落点X坐标与目标终点位置X坐标间的关系
    Fig. 11  Relationship of X-direction coordinates between drop point of flight vehicle and terminal point of target
    图 12  飞行器落点X方向坐标与Z方向坐标间的关系
    Fig. 12  Relationship between X-direction and Z-direction coordinates of the drop point of flight vehicle

    根据表 4的变量选择结果求取各变量子集关于所选特征的联合CDF: 对动态输出, 选取位置和形状特征并求取变量子集Ⅰ的联合CDF; 变量子集Ⅱ为恒定值, 在验证过程中直接采用相对误差方法进行一致性分析. 对静态输出, 变量子集Ⅰ关于数据本身的联合CDF如图 13所示, 变量子集Ⅱ的CDF曲线如图 14所示. 进而得到动态输出均值曲线的一致性结果(见表 5)以及多个变量子集关于多个特征的CDF差异和可信度结果(见表 6). 由于仿真和参考输出中$ {{v}_{{T}}} $、$ {{z}_{{Tf}}} $均为恒定值且完全相等, 在计算模型可信度时不予考虑; 为方便计算, 采用均权方式进行综合, 依据式(21)综合多个可信度结果, 得到最终验证结果为0.82.
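CDF差异度量与可信度折算可用如下一维示意说明: 以经验CDF之间的面积差刻画仿真与参考输出的分布差异, 再将差异映射到$(0, 1]$区间作为可信度. 此处的指数映射$\exp(-d)$仅为演示假设, 并非文中采用的折算公式:

```python
import math

def ecdf_area_difference(sim, ref):
    # 经验CDF之间的面积差(一维面积度量)
    pts = sorted(set(sim) | set(ref))
    def ecdf(data, x):
        return sum(v <= x for v in data) / len(data)
    area = 0.0
    for a, b in zip(pts[:-1], pts[1:]):
        area += abs(ecdf(sim, a) - ecdf(ref, a)) * (b - a)
    return area

def credibility(sim, ref):
    # 假设的折算方式: 分布差异越大, 可信度越低
    return math.exp(-ecdf_area_difference(sim, ref))
```

例如, 两组完全相同的样本面积差为0、可信度为1; 整体平移1个单位的样本面积差恰为1.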

    图 13  仿真和参考静态输出变量子集Ⅰ的联合CDF对比
    Fig. 13  JCDF comparison of variable subset I between static simulation and reference output
    图 14  仿真和参考静态输出变量子集Ⅱ的CDF对比
    Fig. 14  CDF comparison of variable subset Ⅱ between static simulation and reference output
    表 5  动态输出均值曲线的一致性分析结果
    Table 5  Consistency analysis results of the mean curves of dynamic outputs
    变量名 位置特征一致性 形状特征一致性
    $\theta $ 0.92 0.74
    $\alpha$ 0.63 0.60
    $\lambda$ 0.98 0.74
    ${{D}_{{MT}}}$ 0.97 0.61
    表 6  仿真和参考输出变量子集的一致性分析结果
    Table 6  Consistency analysis results of the variables subset of the simulation and reference outputs
    输出变量类型 变量子集标号 累积概率分布差异 可信度结果
    动态 变量子集Ⅰ 位置差异: $8.92\times {{10}^{{-8}}}$ 位置特征: 0.99
    动态 变量子集Ⅰ 形状差异: $1.1\times {{10}^{{-3}}}$ 形状特征: 0.94
    动态 变量子集Ⅱ 0 1
    静态 变量子集Ⅰ $1.6\times {{10}^{5}}$ 0.84
    静态 变量子集Ⅱ 0.5 0.9
    静态 变量子集Ⅲ 0 1

    此外, 为进一步验证本文方法对参数不确定性度量的有效性, 针对上述应用实例分别设计两组验证实验(不确定性参数取值见表 7). 固定仿真模型和参考系统的不确定性参数大气密度系数$ {{C}_{\rho }} $、升力系数扰动$ {{C}_{{LC}}} $、阻力系数扰动$ {{C}_{{DC}}} $和初始视线角$ {{\lambda }_{0}} $的取值, 分别调节仿真模型初始弹道倾角$ {{\theta }_{0}} $的均值和方差, 得到最终验证结果如图 15$ \sim $16所示. 实验表明, 该方法能够度量仿真模型不确定参数取值的离散程度对验证结果的影响, 过大或过小的参数不确定度均会降低模型的可信度; 同时该方法能够度量不确定性参数的均值差异对验证结果的影响. 综上所述, 所提方法能够用于解决带有相关性的多元输出仿真结果验证问题.

    表 7  验证实验的不确定参数取值
    Table 7  Uncertainty parameters values for validation experiments
    试验编号 参考系统${{\theta }_{0}}$取值 实验组Ⅰ ${{\theta }_{0}}$取值 实验组Ⅱ ${{\theta }_{0}}$取值
    1 $N\left( 0.26, 0.07 \right)$ 0.26 $N\left( 0.15, 0.07 \right)$
    2 $N\left( 0.26, 0.07 \right)$ $N\left( 0.26, 0.04 \right)$ $N\left( 0.21, 0.07 \right)$
    3 $N\left( 0.26, 0.07 \right)$ $N\left( 0.26, 0.07 \right)$ $N\left( 0.26, 0.07 \right)$
    4 $N\left( 0.26, 0.07 \right)$ $N\left( 0.26, 0.1 \right)$ $N\left( 0.31, 0.07 \right)$
    5 $N\left( 0.26, 0.07 \right)$ $N\left( 0.26, 0.13 \right)$ $N\left( 0.37, 0.07 \right)$
    图 15  实验组Ⅰ验证结果
    Fig. 15  Validation result of experiment Ⅰ
    图 16  实验组Ⅱ验证结果
    Fig. 16  Validation result of experiment Ⅱ

    针对带有相关性的多元输出仿真模型验证问题, 提出了考虑不确定性的联合验证方法. 首先对多变量输出提取相关变量子集, 并对各输出变量提取数据特征, 利用联合CDF差异法度量各相关变量子集的一致性程度, 进而综合得到模型可信度. 利用单变量验证方法进行多变量验证时, 需要满足输出变量相互独立的条件; 本文方法考虑了多变量间的相关关系, 基于相关变量子集进行联合验证, 较单变量验证方法应用更合理. 同时在验证前引入了变量相关性分析, 使其能够适应输出变量之间关系未知的情况, 使验证结果更准确, 但也增加了计算开销. 此外, 该方法能够度量不确定性因素对模型可信度的影响.

    需要说明的是, 本文仅考虑同一类型输出(动态或静态)存在相关性的情况, 涉及的变量选择方法本质上属于数据挖掘方法, 为确保方法的准确性要求具备足够的样本容量, 对于参考数据缺乏的情况, 可采用专家给出参考数据的大致分布, 或可利用已有的历史数据、可信度较高且类似的半实物/纯数字仿真模型所产生的数据代替.此外, 刻画动态输出的数据特征不限于距离、形状及频谱, 可依据具体应用需求而定(例如, 超调量、相位误差等).后续将对动态、静态输出间的相关性分析及变量选择方法进行研究; 同时针对参考数据缺乏以及存在认知和固有混合不确定性时的多元输出仿真结果验证问题展开进一步研究.


  • 下载地址: http://www.cripac.ia.ac.cn/people/znsun/irisclassification/CASIA-Iris-Fake.rar
  • 图  1  使用义眼进行虹膜呈现攻击图示(插图取自电影《辛普森一家》)

    Fig.  1  An illustration of iris presentation attack using artificial eye (the figure is from 《The Simpsons》)

    图  2  虹膜识别及虹膜呈现攻击检测的应用场景

    Fig.  2  Application scenarios of iris recognition and iris presentation attack detection

    图  3  具有虹膜呈现攻击检测功能的虹膜识别产品

    Fig.  3  Iris recognition products with IPAD function

    图  4  虹膜呈现攻击检测的中国专利数量

    Fig.  4  The number of Chinese patents related to IPAD

    图  5  申请虹膜呈现攻击检测中国专利的公司名称词云

    Fig.  5  Word cloud of companies applying for Chinese patents related to IPAD

    图  6  虹膜识别一般流程及关于呈现攻击的脆弱性

    Fig.  6  General pipeline of iris recognition and its vulnerability to presentation attacks

    图  7  虹膜呈现攻击检测和虹膜识别的两种集成方式

    Fig.  7  Two schemes for integrating iris presentation attack detection and iris recognition

    图  8  真实虹膜与常见虹膜呈现攻击类型(绿色框内为真实样本, 红色框内为假体样本)

    Fig.  8  Bona fide iris and common iris presentation attack types (green box contains bona fide samples, while red box contains fake samples)

    图  9  虹膜呈现攻击类型分类(蓝色框内为使用真实虹膜的攻击, 绿色框内为使用人工制品的攻击,紫色框内表示合成虹膜攻击)

    Fig.  9  Taxonomy of iris presentation attack types (blue box indicates PAs using real iris, green box indicates PAs using artifacts, and purple box indicates PAs using synthetic iris)

    图  10  虹膜识别与屏显虹膜进行静态虹膜呈现攻击 (插图取自电影《坏蛋联盟》)

    Fig.  10  Iris recognition and static iris presentation attack using the iris displayed on the mobile phone (the figure is from 《The Bad Guys》)

    图  11  来自CASIA-Iris-Syn[44]中012子集的合成虹膜样例, 其中(b)为(a)的虹膜旋转所得, (c)为(a)的瞳孔收缩所得, (d)为(a)的虹膜离焦变换所得

    Fig.  11  Synthetic iris samples from the 012 subset of CASIA-Iris-Syn[44], where (b), (c) and (d) are obtained from the iris rotation, pupil constriction, and iris defocus transformation of (a), respectively

    图  12  真实虹膜与iDCGAN生成的虹膜[38]

    Fig.  12  Bona fide iris and iris generated by iDCGAN[38]

    图  13  虹膜呈现攻击检测的发展进程

    Fig.  13  Development and progression of IPAD

    图  14  虹膜呈现攻击检测的论文数量(数据来源:Web of Science, EI Compendex, 中国知网)

    Fig.  14  Number of papers on IPAD (Data source: Web of Science, EI Compendex, CNKI)

    图  15  不同波长下的多光谱虹膜图像[53]

    Fig.  15  The multi-spectral iris images at different wavelengths[53]

    图  16  使用文献[55]的成像系统捕获的真实虹膜和伪造虹膜的样例图像

    Fig.  16  Example images of bona fide and fake irises by using the proposed camera system in [55]

    图  17  使用(a) OCT, (b) 近红外和(c) 可见光成像获取的真实活体虹膜、义眼和纹理隐形眼镜的样例图像, 其中可见光图像中的红线表示OCT扫描仪的遍历扫描方向[59]

    Fig.  17  Example images of bona fide iris, artificial eye and textured contact lens captured using (a) OCT, (b) NIR and (c) VIS imaging modalities, where the red line in the VIS image shows the traverse scanning direction of the OCT scanner[59]

    图  18  活体人眼在光照刺激下的瞳孔缩放效应示例

    Fig.  18  Illustration of the pupil contraction/dilation of live eye due to visible light stimulus

    图  19  近年来有代表性的基于软件的虹膜呈现攻击检测方法

    Fig.  19  Recent representative software-based iris presentation attack detection solutions

    图  20  GLCM计算过程示例

    Fig.  20  Example of GLCM calculation process

    图  21  纹理隐形眼镜图像的虹膜预处理过程[73]

    Fig.  21  Iris preprocessing process for images with textured contact lens[73]

    图  22  基于质量相关特征的虹膜活体检测方法流程图[89]

    Fig.  22  General diagram of the iris liveness detection method based on quality related features[89]

    图  23  25种图像质量评价指标的分类[90]

    Fig.  23  Classification of the 25 image quality measures[90]

    图  24  不同的图像预处理模块, 其中(a)来自文献[96], (b)来自文献[97], (c)来自文献[98]

    Fig.  24  Different image preprocessing modules, where (a) is from [96], (b) is from [97], and (c) is from [98]

    图  25  基于微条纹分析的虹膜呈现攻击检测方法[100]

    Fig.  25  Micro stripes analyses for iris presentation attack detection[100]

    图  26  基于二分类(上)和单分类(下)的虹膜呈现攻击检测算法在处理未知攻击时的效果示意图[108]

    Fig.  26  Illustration of the effects of IPAD algorithms based on binary classification (top) and one-class classification (bottom) in handling unseen presentation attacks[108]

    图  27  D-NetPAD的特征可视化[20]

    Fig.  27  Feature visualization of D-NetPAD[20]

    图  28  AG-PAD的Grad-CAM热图[21]

    Fig.  28  Grad-CAM heatmaps of AG-PAD[21]

    图  29  不同方法的Score-CAM热图[9]

    Fig.  29  Score-CAM heatmaps of different methods[9]

    图  30  DCNN的Grad-CAM热图[22]

    Fig.  30  Grad-CAM heatmaps of DCNN[22]

    表  1  国内外虹膜识别主要厂商部署虹膜呈现攻击检测技术概览

    Table  1  Overview of IPAD technology deployed by major iris recognition manufacturers at home and abroad

    公司名称 | 官方网址 | 技术支持方法 | 支持检测的呈现攻击类型
    北京万里红科技有限公司 | http://www.superred.com.cn/ | 卷积神经网络、视频序列分析 | 美瞳、义眼、打印、屏显或重放攻击
    北京中科虹霸科技有限公司 | http://www.irisking.com/ | 频谱分析、多尺度LBP、SIFT、CNN | 美瞳、义眼、打印或重放攻击
    上海点与面智能科技有限公司 | https://www.pixsur.com.cn/ | 深度神经网络 | 美瞳、义眼、打印、屏显或重放攻击
    上海聚虹光电科技有限公司 | http://www.irisian.com/ | 红外灯闪烁、多光谱成像、机器学习 | 美瞳或打印攻击
    北京眼神科技有限公司 | https://www.eyecool.cn/ | CNN、瞳孔光照反应 | 美瞳、打印或重放攻击
    武汉虹识技术有限公司 | https://www.homsh.cn/ | LBP、GLCM、红外检测、深度学习 | 美瞳、打印或合成攻击
    IriTech, Inc. (美国) | https://iritech.com/ | 虹膜动态变化 | N/R
    Iris ID (韩国) | https://www.irisid.com/ | N/R | N/R
    BioEnable Technologies (印度) | https://www.bioenabletech.com/ | N/R | N/R
    IrisGuard (英国) | https://www.irisguard.com/ | 瞳孔收缩变化、视频序列分析 | 重放攻击
    EyeLock (美国) | https://www.eyelock.com/ | 多帧图像(视频)特征分析 | 重放攻击
    Neurotechnology (立陶宛) | https://www.neurotechnology.com/ | N/R | 美瞳或打印攻击
    注: N/R = not reported, 未公布.
    数据来源: 官网、问卷调查、专利.

    表  2  虹膜呈现攻击检测方法汇总

    Table  2  Summary of iris presentation attack detection algorithms

    一级分类 | 二级分类 | 代表文献 | 算法思想 | 优点 | 缺点
    基于硬件的方法 | 多光谱成像 | [49−54] | 眼组织不同层的光谱特性 | 采集信息丰富, 检测准确率高, 可解释性好 | 需要额外的成像设备, 成本较高, 采集效率低, 可能对用户有较大干扰
    基于硬件的方法 | 3D成像 | [55−59] | 眼睛的曲率、3D特性和内部结构 | 同上 | 同上
    基于硬件的方法 | 瞳孔光照反应 | [60−61] | 照明变化对瞳孔大小的影响 | 同上 | 同上
    基于硬件的方法 | 眼动信号 | [63−65] | 眼球运动过程中的物理特征 | 同上 | 同上
    基于软件的方法 | 基于传统计算机视觉的方法/基于图像纹理的方法 | [73−76] | LBP、BSIF、小波变换、GLCM等算子从灰度图中提取纹理特征 | 计算复杂度低、容易实现, 适合纹理隐形眼镜检测 | 泛化性不足
    基于软件的方法 | 基于传统计算机视觉的方法/基于图像质量的方法 | [89−90] | 真假虹膜图像之间的“质量差异” | 简洁、快速、非接触性、用户友好、廉价 | 容易误检真实噪声虹膜图像、未定制图像质量评价标准
    基于软件的方法 | 基于深度学习的方法/传统CNNs | [20, 48, 96−103] | 通过CNNs进行虹膜真假分类 | 特征提取和分类器学习联合优化、准确率较高 | 计算复杂度高、容易过拟合、可解释性差
    基于软件的方法 | 基于深度学习的方法/生成对抗网络 | [108, 110−111] | 生成器和判别器对抗博弈 | 有利于检测未知攻击 | 模型训练较为复杂、困难
    基于软件的方法 | 基于深度学习的方法/域自适应 | [10, 113] | 学习域不变特征 | 提升检测泛化性 | 收集目标域数据较困难
    基于软件的方法 | 基于深度学习的方法/注意力机制 | [9, 21, 119−121] | 强化或者抑制特征映射 | 提升CNN特征表达能力, 提高检测准确性和泛化性 | 增加额外模型参数
    基于软件的方法 | 多源特征融合的方法 | [6, 122−128] | 传统特征与深度特征融合、多模态特征融合 | 融合特征的性能一般优于单一特征, 能提升检测的鲁棒性及泛化性 | 计算复杂度高、部署困难
    注: 基于软件的方法的共同优点为不需要额外设备、成本较低、对用户的干扰较小; “同上”表示与上一行相同.

    表  3  虹膜呈现攻击检测开源代码总览

    Table  3  Brief overview of open-source IPAD methods

    方法名称 | 代码网址 | 编程语言 | 数据集
    PhotometricStereoIrisPAD[83] | https://github.com/CVRL/PhotometricStereoIrisPAD | MATLAB | NDCLD15[86]
    OpenSourceIrisPAD[82] | https://github.com/CVRL/OpenSourceIrisPAD | C++、Python | NDCLD15[86]、IIITD-WVU[14]、LivDet-Iris 2017 (Clarkson)[14]
    RaspberryPiOpenSourceIris[85] | https://github.com/CVRL/RaspberryPiOpenSourceIris | C++、Python | NDCLD15[86]、NDIris3D[84]
    Emvlc-ipad[124] | https://github.com/akuehlka/emvlc-ipad | Python、Objective-C、C++ | LivDet-Iris 2017[14]
    D-NetPAD[20] | https://github.com/iPRoBe-lab/D-NetPAD | Python | NDCLD15[86]、LivDet-Iris 2017[14]
    AG-PAD[21] | https://github.com/cunjian/AGPAD | Python | JHU-APL (私有)[21]、LivDet-Iris 2017[14]
    LFLD[58] | https://github.com/luozhengquan/LFLD | Python | CASIA-Iris-LFLD[57-58]

    表  4  虹膜呈现攻击检测开放数据集总览

    Table  4  Brief overview of publicly available IPAD datasets

    数据集 | 年份 | 呈现攻击(张) | 真实虹膜(张) | 总数(张) | 成像光谱 | 攻击类型 | 图像分辨率(像素)
    Warsaw-BioBase-Disease-Iris v1.0[36] | 2015 | 441 | 384 | 825 | 近红外、可见光 | 病变 | $640\times480$
    Warsaw-BioBase-Disease-Iris v2.1[133] | 2015 | 2 212 | 784 | 2 996 | 近红外、可见光 | 病变 | $640\times480$
    Warsaw-BioBase-Post-Mortem-Iris v1.1[33] | 2016 | 1 597 | 0 | 1 597 | 近红外、可见光 | 尸体 | $640\times480$
    Warsaw-BioBase-Post-Mortem-Iris v2.0[134] | 2019 | 2 987 | 0 | 2 987 | 近红外、可见光 | 尸体 | $640\times480$
    Warsaw-BioBase-Post-Mortem-Iris v3.0[135] | 2020 | 1 879 | 0 | 1 879 | 近红外、可见光 | 尸体 | $640\times480$
    CASIA-Iris-Syn[43] | 2008 | 10 000 | 0 | 10 000 | N/A | 合成 | $640\times480$
    CASIA-Iris-Fake[136] | 2014 | 4 730 | 6 000 | 10 730 | 近红外 | 打印、隐形眼镜、义眼、合成 | 大小不一
    CASIA-Iris-LFLD[57-58] | 2019 | 274 | 230 | 504 | 近红外 | 打印、屏显 | $128\times96$
    Eye Tracker Print-Attack Database (ETPAD v1)[63] | 2014 | 200 | 200 | 400 | 近红外 | 打印 | $640\times480$
    Eye Tracker Print-Attack Database (ETPAD v2)[64] | 2015 | 400 | 400 | 800 | 近红外 | 打印 | $640\times480$
    Synthetic Iris Textured Based[137] | 2006 | 7 000 | 0 | 7 000 | N/A | 合成 | N/R
    Synthetic Iris Model Based[138] | 2007 | 160 000 | 0 | 160 000 | N/A | 合成 | N/R
    UVCLI Database[139] | 2017 | 1 925 | 1 877 | 3 802 | 可见光 | 隐形眼镜 | N/R
    UnMIPA Database[93] | 2019 | 9 387 | 9 319 | 18 706 | 近红外 | 隐形眼镜 | N/R
    Cataract Mobile Periocular Database (CMPD)[140] | 2016 | N/R | N/R | 2 380 | 可见光 | 病变 | $4\;608\times3\;456$
    WVU Mobile Iris Spoofing (IIITD-WVU) Dataset[14] | 2017 | 7 507 | 2 952 | 10 459 | 近红外 | 隐形眼镜、打印 | $640\times480$
    IIITD Contact Lens Iris Database[141] | 2013 | N/R | N/R | 6 570 | 近红外 | 隐形眼镜 | $640\times480$
    ND Cosmetic Contact Lenses 2013 Dataset (NDCLD13)[142] | 2013 | 1 700 | 3 400 | 5 100 | 近红外 | 隐形眼镜 | $640\times480$
    The Notre Dame Contact Lens Dataset 2015 (NDCLD15)[86] | 2015 | 2 500 | 4 800 | 7 300 | 近红外 | 隐形眼镜 | $640\times480$
    The Notre Dame LivDet-Iris 2017 Subset[14] | 2017 | 2 400 | 2 400 | 4 800 | 近红外 | 隐形眼镜 | $640\times480$
    Notre Dame Photometric Stereo Iris Dataset (WACV 2019)[83] | 2019 | 2 664 | 3 132 | 5 796 | 近红外 | 隐形眼镜 | $640\times480$
    NDIris3D[84] | 2021 | 3 392 | 3 458 | 6 850 | 近红外 | 隐形眼镜 | $640\times480$
    注: N/R = not reported, 未公布.
    N/A = not applicable, 不适用.
    透明隐形眼镜虹膜图像归类为真实虹膜图像.

    表  5  虹膜呈现攻击检测竞赛

    Table  5  Iris presentation attack detection competitions

    比赛名称 | 组织者(训练图像数/测试图像数) | 攻击类型 | 成像光谱 | 参赛团队数量 | 冠军团队算法名称 | BPCER (%) | APCER (%)
    LivDet-Iris 2013[11] | 克拉克森大学(670/686)、华沙工业大学(431/1 236)、圣母大学(3 000/1 200) | 打印、纹理隐形眼镜 | 近红外 | 3 | Federico | 28.56 | 5.72
    MobiLive 2014[12] | INESC TEC、波尔图大学(800/800) | 打印 | 可见光 | 6 | IIT Indore | 0.50 | 0.00
    LivDet-Iris 2015[13] | 克拉克森大学(LG)(1 872/1 854)、克拉克森大学(Dalsa)(2 419/1 836)、华沙工业大学(1 667/5 892) | 打印、纹理隐形眼镜 | 近红外 | 4 | Federico | 1.68 | 5.48
    LivDet-Iris 2017[14] | 克拉克森大学(4 937/3 158)、华沙工业大学(4 513/7 500)、圣母大学(1 200/2 700)、西弗吉尼亚大学与印度理工学院德里分校(6 250/4 209) | 打印、纹理隐形眼镜 | 近红外 | 3 | Anon1 | 3.36 | 14.71
    LivDet-Iris 2020[15] | 克拉克森大学、华沙工业大学、圣母大学、瑞士IDIAP研究所、华沙医科大学(测试12 432) | 打印、纹理隐形眼镜、屏显虹膜、义眼、尸体虹膜、组合攻击 | 近红外 | 3 | USACH/TOC | 0.46 | 59.10
  • [1] Daugman J G. High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993, 15(11): 1148-1161 doi: 10.1109/34.244676
    [2] Daugman J G. How Iris recognition works. IEEE Transactions on Circuits and Systems for Video Technology, 2004, 14(1): 21-30 doi: 10.1109/TCSVT.2003.818350
    [3] Wildes R P. Iris recognition: An emerging biometric technology. Proceedings of the IEEE, 1997, 85(9): 1348-1363 doi: 10.1109/5.628669
    [4] International Organization for Standardization. Information Technology-biometric Presentation Attack Detection——Part 1: Framework, ISO/IEC 30107-1: 2016, 2016.
    [5] 孙哲南, 赫然, 王亮, 阚美娜, 冯建江, 郑方, 等. 生物特征识别学科发展报告. 中国图象图形学报, 2021, 26(6): 1254-1329 doi: 10.11834/jig.210078

    Sun Zhe-Nan, He Ran, Wang Liang, Kan Mei-Na, Feng Jian-Jiang, Zheng Fang, et al. Overview of biometrics research. Journal of Image and Graphics, 2021, 26(6): 1254-1329 doi: 10.11834/jig.210078
    [6] Agarwal A, Noore A, Vatsa M, Singh R. Generalized contact lens iris presentation attack detection. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2022, 4(3): 373-385 doi: 10.1109/TBIOM.2022.3177669
    [7] Tapia J E, Gonzalez S, Busch C. Iris liveness detection using a cascade of dedicated deep learning networks. IEEE Transactions on Information Forensics and Security, 2022, 17: 42-52 doi: 10.1109/TIFS.2021.3132582
    [8] Maureira J, Tapia J E, Arellano C, Busch C. Analysis of the synthetic periocular iris images for robust presentation attacks detection algorithms. IET Biometrics, 2022, 11(4): 343-354 doi: 10.1049/bme2.12084
    [9] Fang M L, Damer N, Boutros F, Kirchbuchner F, Kuijper A. Iris presentation attack detection by attention-based and deep pixel-wise binary supervision network. In: Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Shenzhen, China: IEEE, 2021. 1−8
    [10] Li Y C, Lian Y, Wang J J, Chen Y H, Wang C M, Pu S L. Few-shot one-class domain adaptation based on frequency for iris presentation attack detection. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Singapore: IEEE, 2022. 2480−2484
    [11] Yambay D, Doyle J S, Bowyer K W, Czajka A, Schuckers S. LivDet-iris 2013——Iris liveness detection competition 2013. In: Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Clearwater, USA: IEEE, 2014. 1−8
    [12] Sequeira A F, Oliveira H P, Monteiro J C, Monteiro J P, Cardoso J S. MobiLive 2014——Mobile iris liveness detection competition. In: Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Clearwater, USA: IEEE, 2014. 1−6
    [13] Yambay D, Walczak B, Schuckers S, Czajka A. LivDet-iris 2015——Iris liveness detection competition 2015. In: Proceedings of the IEEE International Conference on Identity, Security and Behavior Analysis (ISBA). New Delhi, India: IEEE, 2017. 1−6
    [14] Yambay D, Becker B, Kohli N, Yadav D, Czajka A, Bowyer K W, et al. LivDet iris 2017——Iris liveness detection competition 2017. In: Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Denver, USA: IEEE, 2017. 733−741
    [15] Das P, McGrath J, Fang Z Y, Boyd A, Jang G, Mohammadi A, et al. Iris liveness detection competition (livDet-iris)——The 2020 edition. In: Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Houston, USA: IEEE, 2020. 1−9
    [16] Czajka A, Bowyer K W. Presentation attack detection for iris recognition: An assessment of the state-of-the-art. ACM Computing Surveys, 2019, 51(4): Article No. 86
    [17] Boyd A, Fang Z Y, Czajka A, Bowyer K W. Iris presentation attack detection: Where are we now? Pattern Recognition Letters, 2020, 138: 483-489 doi: 10.1016/j.patrec.2020.08.018
    [18] Galbally J, Gomez-Barrero M. A review of iris anti-spoofing. In: Proceedings of the 4th International Conference on Biometrics and Forensics (IWBF). Limassol, Cyprus: IEEE, 2016. 1−6
    [19] Morales A, Fierrez J, Galbally J, Gomez-Barrero M. Introduction to iris presentation attack detection. Handbook of Biometric Anti-Spoofing. Berlin: Springer, 2019. 135−150
    [20] Sharma R, Ross A. D-NetPAD: An explainable and interpretable iris presentation attack detector. In: Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Houston, USA: IEEE, 2020. 1−10
    [21] Chen C J, Ross A. An explainable attention-guided iris presentation attack detector. In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision Workshops (WACVW). Waikola, USA: IEEE, 2021. 97−106
    [22] Trokielewicz M, Czajka A, Maciejewicz P. Presentation attack detection for cadaver iris. In: Proceedings of the IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS). Redondo Beach, USA: IEEE, 2018. 1−10
    [23] Fang M L, Damer N, Kirchbuchner F, Kuijper A. Demographic bias in presentation attack detection of iris recognition systems. In: Proceedings of the 28th European Signal Processing Conference (EUSIPCO). Amsterdam, Netherlands: IEEE, 2021. 835−839
    [24] Husseis A, Liu-Jimenez J, Goicoechea-Telleria I, Sanchez-Reillo R. A survey in presentation attack and presentation attack detection. In: Proceedings of the International Carnahan Conference on Security Technology (ICCST). Chennai, India: IEEE, 2019. 1−13
    [25] Ma L, Tan T N, Wang Y H, Zhang D X. Personal identification based on iris texture analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(12): 1519-1533 doi: 10.1109/TPAMI.2003.1251145
    [26] Sun Z N, Wang Y H, Tan T N, Cui J L. Improving iris recognition accuracy via cascaded classifiers. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 2005, 35(3): 435-441 doi: 10.1109/TSMCC.2005.848169
    [27] Jain A K, Deb D, Engelsma J J. Biometrics: Trust, but verify. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2022, 4(3): 303-323 doi: 10.1109/TBIOM.2021.3115465
    [28] Islam I, Munim K M, Islam M N, Karim M M. A proposed secure mobile money transfer system for SME in Bangladesh: An industry 4.0 perspective. In: Proceedings of the International Conference on Sustainable Technologies for Industry 4.0 (STI). Dhaka, Bangladesh: IEEE, 2019. 1−6
    [29] Tinsley P, Czajka A, Flynn P J. Haven't I seen you before? Assessing identity leakage in synthetic irises. In: Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Abu Dhabi, United Arab Emirates: IEEE, 2022. 1−9
    [30] Dhar P, Kumar A, Kaplan K, Gupta K, Ranjan R, Chellappa R. EyePAD++: A distillation-based approach for joint eye authentication and presentation attack detection using periocular images. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 20186−20195
    [31] 国家市场监督管理总局, 国家标准化管理委员会. 信息技术 生物特征识别呈现攻击检测 第1部分: 框架, GB/T 41815.1-2022, 2022.

    State Administration for Market Regulation, Standardization Administration of the People's Republic of China. Information Technology——Biometric Presentation Attack Detection——Part 1: Framework, GB/T 41815.1-2022, 2022.
    [32] Sansola A K H. Postmortem Iris Recognition and Its Application in Human Identification [Master thesis], Boston University, USA, 2015.
    [33] Trokielewicz M, Czajka A, Maciejewicz P. Human iris recognition in post-mortem subjects: Study and database. In: Proceedings of the IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS). Niagara Falls, USA: IEEE, 2016. 1−6
    [34] Sauerwein K, Saul T B, Steadman D W, Boehnen C B. The effect of decomposition on the efficacy of biometrics for positive identification. Journal of Forensic Sciences, 2017, 62(6): 1599-1602 doi: 10.1111/1556-4029.13484
    [35] Bolme D S, Tokola R A, Boehnen C B, Saul T B, Sauerwein K A, Steadman D W. Impact of environmental factors on biometric matching during human decomposition. In: Proceedings of the IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS). Niagara Falls, USA: IEEE, 2016. 1−8
    [36] Trokielewicz M, Czajka A, Maciejewicz P. Database of iris images acquired in the presence of ocular pathologies and assessment of iris recognition reliability for disease-affected eyes. In: Proceedings of the IEEE 2nd International Conference on Cybernetics (CYBCONF). Gdynia, Poland: IEEE, 2015. 495−500
    [37] Boyd A, Speth J, Parzianello L, Bowyer K W, Czajka A. Comprehensive study in open-set iris presentation attack detection. IEEE Transactions on Information Forensics and Security, 2023, 18: 3238-3250 doi: 10.1109/TIFS.2023.3274477
    [38] Kohli N, Yadav D, Vatsa M, Singh R, Noore A. Synthetic iris presentation attack using iDCGAN. In: Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Denver, USA: IEEE, 2017. 674−680
    [39] Lefohn A, Budge B, Shirley P, Caruso R, Reinhard E. An ocularist's approach to human iris synthesis. IEEE Computer Graphics and Applications, 2003, 23(6): 70-75 doi: 10.1109/MCG.2003.1242384
    [40] Cui J L, Wang Y H, Huang J Z, Tan T N, Sun Z N. An iris image synthesis method based on PCA and super-resolution. In: Proceedings of the 17th International Conference on Pattern Recognition. Cambridge, UK: IEEE, 2004. 471−474
    [41] Wei L Y, Levoy M. Fast texture synthesis using tree-structured vector quantization. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAP). New Orleans, USA: ACM, 2000. 479−488
    [42] Makthal S, Ross A. Synthesis of iris images using Markov random fields. In: Proceedings of the 13th European Signal Processing Conference (EUSIPCO). Antalya, Turkey: IEEE, 2005. 1−4
    [43] Wei Z S, Tan T N, Sun Z N. Synthesis of large realistic iris databases using patch-based sampling. In: Proceedings of the 19th International Conference on Pattern Recognition (ICPR). Tampa, USA: IEEE, 2008. 1−4
    [44] Biometrics Ideal Test (BIT). CASIA iris image database 4.0 [Online], available: http://biometrics.idealtest.org/dbDetailForUser.do?id=4, October 22, 2023
    [45] Boutros F, Damer N, Raja K, Ramachandra R, Kirchbuchner F, Kuijper A. Iris and periocular biometrics for head mounted displays: Segmentation, recognition, and synthetic data generation. Image and Vision Computing, 2020, 104: Article No. 104007 doi: 10.1016/j.imavis.2020.104007
    [46] Galbally J, Savvides M, Venugopalan S, Ross A A. Iris image reconstruction from binary templates. Handbook of Iris Recognition. London, UK: Springer, 2016. 469−496
    [47] Daugman J G. Demodulation by complex-valued wavelets for stochastic pattern recognition. International Journal of Wavelets, Multiresolution and Information Processing, 2003, 1(1): 1-17 doi: 10.1142/S0219691303000025
    [48] Silva P, Luz E, Baeta R, Pedrini H, Falcao A X, Menotti D. An approach to iris contact lens detection based on deep image representations. In: Proceedings of the 28th SIBGRAPI Conference on Graphics, Patterns and Images. Salvador, Brazil: IEEE, 2015. 157−164
    [49] Lee E C, Park K R, Kim J. Fake iris detection by using Purkinje image. In: Proceedings of the International Conference on Biometrics. Hong Kong, China: Springer, 2006. 397−403
    [50] Lee S J, Park K R, Lee Y J, Bae K, Kim J H. Multifeature-based fake iris detection method. Optical Engineering, 2007, 46(12): Article No. 127204 doi: 10.1117/1.2815719
    [51] He Y Q, Hou Y S, Li Y J, Wang Y M. Liveness iris detection method based on the eye's optical features. In: Proceedings of SPIE 7838, Optics and Photonics for Counterterrorism and Crime Fighting VI and Optical Materials in Defence Systems Technology VII. Toulouse, France: SPIE, 2010. 236−243
    [52] Park J H, Kang M G. Multispectral iris authentication system against counterfeit attack using gradient-based image fusion. Optical Engineering, 2007, 46(11): Article No. 117003 doi: 10.1117/1.2802367
    [53] 陈瑞, 孙静宇, 林喜荣, 丁天怀. 利用多光谱图像的伪造虹膜检测算法. 电子学报, 2011, 39(3): 710-713

    Chen Rui, Sun Jing-Yu, Lin Xi-Rong, Ding Tian-Huai. An algorithm for fake irises detection using multi-spectral images. Acta Electronica Sinica, 2011, 39(3): 710-713
    [54] Chen R, Lin X R, Ding T H. Liveness detection for iris recognition using multispectral images. Pattern Recognition Letters, 2012, 33(12): 1513-1519 doi: 10.1016/j.patrec.2012.04.002
    [55] Lee E C, Park K R. Fake iris detection based on 3D structure of iris pattern. International Journal of Imaging Systems and Technology, 2010, 20(2): 162-166 doi: 10.1002/ima.20227
    [56] Raghavendra R, Busch C. Presentation attack detection on visible spectrum iris recognition by exploring inherent characteristics of light field camera. In: Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Clearwater, USA: IEEE, 2014. 1−8
    [57] 宋平, 黄玲, 王云龙, 刘菲, 孙哲南. 基于计算光场成像的虹膜活体检测方法. 自动化学报, 2019, 45(9): 1701-1712 doi: 10.16383/j.aas.c180213

    Song Ping, Huang Ling, Wang Yun-Long, Liu Fei, Sun Zhe-Nan. Iris liveness detection based on light field imaging. Acta Automatica Sinica, 2019, 45(9): 1701-1712 doi: 10.16383/j.aas.c180213
    [58] Luo Z Q, Wang Y L, Liu N F, Wang Z L. Combining 2D texture and 3D geometry features for Reliable iris presentation attack detection using light field focal stack. IET Biometrics, 2022, 11(5): 420-429 doi: 10.1049/bme2.12092
    [59] Sharma R, Ross A. Viability of optical coherence tomography for iris presentation attack detection. In: Proceedings of the 25th International Conference on Pattern Recognition (ICPR). Milan, Italy: IEEE, 2021. 6165−6172
    [60] Park K R. Robust fake iris detection. In: Proceedings of the 4th International Conference on Articulated Motion and Deformable Objects. Port d'Andratx, Spain: Springer, 2006. 10−18
    [61] Czajka A. Pupil dynamics for iris liveness detection. IEEE Transactions on Information Forensics and Security, 2015, 10(4): 726-735 doi: 10.1109/TIFS.2015.2398815
    [62] 苟超, 卓莹, 王康, 王飞跃. 眼动跟踪研究进展与展望. 自动化学报, 2022, 48(5): 1173-1192 doi: 10.16383/j.aas.c210514

    Gou Chao, Zhuo Ying, Wang Kang, Wang Fei-Yue. Research advances and prospects of eye tracking. Acta Automatica Sinica, 2022, 48(5): 1173-1192 doi: 10.16383/j.aas.c210514
    [63] Rigas I, Komogortsev O V. Gaze estimation as a framework for iris liveness detection. In: Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Clearwater, USA: IEEE, 2014. 1−8
    [64] Rigas I, Komogortsev O V. Eye movement-driven defense against iris print-attacks. Pattern Recognition Letters, 2015, 68: 316-326 doi: 10.1016/j.patrec.2015.06.011
    [65] Raju M H, Lohr D J, Komogortsev O. Iris print attack detection using eye movement signals. In: Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA). Seattle, USA: ACM, 2022. Article No. 70
    [66] Kannala J, Rahtu E. BSIF: Binarized statistical image features. In: Proceedings of the 21st International Conference on Pattern Recognition. Tsukuba, Japan: IEEE, 2012. 1363−1366
    [67] Ojala T, Pietikainen M, Harwood D. Performance evaluation of texture measures with classification based on Kullback discrimination of distributions. In: Proceedings of the 12th International Conference on Pattern Recognition (ICPR). Jerusalem, Israel: IEEE, 1994. 582−585
    [68] Haralick R M, Shanmugam K, Dinstein I H. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, 1973, SMC-3(6): 610-621 doi: 10.1109/TSMC.1973.4309314
    [69] Agarwal R, Jalal A S, Arya K V. Enhanced binary hexagonal extrema pattern (EBHXEP) descriptor for iris liveness detection. Wireless Personal Communications, 2020, 115(3): 2627-2643 doi: 10.1007/s11277-020-07700-9
    [70] Agarwal R, Jalal A S, Arya K V. Local binary hexagonal extrema pattern (LBHXEP): A new feature descriptor for fake iris detection. The Visual Computer, 2021, 37(6): 1357-1368 doi: 10.1007/s00371-020-01870-0
    [71] Cybenko G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 1989, 2(4): 303-314
    [72] Ulaby F T, Kouyate F, Brisco B, Williams T H L. Textural information in SAR images. IEEE Transactions on Geoscience and Remote Sensing, 1986, GE-24(2): 235-245 doi: 10.1109/TGRS.1986.289643
    [73] He X F, An S J, Shi P F. Statistical texture analysis-based approach for fake iris detection using support vector machines. In: Proceedings of the International Conference on Biometrics. Seoul, South Korea: Springer, 2007. 540−546
    [74] Li D, Wu C, Wang Y M. A novel iris texture extraction scheme for iris presentation attack detection. Journal of Image and Graphics, 2021, 9(3): 95-102
    [75] Wei Z S, Qiu X C, Sun Z N, Tan T N. Counterfeit iris detection based on texture analysis. In: Proceedings of the 19th International Conference on Pattern Recognition (ICPR). Tampa, USA: IEEE, 2008. 1−4
    [76] He X F, Lu Y, Shi P F. A new fake iris detection method. In: Proceedings of the 3rd International Conference on Advances in Biometrics. Alghero, Italy: Springer, 2009. 1132−1139
    [77] Freund Y, Schapire R E. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 1997, 55(1): 119-139 doi: 10.1006/jcss.1997.1504
    [78] He Z F, Sun Z N, Tan T N, Wei Z S. Efficient iris spoof detection via boosted local binary patterns. In: Proceedings of the International Conference on Biometrics. Alghero, Italy: Springer, 2009. 1080−1090
    [79] Zhang H, Sun Z N, Tan T N. Contact lens detection based on weighted LBP. In: Proceedings of the 20th International Conference on Pattern Recognition (ICPR). Istanbul, Turkey: IEEE, 2010. 4279−4282
    [80] Alonso-Fernandez F, Bigun J. Exploiting periocular and RGB information in fake iris detection. In: Proceedings of the 37th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). Opatija, Croatia: IEEE, 2014. 1354−1359
    [81] Sequeira A F, Murari J, Cardoso J S. Iris liveness detection methods in mobile applications. In: Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP). Lisbon, Portugal: IEEE, 2014. 22−33
    [82] McGrath J, Bowyer K W, Czajka A. Open source presentation attack detection baseline for iris recognition. arXiv: 1809.10172, 2018.
    [83] Czajka A, Fang Z Y, Bowyer K W. Iris presentation attack detection based on photometric stereo features. In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV). Waikoloa, USA: IEEE, 2019. 877−885
    [84] Fang Z Y, Czajka A, Bowyer K W. Robust iris presentation attack detection fusing 2D and 3D information. IEEE Transactions on Information Forensics and Security, 2021, 16: 510-520 doi: 10.1109/TIFS.2020.3015547
    [85] Fang Z Y, Czajka A. Open source iris recognition hardware and software with presentation attack detection. In: Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Houston, USA: IEEE, 2020. 1−8
    [86] Doyle J S, Bowyer K W. Robust detection of textured contact lenses in iris recognition using BSIF. IEEE Access, 2015, 3: 1672-1683 doi: 10.1109/ACCESS.2015.2477470
    [87] Dronky M R, Khalifa W, Roushdy M. Using residual images with BSIF for iris liveness detection. Expert Systems With Applications, 2021, 182: Article No. 115266 doi: 10.1016/j.eswa.2021.115266
    [88] 李星光, 孙哲南, 谭铁牛. 虹膜图像质量评价综述. 中国图象图形学报, 2014, 19(6): 813-824 doi: 10.11834/jig.20140601

    Li Xing-Guang, Sun Zhe-Nan, Tan Tie-Niu. Overview of iris image quality-assessment. Journal of Image and Graphics, 2014, 19(6): 813-824 doi: 10.11834/jig.20140601
    [89] Galbally J, Ortiz-Lopez J, Fierrez J, Ortega-Garcia J. Iris liveness detection based on quality related features. In: Proceedings of the 5th IAPR International Conference on Biometrics (ICB). New Delhi, India: IEEE, 2012. 271−276
    [90] Galbally J, Marcel S, Fierrez J. Image quality assessment for fake biometric detection: Application to iris, fingerprint, and face recognition. IEEE Transactions on Image Processing, 2014, 23(2): 710-724 doi: 10.1109/TIP.2013.2292332
    [91] Lecun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11): 2278-2324 doi: 10.1109/5.726791
    [92] Selvaraju R R, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 618−626
    [93] Yadav D, Kohli N, Vatsa M, Singh R, Noore A. Detecting textured contact lens in uncontrolled environment using DensePAD. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Long Beach, USA: IEEE, 2019. 2336−2344
    [94] Yadav D, Kohli N, Yadav S, Vatsa M, Singh R, Noore A. Iris presentation attack via textured contact lens in unconstrained environment. In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV). Lake Tahoe, USA: IEEE, 2018. 503−511
    [95] Van der Maaten L, Hinton G. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008, 9(86): 2579-2605
    [96] He L X, Li H Q, Liu F, Liu N F, Sun Z N, He Z F. Multi-patch convolution neural network for iris liveness detection. In: Proceedings of the IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS). Niagara Falls, USA: IEEE, 2016. 1−7
    [97] Raghavendra R, Raja K B, Busch C. ContlensNet: Robust iris contact lens detection using deep convolutional neural networks. In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV). Santa Rosa, USA: IEEE, 2017. 1160−1167
    [98] Hoffman S, Sharma R, Ross A. Convolutional neural networks for iris presentation attack detection: Toward cross-dataset and cross-sensor generalization. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Salt Lake City, USA: IEEE, 2018. 1701−1709
    [99] Pala F, Bhanu B. Iris liveness detection by relative distance comparisons. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Honolulu, USA: IEEE, 2017. 664−671
    [100] Fang M L, Damer N, Kirchbuchner F, Kuijper A. Micro stripes analyses for iris presentation attack detection. In: Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Houston, USA: IEEE, 2020. 1−10
    [101] Fang M L, Damer N, Boutros F, Kirchbuchner F, Kuijper A. Cross-database and cross-attack iris presentation attack detection using micro stripes analyses. Image and Vision Computing, 2021, 105: Article No. 104057 doi: 10.1016/j.imavis.2020.104057
    [102] 刘明康, 王宏民, 李琦, 孙哲南. 增强型灰度图像空间实现虹膜活体检测. 中国图象图形学报, 2020, 25(7): 1421-1435 doi: 10.11834/jig.190503

    Liu Ming-Kang, Wang Hong-Min, Li Qi, Sun Zhe-Nan. Enhanced gray-level image space for iris liveness detection. Journal of Image and Graphics, 2020, 25(7): 1421-1435 doi: 10.11834/jig.190503
    [103] Gautam G, Raj A, Mukhopadhyay S. Deep supervised class encoding for iris presentation attack detection. Digital Signal Processing, 2022, 121: Article No. 103329 doi: 10.1016/j.dsp.2021.103329
    [104] Goodfellow I J, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. In: Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS). Montreal, Canada: MIT Press, 2014. 2672−2680
    [105] Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks. In: Proceedings of the 4th International Conference on Learning Representations. San Juan, Puerto Rico: ICLR, 2016.
    [106] Jolicoeur-Martineau A. The relativistic discriminator: A key element missing from standard GAN. In: Proceedings of the 7th International Conference on Learning Representations. New Orleans, USA: OpenReview.net, 2019. 1−26
    [107] Karras T, Laine S, Aila T. A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 4396−4405
    [108] Yadav S, Chen C J, Ross A. Relativistic discriminator: A one-class classifier for generalized iris presentation attack detection. In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV). Snowmass, USA: IEEE, 2020. 2624−2633
    [109] Perera P, Oza P, Patel V M. One-class classification: A survey. arXiv: 2101.03064, 2021.
    [110] Ferreira P M, Sequeira A F, Pernes D, Rebelo A, Cardoso J S. Adversarial learning for a robust iris presentation attack detection method against unseen attack presentations. In: Proceedings of the International Conference of the Biometrics Special Interest Group (BIOSIG). Darmstadt, Germany: IEEE, 2019. 1−7
    [111] Yadav S, Ross A. CIT-GAN: Cyclic image translation generative adversarial network with application in iris presentation attack detection. In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision. Waikoloa, USA: IEEE, 2021. 2411−2420
    [112] 刘建伟, 孙正康, 罗雄麟. 域自适应学习研究进展. 自动化学报, 2014, 40(8): 1576-1600

    Liu Jian-Wei, Sun Zheng-Kang, Luo Xiong-Lin. Review and research development on domain adaptation learning. Acta Automatica Sinica, 2014, 40(8): 1576-1600
    [113] El-Din Y S, Moustafa M N, Mahdi H. On the effectiveness of adversarial unsupervised domain adaptation for iris presentation attack detection in mobile devices. In: Proceedings of SPIE 11605, Thirteenth International Conference on Machine Vision (ICMV). Rome, Italy: SPIE, 2021. Article No. 116050W
    [114] Zhang Y B, Tang H, Jia K, Tan M K. Domain-symmetric networks for adversarial domain adaptation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 5026−5035
    [115] Wang X L, Girshick R, Gupta A, He K M. Non-local neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, USA: IEEE, 2018. 7794−7803
    [116] Hu J, Shen L, Sun G. Squeeze-and-excitation networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, USA: IEEE, 2018. 7132−7141
    [117] Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez A N, et al. Attention is all you need. In: Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS). Long Beach, USA: Curran Associates Inc., 2017. 6000−6010
    [118] Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X H, Unterthiner T, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In: Proceedings of the 9th International Conference on Learning Representations. Vienna, Austria: ICLR, 2021. 1−21
    [119] 吕梦凌, 何玉青, 杨峻凯, 金伟其, 张丽君. 基于循环注意力机制的隐形眼镜虹膜防伪检测方法. 光学学报, 2022, 42(23): Article No. 2315001 doi: 10.3788/AOS202242.2315001

    Lv Meng-Ling, He Yu-Qing, Yang Jun-Kai, Jin Wei-Qi, Zhang Li-Jun. Anti-spoofing detection method for contact lens irises based on recurrent attention mechanism. Acta Optica Sinica, 2022, 42(23): Article No. 2315001 doi: 10.3788/AOS202242.2315001
    [120] 陈旭旗, 沈文忠. IrisBeautyDet: 虹膜定位和美瞳检测网络. 计算机工程与应用, 2023, 59(2): 120-128 doi: 10.3778/j.issn.1002-8331.2106-0460

    Chen Xu-Qi, Shen Wen-Zhong. IrisBeautyDet: Neural network for iris localization and cosmetic contact lens detection. Computer Engineering and Applications, 2023, 59(2): 120-128 doi: 10.3778/j.issn.1002-8331.2106-0460
    [121] Fang M L, Boutros F, Damer N. Intra and cross-spectrum iris presentation attack detection in the NIR and visible domains. Handbook of Biometric Anti-Spoofing: Presentation Attack Detection and Vulnerability Assessment. Singapore: Springer, 2023. 171−199
    [122] Yadav D, Kohli N, Agarwal A, Vatsa M, Singh R, Noore A. Fusion of handcrafted and deep learning features for large-scale multiple iris presentation attack detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Salt Lake City, USA: IEEE, 2018. 685−692
    [123] Choudhary M, Tiwari V, Venkanna U. Iris liveness detection using fusion of domain-specific multiple BSIF and DenseNet features. IEEE Transactions on Cybernetics, 2022, 52(4): 2370-2381 doi: 10.1109/TCYB.2020.3005089
    [124] Kuehlkamp A, Pinto A, Rocha A, Bowyer K W, Czajka A. Ensemble of multi-view learning classifiers for cross-domain iris presentation attack detection. IEEE Transactions on Information Forensics and Security, 2019, 14(6): 1419-1431 doi: 10.1109/TIFS.2018.2878542
    [125] Gragnaniello D, Poggi G, Sansone C, Verdoliva L. Using iris and sclera for detection and classification of contact lenses. Pattern Recognition Letters, 2016, 82: 251-257 doi: 10.1016/j.patrec.2015.10.009
    [126] Hoffman S, Sharma R, Ross A. Iris + ocular: Generalized iris presentation attack detection using multiple convolutional neural networks. In: Proceedings of the International Conference on Biometrics (ICB). Crete, Greece: IEEE, 2019. 1−8
    [127] Gupta M, Singh V, Agarwal A, Vatsa M, Singh R. Generalized iris presentation attack detection algorithm under cross-database settings. In: Proceedings of the 25th International Conference on Pattern Recognition (ICPR). Milan, Italy: IEEE, 2021. 5318−5325
    [128] Agarwal A, Noore A, Vatsa M, Singh R. Enhanced iris presentation attack detection via contraction-expansion CNN. Pattern Recognition Letters, 2022, 159: 61-69 doi: 10.1016/j.patrec.2022.04.007
    [129] Jain V, Agarwal A, Singh R, Vatsa M, Ratha N. Robust IRIS presentation attack detection through stochastic filter noise. In: Proceedings of the 26th International Conference on Pattern Recognition (ICPR). Montreal, Canada: IEEE, 2022. 1134−1140
    [130] Sequeira A F, Thavalengal S, Ferryman J, Corcoran P, Cardoso J S. A realistic evaluation of iris presentation attack detection. In: Proceedings of the 39th International Conference on Telecommunications and Signal Processing (TSP). Vienna, Austria: IEEE, 2016. 660−664
    [131] Gragnaniello D, Sansone C, Poggi G, Verdoliva L. Biometric spoofing detection by a domain-aware convolutional neural network. In: Proceedings of the 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS). Naples, Italy: IEEE, 2016. 193−198
    [132] Hu Y, Sirlantzis K, Howells G. Iris liveness detection using regional features. Pattern Recognition Letters, 2016, 82: 242-250 doi: 10.1016/j.patrec.2015.10.010
    [133] Trokielewicz M, Czajka A, Maciejewicz P. Assessment of iris recognition reliability for eyes affected by ocular pathologies. In: Proceedings of the IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS). Arlington, USA: IEEE, 2015. 1−6
    [134] Trokielewicz M, Czajka A, Maciejewicz P. Iris recognition after death. IEEE Transactions on Information Forensics and Security, 2019, 14(6): 1501-1514 doi: 10.1109/TIFS.2018.2881671
    [135] Trokielewicz M, Czajka A, Maciejewicz P. Post-mortem iris recognition with deep-learning-based image segmentation. Image and Vision Computing, 2020, 94: Article No. 103866 doi: 10.1016/j.imavis.2019.103866
    [136] Sun Z N, Zhang H, Tan T N, Wang J Y. Iris image classification based on hierarchical visual codebook. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(6): 1120-1133 doi: 10.1109/TPAMI.2013.234
    [137] Shah S, Ross A. Generating synthetic irises by feature agglomeration. In: Proceedings of the International Conference on Image Processing (ICIP). Atlanta, USA: IEEE, 2006. 317−320
    [138] Zuo J Y, Schmid N A, Chen X H. On generation and analysis of synthetic iris images. IEEE Transactions on Information Forensics and Security, 2007, 2(1): 77-90 doi: 10.1109/TIFS.2006.890305
    [139] Yadav D, Kohli N, Vatsa M, Singh R, Noore A. Unconstrained visible spectrum iris with textured contact lens variations: Database and benchmarking. In: Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Denver, USA: IEEE, 2017. 574−580
    [140] Keshari R, Ghosh S, Agarwal A, Singh R, Vatsa M. Mobile periocular matching with pre-post cataract surgery. In: Proceedings of the 2016 IEEE International Conference on Image Processing. Phoenix, USA: IEEE, 2016. 3116−3120
    [141] Kohli N, Yadav D, Vatsa M, Singh R. Revisiting iris recognition with color cosmetic contact lenses. In: Proceedings of the International Conference on Biometrics (ICB). Madrid, Spain: IEEE, 2013. 1−7
    [142] Doyle J S, Bowyer K W, Flynn P J. Variation in accuracy of textured contact lens detection based on sensor and lens pattern. In: Proceedings of the IEEE 6th International Conference on Biometrics: Theory, Applications and Systems (BTAS). Arlington, USA: IEEE, 2013. 1−7
    [143] Yambay D, Das P, Boyd A, McGrath J, Fang Z Y, Czajka A, et al. Review of iris presentation attack detection competitions. Handbook of Biometric Anti-Spoofing: Presentation Attack Detection and Vulnerability Assessment (Third edition). Singapore: Springer, 2023. 149−169
    [144] Wang H F, Wang Z F, Du M N, Yang F, Zhang Z J, Ding S R, et al. Score-CAM: Score-weighted visual explanations for convolutional neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Seattle, USA: IEEE, 2020. 111−119
    [145] Ross A, Banerjee S, Chen C J, Chowdhury A, Mirjalili V, Sharma R, et al. Some research problems in biometrics: The future beckons. In: Proceedings of the International Conference on Biometrics (ICB). Crete, Greece: IEEE, 2019. 1−8
    [146] Agarwal A, Ratha N, Noore A, Singh R, Vatsa M. Misclassifications of contact lens iris PAD algorithms: Is it gender bias or environmental conditions? In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). Waikoloa, USA: IEEE, 2023. 961−970
    [147] Boyd A, Bowyer K, Czajka A. Human-aided saliency maps improve generalization of deep learning. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). Waikoloa, USA: IEEE, 2022. 1255−1264
    [148] Geng C X, Huang S J, Chen S C. Recent advances in open set recognition: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(10): 3614-3631 doi: 10.1109/TPAMI.2020.2981604
    [149] Howard A G, Zhu M L, Chen B, Kalenichenko D, Wang W J, Weyand T, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv: 1704.04861, 2017.
    [150] Tan M X, Le Q V. EfficientNet: Rethinking model scaling for convolutional neural networks. In: Proceedings of the 36th International Conference on Machine Learning (ICML). Long Beach, USA: PMLR, 2019. 6105−6114
    [151] Chingovska I, Anjos A, Marcel S. Anti-spoofing in action: Joint operation with a verification system. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Portland, USA: IEEE, 2013. 98−104
    [152] Grosz S A, Wijewardena K P, Jain A K. ViT unified: Joint fingerprint recognition and presentation attack detection. arXiv: 2305.07602, 2023.
    [153] Hospedales T, Antoniou A, Micaelli P, Storkey A. Meta-learning in neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(9): 5149-5169
    [154] Fang M L, Yang W F, Kuijper A, Štruc V, Damer N. Fairness in face presentation attack detection. Pattern Recognition, 2024, 147: Article No. 110002 doi: 10.1016/j.patcog.2023.110002
    [155] Singh R, Majumdar P, Mittal S, Vatsa M. Anatomizing bias in facial analysis. In: Proceedings of the 36th AAAI Conference on Artificial Intelligence, 34th Conference on Innovative Applications of Artificial Intelligence, the 12th Symposium on Educational Advances in Artificial Intelligence. Vancouver, Canada: AAAI, 2022. 12351−12358
    [156] Terhörst P, Kolf J N, Huber M, Kirchbuchner F, Damer N, Moreno A M, et al. A comprehensive study on face recognition biases beyond demographics. IEEE Transactions on Technology and Society, 2022, 3(1): 16-30 doi: 10.1109/TTS.2021.3111823
    [157] de Freitas Pereira T, Marcel S. Fairness in biometrics: A figure of merit to assess biometric verification systems. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2022, 4(1): 19-29 doi: 10.1109/TBIOM.2021.3102862
    [158] Zhang C, Xie Y, Bai H, Yu B, Li W H, Gao Y. A survey on federated learning. Knowledge-Based Systems, 2021, 216: Article No. 106775 doi: 10.1016/j.knosys.2021.106775
    [159] Yang L, Zhang Z L, Song Y, Hong S D, Xu R S, Zhao Y, et al. Diffusion models: A comprehensive survey of methods and applications. ACM Computing Surveys, 2023, 56(4): Article No. 105
  • Figures (30) / Tables (5)
    Publication history
    • Received:  2023-03-06
    • Accepted:  2023-10-12
    • Published online:  2023-11-01
    • Issue date:  2024-02-26
