
自动化信任的研究综述与展望

董文莉 方卫宁

引用本文: 董文莉, 方卫宁. 自动化信任的研究综述与展望. 自动化学报, 2021, 47(6): 1183−1200 doi: 10.16383/j.aas.c200432
Citation: Dong Wen-Li, Fang Wei-Ning. Trust in automation: Research review and future perspectives. Acta Automatica Sinica, 2021, 47(6): 1183−1200 doi: 10.16383/j.aas.c200432

自动化信任的研究综述与展望

doi: 10.16383/j.aas.c200432
基金项目: 北京市自然科学基金(L191018)资助
    作者简介:

    董文莉:北京交通大学电子信息工程学院博士研究生. 2017年获得郑州大学轨道交通信号与控制学士学位. 主要研究方向为自动化信任和计算认知建模. E-mail: wldong_bjtu@163.com

    方卫宁:北京交通大学轨道交通控制与安全国家重点实验室教授. 1996年获得重庆大学博士学位. 主要研究方向为人因工程, 轨道交通安全模拟与仿真. 本文通信作者. E-mail: wnfang@bjtu.edu.cn

Trust in Automation: Research Review and Future Perspectives

Funds: Supported by Beijing Natural Science Foundation (L191018)
    Author Bio:

    DONG Wen-Li Ph. D. candidate at the School of Electronic and Information Engineering, Beijing Jiaotong University. She received her bachelor degree in Rail Transportation Signaling and Control from Zhengzhou University in 2017. Her research interest covers trust in automation and computational cognitive modeling

    FANG Wei-Ning Professor at the State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University. He received his Ph. D. degree from Chongqing University in 1996. His research interest covers ergonomics, intelligent transportation systems, system reliability and safety, and railway simulation. Corresponding author of this paper

  • 摘要: 随着自动化能力的快速提升, 人机关系发生深刻变化, 人的角色逐渐从自动化的主要控制者转变为与其共享控制的合作者. 为了实现绩效和安全目标, 人机协同控制需要操作人员适当地校准他们对自动化机器的信任, 自动化信任问题已经成为实现安全有效的人机协同控制所面临的最大挑战之一. 本文回顾了自动化信任相关文献, 围绕自动化信任概念、模型、影响因素及测量方法, 对迄今为止该领域的主要理论和实证工作进行了详细总结. 最后, 本文在研究综述和相关文献分析的基础上提出了现有自动化信任研究工作中存在的局限性, 并从人机系统设计的角度为未来的自动化信任研究提供一些建议.
  • 近年来, 随着监控设备在公共场所的普及, 行人再识别技术越来越受到人们的重视.行人再识别是利用计算机视觉技术判断图像或者视频序列中是否存在特定行人的技术.但是由于光照、遮挡、行人姿态等问题, 同一行人在不同场景中的外观呈现出较大差异, 这给行人再识别研究带来了巨大挑战.为了有效应对这些挑战, 广大研究者提出了很多解决方法.

    目前的行人再识别算法大体可分为三类, 分别是特征表示学习、距离度量学习和基于深度学习的方法.特征表示学习方法利用视觉特征对行人建立一个具有鲁棒性和区分性的表示, 然后根据传统的相似性度量算法(欧氏距离等)来计算行人之间的相似度.文献[1]在提取出行人前景的基础上, 利用行人区域的对称性和非对称性将前景划分成不同的区域, 对于每个区域, 提取带权重的颜色直方图等特征描述它们.文献[2]对于提取出的颜色直方图特征, 使用PCA (Principal component analysis)对其进行降维.文献[3]结合方向、颜色、熵等多种特征, 分级识别行人.虽然特征表示学习的思想较为直接简单, 易于解决小规模数据集的行人再识别问题, 但是在光照、视角、姿态变化较大的情况下, 特征表示学习方法的效果很差.

    距离度量学习是一种利用测度学习算法得出两张行人图像的相似度度量函数, 使相关的行人图像对的相似度尽可能高, 不相关的行人图像对的相似度尽可能低的方法.代表性的距离度量学习算法有文献[4], 其中将行人再识别问题转化为距离学习问题, 提出了一种基于概率相对距离的行人匹配模型.文献[5]在不同的特征子空间中利用不同的核函数对距离进行度量.文献[6]基于马尔科夫模型对行人之间的距离进行度量.在大规模数据集下, 距离度量学习计算开销过大, 计算效率过低, 容易陷入局部最小值, 准确率不高.

    深度学习近年来在计算机视觉中得到了广泛的应用, 因此不少学者研究并提出了基于深度学习的行人再识别算法.文献[7]最先将深度学习应用于行人再识别领域, 使用卷积神经网络提取行人的特征.随后研究者不断对其进行改进.文献[8]提出将LSTM (Long short-term memory)模型结合进卷积神经网络中, 提高了网络对时序特征的提取能力.文献[9]将注意力模型结合进CNN (Convolutional neural network)网络中, 提升了模型的特征提取能力.基于深度学习的行人再识别近年来成为该领域的主流方法, 相对于传统方法, 具有识别精度高, 鲁棒性好的优点.

    上述方法有个共同的特点, 就是它们仅仅考虑了行人图片的标签信息, 也就是只使用了行人ID这个标记信息, 并没有采用行人的属性信息.为此, 近年来, 随着带属性标签行人数据库的出现, 有研究人员提出了基于属性的行人再识别方法, 比如文献[10-11]使用行人属性进行行人再识别, 达到了很好的识别效果.由于基于属性学习的方法具有更符合人类的搜索习惯, 能应用于零样本学习等优点, 因此当前这类方法成为该领域的研究热点.其中, 文献[10]主要针对监控场景下行人属性的识别做出了改进, 主要提出了两个行人属性识别网络DeepSAR和DeepMAR, 前者对每个属性进行单独预测, 后者联合多属性同时预测, 在预测每个属性时, 考虑到属性内正负样本不均衡的情况, 利用数据先验分布对属性预测的权值进行调整, 从而提高了行人属性的识别效果.文献[11]提出一种联合识别行人属性和行人ID的神经网络模型, 大幅度提高了行人再识别的准确率, 作者首先对大规模行人再识别数据集Market 1501[12]和DukeMTMC[13]进行了行人属性的标注, 然后基于这些标注图片, 设计实现了APR (Attribute-person recognition)神经网络, 该网络对输入图片同时进行行人属性和行人ID的提取与识别, 将识别结果与图片标注标签进行比对, 比对结果作为反向传播的依据, 训练得到网络, 从网络中提取出代表行人的向量, 进行距离度量计算, 得到再识别的结果.该网络充分利用了行人的ID信息和属性信息, 相对于已有方法, 有效提高了行人再识别的精度.本文在APR的基础上, 进一步进行了三个方面的改进, 首先, 网络结构上的改进, 在网络中添加了一层全连接层.根据文献[14]的研究, 全连接层可以提高网络在微调后的判别能力, 保证源模型表示能力的迁移; 然后, 针对数据集中属性类之间的数量不均衡问题, 在损失函数中对各属性的损失基于其包含的样本数量进行了归一化处理, 提高网络对不平衡数据的处理能力; 最后, 针对数据集中各属性正负样本的数量不均衡问题, 利用数据中各属性分布的先验知识, 通过数量占比来调整各属性在损失层中的权重.测试结果表明, 本文算法在公共实验数据集上的实验效果优于目前主流的行人再识别算法, 尤其是首位匹配率(Rank-1), 相对于APR网络, 也是有了较大幅度的提升.

    本文其余章节的组织安排如下.第1节介绍本文提出的用于提取行人属性和ID的行人再识别网络结构; 第2节介绍本文提出的运用数据先验知识的损失函数设计原理及实现; 第3节介绍本文算法在公共数据集上的实验结果及分析; 第4节总结全文以及展望.

    在本节中, 主要介绍用于提取行人属性和ID的行人再识别网络结构和算法流程.为了提取到高鲁棒性的行人属性特征描述子, 基于数据分布的先验知识, 本文对APR网络进行了大幅度改进, 具体网络结构见图 1.主要分两个部分介绍改进后的网络:基础网络部分, 行人特征向量度量部分.下面详细介绍这两个方面的内容.

    图 1  网络结构示意图
    Fig. 1  Schematic diagram of network structure

    本文的基础网络主要由两个部分组成, 以全连接层$F{{C}_{0}}$为界线, 前半部分为残差网络(Resnet[15]), 后半部分为行人属性和ID特征分类网络.首先介绍前半部分, 在计算机视觉里, 特征的等级随着网络深度的加深而变高.研究表明, 网络的深度是实现好效果的重要因素, 然而太深的网络在训练中会存在梯度弥散和爆炸的障碍, 导致无法收敛. Resnet的提出, 解决了多达100层的深度神经网络训练的问题, 它通过学习残差函数, 实现恒等映射, 从而在不引入额外参数和计算复杂度的情况下, 避免了网络的退化.本文网络采用的是Resnet-50网络, 即具有50层深度的网络, 该网络主要由卷积层(Convolution layer)、池化层(Pooling layer)和残差块组成.

    卷积层主要用于对图像或者上一层的特征图(Feature map)作卷积运算, 并使用神经元激活函数计算卷积后的输出.卷积操作可以表示为:

    $ \begin{equation} \begin{aligned} {{{\pmb y}}^{j}}=f\left( {{b}^{j}}+\sum\limits_{i}{{{k}^{i, j}}\ast {{{\pmb x}}^{i}}} \right) \end{aligned} \end{equation} $

    (1)

    其中, ${{{\pmb x}}^{i}}$为第$i$层输入图像或特征图, ${{{\pmb y}}^{j}}$为第$j$层输出特征图, ${{k}^{i, j}}$是连接第$i$层输入图像与第$j$层输入图像的卷积核, ${{b}^{j}}$是第$j$层输出图像的偏置, $\ast $是卷积运算符, $f\left(x\right)$是神经元激活函数.
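    为便于理解式(1)的计算过程, 下面给出一段示意性的Python代码(并非本文基于Matconvnet的源码, 函数与变量名均为假设), 按公式含义用朴素循环实现多通道卷积并经激活函数输出:

```python
# 示意性实现(非本文源码): 按式(1)的含义对多通道输入做卷积并经激活函数输出,
# 为便于阅读采用朴素循环, 且与常见深度学习框架一样按互相关方式实现"卷积".
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv_layer(x, k, b, f=relu):
    """x: (C_in, H, W) 输入特征图; k: (C_in, C_out, kh, kw) 卷积核;
    b: (C_out,) 偏置; 返回形状为 (C_out, H-kh+1, W-kw+1) 的输出特征图 y."""
    c_in, H, W = x.shape
    _, c_out, kh, kw = k.shape
    y = np.zeros((c_out, H - kh + 1, W - kw + 1))
    for j in range(c_out):                      # 式(1)中的输出特征图下标 j
        for i in range(c_in):                   # 对输入特征图下标 i 求和
            for u in range(H - kh + 1):
                for v in range(W - kw + 1):
                    y[j, u, v] += np.sum(k[i, j] * x[i, u:u+kh, v:v+kw])
        y[j] += b[j]                            # 偏置 b^j
    return f(y)                                 # 激活函数 f
```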

    池化层主要对卷积层的输出作下采样, 其目的是减小特征图尺寸大小, 增强特征提取对旋转和形变的鲁棒性.一般使用平均值池化和最大值池化两种方式, 设输入特征图矩阵$F$, 子采样池化域的大小为$c\times c$, 偏置为$b$, 池化过程移动步长为$c$.则平均值池化和最大值池化的算法表达式分别为:

    $ \begin{equation} \begin{aligned} {{S}_{ij}}=\frac{1}{{{c}^{2}}}\left( \sum\limits_{i=1}^{c}{\sum\limits_{j=1}^{c}{{{F}_{ij}}}} \right)+b \end{aligned} \end{equation} $

    (2)

    $ \begin{equation} \begin{aligned} {{S}_{ij}}=\underset{i=1, j=1}{\overset{c}{\mathop{\max }}}\, \left( {{F}_{ij}} \right)+b \end{aligned} \end{equation} $

    (3)

    其中, ${\max }^{c}_{i=1, j=1}\, \left({{F}_{ij}} \right)$表示从输入特征图$F$的池化域中取出最大元素.
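    下面用一段示意性的Python代码(假设性实现, 非本文源码)说明式(2)、式(3)所描述的两种池化操作:

```python
# 示意性实现(非本文源码): 式(2)平均值池化与式(3)最大值池化,
# 池化域大小为 c×c, 移动步长为 c, b 为偏置.
import numpy as np

def avg_pool(F, c, b=0.0):
    H, W = F.shape
    S = np.zeros((H // c, W // c))
    for i in range(H // c):
        for j in range(W // c):
            S[i, j] = F[i*c:(i+1)*c, j*c:(j+1)*c].mean() + b   # 式(2)
    return S

def max_pool(F, c, b=0.0):
    H, W = F.shape
    S = np.zeros((H // c, W // c))
    for i in range(H // c):
        for j in range(W // c):
            S[i, j] = F[i*c:(i+1)*c, j*c:(j+1)*c].max() + b    # 式(3)
    return S
```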

    残差块是整个网络的核心部分, 基本思想是在一个浅层网络基础上叠加一个恒等映射(Identity mappings), 并学习残差函数, 从而使得网络不退化而且性能更好.残差块共有两层, 计算表达式如下:

    $ \begin{equation} \begin{aligned} F\left( {\pmb x} \right)={{W}_{2}}\sigma \left( {{W}_{1}}{\pmb x} \right) \end{aligned} \end{equation} $

    (4)

    其中, $\sigma$表示非线性函数Relu, ${{W}_{1}}$和${{W}_{2}}$表示两个卷积层的参数矩阵.

    最后通过一个捷径(Shortcut)与恒等映射相加, 再通过一个Relu函数, 获得输出${\pmb y}$.

    $ \begin{equation} \begin{aligned} {\pmb y}=F\left( {\pmb x}, \left\{ {{W}_{i}} \right\} \right)+{\pmb x} \end{aligned} \end{equation} $

    (5)
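    下面给出残差块式(4)、(5)的示意性Python实现(为突出"残差函数 + 恒等映射"的思想, 将两个卷积层简化为线性映射, 代码与变量名均为假设):

```python
# 示意性实现(非本文源码): 式(4)、(5)所描述的两层残差块结构.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    F_x = W2 @ relu(W1 @ x)   # 式(4): F(x) = W2·σ(W1·x), σ 为 ReLU
    return relu(F_x + x)      # 式(5): 捷径连接与恒等映射相加, 再经 ReLU 得到输出 y
```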

    网络的后半部分为行人属性和ID特征分类网络, 主要用于提取行人的属性特征和行人ID特征, 由全连接层(Fully connected layers, FC)、Softmax层和损失层(Loss layers)组成.本文网络架构相较于APR网络最大的改进之处就是添加了$F{{C}_{0}}$层, 根据文献[14]的研究, $F{{C}_{0}}$层的主要作用是在模型表示能力迁移过程中充当“防火墙”的作用.具体来讲, 本实验是基于ImageNet上预训练得到的模型进行微调(fine tuning)得到最后的训练结果的, 则ImageNet可视为源域(Source domain).针对微调, 若目标域(Target domain)中的图像与源域中图像差异巨大, 如本文的实验中, 使用的是行人数据集, 相比ImageNet, 目标域图像不是各种物体的图像, 而是行人图像, 差异巨大.在这种情况下, 不含全连接层的网络微调后的结果要差于含全连接层的网络.因此, 在源域与目标域差异较大的情况下, 添加$F{{C}_{0}}$层, 可保证模型表示能力的迁移.
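    下面用一段示意性的PyTorch代码说明这一结构设计(仅为结构示意, 并非本文基于Matconvnet的原始实现, 类名与参数均为假设): 在Resnet-50主干之后插入全连接层$F{{C}_{0}}$, 再分别接行人ID分类头与各属性分类头, $F{{C}_{0}}$输出的2 048维特征向量即后文用于XQDA度量的行人特征.

```python
# 结构示意(假设性实现): Resnet-50 主干 + FC0 + ID 分类头与 G 个属性分类头.
import torch.nn as nn
import torchvision

class AttributeIDNet(nn.Module):
    def __init__(self, num_ids, attr_class_nums, feat_dim=2048):
        super().__init__()
        backbone = torchvision.models.resnet50(pretrained=True)  # ImageNet 预训练模型(源域)
        backbone.fc = nn.Identity()            # 去掉原分类层, 主干输出 2048 维特征
        self.backbone = backbone
        self.fc0 = nn.Linear(2048, feat_dim)   # 本文添加的全连接层 FC0
        self.id_head = nn.Linear(feat_dim, num_ids)               # 对应 FC_ID
        self.attr_heads = nn.ModuleList(                          # 对应 FC_{1-G}
            [nn.Linear(feat_dim, k) for k in attr_class_nums])

    def forward(self, x):
        feat = self.fc0(self.backbone(x))      # 2048 维行人特征向量(用于后续距离度量)
        return feat, self.id_head(feat), [h(feat) for h in self.attr_heads]
```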

    图 1中全连接层$F{{C}_{1\text{-G}}}$和$F{{C}_{ID}}$主要起到分类器的作用, 对于每一个全连接层来说, 它的参数由节点权重矩阵$W$、偏置$b$以及激活函数$f$构成, 可以表示为:

    $ \begin{equation} \begin{aligned} {\pmb y}=f\left( W\cdot {\pmb x}+b \right) \end{aligned} \end{equation} $

    (6)

    其中, ${\pmb x}$, ${\pmb y}$分别为输入、输出数据.

    而Softmax层主要在全连接层的基础上, 进行分类结果的概率计算.可以表示为:

    $ \begin{equation} \begin{aligned} {{{\pmb y}}_{i}}=\frac{\exp \left( {{{\pmb x}}_{i}} \right)}{\sum\limits_{j=1}^{n}{\exp \left( {{{\pmb x}}_{j}} \right)}} \end{aligned} \end{equation} $

    (7)

    其中, ${{{\pmb x}}_{i}}$为Softmax层第$i$个节点的值, ${{{\pmb y}}_{i}}$为第$i$个输出值, $n$为Softmax层节点的个数.

    Loss层采用交叉信息熵损失(Cross-entropy loss)计算方式, 可以表示为:

    $\begin{equation} \begin{aligned} Loss=\, &-\frac{1}{N}\sum\limits_{n=1}^{N}[{{p}_{n}}\log \left( {{{\hat{p}}}_{n}} \right)+\notag\\ &\left( 1-{{p}_{n}} \right)\log \left( 1-{{{\hat{p}}}_{n}} \right)] \end{aligned} \end{equation} $

    (8)
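    下面给出式(6) ~ (8)的示意性Python实现(假设性代码, 非本文源码), 依次对应全连接层、Softmax层与交叉熵损失:

```python
# 示意性实现(非本文源码): 式(6)全连接层、式(7) Softmax 与式(8)交叉熵损失.
import numpy as np

def fully_connected(x, W, b, f=lambda z: z):
    return f(W @ x + b)                           # 式(6): y = f(W·x + b)

def softmax(x):
    e = np.exp(x - np.max(x))                     # 减去最大值以保证数值稳定
    return e / np.sum(e)                          # 式(7)

def cross_entropy_loss(p, p_hat, eps=1e-12):
    """p: 取值为 0/1 的标签向量; p_hat: 对应的预测概率向量, 对应式(8)."""
    p_hat = np.clip(p_hat, eps, 1.0 - eps)        # 避免 log(0)
    return -np.mean(p * np.log(p_hat) + (1 - p) * np.log(1 - p_hat))
```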

    本文从网络中的$F{{C}_{0}}$层中提取出2 048维的特征向量, 用于表示行人特征, 采用交叉视角的二次判别分析法(Cross-view quadratic discriminant analysis, XQDA)[16]进行向量之间距离的度量, 该方法是在KISSME算法和贝叶斯方法基础上提出的.该方法用高斯模型分别拟合类内和类间样本特征的差值分布.根据两个高斯分布的对数似然比推导出马氏距离.

    $ \begin{equation} \begin{aligned} P\left( \Delta |{{\Omega }_{I}} \right)=\frac{1}{{{\left( 2\pi \right)}^{\frac{d}{2}}}{{\left| {{\Sigma }_{I}} \right|}^{\frac{1}{2}}}}{{{\rm e}}^{-\frac{1}{2}{{\Delta }^{{\rm T}}}\Sigma _{I}^{-1}\Delta }} \end{aligned} \end{equation} $

    (9)

    $ \begin{equation} \begin{aligned} P\left( \Delta |{{\Omega }_{E}} \right)=\frac{1}{{{\left( 2\pi \right)}^{\frac{d}{2}}}{{\left| {{\Sigma }_{E}} \right|}^{\frac{1}{2}}}}{{{\rm e}}^{-\frac{1}{2}{{\Delta }^{{\rm T}}}\Sigma _{E}^{-1}\Delta }} \end{aligned} \end{equation} $

    (10)

    对上述两式相除并取对数, 得到对数似然比为:

    $ \begin{equation} \begin{aligned} f\left( \Delta \right)={{\Delta }^{{\rm T}}}\left( \Sigma _{I}^{-1}-\Sigma _{E}^{-1} \right)\Delta \end{aligned} \end{equation} $

    (11)

    则两个样本之间的距离为:

    $ \begin{equation} \begin{aligned} d\left( {{{\pmb x}}_{i}}, {{{\pmb x}}_{j}} \right)={{\left( {{{\pmb x}}_{i}}-{{{\pmb x}}_{j}} \right)}^{{\rm T}}}\left( \Sigma _{I}^{-1}-\Sigma _{E}^{-1} \right)\left( {{{\pmb x}}_{i}}-{{{\pmb x}}_{j}} \right) \end{aligned} \end{equation} $

    (12)

    最后对所有的样本之间的距离进行排序, 选取距离最小的样本作为识别结果.
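    下面的Python代码示意了式(12)的距离计算及按距离排序得到再识别结果的过程(假设性实现, 非本文源码, 其中类内/类间协方差矩阵假定已由训练样本差值估计得到):

```python
# 示意性实现(非本文源码): 式(12)的距离计算与按距离升序排序得到再识别结果.
import numpy as np

def xqda_distance(xi, xj, Sigma_I, Sigma_E):
    M = np.linalg.inv(Sigma_I) - np.linalg.inv(Sigma_E)   # 式(11)、(12)中的度量矩阵
    d = xi - xj
    return float(d @ M @ d)                               # 式(12)

def rank_gallery(query, gallery, Sigma_I, Sigma_E):
    """query: 查询样本特征; gallery: 图像库特征列表; 返回按距离从小到大排序的下标."""
    dists = [xqda_distance(query, g, Sigma_I, Sigma_E) for g in gallery]
    return np.argsort(dists)
```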

    第1节从整体上介绍了本文提出的行人再识别网络, 本节主要介绍网络中损失层计算的改进之处.本文主要利用数据先验分布, 对第1节提出的式(8)进行进一步的阐述和改进, 为了便于问题描述, 做出如下假设.

    假设训练数据集中(以Market 1501数据集为例, 见表 1)包含$N$张行人图片, 分别属于$M$个不同的行人, 每张图片标注了$G$类属性, 包括性别, 头发长短, 是否带包, 上衣颜色等属性, 对于每一类属性, 其中包含了${{K}^{g}}$种属性, 以上衣颜色为例, 其中包含黑色, 白色, 黄色等多种属性.将数据集用集合方式描述如下:

    $ \begin{equation} \begin{aligned} D=\left\{ \left( {{x}_{i}}, l_{i}^{1}, \cdots, l_{i}^{G} \right) \right\}_{i=1}^{N} \end{aligned} \end{equation} $

    (13)
    表 1  Market 1501数据集中的属性类别
    Table 1  The attribute category of Market 1501 dataset
    属性类($G$) 属性 数量($K$)
    Gender male, female 2
    Age child, teenager 4
    Hair length long, short 2
    Lower clothing length long, short 2
    Lower clothing type pants, dress 2
    Wearing hat yes, no 2
    Carrying bag yes, no 2
    Carrying backpack yes, no 2
    Carrying handbag yes, no 2
    Upper clothing color black, white, red$\cdots$ 8
    Lower clothing color black, white, pink$\cdots$ 9

    其中, ${{x}_{i}}$为第$i$张行人图片, 行人的第$g$类属性可以用向量${\pmb l}_{i}^{g}=\left(l_{i, 1}^{g}, \cdots, l_{i, {{K}^{\left( g \right)}}}^{g} \right)$表示, 每类属性中的第$k$种属性$l_{i, k}^{g}$都用二值变量表示, 即$l_{i, k}^{g}\in \left\{ 0, 1 \right\}$, 如果行人存在该属性, 则$l_{i, k}^{g}=1$, 反之则$l_{i, k}^{g}=0$.
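    以表 2中统计的上衣颜色属性类为例, 下面的Python片段示意了这种0/1属性标签的编码方式(仅为说明数据表示, 函数名为假设):

```python
# 示意性例子: 将第 g 类属性(此处为上衣颜色, 取值见表 2)的第 k 种取值编码为 0/1 向量 l_i^g.
UPPER_COLORS = ["black", "white", "red", "purple", "yellow", "gray", "blue", "green"]

def encode_attribute(value, categories=UPPER_COLORS):
    return [1 if c == value else 0 for c in categories]

# encode_attribute("red") 返回 [0, 0, 1, 0, 0, 0, 0, 0], 即对应的 l_{i,k}^{g} = 1, 其余为 0
```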

    APR网络中的损失函数包括两部分, 一部分是属性识别的损失函数, 一部分是行人ID识别的损失函数.可以用下式进行计算:

    $ \begin{equation} \begin{aligned} L=\lambda {{L}_{ID}}+\frac{1}{G}\sum\limits_{g=1}^{G}{{{L}^{g}}} \end{aligned} \end{equation} $

    (14)

    其中, ${{L}_{ID}}$表示行人ID识别的损失函数, ${{L}^{g}}$表示各类属性识别的损失函数, $\lambda$为参数, 用于调节两者的权重.

    行人ID识别的损失函数具体形式为:

    $ \begin{equation} \begin{aligned} {{L}_{ID}}=-\sum\limits_{m=1}^{M}{\log \left( p\left( m \right) \right)q\left( m \right)} \end{aligned} \end{equation} $

    (15)

    其中, $p\left(m\right)$表示第$i$个样本属于第$m$类行人的概率, 由$F{{C}_{ID}}$层后的Softmax层计算得到; 如果假设$y$为标注的正确行人类别, 则$q\left(y\right)=1$, 当$m\ne y$时, $q\left(m\right)=0$.

    行人属性识别的损失函数具体形式为:

    $ \begin{equation} \begin{aligned} {{L}^{g}}=-\sum\limits_{k=1}^{{{K}^{g}}}{\log \left( p\left( k \right) \right)}l_{i, k}^{g} \end{aligned} \end{equation} $

    (16)

    其中, $p\left(k\right)$表示第$i$个样本属于第$g$类属性中第$k$种属性的概率值, 由$F{{C}_{1-G}}$各层后的Softmax层计算得到.

    在APR网络的基础上, 本文主要对基于属性识别的损失函数, 就是式(16)进行了改进.这部分改进包括两方面: 1) 基于属性样本数量对损失函数进行归一化; 2) 基于各属性中正负样本数量的占比对不同的属性赋予不同的权重.下面在第3.1节和第3.2节分别进行介绍.

    对通用的行人数据集统计发现, 属性间存在样本数量不平衡的现象, 这极大地影响了行人再识别的识别准确性.以Market 1501数据集为例, 表 2中统计了数据集中行人各属性的数量.从表 2可以看出, 年龄是青年, 穿短袖上衣, 短裤等属性的样本数量较多, 分别为569, 712, 641个样本.携带手提包, 戴帽子, 穿紫色下衣等属性的样本数量很少, 分别只有86, 20, 2个样本.针对各属性样本数据不平衡的情况, 本文在损失层的计算中, 对各属性的损失, 基于其所包含的样本数量进行了归一化处理.最终, 损失层改写为下式:

    $ \begin{equation} \begin{aligned} {{L}^{g}}=-\frac{1}{{{N}^{g}}}\sum\limits_{i=1}^{N}{\sum\limits_{k=1}^{{{K}^{g}}}{\frac{l_{i, k}^{g}\log p_{i, k}^{g}}{N_{k}^{g}}}}, \text{ }g=1, \cdots, G \end{aligned} \end{equation} $

    (17)
    表 2  Market 1501数据集中行人属性训练样本数量及占比
    Table 2  Statistics of Market 1501 dataset
    属性 数量 占比 属性 数量 占比
    upblack 113 0.15 male 431 0.57
    upwhite 228 0.30 female 320 0.43
    upred 78 0.10 short hair 506 0.67
    uppurple 30 0.04 long hair 245 0.33
    upyellow 36 0.05 long sleeve 39 0.05
    upgray 86 0.11 short sleeve 712 0.95
    upblue 46 0.06 long lower body 110 0.15
    upgreen 56 0.07 short lower body 641 0.85
    handbag no 665 0.89 dress 294 0.39
    handbag yes 86 0.11 pants 457 0.61
    young 14 0.02 downgray 123 0.16
    teenager 569 0.76 downblack 293 0.39
    adult 160 0.21 downwhite 58 0.08
    old 8 0.01 downpink 29 0.04
    bag no 566 0.75 downpurple 2 0.00
    bag yes 185 0.25 downyellow 10 0.01
    backpack no 552 0.74 downblue 123 0.16
    backpack yes 199 0.26 downgreen 14 0.02
    hat no 731 0.97 downbrown 69 0.09
    hat yes 20 0.03

    其中, ${{N}^{g}}$表示第$g$类属性的训练样本数量, $N_{k}^{g}$表示第$g$类属性中第$k$种属性的训练样本数量, 概率值$p_{i, k}^{g}$是$F{{C}_{1-G}}$各层的输出经由Softmax层计算而得的, 表示第$i$个样本属于第$g$类属性中第$k$种属性的概率值.

    Softmax层的计算方式如下:

    $ \begin{equation} \begin{aligned} p_{i, k}^{g}=\frac{\exp \left( o_{i, k}^{g} \right)}{\sum\limits_{k'=1}^{{{K}^{g}}}{\exp \left( o_{i, k'}^{g} \right)}} \end{aligned} \end{equation} $

    (18)

    其中, $o_{i, k}^{g}$表示$F{{C}_{1-G}}$各层的第$k$个输出值.
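    下面给出式(17)、(18)的示意性Python实现(假设性代码, 非本文源码), 展示按属性样本数量$N_{k}^{g}$归一化的属性损失计算:

```python
# 示意性实现(非本文源码): 式(17)基于属性样本数量归一化的属性损失与式(18)的 Softmax 概率.
import numpy as np

def softmax(o):
    e = np.exp(o - np.max(o))
    return e / np.sum(e)                       # 式(18)

def normalized_attr_loss(outputs_g, labels_g, N_k_g):
    """outputs_g: (N, K^g) 第 g 类属性对应全连接层的输出;
    labels_g: (N, K^g) 0/1 标签; N_k_g: (K^g,) 每种属性的训练样本数量."""
    N = outputs_g.shape[0]                     # 该类属性的训练样本数量 N^g
    loss = 0.0
    for i in range(N):
        p = softmax(outputs_g[i])              # p_{i,k}^g
        loss += np.sum(labels_g[i] * np.log(p + 1e-12) / N_k_g)   # 按 N_k^g 归一化
    return -loss / N                           # 式(17)
```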

    对于行人类别来说, 每个行人的样本数量大致相同, 基本不存在数据不平衡问题, 则不需要进行归一化操作.假设$F{{C}_{ID}}$层的输出为${\pmb z}=\left[{{z}_{1}}, {{z}_{2}}, \cdots, {{z}_{M}} \right]\in {{\bf R}^{M}}$, 同理可得, 第$i$个样本属于第$m$类行人的概率为:

    $\begin{equation} \begin{aligned} p_{i}^{m}=\frac{\exp \left( {{{\pmb z}}_{m}} \right)}{\sum\limits_{i'=1}^{M}{\exp \left( {{{\pmb z}}_{i'}} \right)}} \end{aligned} \end{equation} $

    (19)

    则可以将式(15)改写成如下损失函数:

    $ \begin{equation} \begin{aligned} {{L}^{ID}}=-\frac{1}{M}\sum\limits_{m=1}^{M}{\log \left( p_{i}^{m} \right)}q\left( m \right) \end{aligned} \end{equation} $

    (20)

    对于整个网络来说, 不能只计算属性或者行人类别的损失函数, 这会导致训练无法收敛.所以网络采用联合损失函数的方式, 将两者结合起来, 作为网络整体的损失函数, 联合方式可以用下式表示:

    $ \begin{equation} \begin{aligned} L=\alpha {{L}^{ID}}+\left( 1-\alpha \right)\frac{1}{G}\sum\limits_{g=1}^{G}{{{L}^{g}}} \end{aligned} \end{equation} $

    (21)

    其中, $0\le \alpha \le 1$, 该参数用于调节两个损失层在网络中的权重, 通过实验得到最佳值.在整个训练过程中, 通过反向传播和梯度下降来计算网络参数.
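    下面的Python片段示意了式(20)的行人ID损失与式(21)的联合损失的计算方式(假设性实现, 非本文源码, $\alpha$的取值由实验确定, 文中最终取0.7):

```python
# 示意性实现(非本文源码): 式(20)的行人 ID 损失与式(21)的联合损失.
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / np.sum(e)                       # 式(19)

def id_loss(z, y, M):
    """z: FC_ID 层输出 (M 维); y: 标注的正确行人类别下标; M: 行人类别总数."""
    p = softmax(z)
    return -np.log(p[y] + 1e-12) / M           # 式(20): 仅 q(y)=1 的项非零

def joint_loss(L_id, attr_losses, alpha=0.7):
    """attr_losses: G 个属性损失组成的列表."""
    return alpha * L_id + (1 - alpha) * np.mean(attr_losses)   # 式(21)
```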

    行人再识别数据库中, 不仅存在属性间样本数量不平衡的问题, 也存在属性内正负样本数据不平衡问题.在选取的三个实验数据集中, 行人属性内的正负样本数据不平衡的现象也非常严重.以Market 1501数据集为例, 从表 2中可以看出, 比如在是否戴帽子这个属性类中, 戴帽子的占少数, 没有帽子的占大多数, 占比为0.03/0.97.在上衣长短这个属性中, 也是正负样本比例不均, 长袖的只占0.05.在这种情况下, 正样本在识别过程中起到的影响过小, 不能很好地反映行人属性的真实情况, 影响识别的效果.为了解决正负样本不平衡的问题, 参照文献[10]的方法, 本文在第3.1节提出的损失层计算基础上, 利用数据先验分布知识, 基于各属性中正负样本占比, 对属性识别的损失函数通过引入权重的方式进行了调整, 将式(17)改写为下式:

    $ \begin{equation} \begin{aligned} {{L}^{g'}}=-\frac{1}{{{N}^{g}}}\sum\limits_{i=1}^{N}{\sum\limits_{k=1}^{{{K}^{g}}}{\omega _{k}^{g}\frac{l_{i, k}^{g}\log p_{i, k}^{g}}{N_{k}^{g}}}}, \text{ }g=1\cdots G \end{aligned} \end{equation} $

    (22)

    $ \begin{equation} \begin{aligned} \omega _{k}^{g}=\exp \left( -\frac{p_{k}^{g}}{{{\sigma }^{2}}}\; \right) \end{aligned} \end{equation} $

    (23)

    其中, $\omega _{k}^{g}$是第$g$类属性中第$k$种属性的权重, $p_{k}^{g}$是第$k$种属性的数量占比, $\sigma $是用于调整权值的参数.同样的, 将网络整体损失函数更新为:

    $ \begin{equation} \begin{aligned} L=\alpha {{L}^{ID}}+\left( 1-\alpha \right)\frac{1}{G}\sum\limits_{g=1}^{G}{{{L}^{g'}}} \end{aligned} \end{equation} $

    (24)
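    下面给出式(22)、(23)的示意性Python实现(假设性代码, 非本文源码), 展示如何按正负样本占比为各属性损失引入权重:

```python
# 示意性实现(非本文源码): 式(23)按样本数量占比计算属性权重, 并按式(22)加权属性损失.
import numpy as np

def attr_weight(p_k_g, sigma=1.0):
    """p_k_g: 第 g 类属性中第 k 种属性的样本数量占比; sigma 为调整权值的参数(此处假设取 1)."""
    return np.exp(-p_k_g / sigma**2)           # 式(23): 占比越小的属性获得越大的权重

def weighted_attr_loss(outputs_g, labels_g, N_k_g, ratios_g, sigma=1.0):
    """在式(17)的归一化损失基础上, 对每种属性再乘以权重 w_k^g, 对应式(22)."""
    N = outputs_g.shape[0]
    w = attr_weight(np.asarray(ratios_g), sigma)
    loss = 0.0
    for i in range(N):
        e = np.exp(outputs_g[i] - np.max(outputs_g[i]))
        p = e / np.sum(e)
        loss += np.sum(w * labels_g[i] * np.log(p + 1e-12) / N_k_g)
    return -loss / N                            # 式(22)
```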

    本节首先介绍实验中使用的测试数据和算法性能的评测准则, 其次介绍本文算法中的一些相关参数设置和选取实验, 然后在不同公开实验数据集上测试对各行人属性的识别结果, 最后介绍本文算法在不同公共实验数据集上与已有的行人再识别算法的性能比较.本文所有的实验是基于深度学习框架Matconvnet实现的, 实验平台是配备64 GB内存的Intel Core i7处理器和24 GB显存的Nvidia TITAN X显卡的GPU工作站.

    本文主要基于三个具有行人属性标注的行人再识别数据集进行实验, 分别是Market 1501、DukeMTMC数据集和PETA数据集, 其中的一些行人图片例子见图 2.

    图 2  数据集行人图片举例
    Fig. 2  Example of dataset pedestrian picture

    Market 1501数据集是由6个摄像机拍摄采集生成的大规模行人再识别数据集, 包含32 668张行人图片和3 368张查询集图片, 共有1 501个不同ID的行人, 对每个行人标注了27种行人属性.其中751个不同ID的行人作为训练集, 750个不同ID的行人作为测试集.在本文实验中, 使用其中的651个ID的行人作为训练集, 剩下的100个ID的行人作为验证集, 用于确定参数.

    DukeMTMC数据集是由8个摄像机采集, 包含34 183张行人图片和2 228张查询集图片, 共有1 812个不同ID的行人, 其中1 404个不同ID的行人出现在不同摄像机拍摄视野中, 剩余的408个不同ID的行人是一些误导图片, 对每个行人标注了23种行人属性.根据数据集本身的划分, 其中702个不同ID的行人用于训练, 剩余的702个不同ID的行人用于测试.

    PETA数据集是由19 000张行人图片组成, 图片分辨率分布在17$\times $39到169 $\times $ 365之间.这些行人图片共包含8 705个不同ID的行人, 每张行人图片标注了61个二值行人属性和4个多值行人属性.在本文实验中, 随机选取其中的9 500张行人图片作为训练集, 1 900张行人图片作为验证集, 7 600张行人图片作为测试集, 按照经典的数据集使用方式, 只选取其中标注数量最多的35个行人属性作为识别目标.

    为了与已有算法公正比较, 实验中采用先前工作普遍采用的评价框架.将数据集事先划分为训练集和测试集, 其中测试集由查询集和行人图像库两部分组成.给定一个行人再识别算法, 通过衡量该算法在行人图像库中搜索待查询行人的能力来评测其性能.已有的行人再识别算法大部分采用累积匹配特性(Cumulative match characteristic, CMC)曲线评价算法性能, 给定一个查询集和行人图像库, 累积匹配特性曲线描述的是在行人图像库中搜索待查询的行人时, 前$r$个搜索结果中找到待查询行人的比率.首位匹配率$(r=1)$很重要, 因为它表示的是系统真正的识别能力.另外, 同时采用平均准确率(Mean average precision, mAP)评价算法性能, 平均准确率是对准确率和召回率的全面反映: 以召回率为横坐标、准确率为纵坐标绘制曲线, 曲线下的面积即为平均准确率, 该值越大, 表示系统在准确率和召回率上的综合性能越好.
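    下面的Python片段示意了前$r$位匹配率与单次查询平均准确率(AP)的计算方式(假设性实现, 非本文源码, mAP为全部查询AP的均值):

```python
# 示意性实现(非本文源码): 由一次查询的排序结果计算前 r 位匹配率(CMC)与平均准确率(AP).
import numpy as np

def cmc_rank_r(ranked_ids, query_id, r):
    """ranked_ids: 按距离升序排列的图像库行人ID序列; 前 r 个结果中命中待查询行人则计 1."""
    return 1.0 if query_id in ranked_ids[:r] else 0.0

def average_precision(ranked_ids, query_id):
    hits, precisions = 0, []
    for idx, pid in enumerate(ranked_ids, start=1):
        if pid == query_id:
            hits += 1
            precisions.append(hits / idx)      # 每召回一个正样本时的准确率
    return float(np.mean(precisions)) if precisions else 0.0
```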

    本网络参数设置是在文献[11]的基础上微调而得的, 训练过程中设置批尺寸(Batch size)大小为64, epochs为55, 学习率初始值为0.001, 在最后5个epochs中, 调整为0.0001.各参数的微调过程具体见图 3.

    图 3  网络参数及结果对比
    Fig. 3  Comparison of network parameters and results

    参数$\alpha$.如图 3 (a)所示, 其中曲线代表了式(24)中的参数对准确率的影响, 当$\alpha$取不同值时, 网络的行人再识别准确率也随之发生变化.基于Market 1501数据集, 当网络不使用行人属性标签信息时(即$\alpha = 1$时), 首位匹配率是72.36%.当网络仅考虑行人属性标签信息时(即$\alpha = 0$时), 首位匹配率是76.81%.当$0.1\le\alpha\le0.9$时, 也就是同时考虑属性和ID标签信息时, 首位匹配率要普遍高于单独考虑这两者, 在$\alpha = 0.7$时, 首位匹配率达到最大值86.90%.基于DukeMTMC数据集, 当$\alpha = 1$时, 首位匹配率是60.34%.当$\alpha = 0$时, 首位匹配率是62.16%.在$\alpha = 0.7$时, 首位匹配率达到最大值72.83%.基于PETA数据集, 当$\alpha = 1$时, 首位匹配率是70.13%.当$\alpha = 0$时, 首位匹配率是65.24%.在$\alpha = 0.6$时, 首位匹配率达到最大值76.37%.综上考虑, 实验中取$\alpha = 0.7$.

    迭代次数.如图 3 (b)所示, 其中曲线代表了当网络迭代不同次数时, 网络的首位匹配率变化情况.每迭代1 000次测试一次网络性能, 随着迭代次数达到8 000次左右, 网络性能基本稳定, 所以将网络的epochs设置为55.

    属性选取. 图 3 (c)表示了Market 1501数据集去除不同属性后, 网络的再识别准确率.有些行人属性容易产生误检和漏检, 从而对行人再识别带来负效应, 所以考虑从数据集标注的行人属性中, 剔除一些具有负效应的属性.以所有属性参与训练得到的准确率作为基准, 每次去除一类属性, 得到识别准确率, 与基准进行对比, 其中横坐标为去除的属性.图 3 (d)表示了DukeMTMC数据集的实验结果.可以发现, 在Market 1501数据集中, 不使用是否有帽子这个行人属性, 再识别的准确率反而得到了提升, 主要因为帽子这个属性漏检的概率较大, 所以本文实验中不使用该属性.而在DukeMTMC数据集的测试结果表明, 减少任一行人属性后, 行人再识别的识别效果都会降低, 所以使用所有的属性.

    全连接层$F{{C}_{0}}$.如图 3 (e)所示, 其中曲线代表了网络是否添加全连接层$F{{C}_{0}}$对网络准确率的影响.图中实线的趋势线代表完整的本文算法结果, 虚线的趋势线代表去除全连接层的本文算法, 可以发现, 添加了全连接层后, 本文提出的训练网络更加稳定, 能更快地迭代到稳定值, 并且提升了算法在三个数据集上的首位匹配率.与完整的本文算法相比, 去除全连接层后, 在Market 1501、DukeMTMC和PETA数据集上的首位匹配率分别下降了0.89%, 0.76%和1.31%.可以看出, 全连接层的添加对于本文算法的识别效果具有较大的提升作用.添加全连接层能够较明显改善识别效果的原因主要有如下两点: 1) 根据文献[14]的研究, 全连接层可以提高网络在微调后的判别能力, 使得网络在这三个常用数据集上的判别能力得到提升; 2) 本文采用的残差网络主干本身不包含全连接层, 添加全连接层后丰富了网络结构, 从而提高了特征提取能力, 进而提升了网络的识别效果.鉴于以上两点, 本文采用添加了全连接层$F{{C}_{0}}$的网络.

    数据集间交叉识别. 如图 3 (f)所示, 其中实线代表了根据各自数据集的先验分布训练得到的网络进行数据集内识别的结果(例: 用数据集Market 1501中的训练集训练得到的网络, 对数据集Market 1501中的测试集进行测试), 虚线表示基于三个数据集的先验分布训练的网络进行数据集间交叉识别的结果(例: 用数据集DukeMTMC和PETA中的训练集分别训练得到的网络, 对数据集Market 1501中的测试集分别进行测试, 分别记为Market-D和Market-P, 后面的命名规则相同; 需要说明的是, 当待测数据集中含有训练数据集中没有的属性时, 从待测数据集的训练集中选取含有特殊属性的样本, 对训练集进行扩充以后再进行训练).实验结果表明, 数据集间交叉识别的性能相对于数据集内识别的性能有轻微下降.对于数据集Market 1501来说, 利用数据集DukeMTMC和PETA中的训练集训练得到的网络, 在该数据集上测试得到的首位匹配率相比于数据集内识别结果, 分别下降了0.82%和1.13%; 对于数据集DukeMTMC来说, 利用数据集Market 1501和PETA中的训练集训练得到的网络, 在该数据集上测试得到的首位匹配率相比于数据集内识别结果, 分别下降了0.67%和1.38%; 对于数据集PETA来说, 利用数据集Market 1501和DukeMTMC中的训练集训练得到的网络, 在该数据集上测试得到的首位匹配率相比于数据集内识别结果, 分别下降了1.54%和1.78%.这主要是因为PETA中属性分布相对于Market 1501和DukeMTMC有较大的差异.但是相对于不考虑数据先验分布的APR网络, 数据集间交叉识别的性能还是有所提升, 在Market 1501, DukeMTMC和PETA数据集上, 首位匹配率分别至少提升了1.48%, 0.76%, 2.61%.所以, 在实际应用中, 在对待检测数据集属性分布不可知的情况下, 可以直接采用基于已有数据集训练的网络实现行人再识别工作.

    本文基于三个通用行人属性数据集, 分别进行了行人属性的识别实验, 识别准确率如表 3~表 5所示.同样地, 选取APR网络的实验结果作为对比, 其中APR网络在PETA数据集上的结果是基于APR文献源代码复现得到.

    表 3  Market 1501数据集各属性识别准确率(%)
    Table 3  Accuracy rate of each attribute recognition of Market 1501 dataset (%)
    属性 gender age hair L.slv L.low S.cloth B.pack H.bag bag C.up C.low mean
    APR 86.45 87.08 83.65 93.66 93.32 91.46 82.79 88.98 75.07 73.4 69.91 85.33
    本文算法 86.73 88.14 84.12 93.5 94.54 91.86 85.99 90.67 82.36 77.83 73.82 86.32
    表 4  DukeMTMC数据集各属性识别准确率(%)
    Table 4  Accuracy rate of each attribute recognition of DukeMTMC dataset (%)
    属性 gender hat boots L.up B.pack H.bag bag C.shoes C.up C.low mean
    APR 82.61 86.94 86.15 88.04 77.28 93.75 82.51 90.19 72.29 41.48 80.12
    本文算法 82.73 89.02 87.17 89.33 81.33 95.81 86.74 93.12 73.04 43.21 82.15
    表 5  PETA数据集各属性识别准确率(%)
    Table 5  Accuracy rate of each attribute recognition of PETA dataset (%)
    属性 gender age carry style hat hair shoes K.up K.low bag glasses mean
    APR 89.51 86.37 78.28 84.69 92.12 89.41 78.95 88.34 84.81 86.76 72.61 84.71
    本文算法 90.11 85.32 85.39 85.43 92.63 88.6 82.32 88.97 86.82 88.06 78.33 88.54

    首先, 从整体来看, 在这三个数据集上, 本文的识别准确率相较于APR网络都有了较大的提升, 平均准确率分别提升了0.99%、2.03%和3.83%, 就各属性来说, 识别准确率也都提升了0.12%~7.29%不等.这表示本文提出的网络相较于APR网络在行人属性识别上具有更好的性能.

    其次, 从一些具有强数据不平衡的属性来看, 以Market 1501数据集为例, 其中是否有包这个属性, 识别准确率提高了7.29%, 提升程度比较大, 对于一些数据平衡的数据, 比如性别这个属性, 识别准确率只提高了0.28%.这表明本文提出的基于数据先验分布知识的权值调整策略, 对行人属性的提升具有明显的效果, 尤其是具有强数据不平衡的属性, 提升效果更为明显.

    最后, 如图 4所示, 列举了两个行人属性识别结果的例子, 本文的网络会对行人所有的属性进行预测并打分, 可以发现, 左边行人的属性预测全部正确, 而右边行人的下衣种类和是否带手提包两个属性识别错误.

    图 4  行人属性识别结果举例
    Fig. 4  Example of the result of pedestrian attributes

    本文基于三个通用行人属性数据集, 进行了行人再识别实验, 实验结果如表 6~表 8所示.表中“*”表示原文献中没有公布相关数据, 本文使用其源码复现得到.其中“—”表示没有该项实验结果.

    表 6  Market 1501数据集行人再识别结果
    Table 6  Re-id results of the Market 1501 dataset
    方法 rank-1 rank-5 rank-10 rank-20 mAP
    DADM[17] 39.4 - - - 19.6
    MBC[18] 45.56 67 76 82 26.11
    SML[19] 45.16 68.12 76 84 -
    DLDA[20] 48.15 - - - 29.94
    DNS[21] 55.43 - - - 29.87
    LSTM[8] 61.6 - - - 35.3
    S-CNN[22] 65.88 - - - 39.55
    2Stream[23] 79.51 90.91 94.09 96.23 59.87
    GAN[13] 79.33 - - - 55.95
    Pose[24] 78.06 90.76 94.41 96.52 56.23
    Deep[25] 83.7 - - - 65.5
    APR 84.29 93.2 95.19 97 64.67
    本文-$F{{C}_{0}}$ 85.37 94.05 96.13 97.31 65.11
    本文-归一化 84.92 93.75 95.92 97.46 64.82
    本文-权重 85.67 94.69 96.75 97.94 65.23
    本文-归一化+权重 86.31 94.97 97.10 98.01 65.46
    本文 86.90 95.37 97.03 98.17 65.87
    表 7  DukeMTMC数据集行人再识别结果
    Table 7  Re-id results of the DukeMTMC dataset
    方法 rank-1 mAP
    BoW+kissme[12] 25.13 12.17
    LOMO+XQDA[16] 30.75 17.04
    GAN[13] 67.68 47.13
    APR 70.69 51.88
    本文-$F{{C}_{0}}$ 71.56 52.36
    本文-归一化 70.92 52.03
    本文-权重 71.82 52.67
    本文-归一化+权重 72.11 52.84
    本文 72.83 53.42
    表 8  PETA数据集行人再识别结果
    Table 8  Re-id results of the PETA dataset
    方法 rank-1 mAP mA
    ikSVM[26] 41.12* 26.87* 69.5
    MRFr2[27] 51.71* 30.77* 75.6
    ACN[28] 59.04* 35.89* 81.15
    DeepMAR[10] 64.58* 41.12* 82.6
    WPAL[29] 68.14* 42.68* 85.5
    APR 71.29* 45.31* 84.71*
    本文-$F{{C}_{0}}$ 72.82 47.75 86.24
    本文-归一化 73.63 47.86 87.01
    本文-权重 73.14 47.51 87.28
    本文-归一化+权重 74.57 49.69 88.05
    本文 75.68 51.03 88.54
    4.4.1   Market 1501数据集

    首先, 针对本文提出的三个改进点分别做了对比实验, 在表 6中分别以“本文-$F{{C}_{0}}$”, “本文-归一化”, “本文-权重”代表基于APR网络单独添加这三处改进得到实验结果, “本文-归一化+权重”代表同时添加这两项改进得到的实验结果.可以发现, 相对于APR网络, 这4处改进在首位匹配率上都得到了提升, 分别提升了1.08%、0.63%、1.38%、2.02%, 其中添加了全连接层和改变权重对实验效果的提升比较明显, 对数据进行归一化也有一定提升.相应的平均准确率也有0.44%、0.15%、0.56%、0.79%的提升, 这说明三处改进对于提高行人再识别结果都有较大作用, 而且联合归一化和占比权重调整两处改进, 得到了较单独改进更好的实验效果, 说明两处改进之间具有互补之处.

    其次, 在Market 1501数据集, 本文选取了DADM、MBC等经典方法进行对比.可以发现, 传统方法的首位匹配率普遍不是很高, 一般在50%以下.在使用深度学习方法以后, 准确率得到了一个巨大的提升, 而APR网络更是达到了84.29%的首位匹配率和64.67%的平均准确率.本文在APR网络的基础上, 进一步提高了识别的准确率, 达到了86.90%的首位匹配率和65.87%的平均准确率, 第5, 10, 20匹配率也有相应的提升.

    4.4.2   DukeMTMC数据集

    同样的, 针对本文提出的三个改进点分别做了对比实验.从表 7中可以发现, 相对于APR网络, 这四处改进在首位匹配率上都得到了提升, 分别提升了0.87%, 0.26%, 1.13%, 1.42%, 类似于Market 1501数据集的实验结果, 添加全连接层和改变权重对实验效果的提升比较明显, 相应的平均准确率也有0.48%, 0.25%, 0.79%, 0.96%的提升, 这说明对于该数据集, 这三处改进也有较好的实验效果.

    其次, 针对DukeMTMC数据集, 由于使用该数据集的评测方法与Market 1501数据集不尽相同, 从中选取了BoW、LOMO等经典方法进行对比.可以发现, 这两种传统方法的效果不是很好.对抗学习达到了67.68%的首位匹配率. APR网络在该数据集达到了70.69%的首位匹配率和51.88%的平均准确率.本文在APR网络的基础上, 在该数据集上达到了72.83%的首位匹配率和53.42%的平均准确率.由于这几个方法没有提供第5、10、20匹配率, 所以在此不作对比.

    4.4.3   PETA数据集

    与前两个数据集相同, 针对本文提出的三个改进点分别做了对比实验.从表 8中可以发现, 相对于APR网络, 在这4处改进中分别提升了1.53%、2.34%、1.85%、3.28%, 类似于前两者的实验结果, 添加全连接层和改变权重对实验效果的提升比较明显, 相应的平均准确率也有1.53%、2.30%、2.57%、3.34%的提升, 由于该数据集各属性之间数量差异较大, 且属性内正负样本不平衡严重, 所以本文方法在此数据集上有较大提升.

    其次, 针对PETA数据集, 很多方法是比较各属性的平均准确率(Mean accuracy, mA), 而不是比较rank-1和mAP的值, 所以本文一方面进行了mA的比较, 可以发现, 本文达到了88.54%的属性平均准确率, 较传统方法有了大幅度提升, 相对于APR网络也是有3.83%的提升.另一方面, 本文通过对文献源代码进行复现, 得到rank-1和mAP的值, 可以发现, 本文相对于APR网络和经典算法, 也是有较大的提升, 达到了75.68%的首位匹配率和51.03%的平均准确率.

    综上可以得出, 本文提出的网络相对于一些经典方法, 在首位匹配率和平均准确率上都有很大的优势, 相较于APR网络也有较大的提升, 表明本文提出的基于数据先验知识的行人再识别网络, 对于行人再识别效果提升是有效的.如图 5所示的行人再识别结果的两个例子, 可以发现, 虽然存在一些误识, 但是总体识别效果已经达到较高的程度.

    图 5  行人再识别结果举例
    Fig. 5  Example of the re-id result

    随着深度学习技术的发展和带属性标注行人数据集的出现, 近年来基于行人属性的行人再识别有效提升了识别精度.在已有研究基础上, 本文基于行人属性中的数据先验分布知识设计了新的用于行人属性识别和再识别的深度神经网络.实验结果验证了本文方法的有效性.但依旧没有充分挖掘数据集的内在信息, 实验效果还可进一步提高.后续工作将进一步研究如何在网络设计中融入属性之间的相关性和异质性.

  • 图  1  自动化信任校准示意图

    Fig.  1  Diagram of calibration of trust in automation

    图  2  自动化信任定义涉及的重要特征

    Fig.  2  Important characteristics involved in the definitions of trust in automation

    图  3  自动化信任模型文献的时间分布

    Fig.  3  Time distribution of the literature on models of trust in automation

    图  4  自动化信任影响因素总结

    Fig.  4  Summary of factors influencing trust in automation

    图  5  三种自动化信任测量方法的应用比例

    Fig.  5  Application ratio of three trust in automation measures

    图  6  与文献发表趋势、重点应用领域及研究对象相关的自动化信任文献分析结果

    Fig.  6  Results of literature analysis related to literature publication trends, key application areas and research objects of trust in automation

    表  1  自动化信任计算模型总结

    Table  1  Summary of computational models of trust in automation

    类型: 离线信任模型 | 在线信任模型
    输入: 先验参数 | 先验参数及实时行为和生理及神经数据
    作用: 在可能的情景范围内进行模拟以预测自动化信任水平 | 在系统实际运行期间实时估计自动化信任水平
    应用: 用于自动化系统设计阶段 | 用于自动化系统部署阶段
    结果: 静态改进自动化设计 | 动态调整自动化行为

    表  2  常见的自动化信任行为测量方法总结

    Table  2  Summary of common behavioural measures of trust in automation

    行为 | 典型例子
    依赖 | 1) 将控制权移交给自动化或从自动化收回控制权[133]. 2) 降低对自动化的监视程度[134-135].
    遵从 | 1) 接受由自动化提供的建议或选择的动作[136]. 2) 放弃自己的决定来遵守自动化的决定[137].
    其他 | 1) 选择手动还是使用自动化完成任务[58, 84]. 2) 选择的自动化水平[138] (操作者选择的自动化水平越高, 其信任水平越高). 3) 反应时间[139] (较长的反应时间代表较高的信任水平).

    表  3  重要的生理及神经测量方法及其依据

    Table  3  Important physiological and neural measures of trust in automation and their basis

    测量方法 | 方法依据
    通过眼动追踪捕获操作者的凝视行为来对自动化信任进行持续测量 | 监视行为等显性行为与主观自动化信任的联系更加紧密[78]. 虽然关于自动化信任与监视行为的实验证据并不是单一的[142], 但大多数实证研究表明, 自动化信任主观评分与操作者监视频率之间存在显著的负相关关系[48]. 表征操作者监视程度的凝视行为可以为实时自动化信任测量提供可靠信息[140, 142-143].
    利用 EEG 信号的图像特征来检测操作者的自动化信任状态 | 许多研究检验了人际信任的神经关联[144-148], 使用神经成像工具检验自动化信任的神经关联是可行的. EEG 比其他工具 (如功能性磁共振成像) 具有更好的时间动态性[149], 在脑−机接口设计中使用 EEG 图像模式来识别用户认知和情感状态已经具有良好的准确性[149]. 自动化信任是一种认知结构, 利用 EEG 信号的图像特征来检测操作者的自动化信任校准是可行的, 并且已经取得了较高的准确性[68-69, 150].
    通过 EDA 水平推断自动化信任水平 | 已有研究表明, 较低的自动化信任水平可能与较高的 EDA 水平相关[151]. 将该方法与其他生理及神经测量方法结合使用比单独使用某种方法的自动化信任测量准确度更高, 例如将 EDA 与眼动追踪[142] 或 EEG 结合使用[68-69].

    表  4  自动化信任的主要研究团体及其研究贡献

    Table  4  Main research groups of trust in automation and their research contributions

    序号 | 国别 | 机构团队及代表学者 | 研究贡献 | 文献数
    1 | 美国 | 美国陆军研究实验室人类研究和工程局的 Chen | 提出基于系统透明度的一系列自动化信任校准方法 | 26
    2 | 美国 | 美国空军研究实验室人类信任与交互分部的 Lyons | 进行军事背景下的自动化信任应用研究 | 24
    3 | 美国 | 中佛罗里达大学仿真模拟与培训学院的 Hancock | 建立人−机器人信任的理论体系并进行相关影响因素实证研究 | 21
    4 | 美国 | 克莱姆森大学机械工程系的 Saeidi 和 Wang | 建立基于信任计算模型的自主分配策略来提高人机协作效能 | 20
    5 | 美国 | 乔治梅森大学心理学系的 de Visser | 建立并完善自动化信任修复相关理论, 着重研究自动化的拟人特征对信任修复的作用 | 18
    6 | 日本 | 筑波大学风险工程系的 Itoh 和 Inagaki | 基于自动化信任校准的人−自动驾驶汽车协同系统设计方法 | 14
  • [1] Schörnig N. Unmanned systems: The robotic revolution as a challenge for arms control. Information Technology for Peace and Security: IT Applications and Infrastructures in Conflicts, Crises, War, and Peace. Wiesbaden: Springer, 2019. 233−256
    [2] Meyer G, Beiker S. Road Vehicle Automation. Cham: Springer, 2019. 73−109
    [3] Bahrin M A K, Othman M F, Azli N H N, Talib M F. Industry 4.0: A review on industrial automation and robotic. Jurnal Teknologi, 2016, 78(6−13): 137−143
    [4] Musen M A, Middleton B, Greenes R A. Clinical decision-support systems. Biomedical Informatics: Computer Applications in Health Care and Biomedicine. London: Springer, 2014. 643−674
    [5] Janssen C P, Donker S F, Brumby D P, Kun A L. History and future of human-automation interaction. International Journal of Human-Computer Studies, 2019, 131: 99−107 doi: 10.1016/j.ijhcs.2019.05.006
    [6] 许为, 葛列众. 智能时代的工程心理学. 心理科学进展, 2020, 28(9): 1409−1425 doi: 10.3724/SP.J.1042.2020.01409

    Xu Wei, Ge Lie-Zhong. Engineering psychology in the era of artificial intelligence. Advances in Psychological Science, 2020, 28(9): 1409−1425 doi: 10.3724/SP.J.1042.2020.01409
    [7] Parisi G I, Kemker R, Part J L, Kanan C, Wermter S. Continual lifelong learning with neural networks: A review. Neural Networks, 2019, 113: 54−71 doi: 10.1016/j.neunet.2019.01.012
    [8] Grigsby S S. Artificial intelligence for advanced human-machine symbiosis. In: Proceeding of the 12th International Conference on Augmented Cognition: Intelligent Technologies. Berlin, Germany: Springer, 2018. 255−266
    [9] Gogoll J, Uhl M. Rage against the machine: Automation in the moral domain. Journal of Behavioral and Experimental Economics, 2018, 74: 97−103 doi: 10.1016/j.socec.2018.04.003
    [10] Gunning D. Explainable artificial intelligence (XAI) [Online], available: https://www.cc.gatech.edu/~alanwags/DLAI2016/(Gunning)%20IJCAI-16%20DLAI%20WS.pdf, April 26, 2020
    [11] Endsley M R. From here to autonomy: Lessons learned from human-automation research. Human Factors: the Journal of the Human Factors and Ergonomics Society, 2017, 59(1): 5−27 doi: 10.1177/0018720816681350
    [12] Blomqvist K. The many faces of trust. Scandinavian Journal of Management, 1997, 13(3): 271−286 doi: 10.1016/S0956-5221(97)84644-1
    [13] Rotter J B. A new scale for the measurement of interpersonal trust. Journal of Personality, 1967, 35(4): 651−665 doi: 10.1111/j.1467-6494.1967.tb01454.x
    [14] Muir B M. Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics, 1994, 37(11): 1905−1922 doi: 10.1080/00140139408964957
    [15] Lewandowsky S, Mundy M, Tan G P A. The dynamics of trust: Comparing humans to automation. Journal of Experimental Psychology: Applied, 2000, 6(2): 104−123 doi: 10.1037/1076-898X.6.2.104
    [16] Muir B M. Trust between humans and machines, and the design of decision aids. International Journal of Man-Machine Studies, 1987, 27(5-6): 527−539 doi: 10.1016/S0020-7373(87)80013-5
    [17] Parasuraman R, Riley V. Humans and automation: Use, misuse, disuse, abuse. Human Factors: The Journal of the Human Factors and Ergonomics Society, 1997, 39(2): 230−253 doi: 10.1518/001872097778543886
    [18] Levin S. Tesla fatal crash: ‘autopilot’ mode sped up car before driver killed, report finds [Online], available: https://www.theguardian.com/technology/2018/jun/07/tesla-fatal-crash-silicon-valley-autopilot-mode-report, June 8, 2020
    [19] The Tesla Team. An update on last Week' s accident [Online], available: https://www.tesla.com/en_GB/blog/update-last-week%E2%80%99s-accident, March 20, 2020
    [20] Mayer R C, Davis J H, Schoorman F D. An integrative model of organizational trust. Academy of Management Review, 1995, 20(3): 709−734 doi: 10.5465/amr.1995.9508080335
    [21] McKnight D H, Cummings L L, Chervany N L. Initial trust formation in new organizational relationships. Academy of Management Review, 1998, 23(3): 473−490 doi: 10.5465/amr.1998.926622
    [22] Jarvenpaa S L, Knoll K, Leidner D E. Is anybody out there? Antecedents of trust in global virtual teams. Journal of Management Information Systems, 1998, 14(4): 29−64 doi: 10.1080/07421222.1998.11518185
    [23] Siau K, Shen Z X. Building customer trust in mobile commerce. Communications of the ACM, 2003, 46(4): 91−94 doi: 10.1145/641205.641211
    [24] Gefen D. E-commerce: The role of familiarity and trust. Omega, 2000, 28(6): 725−737 doi: 10.1016/S0305-0483(00)00021-9
    [25] McKnight D H, Choudhury V, Kacmar C. Trust in e-commerce vendors: A two-stage model. In: Proceeding of the 21st International Conference on Information Systems. Brisbane, Australia: Association for Information Systems, 2000. 96−103
    [26] Li X, Hess T J, Valacich J S. Why do we trust new technology? A study of initial trust formation with organizational information systems. The Journal of Strategic Information Systems, 2008, 17(1): 39−71 doi: 10.1016/j.jsis.2008.01.001
    [27] Lee J D, See K A. Trust in automation: Designing for appropriate reliance. Human Factors: the Journal of the Human Factors and Ergonomics Society, 2004, 46(1): 50−80 doi: 10.1518/hfes.46.1.50.30392
    [28] Pan B, Hembrooke H, Joachims T, Lorigo L, Gay G, Granka L. In Google we trust: Users' decisions on rank, position, and relevance. Journal of Computer-Mediated Communication, 2007, 12(3): 801−823 doi: 10.1111/j.1083-6101.2007.00351.x
    [29] Riegelsberger J, Sasse M A, McCarthy J D. The researcher's dilemma: Evaluating trust in computer-mediated communication. International Journal of Human-Computer Studies, 2003, 58(6): 759−781 doi: 10.1016/S1071-5819(03)00042-9
    [30] Hancock P A, Billings D R, Schaefer K E, Chen J Y C, de Visser E J, Parasuraman R. A meta-analysis of factors affecting trust in human-robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2011, 53(5): 517−527 doi: 10.1177/0018720811417254
    [31] Billings D R, Schaefer K E, Chen J Y C, Hancock P A. Human-robot interaction: Developing trust in robots. In: Proceeding of the 7th ACM/IEEE International Conference on Human-Robot Interaction. Boston, USA: ACM, 2012. 109−110
    [32] Madhavan P, Wiegmann D A. Similarities and differences between human-human and human-automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 2007, 8(4): 277−301 doi: 10.1080/14639220500337708
    [33] Walker G H, Stanton N A, Salmon P. Trust in vehicle technology. International Journal of Vehicle Design, 2016, 70(2): 157−182 doi: 10.1504/IJVD.2016.074419
    [34] Siau K, Wang W Y. Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 2018, 31(2): 47−53
    [35] Schaefer K E, Chen J Y C, Szalma J L, Hancock P A. A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors: the Journal of the Human Factors and Ergonomics Society, 2016, 58(3): 377−400 doi: 10.1177/0018720816634228
    [36] Hoff K A, Bashir M. Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors: the Journal of the Human Factors and Ergonomics Society, 2015, 57(3): 407−434 doi: 10.1177/0018720814547570
    [37] Kaber D B. Issues in human-automation interaction modeling: Presumptive aspects of frameworks of types and levels of automation. Journal of Cognitive Engineering and Decision Making, 2018, 12(1): 7−24 doi: 10.1177/1555343417737203
    [38] Hancock P A. Imposing limits on autonomous systems. Ergonomics, 2017, 60(2): 284−291 doi: 10.1080/00140139.2016.1190035
    [39] Schaefer K E. The Perception and Measurement of Human-Robot Trust [Ph. D. dissertation], University of Central Florida, USA, 2013.
    [40] Schaefer K E, Billings D R, Szalma J L, Adams J K, Sanders T L, Chen J Y C, et al. A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Human-Robot Interaction, Technical Report ARL-TR-6984, Army Research Laboratory, USA, 2014.
    [41] Nass C, Fogg B J, Moon Y. Can computers be teammates? International Journal of Human-Computer Studies, 1996, 45(6): 669−678 doi: 10.1006/ijhc.1996.0073
    [42] Madhavan P, Wiegmann D A. A new look at the dynamics of human-automation trust: Is trust in humans comparable to trust in machines? Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2004, 48(3): 581−585 doi: 10.1177/154193120404800365
    [43] Dimoka A. What does the brain tell us about trust and distrust? Evidence from a functional neuroimaging study. MIS Quarterly, 2010, 34(2): 373−396 doi: 10.2307/20721433
    [44] Riedl R, Hubert M, Kenning P. Are there neural gender differences in online trust? An fMRI study on the perceived trustworthiness of eBay offers. MIS Quarterly, 2010, 34(2): 397−428 doi: 10.2307/20721434
    [45] Billings D, Schaefer K, Llorens N, Hancock P A. What is Trust? Defining the construct across domains. In: Proceeding of the American Psychological Association Conference. Florida, USA: APA, 2012. 76−84
    [46] Barber B. The Logic and Limits of Trust. New Jersey: Rutgers University Press, 1983. 15−22
    [47] Rempel J K, Holmes J G, Zanna M P. Trust in close relationships. Journal of Personality and Social Psychology, 1985, 49(1): 95−112 doi: 10.1037/0022-3514.49.1.95
    [48] Muir B M, Moray N. Trust in automation. Part Ⅱ. Experimental studies of trust and human intervention in a process control simulation. Ergonomics, 1996, 39(3): 429−460 doi: 10.1080/00140139608964474
    [49] Ajzen I. The theory of planned behavior. Organizational Behavior and Human Decision Processes, 1991, 50(2): 179−211 doi: 10.1016/0749-5978(91)90020-T
    [50] Dzindolet M T, Pierce L G, Beck H P, Dawe L A, Anderson B W. Predicting misuse and disuse of combat identification systems. Military Psychology, 2001, 13(3): 147−164 doi: 10.1207/S15327876MP1303_2
    [51] Madhavan P, Wiegmann D A. Effects of information source, pedigree, and reliability on operator interaction with decision support systems. Human Factors: the Journal of the Human Factors and Ergonomics Society, 2007, 49(5): 773−785 doi: 10.1518/001872007X230154
    [52] Goodyear K, Parasuraman R, Chernyak S, de Visser E, Madhavan P, Deshpande G, et al. An fMRI and effective connectivity study investigating miss errors during advice utilization from human and machine agents. Social Neuroscience, 2017, 12(5): 570−581 doi: 10.1080/17470919.2016.1205131
    [53] Hoffmann H, Söllner M. Incorporating behavioral trust theory into system development for ubiquitous applications. Personal and Ubiquitous Computing, 2014, 18(1): 117−128 doi: 10.1007/s00779-012-0631-1
    [54] Ekman F, Johansson M, Sochor J. Creating appropriate trust in automated vehicle systems: A framework for HMI design. IEEE Transactions on Human-Machine Systems, 2018, 48(1): 95−101 doi: 10.1109/THMS.2017.2776209
    [55] Xu A Q, Dudek G. OPTIMo: Online probabilistic trust inference model for asymmetric human-robot collaborations. In: Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI). Portland, USA: IEEE, 2015. 221−228
    [56] Nam C, Walker P, Lewis M, Sycara K. Predicting trust in human control of swarms via inverse reinforcement learning. In: Proceeding of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Lisbon, Portugal: IEEE, 2017. 528−533
    [57] Akash K, Polson K, Reid T, Jain N. Improving human-machine collaboration through transparency-based feedback-part I: Human trust and workload model. IFAC-PapersOnLine, 2019, 51(34): 315−321 doi: 10.1016/j.ifacol.2019.01.028
    [58] Gao J, Lee J D. Extending the decision field theory to model operators' reliance on automation in supervisory control situations. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 2006, 36(5): 943−959 doi: 10.1109/TSMCA.2005.855783
    [59] Wang Y, Shi Z, Wang C, Zhang F. Human-robot mutual trust in (Semi) autonomous underwater robots. Cooperative Robots and Sensor Networks. Berlin: Springer, 2014. 115−137
    [60] Clare A S. Modeling Real-time Human-automation Collaborative Scheduling of Unmanned Vehicles [Ph. D. dissertation], Massachusetts Institute of Technology, USA, 2013.
    [61] Gao F, Clare A S, Macbeth J C, Cummings M L. Modeling the impact of operator trust on performance in multiple robot control. In: Proceeding of the AAAI Spring Symposium Series. Palo Alto, USA: AAAI, 2013. 164−169
    [62] Hoogendoorn M, Jaffry S W, Treur J. Cognitive and neural modeling of dynamics of trust in competitive trustees. Cognitive Systems Research, 2012, 14(1): 60−83 doi: 10.1016/j.cogsys.2010.12.011
    [63] Hussein A, Elsawah S, Abbass H A. Towards trust-aware human-automation interaction: An overview of the potential of computational trust models. In: Proceedings of the 53rd Hawaii International Conference on System Sciences. Hawaii, USA: University of Hawaii, 2020. 47−57
    [64] Lee J, Moray N. Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 1992, 35(10): 1243−1270 doi: 10.1080/00140139208967392
    [65] Akash K, Hu W L, Reid T, Jain N. Dynamic modeling of trust in human-machine interactions. In: Proceedings of the 2017 American Control Conference (ACC). Seattle, USA: IEEE, 2017. 1542−1548
    [66] Hu W L, Akash K, Reid T, Jain N. Computational modeling of the dynamics of human trust during human-machine interactions. IEEE Transactions on Human-Machine Systems, 2019, 49(6): 485−497 doi: 10.1109/THMS.2018.2874188
    [67] Akash K, Reid T, Jain N. Improving human-machine collaboration through transparency-based feedback-part Ⅱ: Control design and synthesis. IFAC-PapersOnLine, 2019, 51(34): 322−328 doi: 10.1016/j.ifacol.2019.01.026
    [68] Hu W L, Akash K, Jain N, Reid T. Real-time sensing of trust in human-machine interactions. IFAC-PapersOnLine, 2016, 49(32): 48−53 doi: 10.1016/j.ifacol.2016.12.188
    [69] Akash K, Hu W L, Jain N, Reid T. A classification model for sensing human trust in machines using EEG and GSR. ACM Transactions on Interactive Intelligent Systems, 2018, 8(4): Article No. 27
    [70] Akash K, Reid T, Jain N. Adaptive probabilistic classification of dynamic processes: A case study on human trust in automation. In: Proceedings of the 2018 Annual American Control Conference (ACC). Milwaukee, USA: IEEE, 2018. 246−251
    [71] Merritt S M, Ilgen D R. Not all trust is created equal: Dispositional and history-based trust in human-automation interactions. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2008, 50(2): 194−210 doi: 10.1518/001872008X288574
    [72] Bagheri N, Jamieson G A. The impact of context-related reliability on automation failure detection and scanning behaviour. In: Proceedings of the 2004 International Conference on Systems, Man and Cybernetics. The Hague, Netherlands: IEEE, 2004. 212−217
    [73] Cahour B, Forzy J F. Does projection into use improve trust and exploration? An example with a cruise control system. Safety Science, 2009, 47(9): 1260−1270 doi: 10.1016/j.ssci.2009.03.015
    [74] Cummings M L, Clare A, Hart C. The role of human-automation consensus in multiple unmanned vehicle scheduling. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2010, 52(1): 17−27 doi: 10.1177/0018720810368674
    [75] Kraus J M, Forster Y, Hergeth S, Baumann M. Two routes to trust calibration: effects of reliability and brand information on trust in automation. International Journal of Mobile Human Computer Interaction, 2019, 11(3): 1−17 doi: 10.4018/IJMHCI.2019070101
    [76] Kelly C, Boardman M, Goillau P, Jeannot E. Guidelines for Trust in Future ATM Systems: A Literature Review, Technical Report HRS/HSP-005-GUI-01, European Organization for the Safety of Air Navigation, Belgium, 2003.
    [77] Riley V. A general model of mixed-initiative human-machine systems. Proceedings of the Human Factors Society Annual Meeting, 1989, 33(2): 124−128 doi: 10.1177/154193128903300227
    [78] Parasuraman R, Manzey D H. Complacency and bias in human use of automation: An attentional integration. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2010, 52(3): 381−410 doi: 10.1177/0018720810376055
    [79] Bailey N R, Scerbo M W, Freeman F G, Mikulka P J, Scott L A. Comparison of a brain-based adaptive system and a manual adaptable system for invoking automation. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2006, 48(4): 693−709 doi: 10.1518/001872006779166280
    [80] Moray N, Hiskes D, Lee J, Muir B M. Trust and Human Intervention in Automated Systems. Expertise and Technology: Cognition & Human-computer Cooperation. New Jersey: L. Erlbaum Associates Inc, 1995. 183−194
    [81] Yu K, Berkovsky S, Taib R, Conway D, Zhou J L, Chen F. User trust dynamics: An investigation driven by differences in system performance. In: Proceedings of the 22nd International Conference on Intelligent User Interfaces. Limassol, Cyprus: ACM, 2017. 307−317
    [82] De Visser E J, Monfort S S, McKendrick R, Smith M A B, McKnight P E, Krueger F, et al. Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied, 2016, 22(3): 331−349 doi: 10.1037/xap0000092
    [83] Pak R, Fink N, Price M, Bass B, Sturre L. Decision support aids with anthropomorphic characteristics influence trust and performance in younger and older adults. Ergonomics, 2012, 55(9): 1059−1072 doi: 10.1080/00140139.2012.691554
    [84] De Vries P, Midden C, Bouwhuis D. The effects of errors on system trust, self-confidence, and the allocation of control in route planning. International Journal of Human-Computer Studies, 2003, 58(6): 719−735 doi: 10.1016/S1071-5819(03)00039-9
    [85] Moray N, Inagaki T, Itoh M. Adaptive automation, trust, and self-confidence in fault management of time-critical tasks. Journal of Experimental Psychology: Applied, 2000, 6(1): 44−58 doi: 10.1037/1076-898X.6.1.44
    [86] Lewis M, Sycara K, Walker P. The role of trust in human-robot interaction. Foundations of Trusted Autonomy. Cham: Springer, 2018. 135−159
    [87] Verberne F M F, Ham J, Midden C J H. Trust in smart systems: Sharing driving goals and giving information to increase trustworthiness and acceptability of smart systems in cars. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2012, 54(5): 799−810 doi: 10.1177/0018720812443825
    [88] de Visser E, Parasuraman R. Adaptive aiding of human-robot teaming: Effects of imperfect automation on performance, trust, and workload. Journal of Cognitive Engineering and Decision Making, 2011, 5(2): 209−231 doi: 10.1177/1555343411410160
    [89] Endsley M R. Situation awareness in future autonomous vehicles: Beware of the unexpected. In: Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018). Cham, Switzerland: Springer, 2018. 303−309
    [90] Wang L, Jamieson G A, Hollands J G. Trust and reliance on an automated combat identification system. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2009, 51(3): 281−291 doi: 10.1177/0018720809338842
    [91] Dzindolet M T, Peterson S A, Pomranky R A, Pierce G L, Beck H P. The role of trust in automation reliance. International Journal of Human-Computer Studies, 2003, 58(6): 697−718 doi: 10.1016/S1071-5819(03)00038-7
    [92] Davis S E. Individual Differences in Operators' Trust in Autonomous Systems: A Review of the Literature, Technical Report DST-Group-TR-3587, Joint and Operations Analysis Division, Defence Science and Technology Group, Australia, 2019.
    [93] Merritt S M. Affective processes in human-automation interactions. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2011, 53(4): 356−370 doi: 10.1177/0018720811411912
    [94] Stokes C K, Lyons J B, Littlejohn K, Natarian J, Case E, Speranza N. Accounting for the human in cyberspace: Effects of mood on trust in automation. In: Proceedings of the 2010 International Symposium on Collaborative Technologies and Systems. Chicago, USA: IEEE, 2010. 180−187
    [95] Merritt S M, Heimbaugh H, LaChapell J, Lee D. I trust it, but I don't know why: Effects of implicit attitudes toward automation on trust in an automated system. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2013, 55(3): 520−534 doi: 10.1177/0018720812465081
    [96] Ardern‐Jones J, Hughes D K, Rowe P H, Mottram D R, Green C F. Attitudes and opinions of nursing and medical staff regarding the supply and storage of medicinal products before and after the installation of a drawer-based automated stock-control system. International Journal of Pharmacy Practice, 2009, 17(2): 95−99 doi: 10.1211/ijpp.17.02.0004
    [97] Gao J, Lee J D, Zhang Y. A dynamic model of interaction between reliance on automation and cooperation in multi-operator multi-automation situations. International Journal of Industrial Ergonomics, 2006, 36(5): 511−526 doi: 10.1016/j.ergon.2006.01.013
    [98] Reichenbach J, Onnasch L, Manzey D. Human performance consequences of automated decision aids in states of sleep loss. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2011, 53(6): 717−728 doi: 10.1177/0018720811418222
    [99] Chen J Y C, Terrence P I. Effects of imperfect automation and individual differences on concurrent performance of military and robotics tasks in a simulated multitasking environment. Ergonomics, 2009, 52(8): 907−920 doi: 10.1080/00140130802680773
    [100] Chen J Y C, Barnes M J. Supervisory control of multiple robots in dynamic tasking environments. Ergonomics, 2012, 55(9): 1043−1058 doi: 10.1080/00140139.2012.689013
    [101] Naef M, Fehr E, Fischbacher U, Schupp J, Wagner G G. Decomposing trust: Explaining national and ethnical trust differences. International Journal of Psychology, 2008, 43(3-4): 577−577
    [102] Huerta E, Glandon T A, Petrides Y. Framing, decision-aid systems, and culture: Exploring influences on fraud investigations. International Journal of Accounting Information Systems, 2012, 13(4): 316−333 doi: 10.1016/j.accinf.2012.03.007
    [103] Chien S Y, Lewis M, Sycara K, Liu J S, Kumru A. Influence of cultural factors in dynamic trust in automation. In: Proceedings of the 2016 International Conference on Systems, Man, and Cybernetics (SMC). Budapest, Hungary: IEEE, 2016. 2884−2889
    [104] Donmez B, Boyle L N, Lee J D, McGehee D V. Drivers' attitudes toward imperfect distraction mitigation strategies. Transportation Research Part F: Traffic Psychology and Behaviour, 2006, 9(6): 387−398 doi: 10.1016/j.trf.2006.02.001
    [105] Kircher K, Thorslund B. Effects of road surface appearance and low friction warning systems on driver behaviour and confidence in the warning system. Ergonomics, 2009, 52(2): 165−176 doi: 10.1080/00140130802277547
    [106] Ho G, Wheatley D, Scialfa C T. Age differences in trust and reliance of a medication management system. Interacting with Computers, 2005, 17(6): 690−710 doi: 10.1016/j.intcom.2005.09.007
    [107] Steinke F, Fritsch T, Silbermann L. Trust in ambient assisted living (AAL) − a systematic review of trust in automation and assistance systems. International Journal on Advances in Life Sciences, 2012, 4(3-4): 77−88
    [108] McBride M, Morgan S. Trust calibration for automated decision aids [Online], available: https://www.researchgate.net/publication/303168234_Trust_calibration_for_automated_decision_aids, May 15, 2020
    [109] Gaines Jr S O, Panter A T, Lyde M D, Steers W N, Rusbult C E, Cox C L, et al. Evaluating the circumplexity of interpersonal traits and the manifestation of interpersonal traits in interpersonal trust. Journal of Personality and Social Psychology, 1997, 73(3): 610−623 doi: 10.1037/0022-3514.73.3.610
    [110] Looije R, Neerincx M A, Cnossen F. Persuasive robotic assistant for health self-management of older adults: Design and evaluation of social behaviors. International Journal of Human-Computer Studies, 2010, 68(6): 386−397 doi: 10.1016/j.ijhcs.2009.08.007
    [111] Szalma J L, Taylor G S. Individual differences in response to automation: The five factor model of personality. Journal of Experimental Psychology: Applied, 2011, 17(2): 71−96 doi: 10.1037/a0024170
    [112] Balfe N, Sharples S, Wilson J R. Understanding is key: An analysis of factors pertaining to trust in a real-world automation system. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2018, 60(4): 477−495 doi: 10.1177/0018720818761256
    [113] Rajaonah B, Anceaux F, Vienne F. Trust and the use of adaptive cruise control: A study of a cut-in situation. Cognition, Technology & Work, 2006, 8(2): 146−155
    [114] Fan X C, Oh S, McNeese M, Yen J, Cuevas H, Strater L, et al. The influence of agent reliability on trust in human-agent collaboration. In: Proceedings of the 15th European Conference on Cognitive Ergonomics: The Ergonomics of Cool Interaction. Funchal, Portugal: Association for Computing Machinery, 2008. Article No: 7
    [115] Sanchez J, Rogers W A, Fisk A D, Rovira E. Understanding reliance on automation: Effects of error type, error distribution, age and experience. Theoretical Issues in Ergonomics Science, 2014, 15(2): 134−160 doi: 10.1080/1463922X.2011.611269
    [116] Riley V. Operator reliance on automation: Theory and data. Automation and Human Performance: Theory and Applications. Mahwah, NJ: Lawrence Erlbaum Associates, 1996. 19−35
    [117] Lee J D, Moray N. Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 1994, 40(1): 153−184 doi: 10.1006/ijhc.1994.1007

Publication history
  • Received: 2020-06-17
  • Revised: 2020-08-11
  • Published online: 2021-06-10
  • Issue date: 2021-06-10
