基于迁移学习的类别级物体识别与检测研究与进展
Status and Development of Transfer Learning Based Category-Level Object Recognition and Detection

张雪松 庄严 闫飞 王伟

Citation (Chinese): 张雪松, 庄严, 闫飞, 王伟. 基于迁移学习的类别级物体识别与检测研究与进展. 自动化学报, 2019, 45(7): 1224-1243. doi: 10.16383/j.aas.c180093
Citation (English): ZHANG Xue-Song, ZHUANG Yan, YAN Fei, WANG Wei. Status and Development of Transfer Learning Based Category-Level Object Recognition and Detection. ACTA AUTOMATICA SINICA, 2019, 45(7): 1224-1243. doi: 10.16383/j.aas.c180093


doi: 10.16383/j.aas.c180093
Funds: 

National Natural Science Foundation of China 61503056

National Natural Science Foundation of China U1508208

Fundamental Scientific Research Project of Liaoning Provincial Department of Education JDL2017017

More Information
    Author Bios:

    ZHANG Xue-Song (张雪松)  Ph.D., lecturer at the Software Technology Institute, Dalian Jiaotong University. His research interest covers computer vision, machine learning, indoor scene understanding, and object recognition and detection. E-mail: zhangxuesong@djtu.edu.cn

    YAN Fei (闫飞)  Associate professor at the School of Control Science and Engineering, Dalian University of Technology. His research interest covers mobile robot map building, path planning, autonomous navigation and scene understanding. E-mail: fyan@dlut.edu.cn

    WANG Wei (王伟)  Professor at the School of Control Science and Engineering, Dalian University of Technology. His research interest covers adaptive control, predictive control, robotics, and intelligent control. E-mail: wangwei@dlut.edu.cn

    Corresponding author: ZHUANG Yan (庄严)  Professor at the School of Control Science and Engineering, Dalian University of Technology. His research interest covers autonomous perception, modelling, map building and scene understanding for mobile robots in complex 3D environments, as well as applications of machine learning in robotics. E-mail: zhuang@dlut.edu.cn

  • Abstract: Category-level object recognition and detection is a fundamental problem in computer vision, concerned with recognizing and localizing the objects of interest in images or video streams. In category-level recognition and detection applications built on small-scale datasets, key problems and challenges such as model overfitting, class imbalance, and cross-domain shifts of the feature distribution are intertwined. This paper reviews the current state of transfer learning theory, and focuses on discussing and analyzing the research ideas and frontier techniques by which transfer learning addresses the main problems encountered in object recognition and detection with small-scale datasets. Finally, the research priorities and technical development trends of this field are discussed.
    1)  Associate editor in charge of this paper: 胡清华 (HU Qing-Hua)
  • Fig.  1  Concept illustration of transfer learning process in machine learning

    Fig.  2  Single task learning vs. multi-task learning process

    Table  1  Different approaches to transfer learning [22]

    Transfer learning approach | Brief description
    Instance-based transfer | Re-weight source-domain instances so that they can be used in the target domain.
    Feature-representation transfer | Find a "good" feature representation that reduces the difference between the source and target domains as well as the error of classification and regression models.
    Parameter-based transfer | Discover parameters or prior knowledge that can be shared between the source- and target-domain models and that benefit the transfer.
    Relational-knowledge transfer | Build a mapping of relational knowledge between the source and target domains; the two domains are related, and the data within each domain are not independent and identically distributed.

    Table  2  Instance based transfer learning methods

    No. | Ref. | Year | Algorithm / method | Main idea | Formal description of the source- and target-domain training data | Transfer learning type
    1 | [37] | 2007 | TrAdaBoost | Guided by a small amount of labeled target-domain data, a weight is assigned to every source-domain sample in each boosting round when the weak classifier is trained; the higher the weight, the more valuable the sample is for transfer to the target domain. | $D_{src} = \{(\pmb{x}_{src}^{(i)}, y_{src}^{(i)})\}_{i=1}^{m}$, $D_{tgt} = \{(\pmb{x}_{tgt}^{(i)}, y_{tgt}^{(i)})\}_{i=1}^{n}$, $m \gg n$ | Supervised single-source transfer learning
    2 | [38] | 2009 | TransferBoost | A transfer method based on instance sets and boosting: according to how well the instance sets of the individual source tasks perform after being transferred to the target domain, weights are adjusted for whole instance sets rather than for single instances. | $D_{src}^{k} = \{(\pmb{x}_{src}^{(i)}, y_{src}^{(i)})^{k}\}_{i=1}^{m_{k}}$, $k = 1, \cdots, S$, $D_{tgt} = \{(\pmb{x}_{tgt}^{(i)}, y_{tgt}^{(i)})\}_{i=1}^{n}$, $\sum_{k} m_{k} \gg n$ | Supervised multi-source transfer learning
    3 | [39] | 2012 | THUNTER | By mapping features into a subspace, heterogeneous unlabeled data from the source domain are transferred to a clustering task in the target domain. | $D_{src} = \{\pmb{x}_{src}^{(i)}\}_{i=1}^{m}$, $D_{tgt} = \{\pmb{x}_{tgt}^{(i)}\}_{i=1}^{n}$, $m \gg n$ | Unsupervised single-source transfer learning
    4 | [40] | 2014 | DMITL | A deep multi-instance transfer learning algorithm that extends traditional single-instance transfer learning to the multi-instance setting. | $\chi = \{\pmb{x}_{i}\}_{i \in I}$, $D_{src \cup tgt} = \{(\gamma_{g}, y_{g})\}_{g=1}^{G}$, $\gamma_{g} \in \chi$ | Multi-instance single-source transfer learning
    5 | [41] | 2014 | MSD-TrAdaBoost | A multi-source instance transfer algorithm that adds a dynamic factor and instances from multiple source domains to TrAdaBoost; samples from all source domains participate in every training round of the classifier, which effectively reduces negative transfer. | $D_{src}^{k} = \{(\pmb{x}_{src}^{(i)}, y_{src}^{(i)})^{k}\}_{i=1}^{m_{k}}$, $k = 1, \cdots, S$, $D_{tgt} = \{(\pmb{x}_{tgt}^{(i)}, y_{tgt}^{(i)})\}_{i=1}^{n}$, $\sum_{k} m_{k} \gg n$ | Supervised multi-source transfer learning
    6 | [42] | 2015 | LSSS | Selects from the source-domain training data the largest subset of instances relevant to the target task, and proves theoretically that this subset yields better instance transfer than its supersets. | $D_{src} = \{(\pmb{x}_{src}^{(i)}, y_{src}^{(i)})\}_{i=1}^{m}$, $D_{tgt} = \{(\pmb{x}_{tgt}^{(i)}, y_{tgt}^{(i)})\}_{i=1}^{n}$, $m \gg n$ | Supervised single-source transfer learning
    7 | [43] | 2017 | PreIR-DT, PreSR-DT, EmbedIR-DT, EmbedSR-DT | Four decision-tree-based instance transfer methods that transfer instances effectively when the features of the source- and target-domain instances are only partially related. | $D_{src} = \{(\pmb{x}_{src}^{(i)}, y_{src}^{(i)})\}_{i=1}^{m}$, $D_{tgt} = \{(\pmb{x}_{tgt}^{(i)}, y_{tgt}^{(i)})\}_{i=1}^{n}$, $m \gg n$ | Supervised single-source transfer learning
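    A common thread in the methods above is that source-domain instances are re-weighted according to how useful they appear to be for the target task. The minimal sketch below illustrates TrAdaBoost-style weight updates in the spirit of [37]; the base learner (a shallow decision tree), the iteration count and the data arrays are illustrative assumptions, not details taken from the cited papers.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tradaboost(X_src, y_src, X_tgt, y_tgt, n_iter=20):
    # Sketch of TrAdaBoost-style instance re-weighting (cf. [37]); not the full algorithm.
    m, n = len(X_src), len(X_tgt)
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    w = np.ones(m + n) / (m + n)                        # one weight per training instance
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(m) / n_iter))
    clf = None
    for _ in range(n_iter):
        clf = DecisionTreeClassifier(max_depth=3)       # weak learner on the weighted union
        clf.fit(X, y, sample_weight=w / w.sum())
        err = (clf.predict(X) != y).astype(float)
        eps = np.sum(w[m:] * err[m:]) / np.sum(w[m:])   # weighted error on the target part only
        eps = min(max(eps, 1e-10), 0.499)
        beta_tgt = eps / (1.0 - eps)
        # Misclassified source samples are down-weighted (they look unlike the target data);
        # misclassified target samples are up-weighted, as in AdaBoost.
        w[:m] *= beta_src ** err[:m]
        w[m:] *= beta_tgt ** (-err[m:])
    return clf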

    Table  3  Feature representation based transfer learning methods

    No. | Ref. | Year | Algorithm / method | Main idea | Formal description of the source- and target-domain training data | Transfer learning type
    1 | [44] | 2006 | SCL | A structural correspondence method: pivot features that occur frequently in both the source and target domains are identified and used to build cross-domain feature correspondences; the method is applicable to training various feature-based discriminative models. | $D_{src} = \{(\pmb{x}_{src}^{(i)}, y_{src}^{(i)})\}_{i=1}^{t} \cup \{\pmb{x}_{src}^{(i)}\}_{i=t+1}^{m}$, $D_{tgt} = \{\pmb{x}_{tgt}^{(i)}\}_{i=1}^{n}$, $m \gg n$ | Unsupervised single-source transfer learning
    2 | [45] | 2007 | AUGMENT | Before training starts, the feature space of the instances is augmented by padding a certain number of zero elements at different positions of the source- and target-domain feature vectors; the augmented training set can then be handled by conventional machine learning algorithms for domain adaptation. | $D_{src}^{k} = \{(\pmb{x}_{src}^{(i)}, y_{src}^{(i)})^{k}\}_{i=1}^{m_{k}}$, $k = 1, \cdots, S$, $D_{tgt} = \{(\pmb{x}_{tgt}^{(i)}, y_{tgt}^{(i)})\}_{i=1}^{n}$, $\sum_{k} m_{k} \gg n$ | Supervised multi-source transfer learning
    3 | [46] | 2011 | TCA, SSTCA | Transfer component analysis: transfer components are learned in a reproducing kernel Hilbert space using the maximum mean discrepancy (MMD) metric; in the subspace spanned by these components, the difference between the source- and target-domain feature distributions is markedly reduced while the separability of the data is preserved. | TCA: $D_{src} = \{(\pmb{x}_{src}^{(i)}, y_{src}^{(i)})\}_{i=1}^{m}$, $D_{tgt} = \{\pmb{x}_{tgt}^{(i)}\}_{i=1}^{n}$, $m \gg n$; SSTCA: $D_{src} = \{(\pmb{x}_{src}^{(i)}, y_{src}^{(i)})\}_{i=1}^{m}$, $D_{tgt} = \{\pmb{x}_{tgt}^{(i)}\}_{i=1}^{k} \cup \{(\pmb{x}_{tgt}^{(i)}, y_{tgt}^{(i)})\}_{i=k+1}^{n}$, $m \gg n$ | TCA: unsupervised single-source transfer learning; SSTCA: semi-supervised single-source transfer learning
    4 | [47] | 2015 | CoMuT | Finds a common feature representation for multiple source domains and the target domain: the training data are mapped into a latent space in a semi-supervised manner, and this latent space is used to link the source and target domains. | $D_{src}^{k} = \{(\pmb{x}_{src}^{(i)}, y_{src}^{(i)})^{k}\}_{i=1}^{m_{k}}$, $k = 1, \cdots, S$, $D_{tgt} = \{\pmb{x}_{tgt}^{(i)}\}_{i=1}^{k} \cup \{(\pmb{x}_{tgt}^{(i)}, y_{tgt}^{(i)})\}_{i=k+1}^{n}$, $m \gg n$ | Semi-supervised multi-source transfer learning
    5 | [48] | 2015 | TSM | A heterogeneous feature-space mapping method and a task selection method for unsupervised transfer learning. | $D_{src}^{k} = \{\pmb{x}_{k}^{(i)}\}_{i=1}^{m_{k}}$, $k = 1, \cdots, S$, $D_{tgt}^{t} = \{\pmb{x}_{tgt}^{(i)}\}_{i=1}^{n_{t}}$, $t = 1, \cdots, T$ | Unsupervised single-source transfer learning and multi-task learning
    6 | [49] | 2017 | Autoencoder-based FTL | Three feature transfer methods, based on denoising autoencoders, shared-hidden-layer autoencoders and extreme learning machine autoencoders, applied to speech emotion recognition. | $D_{src} = \{(\pmb{x}_{src}^{(i)}, y_{src}^{(i)})\}_{i=1}^{m}$, $D_{tgt} = \{\pmb{x}_{tgt}^{(i)}\}_{i=1}^{n}$, $m \gg n$ | Unsupervised single-source transfer learning
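    Several of the feature-level methods above, most directly TCA and SSTCA [46] (and the deep adaptation networks listed later in Table 7), are built around the maximum mean discrepancy (MMD) between the source and target feature distributions. The sketch below computes the empirical (biased) MMD with an RBF kernel; the median-heuristic bandwidth and the Gaussian toy data are illustrative assumptions.

import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-gamma * d2)

def mmd2(Xs, Xt):
    # Biased estimate of the squared MMD between the two samples.
    Z = np.vstack([Xs, Xt])
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    gamma = 1.0 / np.median(d2[d2 > 0])                    # median-heuristic bandwidth
    return (rbf_kernel(Xs, Xs, gamma).mean()
            + rbf_kernel(Xt, Xt, gamma).mean()
            - 2.0 * rbf_kernel(Xs, Xt, gamma).mean())

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 5))   # toy "source" features
Xt = rng.normal(0.5, 1.0, size=(80, 5))    # toy "target" features with a shifted mean
print(mmd2(Xs, Xt))                        # clearly positive for shifted distributions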

    Table  4  Parameter based transfer learning methods

    No. | Ref. | Year | Algorithm / method | Main idea | Source-domain model and target-domain training data | Transfer learning type
    1 | [31] | 2007 | Regularized SVM | Through regularized adaptation, the parameters learned when training an SVM classifier in the source domain are carried over into the training of the target-domain SVM classifier. | Source model: SVM; $D_{tgt} = \{(\pmb{x}_{tgt}^{(i)}, y_{tgt}^{(i)})\}_{i=1}^{n}$ | Supervised single-source transfer learning
    2 | [50] | 2011 | A-SVM, PMT-SVM, DA-SVM | By regularizing the distance between the source and target models, part of the parameters $W_{s}$ of the source-domain SVM are transferred into the target-domain SVM parameters $W$. | Source model: SVM; $D_{tgt} = \{(\pmb{x}_{tgt}^{(i)}, y_{tgt}^{(i)})\}_{i=1}^{n}$ | Supervised single-source transfer learning
    3 | [51] | 2016 | BNPTL | A Bayesian network parameter transfer learning algorithm that reasons about the relationship between networks and subgraphs, addressing both how to find the source subgraphs most relevant for transfer and how to transfer the source-model parameters to the target model. | Source models: $\{\mathrm{BN}^{i}\}_{i=1}^{k}$; $D_{tgt} = \{(\pmb{x}_{tgt}^{(i)}, y_{tgt}^{(i)})\}_{i=1}^{n}$ | Supervised multi-source transfer learning
    4 | [52] | 2016 | CNN fine-tuning | Shows experimentally that when training a deep convolutional neural network for the target domain, fine-tuning a pretrained source-domain CNN outperforms training the model from scratch on the target data; such fine-tuning is in essence parameter-based transfer. | Source model: CNN; $D_{tgt} = \{(\pmb{x}_{tgt}^{(i)}, y_{tgt}^{(i)})\}_{i=1}^{n}$ | Supervised single-source transfer learning
    5 | [53] | 2017 | SER, STRUT, MIX | Proposes the SER and STRUT algorithms, which perform parameter transfer from a source-domain random forest using the target-domain training data, and a MIX method that ensembles the two resulting target-domain random forests. | Source model: random forest; $D_{tgt} = \{(\pmb{x}_{tgt}^{(i)}, y_{tgt}^{(i)})\}_{i=1}^{n}$ | Supervised single-source transfer learning
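    The fine-tuning entry above ([52]) is the parameter-transfer strategy most widely used in practice: the convolutional weights learned on a large source dataset are reused, the classification head is replaced, and only the new head (optionally together with the top layers) is retrained on the small target set. The PyTorch sketch below is a minimal illustration of this idea; the choice of ResNet-18, the number of target classes, the learning rate and the dummy batch are assumptions made only for the example.

import torch
import torch.nn as nn
from torchvision import models

num_target_classes = 10                       # hypothetical target-domain label set

# Reuse ImageNet-pretrained convolutional weights (parameter transfer) ...
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():                  # freeze the transferred backbone
    p.requires_grad = False
# ... and replace the classifier head with a new, trainable layer for the target task.
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of target images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_target_classes, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()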

    Table  5  Image classification methods in ILSVRC [60]

    Year | Model | Rank | Top-5 error rate | Main innovations
    2010 | SVM | Winner | 28.2% | LBP + SIFT features, locality-constrained linear coding and super-vector coding, SVM classification
    2011 | SVM | Winner | 25.8% | Compressed high-dimensional image signatures, product quantization, SVM classification
    2012 | AlexNet [61] | Winner | 16.4% | ReLU activations in place of the traditional Tanh and Sigmoid units, overlapping pooling, data augmentation and dropout
    2013 | ZFNet [62] | Winner | 11.7% | Deconvolution-based visualization, smaller kernels and strides in the early layers, occlusion experiments to locate the image regions that determine the class
    2014 | GoogLeNet [63] | Winner | 6.7% | 1×1 convolutions and the Inception module, average pooling in place of fully connected layers
    2014 | VGG [64] | Runner-up | 7.3% | Conv-pool-fully-connected architecture in which all convolution kernels are 3×3
    2015 | ResNet [65] | Winner | 3.57% | Residual connections; identified the degradation problem that appears as CNNs grow deeper
    2016 | Ensemble CNN models | Winner | 2.99% | Ensembling of deep CNN models and deep feature fusion
    2017 | SENet [66] | Winner | 2.25% | Squeeze-and-excitation module that explicitly models the interdependencies between feature channels and recalibrates features
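    For reference, the Top-5 error rate used in the table is one minus the Top-5 accuracy: a test image counts as correct if its true label appears among the model's five highest-scoring classes. The short sketch below computes this metric on random placeholder predictions.

import torch

def top5_accuracy(logits, labels):
    top5 = logits.topk(5, dim=1).indices              # (N, 5) indices of the highest-scoring classes
    hits = (top5 == labels.unsqueeze(1)).any(dim=1)   # is the true label among the top five?
    return hits.float().mean().item()

logits = torch.randn(32, 1000)                        # placeholder scores over 1000 classes
labels = torch.randint(0, 1000, (32,))
print(1.0 - top5_accuracy(logits, labels))            # Top-5 error rate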

    Table  6  Typical object detection methods

    Method | Type | Main innovations
    DPM [67] | Sliding window | HOG features, feature pyramid matching, multi-component models combined with deformable part models for multi-view object detection
    R-CNN [68] | Two-stage | Region proposals from selective search, deep CNN features, bounding-box regression for localization
    SPP-NET [69] | Two-stage | Improves R-CNN by inserting a spatial pyramid pooling layer between the convolutional and fully connected layers, which speeds up processing of the candidate regions and reduces the computational overhead
    Fast R-CNN [70] | Two-stage | Improves SPP-NET with an RoI pooling layer and a multi-task loss that learns the classification loss and the box-regression loss jointly
    Faster R-CNN [71] | Two-stage | Improves Fast R-CNN by introducing the region proposal network (RPN)
    Mask R-CNN [72] | Two-stage | Improves Faster R-CNN by replacing RoI pooling with RoIAlign and adding an FCN branch (mask branch) for semantic mask prediction; candidate boxes are generated by the RPN, then each candidate is classified and refined by box regression
    YOLO [73] | One-stage | Predicts from global image information: the input image is resized to 448×448 pixels and divided into a 7×7 grid, a CNN extracts features and directly predicts the box coordinates and class confidences for every grid cell, and leaky ReLU activations are used during CNN training
    YOLO9000 [74] | One-stage | Darknet-19 as the feature extraction network, batch normalization, fine-tuning of a deep CNN pretrained in two stages, anchor boxes in the convolutional layers
    SSD [75] | One-stage | Predicts object regions on the feature maps of several convolutional layers, outputs coordinates relative to a discretized set of default boxes, and uses small convolution kernels for box-coordinate refinement
    G-CNN [76] | One-stage | Divides the image into overlapping multi-scale regular grids and trains the model by iteratively refining the grid boxes
    RON [77] | One-stage | Designs a reverse connection structure and jointly optimizes the reverse connections, the objectness prior and object detection with a multi-task loss
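    Most of the detectors above localize objects by regressing box offsets on top of region proposals, anchors or grid cells. As one concrete illustration, the R-CNN family [68, 71] parameterizes the regression targets as translation offsets of the ground-truth box center relative to the proposal, plus log-scaled width and height ratios; the sketch below computes these targets for hypothetical (center-x, center-y, width, height) boxes.

import math

def box_regression_targets(P, G):
    # Offsets of ground-truth box G relative to proposal P, in the (t_x, t_y, t_w, t_h)
    # parameterization used by the R-CNN family [68, 71]; boxes are given as (cx, cy, w, h).
    px, py, pw, ph = P
    gx, gy, gw, gh = G
    return ((gx - px) / pw,          # horizontal shift, normalized by the proposal width
            (gy - py) / ph,          # vertical shift, normalized by the proposal height
            math.log(gw / pw),       # log width ratio
            math.log(gh / ph))       # log height ratio

print(box_regression_targets((50, 50, 100, 80), (55, 48, 120, 90)))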

    Table  7  Homogeneous feature transfer based object recognition and detection methods

    No. | Ref. | Year | Algorithm / method | Transfer learning type | Task | Main innovations
    1 | [93] | 2010 | Symm | Supervised single-source transfer learning | Object recognition | Cross-domain metric learning, KNN multi-class classification, introduction of the widely used Office dataset
    2 | [95] | 2014 | CMA | Unsupervised single-source transfer learning | Object recognition | Continuously evolving target domain, domain manifold learning, KNN and SVM multi-class classification
    3 | [96] | 2014 | Supervised DA-DPM, Self-adaptive DPM | Supervised single-source transfer learning; unsupervised single-source transfer learning | Object detection | Adaptive structural SVM, structure-aware adaptive structural SVM, domain adaptation based on deformable part models
    4 | [97] | 2015 | DTN | Unsupervised single-source transfer learning | Object recognition | Deep transfer learning with linear computational complexity, suitable for large-scale domain adaptation
    5 | [98] | 2015 | BMFL | Unsupervised single-source transfer learning | Object recognition | Boosting-based ensemble learning, marginalized stacked denoising autoencoder (mSDA), instance weighting
    6 | [99] | 2015 | DAN | Unsupervised single-source transfer learning | Object recognition | Kernel learning, deep adaptation network based on the multi-kernel maximum mean discrepancy (MK-MMD)
    7 | [100] | 2016 | MIDA, SMIDA | Unsupervised single-source transfer learning; semi-supervised single-source transfer learning | Object recognition | Kernel learning, feature augmentation, maximum independence domain adaptation based on the Hilbert-Schmidt independence criterion
    8 | [101] | 2016 | RTN | Unsupervised single-source transfer learning | Object recognition | End-to-end deep learning, MMD minimization, extension of AlexNet into a residual transfer network
    9 | [102] | 2016 | DANN | Unsupervised single-source transfer learning | Object recognition | Deep (or shallow) feed-forward network with a gradient reversal layer, adversarial training
    10 | [103] | 2017 | JAN | Unsupervised single-source transfer learning | Object recognition | Domain adaptation of deep CNN models, the JMMD criterion, adversarial training
    11 | [104] | 2018 | Domain adaptive Faster R-CNN | Unsupervised single-source transfer learning | Object detection | RPN-based detector, two domain adaptation components designed from H-divergence theory, adversarial training
    12 | [105] | 2018 | DT+PL | Weakly supervised single-source transfer learning | Object detection | Two-step progressive domain adaptation that fine-tunes the object detector on artificially generated datasets
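    Rows 9-11 above rely on adversarial training. In DANN [102], for example, a gradient reversal layer is inserted between the feature extractor and a domain classifier, so that the learned features simultaneously support the label classifier and fool the domain classifier. The PyTorch sketch below shows only this gradient reversal operation (identity in the forward pass, negated and scaled gradient in the backward pass); the surrounding networks and the scaling constant are assumptions, not reproduced from [102].

import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)                         # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None       # flip (and scale) the gradient

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Tiny check: the gradient with respect to x comes back negated.
x = torch.ones(3, requires_grad=True)
grad_reverse(x, 1.0).sum().backward()
print(x.grad)                                       # tensor([-1., -1., -1.])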

    Table  8  Heterogeneous feature transfer based object recognition and detection methods

    No. | Ref. | Year | Algorithm / method | Transfer learning type | Task | Main innovations
    1 | [106] | 2011 | Arc-t | Supervised single-source transfer learning | Object recognition | Nonlinear transformation of heterogeneous features; the feature transform is learned independently of the classifier
    2 | [107] | 2014 | HFA, SHFA | HFA: supervised single-source transfer learning; SHFA: semi-supervised single-source transfer learning | Object recognition | Cross-domain kernel learning framework that learns the kernel function and the classifier simultaneously
    3 | [108] | 2015 | SSKMDA | Semi-supervised single-source transfer learning | Object recognition | Domain adaptation based on semi-supervised kernel matching
    4 | [109] | 2016 | CDLS | Supervised single-source transfer learning | Object recognition | Learns cross-domain landmarks to derive a domain-invariant feature subspace
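    A simple way to see the "augmented features" idea behind HFA and SHFA [107] (and the AUGMENT entry of Table 3 [45]) is the feature-replication trick for the homogeneous case: every source feature vector x is mapped to (x, x, 0) and every target vector to (x, 0, x), so that an ordinary classifier trained on the union can learn both shared and domain-specific weights; HFA extends this idea to heterogeneous feature spaces by learning the projections rather than copying the features. The sketch below implements only the simple replication, with placeholder feature dimensions.

import numpy as np

def augment(X, domain):
    # "Frustratingly easy" feature augmentation (cf. [45]): (x, x, 0) for source, (x, 0, x) for target.
    zeros = np.zeros_like(X)
    return np.hstack([X, X, zeros]) if domain == "source" else np.hstack([X, zeros, X])

X_src = np.random.randn(100, 20)   # placeholder source features
X_tgt = np.random.randn(10, 20)    # placeholder target features
X_all = np.vstack([augment(X_src, "source"), augment(X_tgt, "target")])
print(X_all.shape)                 # (110, 60): shared block + source-specific block + target-specific block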
  • [1] Szeliski R. Computer Vision:Algorithms and Applications. New York:Springer-Verlag, 2010. 1-28
    [2] Pinheiro P O, Collobert R. From image-level to pixel-level labeling with convolutional networks. In: Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, USA: IEEE, 2015. 1713-1721 https://www.researchgate.net/publication/308850633_From_image-level_to_pixel-level_labeling_with_Convolutional_Networks
    [3] Grauman K, Leibe B. Visual object recognition. Synthesis Lectures on Artificial Intelligence and Machine Learning. San Rafael: Morgan & Claypool Publishers, 2011. 1-181
    [4] Lowe D G. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 2004, 60(2):91-110 http://d.old.wanfangdata.com.cn/NSTLQK/NSTL_QKJJ025429678/
    [5] Chen C C, Hsieh S L. Using binarization and hashing for efficient SIFT matching. Journal of Visual Communication and Image Representation, 2015, 30:86-93 doi: 10.1016/j.jvcir.2015.02.014
    [6] Bay H, Ess A, Tuytelaars T, Van Gool L. Speeded-up robust features (SURF). Computer Vision and Image Understanding, 2008, 110(3):346-359 doi: 10.1016/j.cviu.2007.09.014
    [7] Donahue J, Jia Y Q, Vinyals O, Hoffman J, Zhang N, Tzeng E, et al. DeCAF: a deep convolutional activation feature for generic visual recognition. In: Proceedings of the 31st International Conference on Machine Learning. Beijing, China: ACM, 2014. I-647-I-655
    [8] Friedman J H, Bentley J L, Finkel R A. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 1977, 3(3):209-226 doi: 10.1145/355744.355745
    [9] Stewénius H, Gunderson S H, Pilet J. Size matters: exhaustive geometric verification for image retrieval. In: Proceedings of the 12th European Conference on Computer Vision. Florence, Italy: Springer, 2012. 674-687
    [10] Deng J, Dong W, Socher R, Li L J, Li K, Li F F. ImageNet: a large-scale hierarchical image database. In: Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami, USA: IEEE, 2009. 248-255 https://www.researchgate.net/publication/221361415_ImageNet_a_Large-Scale_Hierarchical_Image_Database
    [11] Xiao J X, Hays J, Ehinger K A, Oliva A, Torralba A. SUN database: large-scale scene recognition from abbey to zoo. In: Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco, USA: IEEE, 2010. 3485-3492
    [12] Torralba A, Efros A A. Unbiased look at dataset bias. In: Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition. Colorado Springs, USA: IEEE, 2011. 1521-1528
    [13] Tommasi T, Patricia N, Caputo B, Tuytelaars T. A deeper look at dataset bias. In: Proceedings of the 37th German Conference on Pattern Recognition. Aachen, Germany: Springer, 2015. 86-93
    [14] Griffin G, Holub A, Perona P. Caltech-256 object category dataset, Technical Report CNSTR-2007-001, California Institute of Technology, USA, 2007.
    [15] Zeng M, Ren J T. Domain transfer dimensionality reduction via discriminant kernel learning. In: Proceedings of the 16th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining. Kuala Lumpur, Malaysia: Springer-Verlag, 2012. 280-291
    [16] 陶剑文, Chung F L, 王士同, 姚奇富.稀疏标签传播:一种鲁棒的领域适应学习方法.软件学报, 2015, 26(5):977-1000 http://d.old.wanfangdata.com.cn/Periodical/rjxb201505001

    Tao Jian-Wen, Chung F L, Wang Shi-Tong, Yao Qi-Fu. Sparse label propagation:a robust domain adaptation learning method. Journal of Software, 2015, 26(5):977-1000 http://d.old.wanfangdata.com.cn/Periodical/rjxb201505001
    [17] 龙明盛.迁移学习问题与方法研究[博士学位论文], 清华大学, 中国, 2014 http://cdmd.cnki.com.cn/Article/CDMD-10003-1015039180.htm

    Long Ming-sheng. Transfer Learning: Problems and Methods[Ph. D. dissertation], Tsinghua University, China, 2014 http://cdmd.cnki.com.cn/Article/CDMD-10003-1015039180.htm
    [18] Taylor M E, Kuhlmann G, Stone P. Autonomous transfer for reinforcement learning. In: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems. Estoril, Portugal: ACM, 2008. 283-290
    [19] 朱美强, 程玉虎, 李明, 王雪松, 冯涣婷.一类基于谱方法的强化学习混合迁移算法.自动化学报, 2012, 38(11):1765-1776 http://www.aas.net.cn/CN/abstract/abstract17783.shtml

    Zhu Mei-Qiang, Cheng Yu-Hu, Li Ming, Wang Xue-Song, Feng Huan-Ting. A hybrid transfer algorithm for reinforcement learning based on spectral method. Acta Automatica Sinica, 2012, 38(11):1765-1776 http://www.aas.net.cn/CN/abstract/abstract17783.shtml
    [20] Cheng B, Liu M X, Suk H I, Shen D G, Zhang D Q. Multimodal manifold-regularized transfer learning for MCI conversion prediction. Brain Imaging and Behavior, 2015, 9(4):913-926 doi: 10.1007/s11682-015-9356-x
    [21] Long M S, Wang J M, Cao Y, Sun J G, Yu P S. Deep learning of transferable representation for scalable domain adaptation. IEEE Transactions on Knowledge and Data Engineering, 2016, 28(8):2027-2040 doi: 10.1109/TKDE.2016.2554549
    [22] Pan S J, Yang Q. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(10):1345-1359 doi: 10.1109/TKDE.2009.191
    [23] Aytar Y. Transfer learning for object category detection[Ph. D. thesis], University of Oxford, UK, 2014
    [24] Caruana R. Multitask learning. Machine Learning, 1997, 28(1):41-75 doi: 10.1023/A:1007379606734
    [25] Zhang Y, Yang Q. A survey on multi-task learning. arXiv: 1707.08114v1, 2017
    [26] Ding Z M, Shao M, Fu Y. Incomplete multisource transfer learning. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29(2):310-323 doi: 10.1109/TNNLS.2016.2618765
    [27] 顾鑫, 王士同, 许敏.基于多源的跨领域数据分类快速新算法.自动化学报, 2014, 40(3):531-547 http://www.aas.net.cn/CN/abstract/abstract18319.shtml

    Gu Xin, Wang Shi-Tong, Xu Min. A new cross-multidomain classification algorithm and its fast version for large datasets. Acta Automatica Sinica, 2014, 40(3):531-547 http://www.aas.net.cn/CN/abstract/abstract18319.shtml
    [28] Zhang Q, Li H G, Zhang Y, Li M. Instance transfer learning with multisource dynamic TrAdaBoost. The Scientific World Journal, 2014, 2014: Article No. 282747
    [29] Pan J, Wang X, Cheng Y, Cao G. Multi-source transfer ELM-based Q learning. Neurocomputing, 2014, 137:57-64 doi: 10.1016/j.neucom.2013.04.045
    [30] Weiss K, Khoshgoftaar T M, Wang D D. A survey of transfer learning. Journal of Big Data, 2016, 3: Article No. 9
    [31] Li X. Regularized Adaptation: Theory, Algorithms and Applications[Ph. D. thesis], University of Washington, USA, 2007
    [32] 刘建伟, 孙正康, 罗雄麟.域自适应学习研究进展.自动化学报, 2014, 40(8):1576-1600 http://www.aas.net.cn/CN/abstract/abstract18427.shtml

    Liu Jian-Wei, Sun Zheng-Kang, Luo Xiong-Lin. Review and research development on domain adaptation learning. Acta Automatica Sinica, 2014, 40(8):1576-1600 http://www.aas.net.cn/CN/abstract/abstract18427.shtml
    [33] Patel V M, Gopalan R, Li R N, Chellappa R. Visual domain adaptation:a survey of recent advances. IEEE Signal Processing Magazine, 2015, 32(3):53-69 doi: 10.1109/MSP.2014.2347059
    [34] Shao H. Kernel methods for transfer learning to avoid negative transfer. International Journal of Computing Science and Mathematics, 2016, 7(2):190-199
    [35] Ge L, Gao J, Ngo H, Li K, Zhang A D. On handling negative transfer and imbalanced distributions in multiple source transfer learning. Statistical Analysis and Data Mining, 2014, 7(4):254-271 doi: 10.1002/sam.11217
    [36] Mihalkova L, Huynh T, Mooney R J. Mapping and revising markov logic networks for transfer learning. In: Proceedings of the 22nd National Conference on Artificial Intelligence. Vancouver, Canada: AAAI, 2007. 608-614
    [37] Dai W Y, Yang Q, Xue G R, Yu Y. Boosting for transfer learning. In: Proceedings of the 24th International Conference on Machine Learning. Corvalis, USA: ACM, 2007. 193-200
    [38] Eaton E, DesJardins M. Set-based boosting for instance-level transfer. In: Proceedings of the 2009 IEEE International Conference on Data Mining Workshops. Miami, USA: IEEE, 2009. 422-428
    [39] Kong S, Wang D H. Transfer heterogeneous unlabeled data for unsupervised clustering. In: Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012). Tsukuba, Japan: IEEE, 2012. 1193-1196
    [40] Kotzias D, Denil M, Blunsom P, De Freitas N. Deep multi-instance transfer learning. arXiv: 1411.3128, 2014
    [41] 张倩, 李明, 王雪松, 程玉虎, 朱美强.一种面向多源领域的实例迁移学习.自动化学报, 2014, 40(6):1176-1183 http://www.aas.net.cn/CN/abstract/abstract18387.shtml

    Zhang Qian, Li Ming, Wang Xue-Song, Cheng Yu-Hu, Zhu Mei-Qiang. Instance-based transfer learning for multi-source domains. Acta Automatica Sinica, 2014, 40(6):1176-1183 http://www.aas.net.cn/CN/abstract/abstract18387.shtml
    [42] Zhou S, Schoenmakers G, Smirnov E, Peeters R, Driessens K, Chen S Q. Largest source subset selection for instance transfer. In: Proceedings of the 7th Asian Conference on Machine Learning. Hong Kong, China, 2015. 423-438
    [43] Zhou S, Smirnov E N, Schoenmakers G, Peeters R. Conformal decision-tree approach to instance transfer. Annals of Mathematics and Artificial Intelligence, Switzerland:Springer, 2017, 81(1-2):85-104 doi: 10.1007/s10472-017-9554-x
    [44] Blitzer J, McDonald R, Pereira F. Domain adaptation with structural correspondence learning. In: Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing. Sydney, Australia: ACM, 2006. 120-128
    [45] Daumé Ⅲ H. Frustratingly easy domain adaptation. In: Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Prague, Czech Republic: ACL, 2007. 256-263
    [46] Pan S J, Tsang I W, Kwok J T, Yang Q. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 2011, 22(2):199-210 doi: 10.1109/TNN.2010.2091281
    [47] Tahmoresnezhad J, Hashemi S. Common feature extraction in multi-source domains for transfer learning. In: Proceedings of the 7th Conference on Information and Knowledge Technology (IKT). Urmia, Iran: IEEE, 2015. 1-5
    [48] Xue S, Lu J, Zhang G Q, Xiong L. Heterogeneous feature space based task selection machine for unsupervised transfer learning. In: Proceedings of the 10th International Conference on Intelligent Systems and Knowledge Engineering (ISKE). Taipei, China: IEEE, 2015. 46-51
    [49] Deng J, Frühholz S, Zhang Z X, Schuller B. Recognizing emotions from whispered speech based on acoustic feature transfer learning. IEEE Access, 2017, 5:5235-5246
    [50] Aytar Y, Zisserman A. Tabula rasa: model transfer for object category detection. In: Proceedings of the 2011 IEEE International Conference on Computer Vision. Barcelona, Spain: IEEE, 2011. 2252-2259
    [51] Zhou Y, Hospedales T M, Fenton N. When and where to transfer for Bayesian network parameter learning. Expert Systems with Applications, 2016, 55:361-373 doi: 10.1016/j.eswa.2016.02.011
    [52] Tajbakhsh N, Shin J Y, Gurudu S R, Hurst R T, Kendall C B, Gotway M B, et al. Convolutional neural networks for medical image analysis:full training or fine tuning? IEEE Transactions on Medical Imaging, 2016, 35(5):1299-1312 doi: 10.1109/TMI.2016.2535302
    [53] Segev N, Harel M, Mannor S, Crammer K, El-Yaniv R. Learn on source, refine on target:a model transfer learning framework with random forests. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(9):1811-1824 doi: 10.1109/TPAMI.2016.2618118
    [54] Mihalkova L, Mooney R J. Transfer learning from minimal target data by mapping across relational domains. In: Proceedings of the 21st International Jont Conference on Artificial Intelligence. Pasadena, CA: Morgan Kaufmann, 2009. 1163-1168
    [55] Myeong H, Lee K M. Tensor-based high-order semantic relation transfer for semantic scene segmentation. In: Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, USA: IEEE, 2013. 3073-3080
    [56] Xia R, Zong C Q, Hu X L, Cambria E. Feature ensemble plus sample selection:domain adaptation for sentiment classification. IEEE Intelligent Systems, 2013, 28(3):10-18 doi: 10.1109/MIS.2013.27
    [57] Mo Y, Zhang Z X, Wang Y H. A hybrid transfer learning mechanism for object classification across view. In: Proceedings of the 11th International Conference on Machine Learning and Applications. Boca Raton, USA: IEEE, 2012. 226-231
    [58] Lazebnik S, Schmid C, Ponce J. Beyond bags of features: spatial pyramid matching for recognizing natural scene categories. In: Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06). New York, USA: IEEE, 2006. 2169-2178
    [59] 黄凯奇, 任伟强, 谭铁牛.图像物体分类与检测算法综述.计算机学报, 2014, 36(6):1225-1240 http://d.old.wanfangdata.com.cn/Periodical/jsjxb201406001

    Huang Kai-Qi, Ren Wei-Qiang, Tan Tie-Niu. A review on image object classification and detection. Chinese Journal of Computers, 2014, 36(6):1225-1240 http://d.old.wanfangdata.com.cn/Periodical/jsjxb201406001
    [60] Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S A, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 2015, 115(3):211-252 http://d.old.wanfangdata.com.cn/NSTLHY/NSTL_HYCC0214533907/
    [61] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, USA: Curran Associates Inc., 2012. 1097-1105
    [62] Zeiler M D, Fergus R. Visualizing and understanding convolutional networks. In: Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: Springer, 2014. 818-833
    [63] Szegedy C, Liu W, Jia Y Q, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, USA: IEEE, 2015. 1-9
    [64] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv: 1409.1556v6, 2015
    [65] He K M, Zhang X Y, Ren S Q, Sun J. Deep residual learning for image recognition. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016. 770-778
    [66] Hu J, Shen L, Sun G. Squeeze-and-excitation networks. arXiv: 1709.01507v2, 2017
    [67] Felzenszwalb P F, Girshick R B, McAllester D, Ramanan D. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(9):1627-1645 doi: 10.1109/TPAMI.2009.167
    [68] Girshick R, Donahue J, Darrell T, Malik J. Region-based convolutional networks for accurate object detection and segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(1):142-158 doi: 10.1109/TPAMI.2015.2437384
    [69] He K M, Zhang X Y, Ren S Q, Sun J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9):1904-1916 doi: 10.1109/TPAMI.2015.2389824
    [70] Girshick R. Fast R-CNN. In: Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV). Santiago, Chile: IEEE, 2015. 1440-1448
    [71] Ren S Q, He K M, Girshick R, Sun J. Faster R-CNN:towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6):1137-1149 doi: 10.1109/TPAMI.2016.2577031
    [72] He K M, Gkioxari G, Dollár P, Girshick R. Mask R-CNN. In: Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 2980-2988
    [73] Redmon J, Divvala S K, Girshick R, Farhadi A. You only look once: unified, real-Time object detection. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE, 2016. 779-788
    [74] Redmon J, Farhadi A. YOLO9000: better, faster, stronger. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE, 2017. 6517-6525
    [75] Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C Y, et al. SSD: single shot MultiBox detector. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016. 21-37
    [76] Najibi M, Rastegari M, Davis L S. G-CNN: an iterative grid based object detector. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016. 2369-2377
    [77] Kong T, Sun F C, Yao A B, Liu H P, Lu M, Chen Y R. RON: reverse connection with objectness prior networks for object detection. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE, 2017. 5244-5252
    [78] Weiss G M. Mining with rarity:a unifying framework. ACM SIGKDD Explorations Newsletter, 2004, 6(1):7-19 doi: 10.1145/1007730
    [79] He H B, Garcia E A. Learning from imbalanced data. IEEE Transactions on Knowledge and Data Engineering, 2009, 21(9):1263-1284 doi: 10.1109/TKDE.2008.239
    [80] Zhou Z H, Liu X Y. Training cost-sensitive neural networks with methods addressing the class imbalance problem. IEEE Transactions on Knowledge and Data Engineering, 2006, 18(1):63-77 doi: 10.1109/TKDE.2006.17
    [81] Chawla N V, Bowyer K W, Hall L O, Kegelmeyer W P. SMOTE:synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 2002, 16(1):321-357 http://d.old.wanfangdata.com.cn/Periodical/dianzixb200911024
    [82] Bunkhumpornpat C, Sinapiromsaran K, Lursinsap C. Safe-Level-SMOTE: safe-level-synthetic minority over-sampling technique for handling the class imbalanced problem. In: Proceedings of the 13th Pacific-Asia Conference on Knowledge Discovery and Data Mining. Bangkok, Thailand: Springer, 2009. 475-482
    [83] Yang P Y, Yoo P D, Fernando J, Zhou B B, Zhang Z L, Zomaya A Y. Sample subset optimization techniques for imbalanced and ensemble learning problems in bioinformatics applications. IEEE Transactions on Cybernetics, 2014, 44(3):445-455 doi: 10.1109/TCYB.2013.2257480
    [84] Al-Stouhi S, Reddy C K. Transfer learning for class imbalance problems with inadequate data. Knowledge and Information Systems, 2016, 48(1):201-228 doi: 10.1007/s10115-015-0870-3
    [85] Weiss K R, Khoshgoftaar T M. Comparing transfer learning and traditional learning under domain class imbalance. In: Proceedings of the 16th IEEE International Conference on Machine Learning and Applications (ICMLA). Cancun, Mexico: IEEE, 2017. 337-343
    [86] Wang K, Wu B. Power Equipment fault diagnosis model based on deep transfer learning with balanced distribution adaptation. In: Proceedings of the Advanced Data Mining and Applications. ADMA, Cham: Springer, 2018. 178-188
    [87] Su K M, Hairston W D, Robbins K A. Adaptive thresholding and reweighting to improve domain transfer learning for unbalanced data with applications to EEG imbalance. In: Proceedings of the 15th IEEE International Conference on Machine Learning and Applications (ICMLA). Anaheim, CA, USA: IEEE, 2016. 320-325
    [88] Zhang X S, Zhuang Y, Hu H S, Wang W. 3-D laser-based multiclass and multiview object detection in cluttered indoor scenes. IEEE Transactions on Neural Networks and Learning Systems, 2017, 28(1):177-190 doi: 10.1109/TNNLS.2015.2496195
    [89] Hsu T M H, Chen W Y, Hou C A, Tsai Y H H, Yeh Y R, Wang Y C F. Unsupervised domain adaptation with imbalanced cross-domain data. In: Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV). Santiago, Chile: IEEE, 2015. 4121-4129
    [90] Tsai Y H H, Hou C A, Chen W Y, Yeh Y R, Wang Y C F. Domain-constraint transfer coding for imbalanced unsupervised domain adaptation. In: Proceedings of the 30th AAAI Conference on Artificial Intelligence. Phoenix, Arizona, USA: AAAI Press, 2016. 3597-3603
    [91] Wang J D, Chen Y Q, Hao S J, Feng W J, Shen Z Q. Balanced distribution adaptation for transfer learning. In: Proceedings of the 2017 IEEE International Conference on Data Mining (ICDM). New Orleans, LA, USA: IEEE, 2017. 1129-1134
    [92] Zhang X S, Zhuang Y, Wang W, Pedrycz W. Transfer boosting with synthetic instances for class imbalanced object recognition. IEEE Transactions on Cybernetics, 2018, 48(1):357-370 doi: 10.1109/TCYB.2016.2636370
    [93] Saenko K, Kulis B, Fritz M, Darrell T. Adapting visual category models to new domains. In: Proceedings of the 11th European Conference on Computer Vision. Crete, Greece: Springer-Verlag, 2010. 213-226
    [94] Davis J V, Kulis B, Jain P, Sra S, Dhillon I S. Information-theoretic metric learning. In: Proceedings of the 24th International Conference on Machine Learning. Corvalis, Oregon, USA: ACM, 2007. 209-216
    [95] Hoffman J, Darrell T, Saenko K. Continuous manifold based adaptation for evolving visual domains. In: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, OH, USA: IEEE, 2014. 867-874
    [96] Xu J L, Ramos S, Vázquez D, López A M. Domain adaptation of deformable part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(12):2367-2380 doi: 10.1109/TPAMI.2014.2327973
    [97] Zhang X, Yu F X, Chang S F, Wang S J. Deep transfer network: unsupervised domain adaptation. arXiv: 1503.00591, 2015.
    [98] Yang X S, Zhang T Z, Xu C S, Yang M H. Boosted Multifeature learning for cross-domain transfer. ACM Transactions on Multimedia Computing, Communications, and Applications, 2015, 11(3): Article No. 35
    [99] Long M S, Cao Y, Wang J M, Jordan M I. Learning transferable features with deep adaptation networks. In: Proceedings of the 32nd International Conference on Machine Learning. Lille, France: ACM, 2015. 97-105
    [100] Yan K, Kou L, Zhang D. Learning domain-invariant subspace using domain features and independence maximization. arXiv: 1603.04535, 2016.
    [101] Long M S, Zhu H, Wang J M, Jordan M I. Unsupervised domain adaptation with residual transfer networks. In: Proceedings of the 30th Conference on Neural Information Processing Systems. Barcelona, Spain: ACM Press 2016. 136-144
    [102] Ganin Y, Ustinova E, Ajakan H, Germain P, Larochelle H, Laviolette F, et al. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 2016, 17(1):2096-2030 http://d.old.wanfangdata.com.cn/Periodical/chxb201712005
    [103] Long M S, Zhu H, Wang J M, Jordan M I. Deep transfer learning with joint adaptation networks. In: Proceedings of the 34th International Conference on Machine Learning. Sydney, Australia: ACM, 2017. 2208-2217
    [104] Chen Y H, Li W, Sakaridis C, Dai D X, Van Gool L. Domain adaptive faster R-CNN for object detection in the wild. In: Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, Utah, USA: IEEE, 2018. 3339-3348
    [105] Inoue N, Furuta R, Yamasaki T, Aizawa K. Cross-domain weakly-supervised object detection through progressive domain adaptation. In: Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, Utah, USA: IEEE, 2018. 5001-5009. DOI: 10.1109/CVPR.2018.00525
    [106] Kulis B, Saenko K, Darrell T. What you saw is not what you get: domain adaptation using asymmetric kernel transforms. In: Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition. Colorado Springs, RI, USA: IEEE, 2011. 1785-1792
    [107] Li W, Duan L X, Xu D, Tsang I W. Learning with augmented features for supervised and semi-supervised heterogeneous domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(6):1134-1148 doi: 10.1109/TPAMI.2013.167
    [108] Xiao M, Guo Y H. Feature space independent semi-supervised domain adaptation via kernel matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(1):54-66 doi: 10.1109/TPAMI.2014.2343216
    [109] Tsai Y H H, Yeh Y R, Wang Y C F. Learning cross-domain landmarks for heterogeneous domain adaptation. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE, 2016. 5081-5090
    [110] Zhao P L, Hoi S C H, Wang J L, Li B. Online transfer learning. Artificial Intelligence, 2014, 216:76-102 doi: 10.1016/j.artint.2014.06.003
    [111] Zhan Y S, Taylor M E. Online transfer learning in reinforcement learning domains. arXiv: 1507.00436, 2015.
    [112] Zhang X S, Zhuang Y, Wang W, Pedrycz W. Online feature transformation learning for cross-domain object category recognition. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29(7):2857-2871
    [113] Ghifary M, Kleijn W B, Zhang M J, Balduzzi D. Domain generalization for object recognition with multi-task autoencoders. In: Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV). Santiago, Chile: IEEE, 2015. 2551-2559
    [114] Fan J P, Zhao T Y, Kuang Z Z, Zheng Y, Zhang J, Yu J, et al. HD-MTL:hierarchical deep multi-task learning for large-scale visual recognition. IEEE Transactions on Image Processing, 2017, 26(4):1923-1938 doi: 10.1109/TIP.2017.2667405
    [115] Torralba A, Murphy K P, Freeman W T. Sharing visual features for multiclass and multiview object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(5):854-869 doi: 10.1109/TPAMI.2007.1055
    [116] Li X, Zhao L M, Wei L N, Yang M H, Wu F, Zhuang Y T, et al. Deepsaliency:multi-task deep neural network model for salient object detection. IEEE Transactions on Image Processing, 2016, 25(8):3919-3930 doi: 10.1109/TIP.2016.2579306
    [117] Kalogeiton V, Weinzaepfel P, Ferrari V, Schmid C. Joint learning of object and action detectors. In: Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 2001-2010
    [118] Chu W Q, Liu Y, Shen C, Cai D, Hua X S. Multi-task vehicle detection with region-of-interest voting. IEEE Transactions on Image Processing, 2018, 27(1):432-441 doi: 10.1109/TIP.2017.2762591
    [119] Lu X, Wang Y N, Zhou X Y, Zhang Z J, Ling Z G. Traffic sign recognition via multi-modal tree-structure embedded multi-task learning. IEEE Transactions on Intelligent Transportation Systems, 2017, 18(4):960-972 doi: 10.1109/TITS.2016.2598356
    [120] Zhang Y X, Du B, Zhang L P, Liu T L. Joint sparse representation and multitask learning for hyperspectral target detection. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(2):894-906 doi: 10.1109/TGRS.2016.2616649
    [121] Zhang T Z, Xu C S, Yang M H. Multi-task correlation particle filter for robust object tracking. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE, 2017. 4819-4827
    [122] Lim J J, Salakhutdinov R, Torralba A. Transfer learning by borrowing examples for multiclass object detection. In: Proceedings of 24th International Conference on Neural Information Processing Systems. Granada, Spain: ACM, 2011. 118-126
    [123] Malisiewicz T, Gupta A, Efros A A. Ensemble of exemplar-SVMs for object detection and beyond. In: Proceedings of the 2011 IEEE International Conference on Computer Vision. Barcelona, Spain: IEEE, 2011. 89-96
    [124] Aytar Y, Zisserman A. Enhancing exemplar SVMs using part level transfer regularization. In: Proceedings of the 2012 British Machine Vision Conference. Guildford, UK: BMVA, 2012. 1-11
    [125] Aytar Y, Zisserman A. Part level transfer regularization for enhancing exemplar SVMs. Computer Vision and Image Understanding, 2015, 138:114-123 doi: 10.1016/j.cviu.2015.04.004
    [126] Oquab M, Bottou L, Laptev I, Sivic J. Learning and transferring mid-level image representations using convolutional neural networks. In: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, OH, USA: IEEE, 2014. 1717-1724
    [127] Hoffman J, Gupta S, Leong J, Guadarrama S, Darrell T. Cross-modal adaptation for RGB-D detection. In: Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA). Stockholm, Sweden: IEEE, 2016. 5032-5039
    [128] Yosinski J, Clune J, Bengio Y, Lipson H. How transferable are features in deep neural networks? In: Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal, Quebec, Canada: ACM, 2014. 3320-3328
    [129] Tang Y J, Wu B, Peng L R, Liu C S. Semi-supervised transfer learning for convolutional neural network based Chinese character recognition. In: Proceedings of the 14th IAPR International Conference on Document Analysis and Recognition (ICDAR). Kyoto, Japan: IEEE, 2017. 441-447
    [130] Christodoulidis S, Anthimopoulos M, Ebner L, Christe A, Mougiakakou S. Multisource transfer learning with convolutional neural networks for lung pattern analysis. IEEE Journal of Biomedical and Health Informatics, 2017, 21(1):76-84 doi: 10.1109/JBHI.2016.2636929
    [131] Kalal Z, Mikolajczyk K, Matas J. Tracking-learning-detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(7):1409-1422 doi: 10.1109/TPAMI.2011.239
    [132] Hare S, Saffari A, Torr P H S. Struck: structured output tracking with kernels. In: Proceedings of the 2011 IEEE International Conference on Computer Vision. Barcelona, Spain: IEEE, 2011. 263-270
    [133] Milan A, Schindler K, Roth S. Multi-target tracking by discrete-continuous energy minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(10):2054-2068 doi: 10.1109/TPAMI.2015.2505309
    [134] Bolme D S, Beveridge J R, Draper B A, Lui Y M. Visual object tracking using adaptive correlation filters. In: Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco, USA: IEEE, 2010. 2544-2550
    [135] Henriques J F, Caseiro R, Martins P, Batista J. Exploiting the circulant structure of tracking-by-detection with kernels. In: Proceedings of the 12th European Conference on Computer Vision. Florence, Italy: Springer, 2012. 702-715
    [136] Henriques J F, Caseiro R, Martins P, Batista J. High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3):583-596 doi: 10.1109/TPAMI.2014.2345390
    [137] Danelljan M, Khan F S, Felsberg M, Van De Weijer J. Adaptive color attributes for real-time visual tracking. In: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, USA: IEEE, 2014. 1090-1097
    [138] Wang N Y, Yeung D Y. Learning a deep compact image representation for visual tracking. In: Proceedings of the 26th International Conference on Neural Information Processing Systems. Lake Tahoe, USA: ACM, 2013. 809-817
    [139] Wang N Y, Li S Y, Gupta A, Yeung D Y. Transferring rich feature hierarchies for robust visual tracking. arXiv: 1501.04587v2, 2015.
    [140] Wang L J, Ouyang W L, Wang X G, Lu H C. Visual tracking with fully convolutional networks. In: Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV). Santiago, Chile: IEEE, 2015. 3119-3127
    [141] Nam H, Han B. Learning multi-domain convolutional neural networks for visual tracking. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE, 2016. 4293-4302
    [142] Cui Z, Xiao S T, Feng J S, Yan S C. Recurrently target-attending tracking. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE, 2016. 1449-1458
Publication history
  • Received:  2018-02-12
  • Accepted:  2018-05-30
  • Published:  2019-07-20
