Survey on deep long-tailed learning

Han Jia-Yi, Liu Jian-Wei, Chen De-Hua, Xu Jing-Dong, Dai Qi, Xia Peng-Fei

Citation: Han Jia-Yi, Liu Jian-Wei, Chen De-Hua, Xu Jing-Dong, Dai Qi, Xia Peng-Fei. Survey on deep long-tailed learning. Acta Automatica Sinica, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c240077


doi: 10.16383/j.aas.c240077 cstr: 32138.14.j.aas.c240077

Survey on deep long-tailed learning

More Information
    Author Bio:

    HAN Jia-Yi Ph.D. candidate at the Department of Automation, College of Artificial Intelligence, China University of Petroleum, Beijing. Her research interests include deep long-tailed learning and computer vision. E-mail: 864494560@qq.com

    LIU Jian-Wei Associate Professor at the Department of Automation, College of Artificial Intelligence, China University of Petroleum, Beijing. He received the Ph.D. degree in control theory and control engineering from Donghua University in 2006. His research interests include pattern recognition and intelligent systems, machine learning, and the analysis, prediction and control of complex non-linear systems. E-mail: liujw@cup.edu.cn

    CHEN De-Hua Professor at the Department of Computer Science and Technology, Donghua University. His research interests include data science and deep learning. Corresponding author of this paper. E-mail: chendehua@dhu.edu.cn

    XU Jing-Dong Master student at the Department of Automation, College of Artificial Intelligence, China University of Petroleum, Beijing. His research interests include deep long-tailed learning and causal inference. E-mail: 2948473452@qq.com

    DAI Qi Ph.D. at the Department of Automation, College of Artificial Intelligence, China University of Petroleum, Beijing. He received his Ph.D. degree in control theory and control engineering from China University of Petroleum, Beijing in 2024. His research interests include data mining and machine learning

    XIA Peng-Fei Ph.D. candidate at the Department of Computer Science and Technology, Donghua University. E-mail: x6635570@163.com

  • Abstract: Deep learning is a data-driven science. Conventional deep learning methods assume that models are trained on balanced datasets, yet large-scale real-world datasets typically follow a long-tailed distribution: a few head classes with abundant samples dominate training, while the many tail classes have too few samples to be learned adequately. In recent years, long-tailed learning has attracted intense research interest and produced a large body of advanced work. This paper systematically reviews the recent literature published in leading conferences and journals and provides a comprehensive survey of long-tailed learning. Specifically, following the design pipeline of deep learning models, long-tailed learning algorithms for image recognition are divided into three categories: sample-space optimization methods that enrich sample quantity and semantic information; model optimization methods that focus on the four basic components of feature extractor, classifier, logits and loss function; and auxiliary-task learning methods that jointly optimize the long-tailed model in multiple spaces by introducing auxiliary tasks that assist training. The advantages and disadvantages of each category are comparatively analyzed under the proposed taxonomy. The narrow notion of long-tailed learning based on sample counts is then generalized to multi-scale, generalized long-tailed learning. In addition, long-tailed learning algorithms for other data modalities, such as text and speech, are briefly reviewed. Finally, current challenges such as poor interpretability and low data quality are discussed, and promising future directions such as multimodal and semi-supervised long-tailed learning are outlined.
  • Lung cancer is among the diseases with the highest incidence and mortality worldwide, accounting for about 18% of all cancer cases [1]. Statistics from the American Cancer Society show that 80% to 85% of lung cancers are non-small cell lung cancer (NSCLC) [2]. In this subtype, most patients develop lymph node metastasis, and the affected nodes must be dissected during surgery; at present, nodal status is usually confirmed by needle biopsy. Determining lymph node metastasis non-invasively would therefore provide valuable guidance for clinical treatment [3-5]. However, conventional diagnostic methods face great challenges in the non-invasive prediction of lymph node metastasis.

    Radiomics is an emerging approach to medical imaging that quantitatively characterizes tumor heterogeneity, constructing large numbers of texture features from medical images to support analysis and decision-making for clinical problems [6-7]. Radiomics implemented with advanced machine learning methods has greatly improved the accuracy of predicting whether a tumor is benign or malignant [8]. Studies have shown that an objective, quantitative description of image information, combined with clinical experience, yields better clinical guidance for preoperative prediction and prognostic analysis of tumors [9].

    This paper applies radiomics to the prediction of lymph node metastasis in NSCLC. A Lasso logistic regression (LLR) [10] model produces a baseline predicted probability of nodal metastasis; this radiomics probability is then treated as an independent biomarker and combined with the patient's clinical characteristics to build a multivariate logistic prediction model, from which a personalized nomogram is drawn to serve as an important reference in clinical decision-making.

    We collected 717 lung cancer cases treated at Guangdong General Hospital between May 2007 and June 2014. All patients signed informed consent and volunteered their data for research. To make full use of the collected data for predicting nodal metastasis, i.e., effectively distinguishing $N1-N3$ from $N0$, three inclusion criteria were applied: 1) age at least 18 years, when the lungs are fully developed, to remove a potential confounder; 2) a pathological diagnosis of NSCLC without interference from other diseases, together with complete contrast-enhanced CT (computed tomography) images and basic personal information; 3) an available preoperative pathological biopsy grading for determining the N stage. After screening, 564 cases met the requirements of the lymph node metastasis prediction study (Fig. 1).

    Fig. 1  Data filtering flow chart

    To obtain meaningful results while keeping the data split objective and guarding against cherry-picking, the training and test sets were separated chronologically, with January 2013 as the cut-off. This yielded a training set of 400 cases (243 positive samples, $N1-N3$; 157 negative samples, $N0$) and a test set of 164 cases (93 positive, 71 negative).

    Before feature extraction, the tumor lesion must be segmented. The gold standard for medical image segmentation is manual delineation by an experienced physician, but manual segmentation is not perfectly reproducible and is time- and labor-intensive, especially on large datasets, so it is not ideal. In this paper we use a toboggan-based automatic region-growing segmentation algorithm [11]. The algorithm first selects a seed point on the largest slice (normally the middle slice), takes an estimate of the tumor diameter as an input parameter, and automatically grows a region to obtain the tumor on that slice, as in Fig. 2(a1), (b1). It then slides to the adjacent slices above and below and repeats the region growing, until the tumor region has been segmented on every slice; stacking the per-slice results yields the 3D tumor, as in Fig. 2(a2), (b2).

    Fig. 2  3D tumor segmentation

    Using radiomics processing, a total of 386 features were extracted from the segmented tumor regions. They fall into four groups: 3D shape features, surface texture features, Gabor features, and wavelet features [12-13]. Shape features describe the tumor's spatial and planar properties through volume, surface area, volume-to-surface-area ratio, and similar quantities. Texture features capture tumor heterogeneity through the statistical distribution of voxel patterns along different 3D directions. Gabor features are texture information filtered at specific orientations and scales.

    Wavelet features are texture features of the image after wavelet-transform filtering. In pattern recognition, high-dimensional features increase computational complexity and are often redundant, which easily leads to overfitting. This paper therefore first reduces dimensionality through feature selection.

    This paper uses $L$1-regularized Lasso for feature selection. The simple linear regression model is defined as:

    $$ \begin{equation} f(x)=\sum\limits_{j=1}^p {w^jx^j} =w^\mathrm{T}x \end{equation} $$ (1)

    where $x$ denotes a sample, $w$ the parameters to be fitted, and $p$ the feature dimension.

    To learn the parameters $w$, a squared loss is used as the objective function:

    $$ \begin{equation} J(w)=\frac{1}{n}\sum\limits_{i=1}^n{(y_i-f(x_i)})^2= \frac{1}{n}\vert\vert\ {{y}-Xw\vert\vert}^2 \end{equation} $$ (2)

    where $X$ is the data matrix, $X=(x_1 , \cdots, x_n)^\mathrm{T}\in {\bf R}^{n\times p}$, and ${y}=(y_1, \cdots, y_n )^\mathrm{T}$ is the column vector of labels.

    The closed-form solution of Eq. (2) is:

    $$ \begin{equation} \hat{w}=(X^\mathrm{T}X)^{-1}X^\mathrm{T}{y} \end{equation} $$ (3)

    However, if $p\gg n$, i.e., the feature dimension far exceeds the number of samples, the matrix $X^\mathrm{T}X$ is not of full rank and Eq. (3) has no solution.
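This rank deficiency is easy to verify numerically; a minimal sketch with synthetic data (the feature count echoes the 386 radiomic features, while the sample count is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 386                    # far fewer samples than features
X = rng.normal(size=(n, p))

gram = X.T @ X                    # the p x p matrix X^T X from Eq. (3)
# rank(X^T X) = rank(X) <= n < p, so the Gram matrix is singular
print(np.linalg.matrix_rank(gram))  # 20
```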

    Adding the Lasso regularizer gives the objective function:

    $$ \begin{equation} J_L(w)=\frac{1}{n} \vert\vert{y}-Xw\vert\vert^2+\lambda\vert\vert w\vert\vert _1 \end{equation} $$ (4)

    Minimizing this objective is equivalent to:

    $$ \begin{equation} \mathop {\min }\limits_w \frac{1}{n} \vert\vert{y}-Xw\vert\vert^2, \, \, \, \, \, \, \, \mathrm{s.t.}\, \, \vert \vert w\vert \vert _1 \le C \end{equation} $$ (5)

    To exclude some features, this paper uses the $L$1 penalty for shrinkage. In two dimensions, the contours of the objective can be drawn in the $(w^1, w^2)$ plane, with the feasible region being the $L$1-norm ball of radius $C$; the optimum lies where a contour first touches the ball. The $L$1 ball has "corners" where it meets the coordinate axes, and solutions at the corners are sparse. In higher dimensions, the contact point between the contours and the $L$1 ball may also lie on an edge rather than a corner, which likewise produces sparsity. To solve Eq. (5) for $w$, this paper uses the proximal gradient descent algorithm [14], minimizing $J_L(w)=g(w)+R(w)$, where $g$ is the smooth quadratic loss and $R(w)=\lambda \vert\vert w\vert\vert_1$. At each iteration, $J_L(w)$ is approximated as follows:

    $$ \begin{align} J_L (w^t+d)&\approx \tilde {J}_{w^t} (d)=g(w^t)+\nabla g(w^t)^\mathrm{T}d\, +\nonumber\\ &\frac{1}{2}d^\mathrm{T}\Big(\frac{I }{ \alpha }\Big)d+R(w^t+d)=\nonumber\\ &g(w^t)+\nabla g(w^t)^\mathrm{T}d+\frac{{d^\mathrm{T}d} } {2\alpha } +\nonumber\\ &R(w^t+d) \end{align} $$ (6)

    The update is $w^{(t+1)}\leftarrow w^t+\mathrm{argmin}_d \tilde {J}_{w^t} (d)$. Since $R(w)$ is not differentiable everywhere, applying the subdifferential lemma gives:

    $$ \begin{align} w^{(t+1)}&=w^t+\mathop {\mathrm{argmin}}\limits_d \Big(\nabla g(w^t)^\mathrm{T}d+\frac{d^\mathrm{T}d}{2\alpha }+\lambda \vert \vert w^t+d\vert \vert _1\Big)=\nonumber\\ &\mathop {\mathrm{argmin}}\limits_u \frac{1 }{ 2}\vert \vert u-(w^t-\alpha \nabla g(w^t))\vert \vert ^2+\lambda \alpha \vert \vert u\vert \vert _1=\nonumber\\ &S(w^t-\alpha \nabla g(w^t), \lambda \alpha ) \end{align} $$ (7)

    where $S$ is the soft-thresholding operator, defined as:

    $$ \begin{equation} S(a, z)=\left\{\begin{array}{ll} a-z, &a>z \\ a+z, &a<-z \\ 0, &a\in [-z, z] \\ \end{array}\right. \end{equation} $$ (8)
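The operator in Eq. (8) has a compact elementwise form; a minimal sketch:

```python
import numpy as np

def soft_threshold(a, z):
    """Soft-thresholding operator S(a, z) of Eq. (8), applied elementwise:
    values inside [-z, z] are set to zero, the rest are shrunk toward zero."""
    return np.sign(a) * np.maximum(np.abs(a) - z, 0.0)

print(soft_threshold(np.array([3.0, -2.0, 0.5]), 1.0))  # [ 2. -1.  0.]
```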

    The full iterative procedure is:

    Input. Data $X\in {\bf R}^{n\times p}, {y}\in {\bf R}^n$; initialize $w^{(0)}$.

    Output. Parameters $w^\ast ={\rm argmin}_w\textstyle{1 \over n}\vert \vert Xw-{y}\vert \vert ^2+ \lambda \vert\vert w\vert \vert _1 $.

    1) Initialize the iteration counter $t = 0$;

    2) Compute the gradient $\nabla g=X^\mathrm{T}(Xw-{y})$;

    3) Choose a step size $\alpha ^t$;

    4) Update $w\leftarrow S(w-\alpha ^t\nabla g, \alpha ^t\lambda )$;

    5) Check for convergence or the maximum number of iterations; if not converged, set $t\leftarrow t+1$ and repeat steps 2)$\sim$5).

    These iterations yield the optimal parameters; any parameter falling inside the soft-threshold interval is set to zero, i.e., sparsified.
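The five steps above are the iterative soft-thresholding (ISTA) form of proximal gradient descent; a self-contained sketch on synthetic data (the fixed step size, dimensions, and regularization weight are illustrative choices, not values from the paper):

```python
import numpy as np

def soft_threshold(a, z):
    return np.sign(a) * np.maximum(np.abs(a) - z, 0.0)

def lasso_ista(X, y, lam, max_iter=500, tol=1e-6):
    """Minimize (1/n)||Xw - y||^2 + lam * ||w||_1 by proximal gradient descent."""
    n, p = X.shape
    # constant step size 1/L, where L is the Lipschitz constant of the gradient
    alpha = n / (2.0 * np.linalg.norm(X, 2) ** 2)
    w = np.zeros(p)
    for _ in range(max_iter):
        grad = (2.0 / n) * X.T @ (X @ w - y)        # step 2: gradient
        w_new = soft_threshold(w - alpha * grad, alpha * lam)  # step 4: prox update
        if np.linalg.norm(w_new - w) < tol:          # step 5: convergence check
            return w_new
        w = w_new
    return w

# synthetic check: only the first two features carry signal
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
y = 2.0 * X[:, 0] - 3.0 * X[:, 1] + 0.01 * rng.normal(size=100)
w = lasso_ista(X, y, lam=0.1)
print(np.sum(np.abs(w) > 1e-8))   # only a few coefficients survive; the rest are exactly zero
```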

    This paper uses LLR to reduce the dimensionality of the radiomic features and build the model, with 10-fold cross-validation to improve generalization; the workflow is shown in Fig. 3.

    Fig. 3  Structure of lymph node metastasis prediction model
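The modeling step, L1-penalized logistic regression evaluated with 10-fold cross-validation, can be sketched with scikit-learn (the paper used R's glmnet; the synthetic stand-in data, the `C` value, and the standardization step here are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# synthetic stand-in for the 386 radiomic features of the 400 training cases
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 386))
y = rng.integers(0, 2, size=400)   # 1 = N1-N3 (metastasis), 0 = N0

# L1-penalized logistic regression = Lasso logistic regression;
# C is the inverse of the regularization strength lambda
llr = make_pipeline(StandardScaler(),
                    LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
auc = cross_val_score(llr, X, y, cv=10, scoring="roc_auc")
print(auc.mean())   # near 0.5 here, since the stand-in features are pure noise
```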

    The predicted probability of the radiomics model (Rad-score) is treated as an independent biomarker and combined with the significant clinical variables to build a multivariate logistic model, from which a personalized nomogram is drawn; a calibration curve is then used to examine how far the predictions deviate.

    For each clinical variable, we computed the univariate P value against nodal metastasis on the training and test sets using the chi-square test (Table 1); smoking status and EGFR (epidermal growth factor receptor) mutation status were significantly associated with lymph node metastasis.

    Table 1  Basic information of patients in the training set and test set
    Variable | Training set (N=400) | P value | Test set (N=164) | P value
    Gender | 144 (36%) / 256 (64%) | 0.896 | 78 (47.6%) / 86 (52.4%) | 0.585
    Smoking | 126 (31.5%) / 274 (68.5%) | 0.030* | 45 (27.4%) / 119 (72.6%) | 0.081
    EGFR (missing / mutant / normal) | 36 (9%) / 138 (34.5%) / 226 (56.5%) | <0.001* | 4 (2.4%) / 67 (40.9%) / 93 (56.7%) | 0.112

    The radiomics score is the model's predicted output for each patient. As the number of features varies, the model's AUC (area under the curve) changes accordingly, as shown in Fig. 4; the Glmnet package in R plots the behavior of the parameter $\lambda $. The figure shows directly how $\lambda $ affects model performance; in this experiment the model selected three variables. In Fig. 5, the horizontal axis is $\lambda $ and the vertical axis the variable coefficients: as $\lambda $ grows, the coefficients shrink to zero, illustrating the variable selection process, with larger $\lambda $ meaning stronger shrinkage.

    Fig. 4  The trend of the parameters and the number of variables
    Fig. 5  The coefficient changes with the parameters

    The Lasso regression automatically compressed the variables to three, where Fig. 4 shows the model's AUC is optimal; the final features are listed in Table 2. $V0$ is the intercept; $V179$ is the contrast feature of the 90° co-occurrence matrix after transverse wavelet decomposition; $V230$ is the corresponding entropy feature.

    Table 2  Parameters selected by Lasso
    Parameter | Meaning | Value | P value
    $V0$ | Intercept | 2.079115 |
    $V179$ | Contrast of 90° co-occurrence matrix, transverse wavelet decomposition (Contrast_2_90) | 0.0000087 | <0.001***
    $V230$ | Entropy of 90° co-occurrence matrix, transverse wavelet decomposition (Entropy_3_180) | $-$3.573315 | <0.001***
    $V591$ | Surface to volume ratio | $-$1.411426 | <0.001***

    $V591$ is the surface-to-volume ratio. Univariate analysis of the three radiomic features against $N$ stage gave P values below 0.05, indicating a significant association with nodal metastasis. A logistic model was built on the three Lasso-selected variables and the Rad-score computed from it, as detailed in Eq. (9). SVM (support vector machine) and NB (naive Bayes) models were also built, trained, and evaluated for comparison. The LLR model achieved an AUC of 0.710 on the training set and 0.712 on the test set; comparing the three machine learning models in Table 3 shows that LLR performed best.

    Table 3  Comparison results of different methods
    Method | Training set (AUC) | Test set (AUC) | Recall
    LLR | 0.710 | 0.712 | 0.75
    SVM | 0.698 | 0.654 | 0.75
    NB | 0.718 | 0.681 | 0.74
    $$ \begin{equation} \begin{aligned} &\text{Rad-score}=2.328373+{\rm Contrast}\_2\_90\times\\ &\qquad 0.0000106 -{\rm entropy}\_3\_180\times 3.838207 +\\ &\qquad\text{Maximum 3D diameter}\times 0.0000002 -\\ &\qquad\text{Surface to volume ratio}\times 1.897416 \\ \end{aligned} \end{equation} $$ (9)
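Eq. (9) can be written directly as a scoring function; the argument names follow the features in the equation, and the example call with all-zero inputs is illustrative only:

```python
def rad_score(contrast_2_90, entropy_3_180, max_3d_diameter, surface_to_volume_ratio):
    """Radiomics score of Eq. (9): a linear combination of the selected features."""
    return (2.328373
            + contrast_2_90 * 0.0000106
            - entropy_3_180 * 3.838207
            + max_3d_diameter * 0.0000002
            - surface_to_volume_ratio * 1.897416)

print(rad_score(0.0, 0.0, 0.0, 0.0))  # intercept only: 2.328373
```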

    To demonstrate the clinical value of the nomogram, the Rad-score, smoking status, and EGFR mutation status were combined in the analysis and a personalized nomogram was drawn, as shown in Fig. 7. To obtain a final score for each patient, the points of the corresponding variables are summed and the total is mapped onto the probability axis, giving a personalized prediction of NSCLC lymph node metastasis. The model was evaluated with the concordance index ($C$-index), yielding a $C$-index of 0.724.

    Fig. 6  ROC curve of test set
    Fig. 7  Validation of the nomogram

    A calibration curve is used to verify the nomogram's predictions, as shown in Fig. 8; the predictions barely deviate from the true labels, so the model's predictive performance is reliable [15].

    Fig. 8  Consistency curves

    To build the NSCLC lymph-node-metastasis prediction model, LLR was used to select the radiomic features and construct a radiomics signature, which was then combined with the significant clinical features in a multivariate logistic model and visualized as a personalized nomogram. The LLR model achieved an AUC of 0.710 on the training set and 0.712 on the test set, and the nomogram reached a $C$-index of 0.724 (95% CI: 0.678 $\sim$ 0.770) with good calibration, so the personalized nomogram can serve as an important reference in clinical decision-making [16].
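Since the endpoint here is binary (metastasis vs. no metastasis), the reported C-index coincides with the ROC AUC of the predicted probabilities; a small numerical check of this equivalence on toy data (the data and predictions are assumptions, not the study's values):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                 # toy binary outcomes
p = y * 0.3 + rng.uniform(size=200) * 0.7        # toy predicted scores, weakly informative

auc = roc_auc_score(y, p)

# C-index by its pairwise definition: fraction of (positive, negative) pairs
# where the positive case receives the higher score (ties count half)
pos, neg = p[y == 1], p[y == 0]
diff = pos[:, None] - neg[None, :]
c_index = (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size
print(abs(auc - c_index))   # zero up to floating point
```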

  • Fig. 1  Organizational Structure Diagram of a Survey on Deep Long-Tail Learning Research
    Fig. 2  Illustration of Long-Tail Training Set
    Fig. 3  Illustration of Long-Tail Testing Set
    Fig. 4  Distributions of Common Long-Tail Datasets
    Fig. 6  Diagram of Neural Network Architecture
    Fig. 5  Current Status of Long-Tail Image Recognition Research
    Fig. 8  Diagram of Relationships Among Various Methods for Optimizing Sample Space
    Fig. 7  Diagram of Resampling
    Fig. 9  Diagram of Single Sample Transformation
    Fig. 10  Diagram of Multiple Sample Transformation Methods
    Fig. 11  Background Enhancement Diagram
    Fig. 12  Example Diagram of Semantic Enhancement
    Fig. 13  Optimized Model Space
    Fig. 14  Diagram of the Relationships Among Various Methods in Auxiliary Task Learning
    Fig. 15  Diagram of the Two-Stage Decoupled Learning Model[1]
    Fig. 16  Diagram of the Bilateral-Branch Network (BBN) Architecture[14]
    Fig. 17  Diagram of Range Loss[40]
    Fig. 18  Three-Stage Long-Tail Knowledge Distillation Model[147]
    Fig. 19  Diagram of Long-Tail Ensemble Learning Model
    Fig. 20  Example Diagram of Long-Tail Distribution of Inter-Class Sample Counts and Intra-Class Attributes

    Table 1  Basic Information of Common Long-Tail Datasets
    Task | Dataset | #Classes | #Train | #Test | Max class size | Min class size
    Image classification | CIFAR10-LT[13] | 10 | 50000 | 10000 | 5000 | 5 ($\rho$=100), 50 ($\rho$=10)
    Image classification | CIFAR100-LT[13] | 100 | 50000 | 10000 | 500 | 5 ($\rho$=100), 50 ($\rho$=10)
    Object detection | ImageNet-LT[62] | 1000 | 115846 | 50000 | 1280 | 5
    Scene recognition | Places-LT[62] | 365 | 62500 | 36500 | 4980 | 5
    Face recognition | MS1M-LT[62] | 74500 (ID) | 887530 | 3530598 | — | 1
    Object detection | iNaturalist 2017[63] | 5089 | 579184 | 182707 | 196613 | 381
    Object detection | iNaturalist 2018[63] | 8142 | 437513 | 24426 | 12755 | 119
    Instance segmentation | LVIS v0.5[64] | 1230 | 57000 | 20000 | 26148 | 1
    Instance segmentation | LVIS v1[64] | 1203 | 100170 | 19822 | 50552 | 1
    Scene understanding | SUN-LT[65] | 397 | 4084 | 2868 | 12 | 2
    Object detection | AWA-LT[65] | 50 | 6713 | 6092 | 720 | 2
    Bird recognition | CUB-LT[65] | 200 | 2945 | 2348 | 43 | 2 or 3
    Image classification | STL10-LT[66] | 10 | 5000 | 8000 | 500 | 5 ($\rho$=100), 50 ($\rho$=10)
    Object detection | VOC-LT[67] | 20 | 1142 | 4952 | 775 | 4
    Video classification | VideoLT[68] | 1004 | 179352 | 51244 | 1912 | 44

    Table 2  Comparison of Long-Tail Image Recognition Methods

    Sample-space optimization — Resampling [1, 2, 56, 80, 70, 82, 30, 169]
    Advantages: simple and general, theoretically intuitive, easy to apply.
    Disadvantages: 1) discards much useful head-class information; 2) repeatedly sampling tail classes adds no new information and easily causes overfitting; 3) tends to introduce extra noise.

    Sample-space optimization — Data augmentation [2, 8, 9, 15, 76, 88, 89, 94, 95]
    Advantages: sample-transformation methods are low-cost, flexible, and combine easily with other methods; semantic-augmentation methods enrich the semantics of tail samples and generate realistic new samples.
    Disadvantages: 1) sample transformation introduces much new data, raising training cost, and may generate meaningless samples with poor robustness; 2) semantic augmentation requires purpose-built model structures, is complex to operate, depends heavily on head-class data quality, and can introduce new bias.

    Model-space optimization — Optimizing the feature extractor [107, 108, 109, 111, 112, 170]
    Advantages: effectively strengthens contextual semantic features of samples and helps the model learn unbiased representations.
    Disadvantages: 1) introduces many parameters, consuming memory and slowing training; 2) poor interpretability.

    Model-space optimization — Optimizing the classifier [1, 16, 26, 113, 115, 116, 118, 119]
    Advantages: small computational overhead and stable training, with no need for extra loss functions or memory units.
    Disadvantages: 1) sensitive to the choice of hyperparameters and optimizer, making trial and error costly; 2) less flexible, performing poorly on object detection and instance segmentation.

    Model-space optimization — Logit adjustment [12, 28, 30, 55, 71, 120, 122]
    Advantages: can both shape training and serve as post-hoc correction; low overhead, good generalization, combines easily with other methods.
    Disadvantages: 1) depends on the prior distribution of the dataset; 2) the adjusted marginal distribution may not match the desired distribution.

    Model-space optimization — Cost-sensitive reweighted losses [11, 12, 54, 72, 127, 129, 133]
    Advantages: simple to implement with small computational overhead, suitable for practical scenarios.
    Disadvantages: 1) hard to optimize and parameter-sensitive, struggling with large-scale real-world scenarios; 2) head and tail performance trade off like a seesaw, so the fundamental lack of information is not solved.

    Auxiliary task learning — Decoupled learning [1, 14, 134, 135, 138, 139]
    Advantages: exploits abundant head-class data to learn well-generalizing representations, effectively improving performance at low computational cost.
    Disadvantages: 1) the two-stage pipeline hinders end-to-end training and deployment; 2) strong dependence on the data; 3) requires redesign when combined with other algorithms, limiting practicality.

    Auxiliary task learning — Metric learning [40, 58, 59, 127, 145, 149, 151]
    Advantages: easy to formulate and compute; builds a feature space that pulls positive samples together and pushes negative samples apart, refining decision boundaries.
    Disadvantages: 1) performs poorly when tail classes have extremely few samples; 2) depends on the design of the metric loss function.

    Auxiliary task learning — Knowledge distillation [17, 19, 36, 145, 147, 154]
    Advantages: reuses model resources and fully exploits the knowledge contained in the dataset; stabilizes tail-class learning.
    Disadvantages: 1) large computational overhead, relatively high optimization cost, sensitive to hyperparameters; 2) prone to teacher-student mismatch, with overall performance overly dependent on the teacher model.

    Auxiliary task learning — Ensemble learning [18, 19, 20, 158, 159, 161]
    Advantages: maintains good performance on both head and tail classes; generalizes well and can handle test sets with unknown distributions.
    Disadvantages: 1) heavy computation and storage burden, complex framework deployment; 2) experts influence one another and are hard to integrate effectively.

    Auxiliary task learning — Hierarchical learning [23, 24, 25, 162]
    Advantages: models relations among data at multiple granularities, capturing implicit inter-class semantics and aiding head-to-tail knowledge transfer.
    Disadvantages: 1) complex model design and high training cost; 2) relies on high-quality data and sometimes on external dataset information; 3) the hierarchy-partitioning step has an outsized influence on subsequent training.
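As an illustration of the logit-adjustment family in Table 2 (refs [12, 28, 30, 55, 71, 120, 122]), below is a minimal post-hoc adjustment sketch; the temperature `tau`, the toy class counts, and the raw scores are assumptions, not values from any cited work:

```python
import numpy as np

def adjust_logits(logits, class_counts, tau=1.0):
    """Post-hoc logit adjustment: subtract tau * log(prior) per class,
    shifting the decision boundary in favor of tail classes."""
    prior = class_counts / class_counts.sum()
    return logits - tau * np.log(prior)

# toy 3-class long-tailed prior: head / middle / tail
counts = np.array([1000.0, 100.0, 10.0])
logits = np.array([2.0, 1.9, 1.8])   # raw scores slightly favor the head class

print(np.argmax(logits))                          # 0: head class wins before adjustment
print(np.argmax(adjust_logits(logits, counts)))   # 2: tail class wins after adjustment
```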
  • [1] Kang B, Xie S, Rohrbach M, et al. Decoupling representation and classifier for long-tailed recognition[J]. arXiv preprint arXiv: 1910.09217, 2019.
    [2] Zhang Y, Wei X S, Zhou B, et al. Bag of tricks for long-tailed visual recognition with deep convolutional neural networks[C]//Proceedings of the AAAI conference on artificial intelligence. 2021, 35(4): 3447−3455.
    [3] Wang J, Zhang W, Zang Y, et al. Seesaw loss for long-tailed instance segmentation[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 9695−9704.
    [4] Fu Y, Xiang L, Zahid Y, et al. Long-tailed visual recognition with deep models: A methodological survey and evaluation. Neurocomputing, 2022
    [5] Yang L, Jiang H, Song Q, et al. A survey on long-tailed visual recognition. International Journal of Computer Vision, 2022, 130(7): 1837−1872 doi: 10.1007/s11263-022-01622-8
    [6] Drummond C, Holte R C. C4.5, class imbalance, and cost sensitivity: why under-sampling beats over-sampling[C]//Workshop on Learning from Imbalanced Datasets II. 2003, 11: 1−8.
    [7] Shen L, Lin Z, Huang Q. Relay backpropagation for effective learning of deep convolutional neural networks[C]//Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part VII 14. Springer International Publishing, 2016: 467−482.
    [8] Chou H P, Chang S C, Pan J Y, et al. Remix: rebalanced mixup[C]//Computer Vision–ECCV 2020 Workshops: Glasgow, UK, August 23–28, 2020, Proceedings, Part VI 16. Springer International Publishing, 2020: 95−110.
    [9] Kim J, Jeong J, Shin J. M2m: Imbalanced classification via major-to-minor translation[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 13896−13905.
    [10] Chu P, Bian X, Liu S, et al. Feature space augmentation for long-tailed data[C]//Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIX 16. Springer International Publishing, 2020: 694−710.
    [11] Cui Y, Jia M, Lin T Y, et al. Class-balanced loss based on effective number of samples[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019: 9268−9277.
    [12] Tan J, Wang C, Li B, et al. Equalization loss for long-tailed object recognition[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 11662−11671.
    [13] Cao K, Wei C, Gaidon A, et al. Learning imbalanced datasets with label-distribution-aware margin loss. Advances in Neural Information Processing Systems, 2019, 32.
    [14] Zhou B, Cui Q, Wei X S, et al. Bbn: Bilateral-branch network with cumulative learning for long-tailed visual recognition[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 9719−9728.
    [15] Zhou A, Tajwar F, Robey A, et al. Do deep networks transfer invariances across classes?[J]. arXiv preprint arXiv: 2203.09739, 2022.
    [16] Liu B, Li H, Kang H, et al. Gistnet: a geometric structure transfer network for long-tailed recognition[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2021: 8209−8218.
    [17] Zhang S, Chen C, Hu X, et al. Balanced knowledge distillation for long-tailed learning. Neurocomputing, 2023, 527: 36−46 doi: 10.1016/j.neucom.2023.01.063
    [18] Sharma S, Yu N, Fritz M, et al. Long-tailed recognition using class-balanced experts[C]//Pattern Recognition: 42nd DAGM German Conference, DAGM GCPR 2020, Tübingen, Germany, September 28–October 1, 2020, Proceedings 42. Springer International Publishing, 2021: 86−100.
    [19] Xiang L, Ding G, Han J. Learning from multiple experts: Self-paced knowledge distillation for long-tailed classification[C]//Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V 16. Springer International Publishing, 2020: 247−263.
    [20] Cai J, Wang Y, Hwang J N. Ace: Ally complementary experts for solving long-tailed recognition in one-shot[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 112−121.
    [21] Cai J, Wang Y, Hsu H M, et al. Luna: Localizing unfamiliarity near acquaintance for open-set long-tailed recognition[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2022, 36(1): 131−139.
    [22] Liu X, Zhang J, Hu T, et al. Inducing Neural Collapse in Deep Long-tailed Learning[C]//International Conference on Artificial Intelligence and Statistics. PMLR, 2023: 11534−11544.
    [23] Wu J, Song L, Zhang Q, et al. Forestdet: Large-vocabulary long-tailed object detection and instance segmentation. IEEE Transactions on Multimedia, 2021, 24: 3693−3705
    [24] Ouyang W, Wang X, Zhang C, et al. Factors in finetuning deep model for object detection with long-tail distribution[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 864−873.
    [25] Li B. Adaptive Hierarchical Representation Learning for Long-Tailed Object Detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 2313−2322.
    [26] Tang K, Huang J, Zhang H. Long-tailed classification by keeping the good and removing the bad momentum causal effect. Advances in Neural Information Processing Systems, 2020, 33: 1513−1524
    [27] Zhu B, Niu Y, Hua X S, et al. Cross-domain empirical risk minimization for unbiased long-tailed classification[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2022, 36(3): 3589−3597.
    [28] Wu T, Liu Z, Huang Q, et al. Adversarial robustness under long-tailed distribution[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 8659−8668.
    [29] Wang Y X, Ramanan D, Hebert M. Learning to model the tail. Advances in Neural Information Processing Systems, 2017, 30.
    [30] Ren J, Yu C, Ma X, et al. Balanced meta-softmax for long-tailed visual recognition. Advances in neural information processing systems, 2020, 33: 4175−4186
    [31] Dong B, Zhou P, Yan S, et al. Lpt: Long-tailed prompt tuning for image classification[J]. arXiv preprint arXiv: 2210.01033, 2022.
    [32] Tang K, Tao M, Qi J, et al. Invariant feature learning for generalized long-tailed classification[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 709−726.
    [33] Zhang R, Haihong E, Yuan L, et al. MBNM: multi-branch network based on memory features for long-tailed medical image recognition. Computer Methods and Programs in Biomedicine, 2021, 212: 106448 doi: 10.1016/j.cmpb.2021.106448
    [34] Ju L, Yu Z, Wang L, et al. Hierarchical Knowledge Guided Learning for Real-world Retinal Disease Recognition. IEEE Transactions on Medical Imaging, 2023
    [35] Yang Z, Pan J, Yang Y, et al. Proco: Prototype-aware contrastive learning for long-tailed medical image classification[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer Nature Switzerland, 2022: 173−182.
    [36] Zhao W, Liu J, Liu Y, et al. Teaching teachers first and then student: Hierarchical distillation to improve long-tailed object recognition in aerial images. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 1−12
    [37] Li G, Pan L, Qiu L, et al. A Two-Stage Shake-Shake Network for Long-Tailed Recognition of SAR Aerial View Objects[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 249−256.
    [38] Jiao W, Zhang J. Sonar Images Classification While Facing Long-Tail and Few-Shot. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 1−20
    [39] Guo S, Liu R, Wang M, et al. Exploiting the Tail Data for Long-Tailed Face Recognition. IEEE Access, 2022, 10: 97945−97953 doi: 10.1109/ACCESS.2022.3206040
    [40] Zhang X, Fang Z, Wen Y, et al. Range loss for deep face recognition with long-tail[J]. arXiv preprint arXiv: 1611.08976, 2016.
    [41] Moon W J, Seong H S, Heo J P. Minority-Oriented Vicinity Expansion with Attentive Aggregation for Video Long-Tailed Recognition[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2023, 37(2): 1931−1939.
    [42] Zhang C, Ren L, Wang J, et al. Making Pre-trained Language Models Good Long-tailed Learners[J]. arXiv preprint arXiv: 2205.05461, 2022.
    [43] Li Y, Shen T, Long G, et al. Improving long-tail relation extraction with collaborating relation-augmented attention[J]. arXiv preprint arXiv: 2010.03773, 2020.
    [44] Huang Y, Giledereli B, Köksal A, et al. Balancing methods for multi-label text classification with long-tailed class distribution[J]. arXiv preprint arXiv: 2109.04712, 2021.
    [45] Li X, Sun X, Meng Y, et al. Dice loss for data-imbalanced NLP tasks[J]. arXiv preprint arXiv: 1911.02855, 2019.
    [46] Conde M V, Choi U J. Few-shot long-tailed bird audio recognition[J]. arXiv preprint arXiv: 2206.11260, 2022.
    [47] Chen Z, Chen J, Xie Z, et al. Multi-expert Attention Network with Unsupervised Aggregation for long-tailed fault diagnosis under speed variation. Knowledge-Based Systems, 2022, 252: 109393 doi: 10.1016/j.knosys.2022.109393
    [48] Sreepada R S, Patra B K. Mitigating long tail effect in recommendations using few shot learning technique. Expert Systems with Applications, 2020, 140: 112887 doi: 10.1016/j.eswa.2019.112887
    [49] Chaudhary A, Gupta H P, Shukla K K. Real-Time Activities of Daily Living Recognition Under Long-Tailed Class Distribution. IEEE Transactions on Emerging Topics in Computational Intelligence, 2022, 6(4): 740−750 doi: 10.1109/TETCI.2022.3150757
    [50] Zhang Y, Kang B, Hooi B, et al. Deep long-tailed learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
    [51] Pareto V. Cours d'économie politique[M]. Librairie Droz, 1964.
    [52] Zipf G K. The meaning-frequency relationship of words. The Journal of general psychology, 1945, 33(2): 251−256 doi: 10.1080/00221309.1945.10544509
    [53] Hitt M A. The long tail: Why the future of business is selling less of more[J]. 2007.
    [54] Tan J, Lu X, Zhang G, et al. Equalization loss v2: A new gradient balance approach for long-tailed object detection[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 1685−1694.
    [55] Zhang S, Li Z, Yan S, et al. Distribution alignment: A unified framework for long-tail visual recognition[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 2361−2370.
    [56] Sinha S, Ohashi H, Nakamura K. Class-difficulty based methods for long-tailed visual recognition. International Journal of Computer Vision, 2022, 130(10): 2517−2531 doi: 10.1007/s11263-022-01643-3
    [57] Cui J, Zhong Z, Liu S, et al. Parametric contrastive learning[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2021: 715−724.
    [58] Kang B, Li Y, Xie S, et al. Exploring balanced feature spaces for representation learning[C]//International Conference on Learning Representations. 2020.
    [59] Li T, Cao P, Yuan Y, et al. Targeted supervised contrastive learning for long-tailed recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 6918−6928.
    [60] Ye Zhi-Fei, Wen Yi-Min, Lü Bao-Liang. A survey of imbalanced classification. CAAI Transactions on Intelligent Systems, 2009, 4(002): 148−156 doi: 10.3969/j.issn.1673-4785.2009.02.010
    [61] Zhao Kai-Lin, Jin Xiao-Long, Wang Yuan-Zhuo. Survey on few-shot learning. Journal of Software, 2020, 32(2): 349−369
    [62] Liu Z, Miao Z, Zhan X, et al. Large-scale long-tailed recognition in an open world[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019: 2537−2546.
    [63] Van Horn G, Mac Aodha O, Song Y, et al. The inaturalist species classification and detection dataset[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 8769−8778.
    [64] Gupta A, Dollar P, Girshick R. Lvis: A dataset for large vocabulary instance segmentation[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019: 5356−5364.
    [65] Samuel D, Atzmon Y, Chechik G. From generalized zero-shot learning to long-tail with class descriptors[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021: 286−295.
    [66] Oh Y, Kim D J, Kweon I S. Daso: Distribution-aware semantics-oriented pseudo-label for imbalanced semi-supervised learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 9786−9796.
    [67] Wu T, Huang Q, Liu Z, et al. Distribution-balanced loss for multi-label classification in long-tailed datasets[C]//Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IV 16. Springer International Publishing, 2020: 162−178.
    [68] Zhang X, Wu Z, Weng Z, et al. Videolt: Large-scale long-tailed video recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 7960−7969.
    [69] Feng C, Zhong Y, Huang W. Exploring classification equilibrium in long-tailed object detection[C]//Proceedings of the IEEE/CVF International conference on computer vision. 2021: 3417−3426.
    [70] Shrivastava A, Gupta A, Girshick R. Training region-based object detectors with online hard example mining[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 761−769.
    [71] Zhao Y, Chen W, Tan X, et al. Adaptive logit adjustment loss for long-tailed visual recognition[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2022, 36(3): 3472−3480.
    [72] Chen Z, Casser V, Kretzschmar H, et al. GradTail: learning long-tailed data using gradient-based sample weighting[J]. arXiv preprint arXiv: 2201.05938, 2022.
    [73] Wah C, Branson S, Welinder P, et al. The caltech-ucsd birds-200-2011 dataset[J]. 2011.
    [74] Zhou B, Lapedriza A, Khosla A, et al. Places: A 10 million image database for scene recognition. IEEE transactions on pattern analysis and machine intelligence, 2017, 40(6): 1452−1464
    [75] Coates A, Ng A, Lee H. An analysis of single-layer networks in unsupervised feature learning[C]//Proceedings of the fourteenth international conference on artificial intelligence and statistics. JMLR Workshop and Conference Proceedings, 2011: 215−223.
    [76] Zang Y, Huang C, Loy C C. Fasa: Feature augmentation and sampling adaptation for long-tailed instance segmentation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 3457−3466.
    [77] Park S, Hong Y, Heo B, et al. The majority can help the minority: Context-rich minority oversampling for long-tailed classification[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 6887−6896.
    [78] Li B, Han Z, Li H, et al. Trustworthy long-tailed classification[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 6970−6979.
    [79] Wang T, Zhu Y, Chen Y, et al. C2am loss: Chasing a better decision boundary for long-tail object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 6980−6989.
    [80] Buda M, Maki A, Mazurowski M A. A systematic study of the class imbalance problem in convolutional neural networks. Neural networks, 2018, 106: 249−259 doi: 10.1016/j.neunet.2018.07.011
    [81] Haixiang G, Yijing L, Shang J, et al. Learning from class-imbalanced data: Review of methods and applications. Expert systems with applications, 2017, 73: 220−239 doi: 10.1016/j.eswa.2016.12.035
    [82] Chawla N V, Bowyer K W, Hall L O, et al. SMOTE: synthetic minority over-sampling technique. Journal of artificial intelligence research, 2002, 16: 321−357 doi: 10.1613/jair.953
    [83] Jaiswal A, Babu A R, Zadeh M Z, et al. A survey on contrastive self-supervised learning. Technologies, 2020, 9(1): 2 doi: 10.3390/technologies9010002
    [84] Zhang H, Cisse M, Dauphin Y N, et al. mixup: Beyond empirical risk minimization[J]. arXiv preprint arXiv: 1710.09412, 2017.
    [85] Yun S, Han D, Oh S J, et al. Cutmix: Regularization strategy to train strong classifiers with localizable features[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 6023−6032.
    [86] Verma V, Lamb A, Beckham C, et al. Manifold mixup: Better representations by interpolating hidden states[C]//International conference on machine learning. PMLR, 2019: 6438−6447.
    [87] Zhang S, Chen C, Zhang X, et al. Label-occurrence-balanced mixup for long-tailed recognition[C]//ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022: 3224−3228.
    [88] Zhang C, Pan T Y, Li Y, et al. Mosaicos: a simple and effective use of object-centric images for long-tailed object detection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 417−427.
    [89] Liu B, Li H, Kang H, et al. Breadcrumbs: Adversarial class-balanced sampling for long-tailed recognition[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 637−653.
    [90] Liu J, Li W, Sun Y. Memory-based jitter: Improving visual recognition on long-tailed data with diversity in memory[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2022, 36(2): 1720−1728.
    [91] Kingma D P, Welling M. An introduction to variational autoencoders. Foundations and Trends® in Machine Learning, 2019, 12(4): 307−392
    [92] Rangwani H, Jaswani N, Karmali T, et al. Improving GANs for Long-Tailed Data Through Group Spectral Regularization[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 426−442.
    [93] Rodriguez M G, Balduzzi D, Schölkopf B. Uncovering the temporal dynamics of diffusion networks[J]. arXiv preprint arXiv: 1105.0697, 2011.
    [94] Liu B, Li H, Kang H, et al. Semi-supervised long-tailed recognition using alternate sampling[J]. arXiv preprint arXiv: 2105.00133, 2021.
    [95] Wei C, Sohn K, Mellina C, et al. Crest: A class-rebalancing self-training framework for imbalanced semi-supervised learning[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 10857−10866.
    [96] Wang W, Zhao Z, Wang P, et al. Attentive feature augmentation for long-tailed visual recognition. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(9): 5803−5816 doi: 10.1109/TCSVT.2022.3161427
    [97] Wang Y, Pan X, Song S, et al. Implicit semantic data augmentation for deep networks. Advances in Neural Information Processing Systems, 2019, 32
    [98] Li S, Gong K, Liu C H, et al. Metasaug: Meta semantic augmentation for long-tailed visual recognition[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 5212−5221.
    [99] Zhao Y, Chen W, Huang K, et al. Feature Re-Balancing for Long-Tailed Visual Recognition[C]//2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022: 1−8.
    [100] Vigneswaran R, Law M T, Balasubramanian V N, et al. Feature generation for long-tail classification[C]//Proceedings of the twelfth Indian conference on computer vision, graphics and image processing. 2021: 1−9.
    [101] Liu J, Sun Y, Han C, et al. Deep representation learning on long-tailed data: A learnable embedding augmentation perspective[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 2970−2979.
    [102] LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11): 2278−2324 doi: 10.1109/5.726791
    [103] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770−778.
    [104] Xie S, Girshick R, Dollár P, et al. Aggregated residual transformations for deep neural networks[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 1492−1500.
    [105] He K, Gkioxari G, Dollár P, et al. Mask r-cnn[C]//Proceedings of the IEEE international conference on computer vision. 2017: 2961−2969.
    [106] Ren S, He K, Girshick R, et al. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 2015, 28
    [107] Long A, Yin W, Ajanthan T, et al. Retrieval augmented classification for long-tail visual recognition[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 6959−6969.
    [108] Zhou J, Li J, Yan Y, et al. Mixing Global and Local Features for Long-Tailed Expression Recognition. Information, 2023, 14(2): 83 doi: 10.3390/info14020083
    [109] Zhao W, Su Y, Hu M, et al. Hybrid ResNet based on joint basic and attention modules for long-tailed classification. International Journal of Approximate Reasoning, 2022, 150: 83−97 doi: 10.1016/j.ijar.2022.08.007
    [110] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. Advances in neural information processing systems, 2017, 30
    [111] Chen J, Agarwal A, Abdelkarim S, et al. Reltransformer: A transformer-based long-tail visual relationship recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 19507−19517.
    [112] Hou Z, Yu B, Tao D. Batchformer: Learning to explore sample relationships for robust representation learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 7256−7266.
    [113] Ye H J, Chen H Y, Zhan D C, et al. Identifying and compensating for feature deviation in imbalanced deep learning[J]. arXiv preprint arXiv: 2001.01385, 2020.
    [114] Djouadi A, Bouktache E. A fast algorithm for the nearest-neighbor classifier. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, 19(3): 277−282 doi: 10.1109/34.584107
    [115] Wei X S, Xu S L, Chen H, et al. Prototype-based classifier learning for long-tailed visual recognition. Science China Information Sciences, 2022, 65(6): 160105 doi: 10.1007/s11432-021-3489-1
    [116] Parisot S, Esperança P M, McDonagh S, et al. Long-tail recognition via compositional knowledge transfer[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 6939−6948.
    [117] Wu T Y, Morgado P, Wang P, et al. Solving long-tailed recognition with deep realistic taxonomic classifier[C]//Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part VIII 16. Springer International Publishing, 2020: 171−189.
    [118] Jia Y, Peng X, Wang R, et al. Long-tailed Partial Label Learning by Head Classifier and Tail Classifier Cooperation[J]. 2024.
    [119] Duggal R, Freitas S, Dhamnani S, et al. Elf: An early-exiting framework for long-tailed classification[J]. arXiv preprint arXiv: 2006.11979, 2020.
    [120] Menon A K, Jayasumana S, Rawat A S, et al. Long-tail learning via logit adjustment[J]. arXiv preprint arXiv: 2007.07314, 2020.
    [121] Wang Y, Zhang B, Hou W, et al. Margin calibration for long-tailed visual recognition[C]//Asian Conference on Machine Learning. PMLR, 2023: 1101−1116.
    [122] Li M, Cheung Y, Lu Y. Long-tailed visual recognition via gaussian clouded logit adjustment[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 6929−6938.
    [123] Hong Y, Han S, Choi K, et al. Disentangling label distribution for long-tailed visual recognition[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 6626−6636.
    [124] Xu Z, Yang S, Wang X, et al. Rethink Long-Tailed Recognition with Vision Transforms[C]//ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023: 1−5.
    [125] He Y Y, Zhang P, Wei X S, et al. Relieving long-tailed instance segmentation via pairwise class balance[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 7000−7009.
    [126] Long H, Zhang X, Liu Y, et al. Mutual Exclusive Modulator for Long-Tailed Recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 4890−4899.
    [127] Huang C, Li Y, Loy C C, et al. Learning deep representation for imbalanced classification[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 5375−5384.
    [128] Jamal M A, Brown M, Yang M H, et al. Rethinking class-balanced methods for long-tailed visual recognition from a domain adaptation perspective[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 7610−7619.
    [129] Hsieh T I, Robb E, Chen H T, et al. Droploss for long-tail instance segmentation[C]//Proceedings of the AAAI conference on artificial intelligence. 2021, 35(2): 1549−1557.
    [130] Park S, Lim J, Jeon Y, et al. Influence-balanced loss for imbalanced visual classification[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 735−744.
    [131] Lin T Y, Goyal P, Girshick R, et al. Focal loss for dense object detection[C]//Proceedings of the IEEE international conference on computer vision. 2017: 2980−2988.
    [132] Smith L N. Cyclical focal loss[J]. arXiv preprint arXiv: 2202.08978, 2022.
    [133] Li B, Yao Y, Tan J, et al. Equalized focal loss for dense long-tailed object detection[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 6990−6999.
    [134] Wang T, Li Y, Kang B, et al. The devil is in classification: A simple framework for long-tail instance segmentation[C]//Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV 16. Springer International Publishing, 2020: 728−744.
    [135] Li Y, Wang T, Kang B, et al. Overcoming classifier imbalance for long-tail object detection with balanced group softmax[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 10991−11000.
    [136] Zhong Z, Cui J, Liu S, et al. Improving calibration for long-tailed recognition[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 16489−16498.
    [137] Fan S, Zhang X, Song Z, et al. Cumulative dual-branch network framework for long-tailed multi-class classification. Engineering Applications of Artificial Intelligence, 2022, 114: 105080 doi: 10.1016/j.engappai.2022.105080
    [138] Guo H, Wang S. Long-tailed multi-label visual recognition by collaborative training on uniform and re-balanced samplings[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 15089−15098.
    [139] Bengio Y, Lamblin P, Popovici D, et al. Greedy layer-wise training of deep networks. Advances in neural information processing systems, 2006, 19
    [140] Alshammari S, Wang Y X, Ramanan D, et al. Long-tailed recognition via weight balancing[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 6897−6907.
    [141] Zhu Z, Xing H, Xu Y. Easy balanced mixing for long-tailed data. Knowledge-Based Systems, 2022, 248: 108816 doi: 10.1016/j.knosys.2022.108816
    [142] Yang Y, Xu Z. Rethinking the value of labels for improving class-imbalanced learning. Advances in neural information processing systems, 2020, 33: 19290−19301
    [143] Liu X, Hu Y S, Cao X S, et al. Long-tailed class incremental learning[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 495−512.
    [144] Deng J, Dong W, Socher R, et al. Imagenet: A large-scale hierarchical image database[C]//2009 IEEE conference on computer vision and pattern recognition. IEEE, 2009: 248−255.
    [145] Wang Y, Gan W, Yang J, et al. Dynamic curriculum learning for imbalanced data classification[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 5017−5026.
    [146] Wei T, Shi J X, Tu W W, et al. Robust long-tailed learning under label noise[J]. arXiv preprint arXiv: 2108.11569, 2021.
    [147] Li T, Wang L, Wu G. Self supervision to distillation for long-tailed visual recognition[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2021: 630−639.
    [148] Chen T, Kornblith S, Norouzi M, et al. A simple framework for contrastive learning of visual representations[C]//International conference on machine learning. PMLR, 2020: 1597−1607.
    [149] Wang P, Han K, Wei X S, et al. Contrastive learning based hybrid networks for long-tailed image classification[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 943−952.
    [150] Fu S, Chu H, He X, et al. Meta-prototype Decoupled Training for Long-Tailed Learning[C]//Proceedings of the Asian Conference on Computer Vision. 2022: 569−585.
    [151] Zhong Z, Cui J, Li Z, et al. Rebalanced Siamese Contrastive Mining for Long-Tailed Recognition[J]. arXiv preprint arXiv: 2203.11506, 2022.
    [152] Zhu J, Wang Z, Chen J, et al. Balanced contrastive learning for long-tailed visual recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 6908−6917.
    [153] Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network[J]. arXiv preprint arXiv: 1503.02531, 2015.
    [154] Iscen A, Araujo A, Gong B, et al. Class-balanced distillation for long-tailed visual recognition[J]. arXiv preprint arXiv: 2104.05279, 2021.
    [155] He Y Y, Wu J, Wei X S. Distilling virtual examples for long-tailed recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 235−244.
    [156] Xia Y, Zhang S, Wang J, et al. One-stage self-distillation guided knowledge transfer for long-tailed visual recognition. International Journal of Intelligent Systems, 2022, 37(12): 11893−11908 doi: 10.1002/int.23068
    [157] Yang C Y, Hsu H M, Cai J, et al. Long-tailed recognition of sar aerial view objects by cascading and paralleling experts[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 142−148.
    [158] Cui J, Liu S, Tian Z, et al. Reslt: Residual learning for long-tailed recognition. IEEE transactions on pattern analysis and machine intelligence, 2022, 45(3): 3695−3706
    [159] Wang X, Lian L, Miao Z, et al. Long-tailed recognition by routing diverse distribution-aware experts[J]. arXiv preprint arXiv: 2010.01809, 2020.
    [160] Li J, Tan Z, Wan J, et al. Nested collaborative learning for long-tailed visual recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 6949−6958.
    [161] Zhang Y, Hooi B, Hong L, et al. Self-supervised aggregation of diverse experts for test-agnostic long-tailed recognition. Advances in Neural Information Processing Systems, 2022, 35: 34077−34090
    [162] Chen Q, Liu Q, Lin E. A knowledge-guide hierarchical learning method for long-tailed image classification. Neurocomputing, 2021, 459: 408−418 doi: 10.1016/j.neucom.2021.07.008
    [163] Li Z, Zhao H, Lin Y. Multi-task convolutional neural network with coarse-to-fine knowledge transfer for long-tailed classification. Information Sciences, 2022, 608: 900−916 doi: 10.1016/j.ins.2022.07.015
    [164] Wen Y, Zhang K, Li Z, et al. A discriminative feature learning approach for deep face recognition[C]//Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part VII 14. Springer International Publishing, 2016: 499−515.
    [165] Cao D, Zhu X, Huang X, et al. Domain balancing: Face recognition on long-tailed domains[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 5671−5679.
    [166] Ma Y, Jiao L, Liu F, et al. Delving into Semantic Scale Imbalance[J]. arXiv preprint arXiv: 2212.14613, 2022.
    [167] Park B, Kim J, Cho S, et al. Balancing Domain Experts for Long-Tailed Camera-Trap Recognition[J]. arXiv preprint arXiv: 2202.07215, 2022.
    [168] Wang W, Wang M, Wang S, et al. One-shot learning for long-tail visual relation detection[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(07): 12225−12232.
    [169] Chang N, Yu Z, Wang Y X, et al. Image-level or object-level? a tale of two resampling strategies for long-tailed detection[C]//International conference on machine learning. PMLR, 2021: 1463−1472.
    [170] Zhang C, Lin G, Lai L, et al. Calibrating Class Activation Maps for Long-Tailed Visual Recognition[J]. arXiv preprint arXiv: 2108.12757, 2021.
    [171] Cao Y, Kuang J, Gao M, et al. Learning relation prototype from unlabeled texts for long-tail relation extraction. IEEE Transactions on Knowledge and Data Engineering, 2021
    [172] Zhang G, Liang R, Yu Z, et al. Rumour detection on social media with long-tail strategy[C]//2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022: 1−8.
    [173] Mottaghi A, Sarma P K, Amatriain X, et al. Medical symptom recognition from patient text: An active learning approach for long-tailed multilabel distributions[J]. arXiv preprint arXiv: 2011.06874, 2020.
    [174] Shi C, Hu B, Zhao W X, et al. Heterogeneous information network embedding for recommendation. IEEE Transactions on Knowledge and Data Engineering, 2018, 31(2): 357−370
    [175] Zhao T, Zhang X, Wang S. Graphsmote: Imbalanced node classification on graphs with graph neural networks[C]//Proceedings of the 14th ACM international conference on web search and data mining. 2021: 833−841.
    [176] Park J, Song J, Yang E. Graphens: Neighbor-aware ego network synthesis for class-imbalanced node classification[C]//International Conference on Learning Representations. 2021.
    [177] Yun S, Kim K, Yoon K, et al. Lte4g: long-tail experts for graph neural networks[C]//Proceedings of the 31st ACM International Conference on Information & Knowledge Management. 2022: 2434−2443.
    [178] Hu Z, Dong Y, Wang K, et al. Gpt-gnn: Generative pre-training of graph neural networks[C]//Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020: 1857−1867.
    [179] Liu Z, Nguyen T K, Fang Y. Tail-gnn: Tail-node graph neural networks[C]//Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 2021: 1109−1119.
    [180] Perrett T, Sinha S, Burghardt T, et al. Use Your Head: Improving Long-Tail Video Recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 2415−2425.
    [181] Tian C, Wang W, Zhu X, et al. Vl-ltr: Learning class-wise visual-linguistic representation for long-tailed visual recognition[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 73−91.
    [182] Ma T, Geng S, Wang M, et al. A simple long-tailed recognition baseline via vision-language model[J]. arXiv preprint arXiv: 2111.14745, 2021.
    [183] Wang R, Yu G, Domeniconi C, et al. Meta Cross-Modal Hashing on Long-Tailed Data[J]. arXiv preprint arXiv: 2111.04086, 2021.
    [184] Wang P, Wang X, Wang B, et al. Long-Tailed Time Series Classification via Feature Space Rebalancing[C]//International Conference on Database Systems for Advanced Applications. Cham: Springer Nature Switzerland, 2023: 151−166.
    [185] Deng J, Chen X, Jiang R, et al. St-norm: Spatial and temporal normalization for multi-variate time series forecasting[C]//Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining. 2021: 269−278.
    [186] Craw S, Horsburgh B, Massie S. Music recommendation: audio neighbourhoods to discover music in the long tail[C]//Case-Based Reasoning Research and Development: 23rd International Conference, ICCBR 2015, Frankfurt am Main, Germany, September 28-30, 2015. Proceedings 23. Springer International Publishing, 2015: 73−87.
    [187] Deng K, Cheng G, Yang R, et al. Alleviating asr long-tailed problem by decoupling the learning of representation and classification. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, 30: 340−354
    [188] Winata G I, Wang G, Xiong C, et al. Adapt-and-adjust: Overcoming the long-tail problem of multilingual speech recognition[J]. arXiv preprint arXiv: 2012.01687, 2020.
    [189] Peng P, Lu J, Tao S, et al. Progressively balanced supervised contrastive representation learning for long-tailed fault diagnosis. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1−12
    [190] Deng S, Lei Z, Liu J, et al. A Cost-Sensitive Dense Network for Fault Diagnosis under Data Imbalance[C]//2022 International Conference on Sensing, Measurement & Data Analytics in the era of Artificial Intelligence (ICSMD). IEEE, 2022: 1−6.
    [191] Jiao W, Zhang J. Sonar images classification while facing long-tail and few-shot. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 1−20
    [192] Shao J, Zhu K, Zhang H, et al. DiffuLT: How to Make Diffusion Model Useful for Long-tail Recognition[J]. arXiv preprint arXiv: 2403.05170, 2024.
    [193] Shi J X, Wei T, Zhou Z, et al. Parameter-Efficient Long-Tailed Recognition[J]. arXiv preprint arXiv: 2309.10019, 2023.
    [194] Kabir H M. Reduction of Class Activation Uncertainty with Background Information[J]. arXiv preprint arXiv: 2305.03238, 2023.
    [195] Du F, Yang P, Jia Q, et al. Global and Local Mixture Consistency Cumulative Learning for Long-tailed Visual Recognitions[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 15814−15823.
    [196] Chen X, Liang C, Huang D, et al. Symbolic discovery of optimization algorithms[J]. arXiv preprint arXiv: 2302.06675, 2023.
    [197] Cui J, Zhong Z, Tian Z, et al. Generalized parametric contrastive learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
    [198] Hendrycks D, Gimpel K. A baseline for detecting misclassified and out-of-distribution examples in neural networks[J]. arXiv preprint arXiv: 1610.02136, 2016.
    [199] Liu W, Wang X, Owens J, et al. Energy-based out-of-distribution detection. Advances in neural information processing systems, 2020, 33: 21464−21475
    [200] Yang Y, Wang H, Katabi D. On multi-domain long-tailed recognition, imbalanced domain generalization and beyond[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 57−75.
    [201] Kim C D, Jeong J, Kim G. Imbalanced continual learning with partitioning reservoir sampling[C]//Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIII 16. Springer International Publishing, 2020: 411−428.
    [202] Ditzler G, Polikar R, Chawla N. An incremental learning algorithm for non-stationary environments and class imbalance[C]//2010 20th International Conference on Pattern Recognition. IEEE, 2010: 2997−3000.
    [203] Shi J X, Wei T, Li Y F. Residual diverse ensemble for long-tailed multi-label text classification. Science China Information Sciences, 2024
    [204] Kharbanda S, Gupta D, Schultheis E, et al. Learning label-label correlations in Extreme Multi-label Classification via Label Features[J]. arXiv preprint arXiv: 2405.04545, 2024.
    [205] Zhang Y, Cao S, Mi S, et al. Learning sample representativeness for class-imbalanced multi-label classification. Pattern Analysis and Applications, 2024: 1−12
    [206] Du C, Han Y, Huang G. SimPro: A Simple Probabilistic Framework Towards Realistic Long-Tailed Semi-Supervised Learning[J]. arXiv preprint arXiv: 2402.13505, 2024.
    [207] Ma C, Elezi I, Deng J, et al. Three heads are better than one: Complementary experts for long-tailed semi-supervised learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(13): 14229−14237.
    [208] Shang X, Lu Y, Huang G, et al. Federated learning on heterogeneous and long-tailed data via classifier re-training with federated features[J]. arXiv preprint arXiv: 2204.13399, 2022.
    [209] Kou X, Xu C, Yang X, et al. Attention-guided Contrastive Hashing for Long-tailed Image Retrieval[C]//IJCAI. 2022: 1017−1023.
    [210] Geifman Y, El-Yaniv R. Deep active learning over the long tail[J]. arXiv preprint arXiv: 1711.00941, 2017.
Publication history
  • Received: 2024-02-04
  • Accepted: 2024-07-23
  • Available online: 2024-10-24
