An Improved Criterion for Evaluating the Discernibility of a Feature Subset

Xie Juan-Ying, Wu Zhao-Zhong, Zheng Qing-Quan, Wang Ming-Zhao

Citation: Xie Juan-Ying, Wu Zhao-Zhong, Zheng Qing-Quan, Wang Ming-Zhao. An improved criterion for evaluating the discernibility of a feature subset. Acta Automatica Sinica, 2022, 48(5): 1292−1306. doi: 10.16383/j.aas.c200704


Funds: Supported by the National Natural Science Foundation of China (62076159, 12031010, 61673251) and the Fundamental Research Funds for the Central Universities (GK202105003)
Author Biographies:

    XIE Juan-Ying  Professor at the School of Computer Science, Shaanxi Normal University. Her research interest covers machine learning, data mining, and biomedical big data analysis. Corresponding author of this paper. E-mail: xiejuany@snnu.edu.cn

    WU Zhao-Zhong  Master student at the School of Computer Science, Shaanxi Normal University. His research interest covers machine learning and biomedical data analysis. E-mail: wzz@snnu.edu.cn

    ZHENG Qing-Quan  Master student at the School of Computer Science, Shaanxi Normal University. His research interest covers data mining and biomedical data analysis. E-mail: zhengqingqsnnu@163.com

    WANG Ming-Zhao  Ph.D. candidate at the College of Life Sciences, Shaanxi Normal University. He received his master's degree from the School of Computer Science, Shaanxi Normal University in 2017. His main research interest is bioinformatics. E-mail: wangmz2017@snnu.edu.cn

  • Abstract: To remedy the defect that the discernibility of feature subsets (DFS) criterion ignores the influence of feature measurement scales on the discriminative power of a feature subset, we introduce the coefficient of variation and propose the GDFS (generalized discernibility of feature subsets) criterion. Combining GDFS with the sequential forward, sequential backward, sequential forward floating and sequential backward floating search strategies, with the extreme learning machine (ELM) as the classifier, four hybrid feature selection algorithms are obtained. Experiments on UCI datasets and gene datasets, together with experimental comparisons and statistical significance tests against DFS, Relief, DRJMIM, mRMR, LLE Score, AVC, SVM-RFE, VMInaive, AMID, AMID-DWSFS, CFR and FSSC-SD, show that the proposed GDFS outperforms DFS and can select feature subsets with better classification ability.
  • In the big-data era, not only does the number of samples grow sharply, but so does the dimensionality of the data, causing the curse of dimensionality [1] and increasing computational complexity; moreover, redundant and irrelevant features degrade classifier performance, which challenges data analysis. Feature selection and its evaluation have therefore become a research hotspot [2-6].

    Feature selection aims to find a small subset of features that have strong classification ability and are mutually uncorrelated, or as uncorrelated as possible. Feature search strategies fall into three broad classes: complete search, random search, and heuristic search [7]. Feature selection algorithms can be categorized as Filter [8], Wrapper [9], Embedded [10], Hybrid [11-13], and Ensemble [14] methods. Filter methods judge the classification ability of features with criteria independent of any classifier, such as the chi-square test, and select the most discriminative features to form the feature subset. They are independent of the learning process and fast, but need a threshold as the stopping criterion and tend to have lower accuracy. Wrapper methods depend on a classifier: the training samples are split into a training subset and a validation subset, and during feature selection the classification ability of a candidate feature subset is judged by the classifier's performance on the validation subset, so the feature subsets with strong classification ability are selected. A classification model is then built on the selected subset and evaluated on the test set, thereby assessing the feature subset and the corresponding feature selection algorithm. In Wrapper methods the learning algorithm used during feature selection is a complete "black box". Wrapper methods therefore depend on the learning process and achieve higher accuracy, but are computationally expensive and run the risk of overfitting. Embedded methods perform feature selection by optimizing an objective function, so selection is completed while the objective is optimized and no split into training and validation subsets is needed, but constructing a suitable objective function is difficult. Hybrid methods combine the advantages of Filter and Wrapper methods: they measure feature classification ability with a classifier-independent criterion as Filter methods do, search feature subsets with some heuristic strategy, and evaluate the classification ability of candidate subsets with a classifier as Wrapper methods do. Hybrid methods have therefore attracted wide attention. Ensemble methods integrate different feature selection algorithms; they generally perform well and select feature subsets with good classification ability, but require training several different classifiers.

    The Relief algorithm [15] is a classic Filter method, but it applies only to binary classification. Relief-F [16] extends Relief from binary to multi-class problems. The LVW (Las Vegas wrapper) algorithm [17] performs feature selection with a random search strategy under the Las Vegas method framework. SVM-RFE (SVM-recursive feature elimination) [18], built on the SVM (support vector machine) and backward elimination, is a classic Embedded feature selection algorithm proposed for ultra-high-dimensional gene selection; however, if only one gene is removed per iteration, the time cost becomes a bottleneck. Its author Guyon therefore noted that, for ultra-high-dimensional gene selection, hundreds of genes can be removed per iteration, but gave no theoretical basis or practical guidance on how many genes should be removed at a time. mRMR (max-relevance, min-redundancy) [19], based on feature relevance, aims to select features with strong classification ability and minimal redundancy, though different relevance measures may yield different results. F-score [20] is an effective criterion for measuring the discriminative power of a feature between two classes. Xie et al. generalized F-score to problems with any number of classes [13, 21] and proposed D-score [22], an improved F-score feature importance criterion that accounts for feature measurement scales, for dermatology diagnosis. Since F-score and D-score consider only the discriminative ability of individual features and ignore their joint contribution, Xie et al. proposed DFS (discernibility of feature subsets) [23], a criterion that measures the discernibility of a feature subset by considering the joint contribution of its features, thereby obtaining feature subsets with better classification ability. The LLE Score (locally linear embedding score) algorithm [24] performs nonlinear dimensionality reduction via locally linear embedding [25] for tumor gene selection. The AVC (feature selection with AUC-based variable complementarity) algorithm [26] selects features by maximizing variable complementarity. A gene selection algorithm maximizing the area under the ROC curve [27] achieves feature selection on imbalanced gene data. The feature selection algorithm DRJMIM (dynamic relevance and joint mutual information maximization) [28] fully considers feature relevance and feature interdependency, using dynamic relevance and joint mutual information maximization. A neighborhood rough set based feature selection algorithm [29] uses neighborhood-entropy uncertainty measures to select differentially expressed genes from gene expression datasets for cancer classification. Xie et al. systematically studied differentially expressed gene selection for imbalanced gene data [30] and proposed 16 feature selection algorithms for such data. Li et al. [31] summarized feature selection algorithms from a data perspective, dividing them into four classes: similarity based, information theoretic, sparse learning based, and statistics based methods.

    Feature selection has attracted wide attention from researchers; it is the first step in analyzing high-dimensional, small-sample cancer gene data and the basis of other high-dimensional data analysis. However, most existing feature selection algorithms evaluate only the classification contribution of individual features and ignore the influence of feature measurement scales. The DFS criterion [23] considers the joint contribution of features, but not the influence of different measurement scales on their classification contributions: features whose value ranges differ enormously are effectively assigned enormously different weights, so their contributions to classification cannot be measured accurately. We therefore propose the new GDFS (generalized discernibility of feature subsets) criterion, which improves DFS by introducing the coefficient of variation so as to measure the classification ability of a feature subset objectively. The ELM (extreme learning machine) is used as the classification tool to evaluate the classification performance of feature subsets. Experiments on datasets from the UCI (University of California, Irvine) machine learning repository and on gene datasets, together with experimental comparisons and statistical significance tests against DFS and existing classic feature selection algorithms, show that the proposed GDFS is an effective criterion for measuring the classification ability of a feature subset and can select feature subsets with very good classification performance.

    Let dataset ${\boldsymbol{X}}$ contain $l\left( {l \geq 2} \right)$ classes, and let the $c$-th class $\left( {c = 1, \cdots ,l} \right)$ contain ${n_c}$ samples.

    The DFS criterion [23] measures the between-class discernibility of a feature subset by considering the joint contribution of the features it contains. The discernibility DFS of a feature subset containing $i$ features is defined in (1).

    $$DF{S_i} = \frac{{\sum\limits_{c = 1}^l {\sum\limits_{j = 1}^i {{{\left( {\bar x_j^c - {{\bar x}_j}} \right)}^2}} } }}{{\sum\limits_{c = 1}^l {\frac{1}{{{n_c} - 1}}\sum\limits_{k = 1}^{{n_c}} {\sum\limits_{j = 1}^i {\left( {{{\left( {x_{k,j}^c - \bar x_j^c} \right)}^2}} \right)} } } }}$$ (1)

    In the numerator of (1), $ {\overline{x}}_{j}^{c}$ and ${\overline{x}}_{j}$ denote the value on the $j$-th feature of the centroid of class $c$ (the mean of the class-$c$ samples) and of the centroid of the whole dataset (the mean of all samples), respectively. The numerator is thus, for the current subset of $i$ features, the sum of the distances from the centroids of the $l$ classes to the centroid of the dataset; it measures between-class separability, and the larger it is, the more separated the classes are. In the denominator, $x_{k,j}^c$ denotes the value of the $k$-th sample of class $c$ on the $j$-th feature, so the denominator is the sum of the within-class variances of the $l$ classes over the current $i$ features; it measures within-class compactness, and the smaller it is, the more compact the classes are [23]. Hence the larger the value of $DF{S_i}$ in (1), the stronger the classification ability of the subset of the current $i$ features [23].
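
    Read concretely, (1) is a ratio of between-class scatter to within-class scatter over the candidate features. The sketch below is a minimal NumPy rendering of the formula under our own assumptions (the function name `dfs_score` is ours, `X` is a samples-by-features array restricted to the candidate subset, and `y` holds integer class labels); it illustrates (1) and is not the authors' released code:

```python
import numpy as np

def dfs_score(X, y):
    """Discernibility of the feature subset given by X's columns, per Eq. (1)."""
    grand_mean = X.mean(axis=0)                      # dataset centroid over the subset
    between, within = 0.0, 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        class_mean = Xc.mean(axis=0)                 # class-c centroid
        between += np.sum((class_mean - grand_mean) ** 2)
        # within-class scatter with the 1/(n_c - 1) factor of Eq. (1)
        within += np.sum((Xc - class_mean) ** 2) / (len(Xc) - 1)
    return between / within
```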

    The coefficient of variation (dispersion coefficient) is the ratio of the sample standard deviation to the sample mean; it removes the influence of the feature measurement scale on the standard deviation as a measure of dispersion. The larger the coefficient of variation, the more dispersed the data, and vice versa [32].

    DFS does not consider the influence of feature measurement scales on feature importance. When the value ranges of features differ enormously, features with larger values are effectively assigned larger weights and are more likely to be selected, which compromises the objectivity of the selection result. To measure the classification ability of each feature objectively and avoid the influence of differing measurement scales, we propose the GDFS criterion, which overcomes this defect of DFS so that truly discriminative features can be found. GDFS is defined in (2).

    $$GDF{S_i} = \frac{{\frac{1}{{l - 1}}\sum\limits_{c = 1}^l {\left( {\sum\limits_{j = 1}^i {\frac{{{{\left( {\bar x_j^c - {{\bar x}_j}} \right)}^2}}}{{{{\bar x}_j}}}} } \right)} }}{{\sum\limits_{c = 1}^l {\frac{1}{{{n_c} - 1}}\sum\limits_{k = 1}^{{n_c}} {\left( {\sum\limits_{j = 1}^i {\frac{{{{\left( {x_{k,j}^c - \bar x_j^c} \right)}^2}}}{{\bar x_j^c}}} } \right)} } }}$$ (2)

    In (2), the numerator is the between-class coefficient of variation of the $l$ classes over the current $i$ features; the larger it is, the better separated the classes are. The denominator is the sum of the within-class coefficients of variation of the $l$ classes over the current $i$ features; the smaller it is, the more compact each class is. Hence the larger the value of (2), the stronger the classification ability of the subset formed by the current $i$ features.
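
    Under the same assumptions as the previous sketch, (2) differs from (1) only in scaling each squared deviation by the corresponding mean and in the $1/(l-1)$ factor on the between-class term; since the criterion divides by feature means, this sketch presumes positive-valued features (as with the expression data used later):

```python
import numpy as np

def gdfs_score(X, y):
    """Generalized discernibility of the feature subset in X's columns, per Eq. (2)."""
    grand_mean = X.mean(axis=0)
    classes = np.unique(y)
    between, within = 0.0, 0.0
    for c in classes:
        Xc = X[y == c]
        class_mean = Xc.mean(axis=0)
        # between-class term: squared centroid offsets scaled by the grand mean
        between += np.sum((class_mean - grand_mean) ** 2 / grand_mean)
        # within-class term: squared deviations scaled by the class mean
        within += np.sum((Xc - class_mean) ** 2 / class_mean) / (len(Xc) - 1)
    return (between / (len(classes) - 1)) / within
```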

    GDFS improves DFS with the coefficient of variation to remedy DFS's neglect of the influence of measurement scales on feature discernibility. Consequently, if the coefficient of variation can be proved unaffected by the measurement scale while the standard deviation is affected by it, the soundness of GDFS follows. We therefore state and prove the following theorem.

    Theorem 1. Let ${\boldsymbol{X}} = \{ {{\boldsymbol{x}}_s}| s = 1, \cdots , N \} \in $$ {{\bf {R}} ^{N \times n}}$ be a dataset of N samples, each with n features of different measurement scales. Suppose a feature ${f_i}\left( {i = 1, \cdots ,n} \right)$ satisfies ${f_i} \in \left[ {0.5,2} \right]$ when measured in meters and ${f_i} \in \left[ {50,200} \right]$ when measured in centimeters. Let $std_i^c,std_i^m$ denote the standard deviations of ${f_i}$ when measured in centimeters and in meters, and ${\rm{\sigma}} _i^c,{\rm{\sigma}} _i^m$ the corresponding coefficients of variation. Then ${\rm{\sigma}} _i^c{\rm{ = }}{\rm{\sigma}} _i^m$ and $std_i^c \ne std_i^m$.

    Proof. Denote by ${\bar x_i}$ the mean of feature ${f_i}$ over dataset ${\boldsymbol{X}}$, by $st{d_i}$ its standard deviation, and by ${{\rm{\sigma}} _i}$ its coefficient of variation. Then:

    $$ st{d_i} = \sqrt {\frac{\sum\limits_{s = 1}^N {\left( {{x_{s,i}} - {{\bar x}_i}} \right)} ^2}{N-1}} ,\;\;\;\;\;{{\rm{\sigma}} _i}{\rm{ = }}\frac{{st{d_i}}}{{{{\bar x}_i}}} $$

    Denote by $std_i^c$ the standard deviation when ${f_i} \in \left[ {50,200} \right]$, with sample values $x_{s,i}^c$ and mean $\bar x_i^c$; and by $std_i^m$ the standard deviation when ${f_i} \in $$ \left[ {0.5,2} \right]$, with sample values $x_{s,i}^m$ and mean $\bar x_i^m$. Then:

    $$ \begin{split} &x_{s,i}^c = 100x_{s,i}^m,\\ &\bar x_i^c{\rm{ = }}\frac{{\sum\limits_{s = 1}^N {x_{s,i}^c} }}{N} = \frac{{\sum\limits_{s = 1}^N {100x_{s,i}^m} }}{N} = \frac{{100\sum\limits_{s = 1}^N {x_{s,i}^m} }}{N} = 100\bar x_i^m,\\ &std_{_i}^c = \sqrt {{\frac{1}{N-1}\sum\limits_{s = 1}^N {\left( {x_{s,i}^c - \bar x_i^c} \right)} ^2}}=\\ &\quad\qquad\sqrt {{\frac{1}{N-1}\sum\limits_{s = 1}^N \left( {100x_{s,i}^m - 100\bar x_i^m} \right) ^2}} = \\ &\quad\qquad \sqrt {10\,000{\frac{1}{N-1}\sum\limits_{s = 1}^N {\left( {x_{s,i}^m - \bar x_i^m} \right)} ^2}} =\\ &\quad\qquad{\rm{ 100}}\sqrt {{\frac{1}{N-1}\sum\limits_{s = 1}^N {\left( {x_{s,i}^m - \bar x_i^m} \right)} ^2}} {\rm{ = }}100std_{_i}^m . \end{split} $$

    The coefficient of variation of feature ${f_i}$ is ${{\rm{\sigma}} _i}=\dfrac{{st{d_i}}}{{{{\bar x}_i}}}$. Hence ${\rm{\sigma}} _i^c= \dfrac{{std_i^c}}{{\bar x_i^c}}= $$ \dfrac{{100std_i^m}}{{100\bar x_i^m}} = \dfrac{{std_i^m}}{{\bar x_i^m}} = {\rm{\sigma}} _i^m$.

    Therefore ${\rm{\sigma}} _i^c{\rm{ = }}{\rm{\sigma}} _i^m$ and $std_i^c \ne std_i^m$; that is, the coefficient of variation is independent of the measurement scale, while the variance and standard deviation are both affected by it. This shows that the proposed GDFS is theoretically sound. □
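
    Theorem 1 is easy to check numerically. The snippet below (ours, with arbitrary sample values in meters) rescales a feature from meters to centimeters and confirms that the standard deviation scales by 100 while the coefficient of variation is unchanged:

```python
import numpy as np

f_m = np.array([0.8, 1.2, 1.5, 1.9])     # feature f_i measured in meters
f_cm = 100 * f_m                          # the same feature measured in centimeters

std_m, std_cm = f_m.std(ddof=1), f_cm.std(ddof=1)    # sample standard deviations
cv_m, cv_cm = std_m / f_m.mean(), std_cm / f_cm.mean()

assert np.isclose(std_cm, 100 * std_m)   # the standard deviation depends on the unit
assert np.isclose(cv_cm, cv_m)           # the coefficient of variation does not
```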

    The extreme learning machine (ELM) is a machine learning algorithm based on a single-hidden-layer feedforward neural network [33]. ELM generates the connection weights between the input and hidden layers and the hidden-layer biases at random; once the number of hidden nodes is set, the unique optimal connection weights between the hidden and output layers can be obtained.

    Suppose there are $N$ training sample pairs $\left( {{{\boldsymbol{x}}_i},{{\boldsymbol{t}}_i}} \right)$, ${{\boldsymbol{x}}_i} \in {{\bf{R}} ^n}$, ${{\boldsymbol{t}}_i} \in $$ {{\bf{R}} ^m}$, with activation function $g\left( \cdot \right)$; then a single-hidden-layer feedforward network with $\tilde N$ hidden nodes is modeled mathematically by (3).

    $$\sum\limits_{j = 1}^{\tilde N} {{{\boldsymbol{\beta }}_j}g\left( {{{\boldsymbol{w}}_j} \cdot {{\boldsymbol{x}}_i} + {b_j}} \right)} = {{\boldsymbol{t}}_i}$$ (3)

    where ${{\boldsymbol{w}}_{\boldsymbol{j}}}$ is the weight vector between the $j$-th hidden node and all input nodes, ${{\boldsymbol{\beta }}_j}$ is the weight vector between the $j$-th hidden node and all output nodes, and ${b_j}$ is the bias of the $j$-th hidden node.

    An ELM with $\tilde N$ hidden nodes and activation function $g\left( \cdot \right)$ can approximate the $N$ training samples with zero error; that is, there exist ${{\boldsymbol{\beta }}_j},{{\boldsymbol{w}}_j},{b_j}$ such that (3) holds. Equation (3) can be written compactly in the matrix form (4).

    $${\boldsymbol{H\beta }} = {\boldsymbol{T}}$$ (4)

    where

    $$ \begin{array}{l} {\boldsymbol{H}}\left( {{{\boldsymbol{w}}_{1}} ,\cdot \cdot \cdot ,{{\boldsymbol{w}}_{\tilde N}},{b_{1}}, \cdot \cdot \cdot ,{b_{\tilde N}},{{\boldsymbol{x}}_{1}}, \cdot \cdot \cdot ,{{\boldsymbol{x}}_N}} \right)= \\ {\left[ {\begin{array}{*{20}{c}} {g\left( {{{\boldsymbol{w}}_1} \cdot {{\boldsymbol{x}}_1} + {b_1}} \right)}& \cdots &{g\left( {{{\boldsymbol{w}}_{\tilde N}} \cdot {{\boldsymbol{x}}_1} + {b_{\tilde N}}} \right)} \\ \vdots & \cdots & \vdots \\ {g\left( {{{\boldsymbol{w}}_1} \cdot {{\boldsymbol{x}}_N} + {b_1}} \right)}& \cdots &{g\left( {{{\boldsymbol{w}}_{\tilde N}} \cdot {{\boldsymbol{x}}_N} + {b_{\tilde N}}} \right)} \end{array}} \right]_{N \times \tilde N}} \end{array} , $$

    ${\boldsymbol{\beta }} = {\left[ {\begin{aligned} {{\boldsymbol{\beta }}_1^{\rm{T}}} \\ \vdots\;\; \\ {{\boldsymbol{\beta }}_{\tilde N}^{\rm{T}}} \end{aligned}} \right]_{\tilde N \times m}},$ ${\boldsymbol{T}} = {\left[ {\begin{aligned} {{\boldsymbol{t}}_1^{\rm{T}}} \\ \vdots \;\; \\ {{\boldsymbol{t}}_N^{\rm{T}}} \end{aligned}} \right]_{N \times m}} .$ ${\boldsymbol{H}}$ is the hidden-layer output matrix, ${\boldsymbol{\beta }}$ is the matrix of weight vectors between the hidden layer and the output layer, and ${\boldsymbol{T}}$ is the output matrix.

    Finding the least-squares solution of (4) can be cast as solving (5). By the minimum-norm criterion, the least-squares solution of ELM is $\hat {\boldsymbol{\beta }}= $$ {{\boldsymbol{H}}^{{ + }}}{\boldsymbol{T}}$, where ${{\boldsymbol{H}}^{{ + }}}$ is the generalized (Moore-Penrose) inverse of ${\boldsymbol{H}}$.

    $$\begin{split} &\left\| {{\boldsymbol{H}}\left( {{{\boldsymbol{w}}_1}, \cdot \cdot \cdot, {{\boldsymbol{w}}_{\tilde N}},{b_1}, \cdot \cdot \cdot ,{b_{\tilde N}}} \right)\hat {\boldsymbol{\beta }} - {\boldsymbol{T}}} \right\|= \\ & \qquad \mathop {\min }\limits_\beta \left\| {{\boldsymbol{H}}\left( {{{\boldsymbol{w}}_1}, \cdot \cdot \cdot ,{{\boldsymbol{w}}_{\tilde N}},{b_1}, \cdot \cdot \cdot ,{b_{\tilde N}}} \right){\boldsymbol{\beta }} - {\boldsymbol{T}}} \right\| \end{split} $$ (5)
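
    A minimal ELM in the spirit of (3)-(5) can be sketched as follows (our own sketch: random input weights and biases, a tanh hidden layer, and the minimum-norm least-squares output weights via the pseudo-inverse; the class name and defaults are ours, and the paper itself uses an RBF-kernel ELM):

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng or np.random.default_rng(0)

    def fit(self, X, T):
        """X: N x n inputs; T: N x m targets (one-hot labels for classification)."""
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random w_j
        self.b = self.rng.normal(size=self.n_hidden)                # random b_j
        H = np.tanh(X @ self.W + self.b)      # hidden-layer output matrix H of Eq. (4)
        self.beta = np.linalg.pinv(H) @ T     # beta = H^+ T, the solution of Eq. (5)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta
```

    For classification, T is the one-hot encoding of the labels and the predicted class is the argmax of the network output.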

    Let S be the full set of n features and C the selected feature subset, initialized to the empty set. The dataset is split into a training set and a test set, and feature selection is performed on the training set. Using the SFS, SBS, SFFS and SBFS search strategies with GDFS to evaluate feature subsets yields the four hybrid feature selection algorithms described as Algorithms 1 to 4: GDFS+SFS, GDFS+SBS, GDFS+SFFS and GDFS+SBFS.

    Algorithm 1. The GDFS+SFS feature selection algorithm

    Input: training set ${\boldsymbol{X}} \in {{\bf{R}} ^{m \times n}}$,

    ${\boldsymbol{S}} = \left\{ {{f_i}\left| {i = 1, \cdots ,n} \right.} \right\},{\boldsymbol{C}} = \Phi .$ //$ \Phi $ denotes the empty set

    Output: feature subset ${\boldsymbol{C}}$

    Step 1. Compute the $D{\rm{ - }}score$ of each feature ${f_i}\left( {i = 1, \cdots ,n} \right)$, let $K{\rm{ = }}\mathop {{\rm{argmax}}}\limits_{i = 1, \cdots ,n} \left\{ {D{\rm{ - }}score\left( i \right)} \right\}$, C = C + K, S = S − K;

    Step 2. Train an ELM classifier with 5-fold cross-validation, the training samples containing only the features in C, and record the average 5-fold cross-validation classification accuracy $Acctrain$;

    Step 3. If S is not empty, combine each feature in S with C in turn to form temporary feature subsets tempC with one more feature than C, compute the GDFS value of each tempC by (2), add to C the feature K whose tempC attains the largest GDFS value, let S = S − K, and go to Step 2; if S is empty, the algorithm terminates.

    The subset C at which $Acctrain$ stops improving is the selected feature subset. An ELM classifier is built on the training set with the features in C, and the metrics computed on the test set evaluate the classification performance of C.
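
    A sketch of the GDFS+SFS loop follows (ours; `gdfs_score` is the sketch above, `d_score` stands for the D-score criterion of Step 1, and `cv_accuracy` stands for the 5-fold ELM cross-validation of Step 2):

```python
def gdfs_sfs(X, y, d_score, cv_accuracy):
    """Forward selection guided by GDFS; returns the subset where accuracy peaked."""
    remaining = list(range(X.shape[1]))
    best = max(remaining, key=lambda i: d_score(X[:, [i]], y))        # Step 1
    selected = [best]
    remaining.remove(best)
    history = [(list(selected), cv_accuracy(X[:, selected], y))]      # Step 2
    while remaining:                                                  # Step 3
        k = max(remaining, key=lambda i: gdfs_score(X[:, selected + [i]], y))
        selected.append(k)
        remaining.remove(k)
        history.append((list(selected), cv_accuracy(X[:, selected], y)))
    # report the subset at which Acctrain stopped improving
    return max(history, key=lambda entry: entry[1])
```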

    Algorithm 2. The GDFS+SBS feature selection algorithm

    Input: training set ${\boldsymbol{X}} \in {{\bf{R}} ^{m \times n}}$,

    ${\boldsymbol{S}} = \left\{ {{f_i}\left| {i = 1, \cdots ,n} \right.} \right\},{\boldsymbol{C}} = \Phi .$ //$ \Phi $ denotes the empty set

    Output: feature subset ${\boldsymbol{C}}$

    Step 1. Let C = S;

    Step 2. Compute the size $\left\| {\boldsymbol{S}} \right\|$ of S. If $\left\| {\boldsymbol{S}} \right\| \ne 0$, train an ELM with 5-fold cross-validation, the training samples containing all features in S, and record the average 5-fold cross-validation classification accuracy $Acctrain$; if $\left\| {\boldsymbol{S}} \right\| = 0$, the algorithm terminates;

    Step 3. Tentatively delete each feature of S in turn, compute the GDFS values of the $\left\| {\boldsymbol{S}} \right\|$ temporary feature subsets tempS of size $\left\| {\boldsymbol{S}} \right\|{\rm{ - }}1$, delete the feature K whose removal yields the tempS with the largest GDFS value, let S = S − K, C = C − K, and go to Step 2.

    The subset C at which $Acctrain$ stops improving is the selected feature subset. An ELM classifier is built on the training set with the features in C, and its classification performance is evaluated on the test set.

    Algorithm 3. The GDFS+SFFS feature selection algorithm

    Input: training set ${\boldsymbol{X}} \in {{\bf{R}} ^{m \times n}}$,

    ${\boldsymbol{S}} = \left\{ {{f_i}\left| {i = 1, \cdots ,n} \right.} \right\},{\boldsymbol{C}} = \Phi .$ //$ \Phi $ denotes the empty set

    Output: feature subset ${\boldsymbol{C}}$

    Step 1. Compute the $D{\rm{ - }}score$ of each feature ${f_i}\left( {i = 1, \cdots ,n} \right)$, let $K{\rm{ = }}\mathop {{\rm{argmax}}}\limits_{i = 1, \cdots ,n} \left\{ {D{\rm{ - }}score\left( i \right)} \right\}$, C = C + K, S = S − K;

    Step 2. Train an ELM with 5-fold cross-validation, the training samples containing only the features in C, and record the average 5-fold cross-validation classification accuracy $Acctrain$;

    Step 3. If $\left\| {\boldsymbol{S}} \right\| \ne 0$, combine each feature in S with the subset C to form temporary feature subsets tempC with one more feature, compute the GDFS value of each tempC, add to C the feature K whose tempC attains the largest GDFS value, and let S = S − K; otherwise, the algorithm terminates;

    Step 4. Train an ELM, the training samples containing only the features in C, and record the corresponding $Acctrain$;

    Step 5. If $Acctrain$ rose, go to Step 3; otherwise delete the just-added feature K from C, then go to Step 3.

    The subset ${\boldsymbol{C}} $ at termination is the selected feature subset. An ELM model is built on the features in C, and the classification performance of the subset is evaluated on the test set.
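
    The floating variant differs from Algorithm 1 only in Steps 4-5: each GDFS-guided addition is kept only if the cross-validation accuracy rises, and is otherwise undone before the search continues. One Step 3-5 pass might look like this (our sketch, reusing the helpers above):

```python
def sffs_step(X, y, selected, remaining, best_acc, cv_accuracy):
    """One Step 3-5 pass of GDFS+SFFS: add the best-GDFS feature, keep it only if accuracy rises."""
    k = max(remaining, key=lambda i: gdfs_score(X[:, selected + [i]], y))
    remaining.remove(k)                           # Step 3: S = S - K
    acc = cv_accuracy(X[:, selected + [k]], y)    # Step 4
    if acc > best_acc:                            # Step 5: keep K only on improvement
        selected, best_acc = selected + [k], acc
    return selected, remaining, best_acc
```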

    Algorithm 4. The GDFS+SBFS feature selection algorithm

    Input: training set ${\boldsymbol{X}} \in {{\bf{R}} ^{m \times n}}$,

    ${\boldsymbol{S}}{\rm{ = }}\left\{ {{f_i}\left| {i = 1, \cdots ,n} \right.} \right\},{\boldsymbol{C}} = \Phi .$ //$ \Phi $ denotes the empty set

    Output: feature subset ${\boldsymbol{C}}$

    Step 1. Let C = S, train an ELM with 5-fold cross-validation, the training samples containing all features in S, and record the average classification accuracy $Acctrain$;

    Step 2. If $\left\| {\boldsymbol{S}} \right\| \ne 0$, tentatively delete each feature of S to obtain $\left\| {\boldsymbol{S}} \right\|$ temporary feature subsets tempS of size $\left\| {\boldsymbol{S}} \right\|{\rm{ - }}1$, compute their GDFS values, and delete from S the feature K whose tempS attains the largest GDFS value, i.e. let S = S − K; otherwise the algorithm terminates;

    Step 3. Train an ELM, the training samples containing all features currently in S, and record the corresponding $Acctrain$;

    Step 4. If $Acctrain$ rose or stayed unchanged, let C = C − K;

    Step 5. Go to Step 2.

    At termination, ${\boldsymbol{C}} $ is the selected feature subset; an ELM built on C is evaluated on the test set to assess the classification performance of C.

    The experiments comprise four parts: Part 1 validates the choice of the ELM classifier; Part 2 compares the proposed GDFS with the original DFS; Part 3 compares the four proposed feature selection algorithms with classic algorithms; Part 4 is the statistical significance test. Part 1 uses the original DFS criterion so as to choose the classifier that works best with DFS; thus, when Part 2 compares GDFS with DFS, DFS is paired with its best-performing classifier, which makes the superiority of the proposed GDFS stand out more convincingly.

    To avoid the influence of particular dataset partitions on the results, 5-fold cross-validation is used and average results are reported. Before the experiments, the samples are shuffled at random: a sufficiently large two-dimensional array is generated, each element being a random integer between 1 and the dataset size, and for each row of the array the two samples indexed by its two elements are swapped.
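
    A sketch of that swap-based shuffle (ours; in-place on NumPy arrays, with the number of swaps taken to be the number of rows of the random array):

```python
import numpy as np

def swap_shuffle(X, y, n_swaps, rng=np.random.default_rng()):
    """Randomly swap sample pairs indexed by a random n_swaps x 2 array."""
    pairs = rng.integers(0, len(X), size=(n_swaps, 2))  # random index pairs
    for a, b in pairs:
        X[[a, b]] = X[[b, a]]
        y[[a, b]] = y[[b, a]]
    return X, y
```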

    This subsection uses the DFS criterion with the SFS, SBS, SFFS and SBFS search strategies, guiding the selection process with the ELM and SVM classification tools respectively, and compares the ELM and SVM classifiers built on the resulting feature subsets to choose the classifier with the better classification performance. The experiments use the iris, thyroid-disease, glass, wine, Heart Disease, WDBC (Wisconsin diagnostic breast cancer), WPBC (Wisconsin prognostic breast cancer), dermatology, ionosphere and Handwrite datasets from the UCI machine learning repository [34], described in Table 1. thyroid-disease is the thyroid gland data; Heart Disease is processed Cleveland, with 6 samples containing missing values removed, so the sample count drops from 303 to 297; WPBC has 4 samples with missing values removed (198 to 194); dermatology has 8 samples with missing values removed (366 to 358); for Handwrite only the first two classes are used.

    Table 1  Descriptions of datasets from UCI

    Dataset | Samples | Features | Classes
    iris | 150 | 4 | 3
    thyroid-disease | 215 | 5 | 3
    glass | 214 | 9 | 2
    wine | 178 | 13 | 3
    Heart Disease | 297 | 13 | 3
    WDBC | 569 | 30 | 2
    WPBC | 194 | 33 | 2
    dermatology | 358 | 34 | 6
    ionosphere | 351 | 34 | 2
    Handwrite | 323 | 256 | 2

    The SVM classifier uses the LIBSVM toolbox developed by Chih-Jen Lin et al. [35], with the RBF (radial basis function) kernel [36] and default parameters. The ELM also uses the RBF kernel with default parameters; the number of hidden nodes is increased in steps of 5, and the optimal number is chosen from the cross-validation results [33]. To keep the random initial input weight vectors and hidden-node biases of ELM from affecting the results, a tolerance of 0.01 is set in the experiments: classification is deemed correct when the classification accuracy on the training set fluctuates within this range. Figures 1 to 4 show the average 5-fold cross-validation results obtained with ELM and with SVM as the classifier, using DFS to measure feature subset performance.

    Fig. 1  The 5-fold cross-validation experimental results of DFS+SFS
    Fig. 4  The 5-fold cross-validation experimental results of DFS+SBFS

    The results in Fig. 1 show that, with the SFS search strategy, guiding the selection process with the ELM classifier yields feature subsets that are not only smaller but also classify better on the great majority of datasets. The results in Figs. 2 and 3 show that, with the SBS and SFFS strategies, ELM and SVM give similar subset sizes on all datasets except Handwrite, but the subsets obtained with the ELM classifier have stronger classification ability. The results in Fig. 4 show that the subsets selected with the ELM classifier are slightly larger than those selected with SVM on most datasets, but their classification performance is better.

    Fig. 2  The 5-fold cross-validation experimental results of DFS+SBS
    Fig. 3  The 5-fold cross-validation experimental results of DFS+SFFS

    The goal of feature selection is to find small feature subsets with good classification performance. The combined results of Figs. 2 to 4 show that the ELM classifier obtains feature subsets with better classification ability.

    Fig. 5  Nemenyi test results of 13 feature selection algorithms in terms of performance metrics of ELM built on their selected features

    Building on the experiments of Section 4.1, this subsection pairs DFS with its better classifier, ELM, and tests the superiority of the proposed GDFS feature subset evaluation criterion. Tables 2 to 5 give the 5-fold cross-validation results of the four proposed algorithms GDFS+SFS, GDFS+SBS, GDFS+SFFS and GDFS+SBFS against the original DFS+SFS, DFS+SBS, DFS+SFFS and DFS+SBFS on the datasets of Table 1; bold with underline marks the best results.

    Table 2  The 5-fold cross-validation experimental results of the GDFS+SFS and DFS+SFS algorithms

    Data sets | #Original features | #Selected (GDFS) | #Selected (DFS) | Test accuracy (GDFS) | Test accuracy (DFS)
    iris | 4 | 2.2 | 3 | 0.9733 | 0.9667
    thyroid-disease | 5 | 1.4 | 1.6 | 0.9163 | 0.9070
    glass | 9 | 2.4 | 3.2 | 0.9346 | 0.9439
    wine | 13 | 3.6 | 3.6 | 0.9272 | 0.8925
    Heart Disease | 13 | 2.8 | 3.4 | 0.5889 | 0.5654
    WDBC | 30 | 3.4 | 6.2 | 0.9227 | 0.9193
    WPBC | 33 | 1.8 | 2 | 0.7835 | 0.7732
    dermatology | 34 | 4.6 | 5 | 0.7151 | 0.6938
    ionosphere | 34 | 4.4 | 3 | 0.9029 | 0.8717
    Handwrite | 256 | 7.4 | 7.2 | 0.9657 | 0.9440
    Average | 43.1 | 3.4 | 3.82 | 0.8630 | 0.8478
    Table 5  The 5-fold cross-validation experimental results of the GDFS+SBFS and DFS+SBFS algorithms

    Data sets | #Original features | #Selected (GDFS) | #Selected (DFS) | Test accuracy (GDFS) | Test accuracy (DFS)
    iris | 4 | 2.4 | 2.8 | 0.98 | 0.9667
    thyroid-disease | 5 | 2.4 | 2.2 | 0.9395 | 0.9209
    glass | 9 | 5.4 | 4 | 0.8979 | 0.9490
    wine | 13 | 9.2 | 9.4 | 0.6519 | 0.6086
    Heart Disease | 13 | 5.4 | 6.4 | 0.5757 | 0.5655
    WDBC | 30 | 22.8 | 24.6 | 0.8911 | 0.8893
    WPBC | 33 | 24.6 | 25.4 | 0.7681 | 0.7319
    dermatology | 34 | 28.2 | 27.2 | 0.9444 | 0.9362
    ionosphere | 34 | 28.4 | 26.2 | 0.9174 | 0.9087
    Handwrite | 256 | 137.4 | 148 | 0.9938 | 0.9722
    Average | 43.1 | 26.62 | 27.62 | 0.8560 | 0.8449
    Table 3  The 5-fold cross-validation experimental results of the GDFS+SBS and DFS+SBS algorithms

    Data sets | #Original features | #Selected (GDFS) | #Selected (DFS) | Test accuracy (GDFS) | Test accuracy (DFS)
    iris | 4 | 2.6 | 3.2 | 0.9867 | 0.9733
    thyroid-disease | 5 | 2.8 | 3.2 | 0.9269 | 0.9070
    glass | 9 | 8.2 | 6.8 | 0.9580 | 0.9375
    wine | 13 | 12 | 11.6 | 0.6855 | 0.6515
    Heart Disease | 13 | 11.8 | 11.8 | 0.5490 | 0.5419
    WDBC | 30 | 28 | 28.8 | 0.8981 | 0.8616
    WPBC | 33 | 30.8 | 31.6 | 0.7785 | 0.7633
    dermatology | 34 | 31 | 31 | 0.9443 | 0.9303
    ionosphere | 34 | 31.8 | 32.2 | 0.9031 | 0.8947
    Handwrite | 256 | 245 | 248.6 | 1 | 0.9936
    Average | 43.1 | 40.4 | 40.88 | 0.8630 | 0.8455
    Table 4  The 5-fold cross-validation experimental results of the GDFS+SFFS and DFS+SFFS algorithms

    Data sets | #Original features | #Selected (GDFS) | #Selected (DFS) | Test accuracy (GDFS) | Test accuracy (DFS)
    iris | 4 | 2.8 | 3 | 0.9867 | 0.9667
    thyroid-disease | 5 | 2.2 | 2.2 | 0.9395 | 0.9349
    glass | 9 | 4.2 | 4.4 | 0.9629 | 0.9442
    wine | 13 | 4.2 | 4.4 | 0.9261 | 0.9041
    Heart Disease | 13 | 4.4 | 4.8 | 0.5928 | 0.5757
    WDBC | 30 | 11 | 11.4 | 0.9385 | 0.9074
    WPBC | 33 | 5.8 | 4.4 | 0.7943 | 0.7886
    dermatology | 34 | 16.8 | 17.4 | 0.9522 | 0.9552
    ionosphere | 34 | 9.6 | 10.2 | 0.9173 | 0.9231
    Handwrite | 256 | 42.2 | 40.8 | 0.9907 | 0.9846
    Average | 43.1 | 10.32 | 10.3 | 0.8992 | 0.8885

    The 5-fold cross-validation results in Tables 2 to 5 show that the feature subsets selected by GDFS+SFS, GDFS+SBS, GDFS+SFFS and GDFS+SBFS classify better than those selected by DFS+SFS, DFS+SBS, DFS+SFFS and DFS+SBFS, respectively; hence GDFS selects feature subsets with stronger classification ability than DFS. In terms of subset size, GDFS+SFS selects the smallest subsets, followed by GDFS+SFFS and GDFS+SBFS, while GDFS+SBS selects relatively large ones. Moreover, the average subset sizes of GDFS+SFS, GDFS+SBS and GDFS+SBFS are slightly smaller than those of DFS+SFS, DFS+SBS and DFS+SBFS, and GDFS+SFFS selects subsets essentially the same size as DFS+SFFS, the former being marginally larger.

    Tables 2 to 5 also show that the subsets selected by GDFS+SFFS classify best; those of GDFS+SFS and GDFS+SBS are comparable, worse than those of GDFS+SFFS but better than those of GDFS+SBFS.

    In summary, the proposed GDFS is better than the original DFS and can select small feature subsets with good classification ability, with GDFS+SFFS selecting the best-classifying and relatively small subsets. The subsequent comparison experiments therefore use only GDFS+SFFS against the existing classic algorithms.

    This subsection further tests the superiority of the proposed GDFS criterion on six classic gene datasets: Colon [37], Prostate [38], Myeloma [39], Gas2 [40-41], SRBCT [42] and Carcinoma [31], detailed in Table 6. The experiments compare the proposed GDFS+SFFS with the existing feature selection algorithms DFS+SFFS [23], Relief [15-16], DRJMIM [28], mRMR [19], LLE Score [24], AVC [26], SVM-RFE [18], VMInaive (variational mutual information) [43], AMID (AUC and mutual information difference) [30], AMID-DWSFS (dynamic weighted SFS using dynamic AUC and mutual information difference) [30], CFR (composition of feature relevancy) [44] and FSSC-SD (feature selection by spectral clustering based on standard deviation) [45], in terms of the Accuracy, precision, recall, F-measure (the harmonic mean of precision and recall), F2-measure (the harmonic mean of the positive- and negative-class precisions) [30], and AUC (area under the ROC (receiver operating characteristic) curve) [46-48] of the ELM classifiers built on the selected feature subsets.

    Table 6  Descriptions of the gene datasets used in the experiments

    Dataset | Samples | Features | Classes
    Colon | 62 | 2000 | 2
    Prostate | 102 | 12625 | 2
    Myeloma | 173 | 12625 | 2
    Gas2 | 124 | 22283 | 2
    SRBCT | 83 | 2308 | 4
    Carcinoma | 174 | 9182 | 11

    Since gene datasets contain thousands upon thousands of features, to reduce the running time of the feature selection algorithms the experiments first pre-select features on the datasets of Table 6 with the D-score algorithm [22], removing some irrelevant and redundant features to obtain a candidate feature subset for each dataset, on which all algorithms then perform feature selection. Table 7 shows the 5-fold cross-validation results of GDFS+SFFS against DFS+SFFS, Relief, DRJMIM, mRMR, LLE Score, AVC, SVM-RFE, VMInaive, AMID, AMID-DWSFS, CFR and FSSC-SD; bold with underline marks the best results. The parameters of the compared algorithms are set as follows: the number of nearest neighbors in Relief is 3; the within-class neighborhood of LLE Score is 4 and the between-class neighborhood is 12; the preSelePara parameter of AVC takes its default value.

    Table 7  The 5-fold cross-validation experimental results of all algorithms on datasets from Table 6

    Data set | Algorithm | #Features | Accuracy | AUC | recall | precision | F-measure | F2-measure
    Colon | GDFS+SFFS | 5.2 | 0.7590 | 0.8925 | 0.9 | 0.7 | 0.78 | 0.4133
    Colon | DFS+SFFS | 5.4 | 0.7256 | 0.78 | 0.8250 | 0.6856 | 0.7352 | 0.2332
    Colon | Relief | 8 | 0.7231 | 0.7575 | 0.9 | 0.6291 | 0.7396 | 0.16
    Colon | DRJMIM | 13 | 0.7282 | 0.7825 | 0.8750 | 0.6642 | 0.7495 | 0.3250
    Colon | mRMR | 5 | 0.7602 | 0.7325 | 0.85 | 0.6281 | 0.7185 | 0.1578
    Colon | LLE Score | 7 | 0.7577 | 0.6563 | 0.8750 | 0.6537 | 0.7431 | 0.2057
    Colon | AVC | 2 | 0.7256 | 0.7297 | 0.86 | 0.6439 | 0.7256 | 0.2126
    Colon | SVM-RFE | 5 | 0.7577 | 0.7588 | 0.75 | 0.6273 | 0.6775 | 0.3260
    Colon | VMInaive | 2 | 0.7423 | 1 | 1 | 0.6462 | 0.7848 | 0
    Colon | AMID | 8 | 0.7436 | 0.95 | 0.95 | 0.6328 | 0.7581 | 0
    Colon | AMID-DWSFS | 2 | 0.8397 | 0.9875 | 0.9750 | 0.6688 | 0.7895 | 0.1436
    Colon | CFR | 3 | 0.7603 | 0.95 | 1 | 0.6462 | 0.7848 | 0
    Colon | FSSC-SD | 2 | 0.7269 | 0.9750 | 0.9750 | 0.6401 | 0.7721 | 0
    Prostate | GDFS+SFFS | 6.4 | 0.9305 | 0.9029 | 0.8836 | 0.8836 | 0.8829 | 0.8818
    Prostate | DFS+SFFS | 6.6 | 0.9105 | 0.9349 | 0.8816 | 0.8818 | 0.8529 | 0.8497
    Prostate | Relief | 11 | 0.93 | 0.8525 | 0.8255 | 0.7824 | 0.7981 | 0.79
    Prostate | DRJMIM | 9 | 0.94 | 0.8629 | 0.7891 | 0.8747 | 0.8216 | 0.83
    Prostate | mRMR | 12 | 0.9414 | 0.7895 | 0.7327 | 0.7816 | 0.7520 | 0.7597
    Prostate | LLE Score | 26 | 0.9119 | 0.6796 | 0.7291 | 0.6582 | 0.6847 | 0.6616
    Prostate | AVC | 12 | 0.9514 | 0.8144 | 0.7655 | 0.7598 | 0.7592 | 0.7573
    Prostate | SVM-RFE | 22 | 0.92 | 0.8453 | 0.6927 | 0.8474 | 0.7567 | 0.7824
    Prostate | VMInaive | 9 | 0.9419 | 0.8605 | 0.7655 | 0.7418 | 0.7481 | 0.7580
    Prostate | AMID | 27 | 0.9314 | 0.7929 | 0.7655 | 0.7936 | 0.7690 | 0.7797
    Prostate | AMID-DWSFS | 4 | 0.9514 | 0.7251 | 0.7127 | 0.7171 | 0.7011 | 0.7098
    Prostate | CFR | 7 | 0.9410 | 0.7840 | 0.88 | 0.7430 | 0.7922 | 0.7942
    Prostate | FSSC-SD | 23 | 0.9024 | 0.7796 | 0.8018 | 0.8205 | 0.7892 | 0.8130
    Myeloma | GDFS+SFFS | 9.6 | 0.7974 | 0.6805 | 0.8971 | 0.8230 | 0.8558 | 0.5463
    Myeloma | DFS+SFFS | 9.8 | 0.7744 | 0.6296 | 0.8971 | 0.8047 | 0.8474 | 0.3121
    Myeloma | Relief | 23 | 0.8616 | 0.6453 | 0.8693 | 0.8225 | 0.8415 | 0.4631
    Myeloma | DRJMIM | 36 | 0.8559 | 0.6210 | 0.8392 | 0.7881 | 0.8124 | 0.2682
    Myeloma | mRMR | 12 | 0.8436 | 0.6332 | 0.8095 | 0.8046 | 0.8067 | 0.3539
    Myeloma | LLE Score | 64 | 0.8492 | 0.6169 | 0.9127 | 0.7909 | 0.8461 | 0.2313
    Myeloma | AVC | 22 | 0.8329 | 0.5820 | 0.8974 | 0.8098 | 0.8501 | 0.3809
    Myeloma | SVM-RFE | 20 | 0.8330 | 0.6270 | 0.8971 | 0.7935 | 0.8416 | 0.3846
    Myeloma | VMInaive | 19 | 0.8383 | 0.5639 | 0.8847 | 0.7902 | 0.8331 | 0.2691
    Myeloma | AMID | 11 | 0.8325 | 0.6743 | 0.8979 | 0.8282 | 0.8603 | 0.5254
    Myeloma | AMID-DWSFS | 38 | 0.8381 | 0.6233 | 0.8381 | 0.8197 | 0.8249 | 0.5224
    Myeloma | CFR | 14 | 0.8504 | 0.5931 | 0.9124 | 0.8014 | 0.8523 | 0.3010
    Myeloma | FSSC-SD | 15 | 0.8381 | 0.6662 | 0.8754 | 0.8173 | 0.8438 | 0.4992
    Gas2 | GDFS+SFFS | 7.4 | 0.9840 | 0.9704 | 0.9051 | 0.9846 | 0.9412 | 0.9474
    Gas2 | DFS+SFFS | 8.4 | 0.9429 | 0.9465 | 0.9064 | 0.9212 | 0.9203 | 0.9018
    Gas2 | Relief | 4 | 0.9763 | 0.9520 | 0.8577 | 0.9316 | 0.8911 | 0.9005
    Gas2 | DRJMIM | 19 | 0.9750 | 0.9004 | 0.8192 | 0.8848 | 0.8449 | 0.8584
    Gas2 | mRMR | 5 | 0.9756 | 0.9358 | 0.8551 | 0.9131 | 0.8815 | 0.8895
    Gas2 | LLE Score | 25 | 0.9769 | 0.9312 | 0.8659 | 0.8748 | 0.8449 | 0.8538
    Gas2 | AVC | 3 | 0.9840 | 0.9073 | 0.8897 | 0.9390 | 0.9122 | 0.9160
    Gas2 | SVM-RFE | 18 | 0.9756 | 0.9009 | 0.8205 | 0.9052 | 0.8503 | 0.8716
    Gas2 | VMInaive | 10 | 0.9763 | 0.9425 | 0.7372 | 0.9778 | 0.8311 | 0.8778
    Gas2 | AMID | 16 | 0.9833 | 0.9305 | 0.9205 | 0.8829 | 0.8968 | 0.9013
    Gas2 | AMID-DWSFS | 2 | 0.9840 | 0.9247 | 0.8359 | 0.9424 | 0.8839 | 0.8977
    Gas2 | CFR | 10 | 0.9917 | 0.9080 | 0.9013 | 0.8236 | 0.8432 | 0.8434
    Gas2 | FSSC-SD | 16 | 0.9596 | 0.9095 | 0.8538 | 0.8758 | 0.8555 | 0.8642
    SRBCT | GDFS+SFFS | 11.6 | 0.9372 | 0.9749 | 0.9567 | 0.9684 | 0.9579 | 0.9573
    SRBCT | DFS+SFFS | 11.6 | 0.9034 | 0.9130 | 0.9356 | 0.9449 | 0.9452 | 0.9352
    SRBCT | Relief | 10 | 0.9631 | 0.9479 | 0.9439 | 0.9589 | 0.9467 | 0.9390
    SRBCT | DRJMIM | 4 | 0.9389 | 0.9363 | 0.9656 | 0.9511 | 0.9555 | 0.9503
    SRBCT | mRMR | 8 | 0.9528 | 0.9479 | 0.9283 | 0.9624 | 0.9275 | 0.9294
    SRBCT | LLE Score | 11 | 0.9271 | 0.8941 | 0.9333 | 0.9332 | 0.9247 | 0.9154
    SRBCT | AVC | 8 | 0.9042 | 0.9355 | 0.9139 | 0.9544 | 0.9223 | 0.9183
    SRBCT | SVM-RFE | 13 | 0.8421 | 0.9149 | 0.9128 | 0.9385 | 0.9159 | 0.8240
    SRBCT | VMInaive | 14 | 0.9409 | 0.9181 | 0.9250 | 0.9429 | 0.9269 | 0.9188
    SRBCT | AMID | 13 | 0.9387 | 0.8999 | 0.9567 | 0.9335 | 0.9407 | 0.9239
    SRBCT | AMID-DWSFS | 9 | 0.9167 | 0.8151 | 0.8178 | 0.8516 | 0.82 | 0.7466
    SRBCT | CFR | 8 | 0.9314 | 0.6839 | 0.8994 | 0.8570 | 0.8693 | 0.7150
    SRBCT | FSSC-SD | 6 | 0.8806 | 0.9096 | 0.9267 | 0.9422 | 0.9284 | 0.9160
    Carcinoma | GDFS+SFFS | 23.4 | 0.7622 | 0.9037 | 0.7872 | 0.7879 | 0.7839 | 0.5570
    Carcinoma | DFS+SFFS | 19.4 | 0.7469 | 0.8998 | 0.7808 | 0.7869 | 0.7801 | 0.6261
    Carcinoma | Relief | 42 | 0.7351 | 0.8701 | 0.7687 | 0.7785 | 0.7680 | 0.5392
    Carcinoma | DRJMIM | 13 | 0.7757 | 0.8991 | 0.6742 | 0.6621 | 0.6656 | 0.4557
    Carcinoma | mRMR | 24 | 0.8079 | 0.9188 | 0.7613 | 0.7505 | 0.7533 | 0.5089
    Carcinoma | LLE Score | 76 | 0.6682 | 0.8452 | 0.6689 | 0.6702 | 0.6663 | 0.4109
    Carcinoma | AVC | 77 | 0.7227 | 0.8746 | 0.7872 | 0.7790 | 0.7796 | 0.5068
    Carcinoma | SVM-RFE | 30 | 0.7213 | 0.87 | 0.7027 | 0.6933 | 0.6929 | 0.4065
    Carcinoma | VMInaive | 33 | 0.7443 | 0.8784 | 0.7487 | 0.7527 | 0.7441 | 0.4731
    Carcinoma | AMID | 42 | 0.7307 | 0.8878 | 0.7295 | 0.7165 | 0.7194 | 0.4841
    Carcinoma | AMID-DWSFS | 38 | 0.7412 | 0.6231 | 0.7558 | 0.7447 | 0.7457 | 0.4255
    Carcinoma | CFR | 33 | 0.7054 | 0.6216 | 0.7514 | 0.74 | 0.7410 | 0.5315
    Carcinoma | FSSC-SD | 21 | 0.7306 | 0.8716 | 0.7039 | 0.7016 | 0.6992 | 0.4344

    The Accuracy, AUC, recall, precision, F-measure and F2-measure results in Table 7 show that the feature subsets selected by the proposed GDFS+SFFS are outperformed by DFS+SFFS only on the AUC of Prostate, the recall of Gas2 and the F2-measure of Carcinoma; on the other five metrics of those three datasets, and on all six metrics of the other three gene datasets, GDFS+SFFS is better than the original DFS+SFFS. In terms of subset size, GDFS+SFFS selects slightly more features than DFS+SFFS only on Carcinoma; on the other datasets its subsets are no larger than those of DFS+SFFS. It can therefore be said that the proposed GDFS criterion is superior to the original DFS and selects small feature subsets with strong classification ability.

    Moreover, the ELM classifiers on the subsets selected by GDFS+SFFS attain the best precision and F2-measure on 5 of the 6 datasets, the best F-measure on 4 of 6, and the best AUC and recall on 3/6 and 2/6 datasets, respectively. The compared algorithm VMInaive surpasses the other compared algorithms in AUC, recall and F-measure on Colon, its AUC and recall both reaching the maximum value 1, but its F2-measure is then 0, meaning it misclassifies all negative test samples as positive. CFR has the same problem on Colon: the recall of the ELM classifier on its selected subset reaches the maximum value 1 while the F2-measure is 0, again because all negative test samples are misclassified as positive. Overall, the results of Table 7 show that the feature subsets selected by GDFS+SFFS classify best among all 13 algorithms.

    The above analysis shows that the proposed GDFS criterion is better than the original DFS criterion and can select smaller feature subsets with better classification ability; moreover, the subsets selected by GDFS classify better than those selected by Relief, DRJMIM, mRMR, LLE Score, AVC, SVM-RFE, VMInaive, AMID, AMID-DWSFS, CFR and FSSC-SD.

    To test whether the proposed GDFS+SFFS differs with statistical significance from the compared feature selection algorithms Relief, DRJMIM, mRMR, LLE Score, AVC, SVM-RFE, VMInaive, AMID, AMID-DWSFS, CFR, FSSC-SD and DFS+SFFS, the Friedman test is used to detect differences among the algorithms [49-51]. Once the Friedman test detects a significant difference among the algorithms, the Nemenyi post-hoc test examines whether each pair of algorithms differs with statistical significance. According to the Nemenyi test, at a given significance level $\alpha $, if the mean-rank difference between the two algorithms of any pair is smaller than the critical difference CD, the null hypothesis "the two algorithms perform the same" is accepted with confidence $1{\rm{ - }}\alpha $; otherwise the null hypothesis is rejected and the two algorithms are considered significantly different. The critical difference is $CD = {q_\alpha }\sqrt {\frac{{M\left( {M + 1} \right)}}{{6N}}} $, where M and N denote the numbers of algorithms and datasets, and ${q_\alpha }$ is obtained by table lookup. Table 8 gives the Friedman test results at $\alpha {\rm{ = }} 0.05$ for the Accuracy, AUC, recall, precision, F-measure and F2-measure of the ELM classifiers on the feature subsets selected by the algorithms.
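
    The statistics of Table 8 and the critical difference used below can be reproduced with standard tools; a sketch (ours, using SciPy; `scores` is an N-datasets by M-algorithms array holding one metric):

```python
import numpy as np
from scipy.stats import friedmanchisquare

def friedman_and_cd(scores, q_alpha=3.313):
    """Friedman chi-square over the algorithm columns plus the Nemenyi critical difference."""
    chi2, p = friedmanchisquare(*scores.T)       # one sequence per algorithm
    n, m = scores.shape                          # N datasets, M algorithms
    cd = q_alpha * np.sqrt(m * (m + 1) / (6 * n))
    return chi2, p, cd
```

    With M = 13 algorithms and N = 6 datasets this gives CD ≈ 7.449, the value used below.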

    Table 8  The Friedman test of the classification capability of the feature subsets selected by all algorithms

    Metric | Accuracy | AUC | recall | precision | F-measure | F2-measure
    ${\chi ^2}$ | 23.4094 | 27.5527 | 22.1585 | 29.2936 | 26.7608 | 32.5446
    df | 12 | 12 | 12 | 12 | 12 | 12
    p | 0.0244 | 0.0064 | 0.0358 | 0.0036 | 0.0084 | 0.0011

    The Friedman test results in Table 8 show that the p values for the Accuracy, AUC, recall, precision, F-measure and F2-measure of the ELM classifiers on the subsets selected by the algorithms are all below 0.05. We can therefore reject the null hypothesis that the feature selection algorithms perform identically: the classification performance of the subsets selected by the algorithms on the six gene datasets differs significantly.

    Given that significant differences exist among the algorithms, the Nemenyi post-hoc test is used to further verify whether the two algorithms of each pair differ significantly in performance. For $\alpha = 0.05$ and 13 algorithms, table lookup gives ${q_\alpha } = 3.313$, and $CD = $$ {q_\alpha }\sqrt {\frac{{M\left( {M + 1} \right)}}{{6N}}}$ yields the critical difference CD = 7.4491; at confidence level 0.95, the Nemenyi test results for each pair of algorithms on the Accuracy, AUC, recall, precision, F-measure and F2-measure of the ELM classifiers built on their selected feature subsets are shown in Fig. 5.

    The Nemenyi results in Fig. 5(a) show that GDFS does not differ significantly from the other compared algorithms in Accuracy. As is well known, because gene datasets are imbalanced, classification accuracy is no longer suitable for evaluating the classification performance of feature subsets [30]. Even so, Fig. 5(a) shows that differences between GDFS and the other 12 compared algorithms do exist, the largest being with DFS, over which GDFS is better. Fig. 5(b) shows that in AUC, GDFS differs significantly from LLE Score and CFR and is better than both; it does not differ significantly from the other 10 compared algorithms, though differences exist and GDFS performs best. Fig. 5(c) shows that in recall, GDFS differs significantly from SVM-RFE and not from the other compared algorithms, but the experimental results show that differences between GDFS and the other 11 feature selection algorithms exist, with GDFS performing best and better than DFS. Fig. 5(d) shows that in precision, GDFS differs significantly from LLE Score, SVM-RFE and CFR and is better than all three; it does not differ significantly from the other 9 compared algorithms, though differences exist, it is better than DFS, and it performs best among the 13 feature selection algorithms. Fig. 5(e) shows that in F-measure, GDFS differs significantly from LLE Score and SVM-RFE and is better than both; it does not differ significantly from the other 10 compared algorithms, though differences exist and GDFS performs best, better than DFS. Fig. 5(f) shows that in F2-measure, GDFS differs significantly from LLE Score, CFR, VMInaive and AMID-DWSFS and is better than all four; it does not differ significantly from the other 8 compared algorithms, though differences exist and GDFS performs best, better than DFS.

    The Nemenyi results in Fig. 5 also show that no pair among the compared algorithms DFS, Relief, DRJMIM, mRMR, LLE Score, AVC, SVM-RFE, VMInaive, AMID, AMID-DWSFS, CFR and FSSC-SD differs with statistical significance. Moreover, the proposed GDFS is better than DFS: although the difference is not statistically significant, the Nemenyi results of Fig. 5 reveal that the rank difference between GDFS and DFS exceeds 2.5 on all metrics except recall, and even on recall exceeds 1.5, showing that a difference between them exists despite the lack of statistical significance. This is consistent with the experimental results of Table 7.

    The statistical significance analysis above shows that the proposed GDFS feature subset discernibility criterion is better than the original DFS, and GDFS+SFFS is better than the 12 compared feature selection algorithms, selecting feature subsets with better classification performance. No two of the 12 compared algorithms differ significantly from each other. The feature subsets selected by the proposed GDFS criterion and the original DFS criterion differ in classification ability, with GDFS the better, though the difference is not statistically significant.

    Combining the 5-fold cross-validation results on the UCI machine learning datasets and the classic gene datasets above, the proposed GDFS criterion is an effective criterion for evaluating the discernibility of a feature subset: the experimental comparisons verify that feature selection algorithms based on it can select feature subsets with better classification performance, achieving dimensionality reduction while keeping the discernibility of the dataset unchanged.

    This paper proposed GDFS, a new criterion for evaluating the discernibility of a feature subset, which overcomes the defect of the DFS criterion in neglecting the influence of feature measurement scales on the discernibility of a feature subset. Combining GDFS with the SFS, SBS, SFFS and SBFS search strategies, with an ELM classifier guiding the selection process, four hybrid feature selection algorithms were proposed: GDFS+SFS, GDFS+SBS, GDFS+SFFS and GDFS+SBFS.

    The 5-fold cross-validation experiments on the UCI machine learning datasets and the classic gene datasets, together with the performance comparisons and statistical significance tests against DFS and the classic feature selection algorithms Relief, DRJMIM, mRMR, LLE Score, AVC, SVM-RFE, VMInaive, AMID, AMID-DWSFS, CFR and FSSC-SD, show that the proposed GDFS criterion is an effective measure of the discernibility of a feature subset: the subsets it selects are superior to those selected by DFS, Relief, DRJMIM, mRMR, LLE Score, AVC, SVM-RFE, VMInaive, AMID, AMID-DWSFS, CFR and FSSC-SD, and have better classification performance. The GDFS criterion reduces the dimensionality of the data while improving or preserving the discernibility of the dataset.

  • [1] Chen Xiao-Yun, Liao Meng-Zhen. Dimensionality reduction with extreme learning machine based on sparsity and neighborhood preserving. Acta Automatica Sinica, 2019, 45(2): 325-333 (in Chinese)
    [2] Xie J Y, Lei J H, Xie W X, Shi Y, Liu X H. Two-stage hybrid feature selection algorithms for diagnosing erythemato-squamous diseases. Health Information Science and Systems, 2013, 1: Article No. 10 doi: 10.1186/2047-2501-1-10
    [3] Xie Juan-Ying, Zhou Ying. A new criterion for clustering algorithm. Journal of Shaanxi Normal University (Natural Science Edition), 2015, 43(6): 1-8 (in Chinese)
    [4] Kou G, Yang P, Peng Y, Xiao F, Chen Y, Alsaadi F E. Evaluation of feature selection methods for text classification with small datasets using multiple criteria decision-making methods. Applied Soft Computing, 2020, 86: Article No. 105836 doi: 10.1016/j.asoc.2019.105836
    [5] Xue Y, Xue B, Zhang M J. Self-adaptive particle swarm optimization for large-scale feature selection in classification. ACM Transactions on Knowledge Discovery from Data, 2019, 13(5): Article No. 50
    [6] Zhang Y, Gong D W, Gao X Z, Tian T, Sun X Y. Binary differential evolution with self-learning for multi-objective feature selection. Information Sciences, 2020, 507: 67-85. doi: 10.1016/j.ins.2019.08.040
    [7] Nguyen B H, Xue B, Zhang M J. A survey on swarm intelligence approaches to feature selection in data mining. Swarm and Evolutionary Computation, 2020, 54: Article No. 100663 doi: 10.1016/j.swevo.2020.100663
    [8] Solorio-Fernández S, Carrasco-Ochoa J A, Martínez-Trinidad J F. A review of unsupervised feature selection methods. Artificial Intelligence Review, 2020, 53(2): 907-948 doi: 10.1007/s10462-019-09682-y
    [9] Karasu S, Altan A, Bekiros S, Ahmad W. A new forecasting model with wrapper-based feature selection approach using multi-objective optimization technique for chaotic crude oil time series.Energy, 2020, 212: Article No. 118750 doi: 10.1016/j.energy.2020.118750
    [10] Al-Tashi Q, Abdulkadir S J, Rais H, Mirjalili S, Alhussian H. Approaches to multi-objective feature selection: A systematic literature review. IEEE Access, 2020, 8: 125076-125096 doi: 10.1109/ACCESS.2020.3007291
    [11] Deng X L, Li Y Q, Weng J, Zhang J L. Feature selection for text classification: A review. Multimedia Tools and Applications, 2019, 78(3): 3797-3816 doi: 10.1007/s11042-018-6083-5
    [12] Jia He-Ming, Li Yao, Sun Kang-Jian. Simultaneous feature selection optimization based on hybrid sooty tern optimization algorithm and genetic algorithm. Acta Automatica Sinica, DOI: 10.16383/j.aas.c200322 (in Chinese)
    [13] Xie J Y, Wang C X. Using support vector machines with a novel hybrid feature selection method for diagnosis of erythemato-squamous diseases. Expert Systems With Applications, 2011, 38(5): 5809-5815 doi: 10.1016/j.eswa.2010.10.050
    [14] Bolón-Canedo V, Alonso-Betanzos A. Ensembles for feature selection: A review and future trends. Information Fusion, 2019, 52: 1-12 doi: 10.1016/j.inffus.2018.11.008
    [15] Kira K, Rendell L A. The feature selection problem: Traditional methods and a new algorithm. In: Proceedings of the 10th National Conference on Artificial Intelligence. San Jos, USA: AAAI Press, 1992. 129−134
    [16] Kononenko I. Estimating attributes: Analysis and extensions of RELIEF. In: Proceedings of the 7th European Conference on Machine Learning. Catania, Italy: Springer, 1994. 171−182
    [17] Liu H, Setiono R. Feature selection and classification — a probabilistic wrapper approach. In: Proceedings of the 9th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems. Fukuoka, Japan: Gordon and Breach Science Publishers, 1997. 419−424
    [18] Guyon I, Weston J, Barnhill S. Gene selection for cancer classification using support vector machines. Machine Learning, 2002, 46(1-3): 389-422
    [19] Peng H C, Long F H, Ding C. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(8): 1226-1238 doi: 10.1109/TPAMI.2005.159
    [20] Chen Y W, Lin C J. Combining SVMs with various feature selection strategies. Feature Extraction: Foundations and Applications. Berlin, Heidelberg: Springer, 2006. 315−324
    [21] Xie Juan-Ying, Wang Chun-Xia, Jiang Shuai, Zhang Yan. Feature selection method combining improved F-score and support vector machine. Journal of Computer Applications, 2010, 30(4): 993-996 doi: 10.3724/SP.J.1087.2010.00993 (in Chinese)
    [22] Xie Juan-Ying, Lei Jin-Hu, Xie Wei-Xin, Gao Xin-Bo. Hybrid feature selection methods based on D-score and support vector machine. Journal of Computer Applications, 2011, 31(12): 3292-3296 (in Chinese)
    [23] Xie Juan-Ying, Xie Wei-Xin. Several feature selection algorithms based on the discernibility of a feature subset and support vector machines. Chinese Journal of Computers, 2014, 37(8): 1704-1718 (in Chinese)
    [24] Li Jian-Geng, Pang Ze-Nan, Su Lei, Chen Si-Yuan. Feature selection method LLE score used for tumor gene expressive data. Journal of Beijing University of Technology, 2015, 41(8): 1145-1150 (in Chinese)
    [25] Roweis S T, Saul L K. Nonlinear dimensionality reduction by locally linear embedding. Science, 2000, 290(5500): 2323-2326 doi: 10.1126/science.290.5500.2323
    [26] Sun L, Wang J, Wei J M. AVC: Selecting discriminative features on basis of AUC by maximizing variable complementarity. BMC Bioinformatics, 2017, 18(Suppl 3): Article No. 50
    [27] Xie Juan-Ying, Wang Ming-Zhao, Hu Qiu-Feng. The differentially expressed gene selection algorithms for unbalanced gene datasets by maximize the area under ROC. Journal of Shaanxi Normal University (Natural Science Edition), 2017, 45(1): 13-22 (in Chinese)
    [28] Hu L, Gao W F, Zhao K, Zhang P, Wang F. Feature selection considering two types of feature relevancy and feature interdependency. Expert Systems With Applications, 2018, 93: 423-434 doi: 10.1016/j.eswa.2017.10.016
    [29] Sun L, Zhang X Y, Qian Y H, Xu J C, Zhang S G. Feature selection using neighborhood entropy-based uncertainty measures for gene expression data classification. Information Sciences, 2019, 502:18-41 doi: 10.1016/j.ins.2019.05.072
    [30] Xie Juan-Ying, Wang Ming-Zhao, Zhou Ying, Gao Hong-Chao, Xu Sheng-Quan. Differential expression gene selection algorithms for unbalanced gene datasets. Chinese Journal of Computers, 2019, 42(6): 1232-1251 doi: 10.11897/SP.J.1016.2019.01232 (in Chinese)
    [31] Li J D, Cheng K W, Wang S H, Morstatter F, Trevino R P, Tang J L, et al. Feature selection: A data perspective. ACM Computing Surveys, 2018, 50(6): Article No. 94
    [32] Liu Chun-Ying, Jia Jun-Ping. The Principles of Statistics. Beijing: China Commerce and Trade Press, 2008 (in Chinese)
    [33] Huang G B, Zhu Q Y, Siew C K. Extreme learning machine: Theory and applications. Neurocomputing, 2006, 70(1-3): 489-501 doi: 10.1016/j.neucom.2005.12.126
    [34] Frank A, Asuncion A. UCI machine learning repository [Online], available: http://archive.ics.uci.edu/ml, October 13, 2020
    [35] Chang C C, Lin C J. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2011, 2(3): Article No. 27
    [36] Hsu C W, Chang C C, Lin C J. A practical guide to support vector classification [Online], available: https://www.ee.columbia.edu/~sfchang/course/spr/papers/svm-practical-guide.pdf, March 11, 2021
    [37] Alon U, Barkai N, Notterman D A, Gish K, Ybarra S, Mack D, et al. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proceedings of the National Academy of Sciences of the United States of America, 1999, 96(12): 6745-6750 doi: 10.1073/pnas.96.12.6745
    [38] Singh D, Febbo P G, Ross K, Jackson D G, Manola J, Ladd C, et al. Gene expression correlates of clinical prostate cancer behavior. Cancer Cell, 2002, 1(2): 203-209 doi: 10.1016/S1535-6108(02)00030-2
    [39] Tian E M, Zhan F H, Walker R, Rasmussen E, Ma Y P, Barlogie B, et al. The role of the Wnt-signaling antagonist DKK1 in the development of osteolytic lesions in multiple myeloma. The New England Journal of Medicine, 2003, 349(26): 2483-2494 doi: 10.1056/NEJMoa030847
    [40] Wang G S, Hu N, Yang H H, Wang L M, Su H, Wang C Y, et al. Comparison of global gene expression of gastric cardia and noncardia cancers from a high-risk population in China. PLoS One, 2013, 8(5): Article No. e63826 doi: 10.1371/journal.pone.0063826
    [41] Li W Q, Hu N, Burton V H, Yang H H, Su H, Conway C M, et al. PLCE1 mRNA and protein expression and survival of patients with esophageal squamous cell carcinoma and gastric adenocarcinoma. Cancer Epidemiology, Biomarkers & Prevention, 2014, 23(8): 1579-1588
    [42] Khan J, Wei J S, Ringnér M, Saal L H, Ladanyi M, Westermann F, et al. Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nature Medicine, 2001, 7(6): 673-679 doi: 10.1038/89044
    [43] Gao S Y, Steeg G V, Galstyan A. Variational information maximization for feature selection. In: Proceedings of the 30th International Conference on Neural Information Processing Systems. Barcelona, Spain: Curran Associates, 2016. 487−495
    [44] Gao W F, Hu L, Zhang P, He J L. Feature selection considering the composition of feature relevancy. Pattern Recognition Letters, 2018, 112: 70-74 doi: 10.1016/j.patrec.2018.06.005
    [45] Xie Juan-Ying, Ding Li-Juan, Wang Ming-Zhao. Spectral clustering based unsupervised feature selection algorithms. Journal of Software, 2020, 31(4): 1009-1024 (in Chinese)
    [46] Muschelli III J. ROC and AUC with a binary predictor: A potentially misleading metric. Journal of Classification, 2020, 37(3): 696-708 doi: 10.1007/s00357-019-09345-1
    [47] Fawcett T. An introduction to ROC analysis. Pattern Recognition Letters, 2006, 27(8): 861-874 doi: 10.1016/j.patrec.2005.10.010
    [48] Bowers A J, Zhou X L. Receiver operating characteristic (ROC) area under the curve (AUC): A diagnostic measure for evaluating the accuracy of predictors of education outcomes. Journal of Education for Students Placed at Risk (JESPAR), 2019, 24(1): 20-46 doi: 10.1080/10824669.2018.1523734
    [49] Lu Shao-Wen, Wen Yi-Xin. Semi-supervised classification of semi-molten working condition of fused magnesium furnace based on image and current features. Acta Automatica Sinica, 2021, 47(4): 891-902 (in Chinese)
    [50] Xie J Y, Gao H C, Xie W X, Liu X H, Grant P W. Robust clustering by detecting density peaks and assigning points based on fuzzy weighted K-nearest neighbors. Information Sciences, 2016, 354: 19-40 doi: 10.1016/j.ins.2016.03.011
    [51] Xie Juan-Ying, Wu Zhao-Zhong, Zheng Qing-Quan. An adaptive 2D feature selection algorithm based on information gain and Pearson correlation coefficient. Journal of Shaanxi Normal University (Natural Science Edition), 2020, 48(6): 69-81 (in Chinese)
Publication history
  • Received:  2020-09-01
  • Revised:  2021-03-02
  • Available online:  2021-04-25
  • Issue date:  2022-05-13
