

Research Progress on Batch Normalization of Deep Learning and Its Related Algorithms

Liu Jian-Wei, Zhao Hui-Dan, Luo Xiong-Lin, Xu Jun

Xu Ren-Chao, Yan Wei-Wu, Wang Guo-Liang, Yang Jian-Cheng, Zhang Xi. Time series forecasting based on seasonality modeling and its application to electricity price forecasting. Acta Automatica Sinica, 2020, 46(6): 1136−1144 doi: 10.16383/j.aas.c180712
Citation: Liu Jian-Wei, Zhao Hui-Dan, Luo Xiong-Lin, Xu Jun. Research progress on batch normalization of deep learning and its related algorithms. Acta Automatica Sinica, 2020, 46(6): 1090−1120 doi: 10.16383/j.aas.c180564


doi: 10.16383/j.aas.c180564
Funds: Supported by the National Key Research and Development Program of China (2016YFC0303703) and the Prospective Orientation and Cultivation Project of China University of Petroleum (Beijing) (2462018QZDX02)
    Author biographies:

    LIU Jian-Wei Associate research fellow in the Department of Automation, China University of Petroleum (Beijing). Research interests include pattern recognition and intelligent systems, and advanced control. Corresponding author of this paper. E-mail: liujw@cup.edu.cn

    ZHAO Hui-Dan Master student in the Department of Automation, China University of Petroleum (Beijing), where the bachelor degree was received in 2016. Research interests include pattern recognition and intelligent systems. E-mail: zhaohuidan93@126.com

    LUO Xiong-Lin Professor in the Department of Automation, China University of Petroleum (Beijing). Research interests include intelligent control, and analysis, prediction, and control of complex systems. E-mail: luoxl@cup.edu.cn

    XU Jun Associate professor at the School of Mechanical Engineering and Automation, Harbin Institute of Technology (Shenzhen). Research interests include analysis, prediction, and control of complex nonlinear systems. E-mail: xujunqgy@hit.edu.cn

  • Abstract: Deep learning has been applied widely in fields such as computer vision and natural language processing, with results clearly superior to those of earlier machine learning algorithms. With the rapid development of information technology, training data keep growing toward large-scale datasets and deep neural networks keep growing larger, which makes training increasingly difficult and leaves both speed and accuracy to be improved. In 2015, Ioffe et al. pointed out a serious problem in training deep neural networks, internal covariate shift, which makes training sensitive to parameter initialization and slows convergence, and proposed batch normalization (BN) to reduce internal covariate shift and accelerate the convergence of training. Many networks now use BN as an important means of speeding up training. Given its practical value, this paper systematically surveys the research progress on BN and its related algorithms. The principle of BN is first analyzed in detail. Although BN is simple and practical, it also has problems, such as its dependence on the mini-batch size and the difference between its training-time and inference-time computations, and many researchers have therefore proposed related structures and algorithms; this paper analyzes and summarizes their principles, their advantages, and the main problems they address. The ways BN is applied in the various neural network domains are then summarized, together with other techniques commonly used to improve training performance. Finally, conclusions are drawn and future research directions for BN are discussed.
  • Time series data are ubiquitous in real life, for example trading data and economic statistics in finance, user browsing and purchase records in e-commerce, signal recordings of medical devices, and weather indicators collected by monitoring stations[1-4]. Such time series are valuable data resources in their respective fields, and their accurate and effective analysis and use helps reduce labor costs, raise productivity, and increase economic returns[5].

    Real-world time series usually exhibit complex nonlinear dynamics, which makes forecasting difficult. In addition, owing to human activity or natural processes, time series often show a certain periodicity, and extracting this periodicity benefits forecasting. The trend of a time series is equally important, and is sometimes even the very target of prediction.

    Traditional time series analysis originates from the autoregressive (AR) and moving average (MA) models. The autoregressive moving average model (ARMA) and the autoregressive integrated moving average model (ARIMA) developed from it are classical methods of time series analysis[6]. The autoregressive conditional heteroscedasticity model (ARCH)[7] and the generalized autoregressive conditional heteroscedasticity model (GARCH)[8] introduce heteroscedasticity to model the volatility of a series. Machine learning methods such as support vector regression (SVR) and kernel ridge regression (KRR) have been studied extensively for time series analysis[9-11]. With the development of deep learning, recurrent neural networks (RNNs) have become the mainstream approach to sequential data and have performed well in many applications[12-13]. The echo state network (ESN) is another common model for time series forecasting[14].

    Traditional time series methods rely on a stationarity assumption[15]; they predict short-term stationary series well but cannot model complex nonlinear series effectively. Machine learning methods such as SVR and KRR process the series with sliding windows, which ignores the sequential nature of the data and fails to capture long-range dependencies; they are also limited by model capacity and therefore hard-pressed to reach the best performance on nonlinear series. Deep neural networks, with their strong fitting ability, have a natural advantage in handling complex nonlinear time series, and even on segmented sequence windows they can still capture long-term dependencies.

    The energy sector produces large volumes of time series, such as electricity consumption, wind power, solar power, and electricity price data. Among these, the electricity price is a key factor in the energy market, affecting its circulation and operation. The liberalized nature of the market gives the electricity price the character of a financial commodity, but the price is also affected by transmission and demand: transmission is constrained by grid capacity, while demand depends on human activity and weather. Under these influences, electricity prices exhibit complex long-term nonlinear dynamics and high volatility, together with pronounced periodicity.

    Traditional forecasting methods have been studied and applied extensively for electricity prices[16], and recurrent neural networks have also achieved good results[17]. To exploit the periodicity of energy time series, traditional approaches mainly rely on signal processing, whereas standard recurrent networks contain no module that models periodicity. Much recent work targets the periodicity of the data: Clements et al.[18] proposed a multi-equation time series method with seasonal terms to forecast electricity load in the Australian energy market; Anbazhagan et al.[19] processed the raw series with the discrete cosine transform and modeled it with a neural network; Rafiei et al.[20] decomposed the raw series with the wavelet transform and modeled the components with an extreme learning machine.

    To address the difficulty recurrent networks have in modeling the periodicity of time series directly, this paper draws on the idea of seasonal decomposition and designs a seasonal loss and a trend loss for recurrent neural networks, which provide auxiliary modeling of the seasonal and trend components. Within a multi-task learning framework, the seasonal loss, the trend loss, and the model's own loss function are combined to jointly optimize the recurrent network. The proposed model is applied to electricity price forecasting in the energy market; the experiments achieve good prediction accuracy, confirm the value of periodicity and trend for time series forecasting, and show that the proposed seasonal and trend losses effectively help the model capture the characteristics of the data.

    Recurrent neural networks are the standard neural models for time series and are trained with back-propagation through time (BPTT).

    A basic recurrent neural network and its unfolded form are shown in Fig. 1. Its mathematical form is

    Fig. 1  The unfolded structure of a basic recurrent neural network
    $$ h_t = f\left( {Ux_t +Wh_{t-1} } \right) $$ (1)
    $$ o_t = g\left( {Vh_t } \right) \quad \quad \quad \quad \;\; $$ (2)

    where $ U $, $ V $, and $ W $ are the network parameters, $ f $ and $ g $ are activation functions such as ReLU, sigmoid, or tanh, $ x_t $ is the input at time $ t $, $ h_t $ is the hidden state at time $ t $, and $ o_t $ is the output at time $ t $.
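    For concreteness, the recurrence (1)-(2) can be written as a short NumPy sketch; this is an added illustration, not code from the paper, and the choice of $ f = \tanh $ and $ g $ = identity as well as the array shapes are assumptions.

        import numpy as np

        def rnn_forward(x_seq, U, W, V, h0):
            # Run the basic RNN of (1)-(2) over a sequence.
            # x_seq: (T, d_in) inputs; h0: (d_h,) initial hidden state.
            h, outputs = h0, []
            for x_t in x_seq:
                h = np.tanh(U @ x_t + W @ h)   # eq. (1), with f = tanh (assumed)
                outputs.append(V @ h)          # eq. (2), with g = identity (assumed)
            return np.stack(outputs), h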

    A basic recurrent neural network suffers from exploding or vanishing gradients during training[21].

    To overcome the exploding- and vanishing-gradient problems, Hochreiter et al.[22] proposed the long short-term memory network (LSTM). Its internal structure is shown in Fig. 2.

    Fig. 2  Schematic diagram of the internal structure of LSTM

    The key idea of LSTM is to introduce gating units and a long-term memory cell. The gates control how the state stored in the long-term memory cell is remembered, modified, and forgotten, while LSTM retains the same short-term memory as the basic recurrent network.

    The mathematical formulation of LSTM is

    $$ f_t = \sigma \left( {W_f \left[ {h_{t-1} ,x_t } \right]+b_f } \right) \quad \quad \!\!\!$$ (3)
    $$ i_t = \sigma \left( {W_i \left[ {h_{t-1} ,x_t } \right]+b_i } \right) \quad \quad $$ (4)
    $$ o_t = \sigma \left( {W_o \left[ {h_{t-1} ,x_t } \right]+b_o } \right) \quad \quad \!\!\!$$ (5)
    $$ \tilde {C}_t = \tanh \left( {W_C \left[ {h_{t-1} ,x_t } \right]+b_C } \right) $$ (6)
    $$ C_t = i_t \tilde {C}_t +f_t C_{t-1} \quad \quad \quad \quad \quad \!\!\! $$ (7)
    $$ h_t = o_t \tanh \left( {C_t } \right) \quad \quad \quad \quad \quad\quad $$ (8)

    where $ C_t $ and $ \tilde {C}_t $ are the long-term and short-term memories at time $ t $; $ f_t $, $ i_t $, and $ o_t $ are the values of the forget, input, and output gates at time $ t $; $ \left( {W_f ,b_f } \right) $, $ \left( {W_i ,b_i } \right) $, $ \left( {W_o ,b_o } \right) $, and $ \left( {W_C ,b_C } \right) $ are the parameters of the forget gate, input gate, output gate, and long-term memory cell; $ \sigma $ is the sigmoid function; $ x_t $ is the input and $ h_t $ the hidden-state output at time $ t $.

    Because of the gating units and the long-term memory cell, the state update changes from a multiplicative to an additive increment[23], which to some extent avoids the vanishing-gradient problem.
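    The additive update in (7) is the key line of the following single-step sketch of (3)-(8); the sketch is illustrative only, and the dictionary layout of the parameters is an assumption rather than the paper's implementation.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def lstm_step(x_t, h_prev, C_prev, p):
            # One LSTM step following (3)-(8); p maps names such as "W_f", "b_f"
            # to the corresponding weight matrices and bias vectors.
            z = np.concatenate([h_prev, x_t])                 # [h_{t-1}, x_t]
            f_t = sigmoid(p["W_f"] @ z + p["b_f"])            # forget gate, eq. (3)
            i_t = sigmoid(p["W_i"] @ z + p["b_i"])            # input gate, eq. (4)
            o_t = sigmoid(p["W_o"] @ z + p["b_o"])            # output gate, eq. (5)
            C_tilde = np.tanh(p["W_C"] @ z + p["b_C"])        # candidate memory, eq. (6)
            C_t = i_t * C_tilde + f_t * C_prev                # additive update, eq. (7)
            h_t = o_t * np.tanh(C_t)                          # hidden state, eq. (8)
            return h_t, C_t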

    The gated recurrent unit (GRU)[24] is an important variant of LSTM. Compared with LSTM, GRU is simplified: the forget and input gates are merged into a single update gate; the long-term memory cell is removed and fused with the hidden state, keeping only the short-term memory; and the output gate is replaced by a reset gate. The internal structure of GRU is shown in Fig. 3.

    Fig. 3  Schematic diagram of the internal structure of GRU

    The mathematical formulation of GRU is

    $$ z_t = \sigma \left( {W_z \left[ {h_{t-1} ,x_t } \right]} \right) \quad \quad \quad $$ (9)
    $$ r_t = \sigma \left( {W_r \left[ {h_{t-1} ,x_t } \right]} \right) \quad \quad \quad $$ (10)
    $$ \tilde {h}_t = \tanh \left( {W\left[ {r_t \times h_{t-1} ,x_t } \right]} \right) $$ (11)
    $$ h_t = \left( {1-z_t } \right)h_{t-1} +z_t \tilde {h}_t \quad \quad $$ (12)

    where $ \tilde {h}_t $ is the short-term memory and $ h_t $ the hidden state at time $ t $; $ z_t $ and $ r_t $ are the values of the update and reset gates at time $ t $; and $ W_z $, $ W_r $, and $ W $ are the parameters of the update gate, the reset gate, and the short-term memory unit.

    GRU greatly reduces the number of parameters: with the same hidden-state size, a GRU has roughly two thirds as many parameters as an LSTM, which makes the model less prone to overfitting. GRU is used as the default recurrent network in this paper.
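    A matching single-step sketch of the GRU equations (9)-(12) is given below; it is again an added illustration under the same assumptions (no bias terms, concatenated $ [h_{t-1}, x_t] $ input), not the implementation used in the experiments.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def gru_step(x_t, h_prev, W_z, W_r, W):
            # One GRU step following (9)-(12), without bias terms as in the equations.
            zx = np.concatenate([h_prev, x_t])
            z_t = sigmoid(W_z @ zx)                                     # update gate, eq. (9)
            r_t = sigmoid(W_r @ zx)                                     # reset gate, eq. (10)
            h_tilde = np.tanh(W @ np.concatenate([r_t * h_prev, x_t]))  # short-term memory, eq. (11)
            return (1.0 - z_t) * h_prev + z_t * h_tilde                 # new hidden state, eq. (12)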

    Drawing on seasonal decomposition of time series, we design a seasonal loss and a trend loss for recurrent neural networks, combine them with the model's own loss function, and jointly optimize the recurrent network within a multi-task learning framework.

    Seasonal decomposition is a classical way of modeling the seasonal and trend components of a time series.

    Seasonality modeling means explicitly accounting for the cross-period relations of a periodic series. One formal statement of periodicity is

    $$ \begin{split} & ACF\left( {x_t ,x_{t-T} } \right)\gg ACF\left( {x_t ,x_{t-\tau } } \right) \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \!\!\!\! \forall \tau : {\rm mod} \left( {\tau ,T} \right)\ne 0 \end{split} $$ (13)

    where $ ACF $ denotes the autocorrelation function: for a periodic series, the autocorrelation at lags that are multiples of the period should be far larger than at any other lag.

    Trend modeling means that, besides the value at each individual time point, the model should also capture the "overall state" of a span of time, including the general direction of movement and the range of variation.

    A time series can be regarded as a combination of trend, seasonal, and residual components. The decomposition can be additive or multiplicative, i.e.,

    $$ X_t = T_t +S_t +R_t $$ (14)
    $$ X_t = T_t \times S_t \times R_t $$ (15)

    where $ X_t $ is the original series, $ T_t $ the trend component, $ S_t $ the seasonal component, and $ R_t $ the residual. A multiplicative combination can be turned into an additive one by taking logarithms, so the two forms are essentially equivalent.
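    As a small illustration of the additive form (14) (not a procedure taken from the paper), the three components can be estimated with a centered moving average for the trend and per-phase averages of the detrended series for the seasonal component:

        import numpy as np

        def additive_decompose(x, period):
            # Rough additive decomposition x = trend + seasonal + residual, eq. (14).
            x = np.asarray(x, dtype=float)
            kernel = np.ones(period) / period
            trend = np.convolve(x, kernel, mode="same")     # moving-average trend estimate
            detrended = x - trend
            one_period = np.array([detrended[i::period].mean() for i in range(period)])
            seasonal = np.tile(one_period, len(x) // period + 1)[:len(x)]
            residual = x - trend - seasonal
            return trend, seasonal, residual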

    The seasonal decomposition of time series motivates the seasonal and trend loss functions proposed below.

    The seasonal loss encourages the hidden states predicted by the recurrent network to be periodic. If the hidden states of a recurrent network are periodic, they must satisfy

    $$ \begin{split} {\rm const} =\; & h_t +h_{t+1} +\cdots +h_{t+T-1}\cong \\ & h_{t+1} +h_{t+2} +\cdots +h_{t+T} \end{split} $$ (16)

    which implies

    $$ h_t \cong h_{t+T} $$ (17)

    To introduce this relation between hidden states into the recurrent network, the following seasonal loss is defined on the time window $ \left[ {t_1 ,t_2 ,\cdots ,t_n } \right] $:

    $$ \ell _{\rm seasonal} \left( \tau \right) = \frac{1}{n}\sum\limits_{t = t_1 }^{t_n } {D\left( {h_t ,h_{t+\tau } } \right)} $$ (18)

    where $ D\left( {\cdot ,\cdot } \right) $ is a distance measure and $ \tau $ is the time span between hidden states.

    For a series with period $ T $, one usually takes $ \tau = kT $. Setting $ \tau = 1 $ is also possible, in which case (18) can be viewed as a regularizer for the recurrent network. $ D\left( {\cdot ,\cdot } \right) $ is normally the squared distance, and the seasonal loss then becomes

    $$ \ell _{\rm seasonal} \left( {kT} \right) = \frac{1}{nm}\sum\limits_{t = t_1 }^{t_n } {\sum\limits_{i = 1}^m {\left\| {h_t^{\left( i \right)} -h_{t+kT}^{\left( i \right)} } \right\|^2} } $$ (19)

    where $ m $ is the dimension of the hidden state of the recurrent network and $ h_t^{\left( i \right)} $ is the value of its $ i $-th component at time $ t $. A longer period span encourages long-range periodic stationarity.
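    A sketch of the seasonal loss (19) on an array of hidden states is shown below; averaging only over the pairs $ (h_t ,h_{t+kT}) $ available inside the window is a simplifying assumption of this illustration.

        import numpy as np

        def seasonal_loss(h, k, T):
            # Seasonal loss of (19): mean squared difference between hidden states kT apart.
            # h: (n, m) array of hidden states over the time window.
            h = np.asarray(h, dtype=float)
            lag = k * T
            diff = h[:-lag] - h[lag:]       # h_t - h_{t+kT} for the pairs inside the window
            return np.mean(diff ** 2)       # averaged over time steps and hidden dimensions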

    In many practical applications the exact value at every time point is not the main concern; what matters is capturing some trend. The trend of a time series can be reflected by the mean, the maximum and minimum, and the volatility over a span of time.

    Modeling the trend on the time window $ \left[ {t_1 ,t_2 ,\cdots ,t_n } \right] $ from the viewpoint of the mean, the extrema, and the volatility gives the following trend losses:

    $$ \begin{split} \ell _{\rm trend}^{\rm MEAN} \left( w \right) =\; & \frac{1}{n}\sum\limits_{t = t_1 }^{t_n } {\left\| {mean\left( {\hat {y}_{t-w+1} ,\cdots ,\hat {y}_t } \right)} \right.}- \\ & \left. {mean\left( {y_{t-w+1} ,\cdots ,y_t } \right)} \right\|^2 \end{split} $$ (20)
    $$ \begin{split} \ell _{\rm trend}^{\rm MAX} \left( w \right) =\; & \frac{1}{n}\sum\limits_{t = t_1 }^{t_n } {\left\| {\max \left( {\hat {y}_{t-w+1} ,\cdots ,\hat {y}_t } \right)} \right.} - \\ & \left. {\max \left( {y_{t-w+1} ,\cdots ,y_t } \right)} \right\|^2 \end{split}\;\;\;\; $$ (21)
    $$ \begin{split} \ell _{\rm trend}^{\rm MIN} \left( w \right) =\; & \frac{1}{n}\sum\limits_{t = t_1 }^{t_n } {\left\| {\min \left( {\hat {y}_{t-w+1} ,\cdots ,\hat {y}_t } \right)} \right.} -\\ &\left. {\min \left( {y_{t-w+1} ,\cdots ,y_t } \right)} \right\|^2 \end{split} $$ (22)
    $$ \begin{split} \ell _{\rm trend}^{\rm VAR} \left( w \right) =\; & \frac{1}{n}\sum\limits_{t = t_1 }^{t_n } {\left\| {var\left( {\hat {y}_{t-w+1} ,\cdots ,\hat {y}_t } \right)} \right.} - \\ & \left. {var\left( {y_{t-w+1} ,\cdots ,y_t } \right)} \right\|^2 \end{split} $$ (23)

    where $ \ell _{\rm trend}^{\rm MEAN} $, $ \ell _{\rm trend}^{\rm MAX} $, $ \ell _{\rm trend}^{\rm MIN} $, and $ \ell _{\rm trend}^{\rm VAR} $ denote the mean, maximum, minimum, and volatility trend losses; $ w $ is the trend window width, which in general differs from the sequence window width $ n $; $ \hat {y} $ is the prediction of the recurrent network and $ y $ the ground truth.
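    The four trend losses (20)-(23) can be sketched as follows; for simplicity the illustration evaluates them on non-overlapping windows of width $ w $, whereas the equations slide the window over every time step.

        import numpy as np

        def trend_losses(y_hat, y_true, w):
            # Mean, max, min, and volatility trend losses of (20)-(23).
            n = (len(y_true) // w) * w
            yh = np.asarray(y_hat, dtype=float)[:n].reshape(-1, w)
            yt = np.asarray(y_true, dtype=float)[:n].reshape(-1, w)
            mean_loss = np.mean((yh.mean(axis=1) - yt.mean(axis=1)) ** 2)   # eq. (20)
            max_loss  = np.mean((yh.max(axis=1)  - yt.max(axis=1)) ** 2)    # eq. (21)
            min_loss  = np.mean((yh.min(axis=1)  - yt.min(axis=1)) ** 2)    # eq. (22)
            var_loss  = np.mean((yh.var(axis=1)  - yt.var(axis=1)) ** 2)    # eq. (23)
            return mean_loss, max_loss, min_loss, var_loss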

    Machine learning usually focuses on a single optimization objective. Multi-task learning[25] learns several related tasks simultaneously while sharing some underlying feature representations across the tasks, and it is very common in deep learning.

    In general, a multi-task learning framework can either optimize several objectives jointly or optimize them separately in alternation. Joint optimization suits closely related tasks, whereas alternating optimization suits unrelated or adversarial tasks. The periodicity and trend constraints are auxiliary tasks closely tied to the forecasting task, so adding the seasonal and trend losses to the objective and optimizing them jointly is reasonable.

    Following the multi-task learning framework, the seasonal loss and the trend loss are treated as joint objectives of the forecasting task: the auxiliary losses are added to the original loss function, giving the objective of the multi-task time series model

    $$ J = L_{\rm main} +\lambda _S \ell _{\rm seasonal} +\ell _{\rm trend} \quad \quad \quad \;$$ (24)
    $$ \begin{split} \ell _{\rm trend} =\; & \lambda _T^{\rm MEAN} \ell _{\rm trend}^{\rm MEAN} +\lambda _T^{\rm MAX} \ell _{\rm trend}^{\rm MAX}+ \\ & \lambda _T^{\rm MIN} \ell _{\rm trend}^{\rm MIN} +\lambda _T^{\rm VAR} \ell _{\rm trend}^{\rm VAR} \end{split} \;\;\;\; $$ (25)

    where $ \lambda _S $ is the weight of the seasonal loss and $ \lambda _T^{\rm MEAN} $, $ \lambda _T^{\rm MAX} $, $ \lambda _T^{\rm MIN} $, and $ \lambda _T^{\rm VAR} $ are the weights of the corresponding trend losses.

    A recurrent network with the seasonal and trend losses can still be trained with the BPTT algorithm.
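    Reusing the seasonal_loss and trend_losses sketches above, the joint objective (24)-(25) can be assembled as follows; this is an illustrative combination, and the weight dictionary lam_T is an assumed interface rather than the paper's code.

        def total_loss(main_loss, h, y_hat, y_true, lam_S, lam_T, k=1, T=24, w=24):
            # Multi-task objective of (24)-(25): main loss plus weighted auxiliary losses.
            # lam_T is a dict with keys "MEAN", "MAX", "MIN", "VAR".
            l_seasonal = seasonal_loss(h, k, T)                          # eq. (19)
            l_mean, l_max, l_min, l_var = trend_losses(y_hat, y_true, w)
            l_trend = (lam_T["MEAN"] * l_mean + lam_T["MAX"] * l_max
                       + lam_T["MIN"] * l_min + lam_T["VAR"] * l_var)    # eq. (25)
            return main_loss + lam_S * l_seasonal + l_trend              # eq. (24)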

    The European Power Exchange (EPEX) is the most important energy trading market in Europe, covering Germany, France, the United Kingdom, and other major European countries. Electricity is exchanged between national grids, partly through pre-arranged schedules and partly through the exchange. Each country has its own submarket, for example the EPEX French market, whose spot market consists of three submarkets: the day-ahead market, the intraday market, and the balancing market. Most actual trading takes place in the day-ahead market, which for the EPEX French market normally produces 24 quotes per day, one for each hour.

    Fig. 4 shows the mean and standard deviation of the price at each hour of the day in the EPEX French market over 2012–2015. The standard deviation is very high, exceeding a quarter of the mean, and during some hours (9:00–11:00) it even exceeds the mean, which shows that electricity prices in this market are extremely volatile. Figs. 5 and 6 show the yearly average price at daily and weekly resolution for 2012–2015. Whether the prices are aggregated by day or by week, every year follows a very similar pattern, showing clear periodicity.

    Fig. 4  High volatility of the electricity price
    Fig. 5  Daily seasonality of the electricity price
    Fig. 6  Weekly seasonality of the electricity price

    The electricity price data of the EPEX French day-ahead market are used for the simulations, covering January 1, 2012 to June 30, 2016. The data from January 1, 2012 to December 31, 2015 (1461 days) are used for training and validation, and the data from January 1, 2016 to June 30, 2016 (182 days) for testing. Each day normally has 24 time points (23 on the switch to daylight saving time and 25 on the switch back).

    To stabilize the data, they are preprocessed to remove the effects of the mean and the volatility:

    $$ \tilde {x}_t^h = \frac{x_t^h -\bar {x}_t }{std\left( x \right)} $$ (26)

    where $ x $ denotes the whole series, $ h $ indexes the year (2012–2016), $ x_t^h $ is the value at time point $ t $ in year $ h $, $ \bar {x}_t $ is the mean of the sub-series obtained by collecting the values at time point $ t $ across years, and $ \tilde {x}_t^h $ is the processed value. After this transformation most of the data lie in the interval [−1, 1], which avoids numerical problems.
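    A sketch of the preprocessing step (26) is given below; indexing each value by its position within the year, so that the mean is taken across years at the same time point, is an assumption of this illustration.

        import numpy as np

        def standardize_by_timepoint(x, time_idx):
            # Preprocessing of (26): for each time point t, subtract the mean of that
            # time point across years, then divide by the std of the whole series.
            x = np.asarray(x, dtype=float)
            time_idx = np.asarray(time_idx)
            x_tilde = np.empty_like(x)
            global_std = x.std()
            for t in np.unique(time_idx):
                mask = time_idx == t
                x_tilde[mask] = (x[mask] - x[mask].mean()) / global_std
            return x_tilde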

    The experiments compare several basic recurrent networks: RNN, LSTM, and GRU. To analyze the effect of the seasonal and trend losses, the seasonal loss, the trend loss, or both are added on top of the GRU.

    The hyperparameters of the recurrent networks are the hidden size, the optimizer, the initial learning rate, the batch size, the number of training epochs, and the lag window width. The GRU with seasonal and trend losses additionally requires the seasonal loss weight $ \lambda _S $, the period span $ kT $ in the seasonal loss, the trend loss weights ($ \lambda _T^{\rm{VAR}} $, $ \lambda _T^{\rm{MEAN}} $, $ \lambda _T^{\rm{MAX}} $, $ \lambda _T^{\rm{MIN}} $), and the trend window width $ w $. Since only the short-term period of the energy market is considered, $ k $ is set to 1 and $ T $ to 24. Because price forecasting in the energy market is rarely concerned with errors in the daily average price or in the volatility, $ \lambda _T^{\rm{MEAN}} $ and $ \lambda _T^{\rm{VAR}} $ are preset to 0 and excluded from hyperparameter selection. The daily maximum and minimum are of particular interest, so the trend window width $ w $ is set to 24. The last month of the training period (December 2015) serves as the validation period, and the hyperparameters are chosen by repeated experiments on it; the selected values are then used for training on the full training period. The hyperparameter settings of the recurrent networks are listed in Table 1.

    Table 1  Hyperparameter settings of the recurrent neural networks
    Hyperparameter | Value
    Hidden size | 64
    Optimizer | RMSProp, with gradient clipping
    Initial learning rate | 0.001
    Batch size | 64
    Training epochs | 12
    Lag window width | 14

    Table 2 lists reasonable ranges for the weights of the seasonal and trend losses.

    Table 2  Weight ranges of the seasonal loss and trend losses
    Weight | Range
    $\lambda_S $ | 0.05–0.15
    $\lambda_T^{\rm{MEAN}} $ | 0
    $\lambda_T^{\rm{MAX}} $ | 0.05–0.1
    $\lambda_T^{\rm{MIN}} $ | 0.05–0.1
    $\lambda_T^{\rm{VAR}} $ | 0

    Besides the basic recurrent networks, ARIMA, SVR, and KRR are chosen as baselines. The hyperparameters of SVR and KRR are selected directly by 5-fold cross-validation on the training period, and both use a third-order polynomial kernel. To satisfy the stationarity requirement of the ARIMA model, the price data receive additional processing: the minimum is subtracted, the logarithm is taken, and then daily and weekly lagged differencing is applied:

    $$ {p}' = (1-\beta ^{24})(1-\beta ^{168})\ln (p-\min p+1) $$ (27)
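    Interpreting $ \beta $ in (27) as the lag (backshift) operator, the transformation can be sketched as follows (an added illustration, not code from the paper):

        import numpy as np

        def arima_preprocess(p):
            # Transformation of (27): shift by the minimum, take logs, then apply
            # daily (lag 24) and weekly (lag 168) differencing.
            p = np.asarray(p, dtype=float)
            q = np.log(p - p.min() + 1.0)       # ln(p - min p + 1)
            q = q[24:] - q[:-24]                # (1 - beta^24): daily differencing
            q = q[168:] - q[:-168]              # (1 - beta^168): weekly differencing
            return q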

    The root mean square error (RMSE) and the mean absolute error (MAE) are used as evaluation metrics:

    $$ RMSE = \sqrt {\frac{1}{N}\sum\limits_{t = 1}^N {\left( {\hat {y}_t -y_t } \right)^2} } $$ (28)
    $$ MAE = \frac{1}{N}\sum\limits_{t = 1}^N {\left| {\hat {y}_t -y_t } \right|} \quad \quad \quad $$ (29)

    where $ N $ is the number of time points in the test period. RMSE is also the training loss of every model.

    To compare the influence of the trend losses more fairly, two additional metrics are introduced, the maximum-trend absolute error $ {MAE}^{\rm{MAX}} $ and the minimum-trend absolute error $ {MAE}^{\rm{MIN}} $, which measure how well a model predicts the daily maximum and minimum of the electricity price:

    $$ \begin{split} MAE^{\rm{MAX}} =\; & \frac{1}{M}\sum\limits_{t = 1}^M {\left| {\max \left( {\hat {y}_{t-w+1} ,\cdots ,\hat {y}_t } \right)} \right.}- \\ & \left. {\max \left( {y_{t-w+1} ,\cdots ,y_t } \right)} \right| \\ \end{split} $$ (30)
    $$ \begin{split} MAE^{\rm{MIN}} =\; & \frac{1}{M}\sum\limits_{t = 1}^M {\left| {\min \left( {\hat {y}_{t-w+1} ,\cdots ,\hat {y}_t } \right)} \right.}- \\ & \left. {\min \left( {y_{t-w+1} ,\cdots ,y_t } \right)} \right| \\ \end{split} $$ (31)

    where $ M $ is the number of trend windows in the test period.
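    All four evaluation metrics (28)-(31) are straightforward to compute; the sketch below uses non-overlapping trend windows of width $ w $, which is an assumption of this illustration.

        import numpy as np

        def evaluate(y_hat, y_true, w=24):
            # RMSE and MAE of (28)-(29) plus the max/min trend absolute errors of (30)-(31).
            y_hat = np.asarray(y_hat, dtype=float)
            y_true = np.asarray(y_true, dtype=float)
            err = y_hat - y_true
            rmse = np.sqrt(np.mean(err ** 2))                               # eq. (28)
            mae = np.mean(np.abs(err))                                      # eq. (29)
            n = (len(y_true) // w) * w
            yh, yt = y_hat[:n].reshape(-1, w), y_true[:n].reshape(-1, w)
            mae_max = np.mean(np.abs(yh.max(axis=1) - yt.max(axis=1)))      # eq. (30)
            mae_min = np.mean(np.abs(yh.min(axis=1) - yt.min(axis=1)))      # eq. (31)
            return rmse, mae, mae_max, mae_min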

    Because neural network training is stochastic, the result differs from run to run. To make the results trustworthy, every recurrent network is trained 10 times and the mean and standard deviation of the test results are reported. The electricity price forecasting results of all methods are given in Table 3.

    Table 3  Comparison of electricity price forecasting results of different methods
    Model | RMSE | MAE | ${MAE}^{\rm{MAX}}$ | ${MAE}^{\rm{MIN}}$
    ARIMA | 6.41 | 4.77 | 5.15 | 4.82
    SVR | 4.91 | 3.71 | 4.27 | 3.34
    KRR | 5.14 | 3.75 | 3.81 | 3.78
    RNN | 5.09±0.24 | 3.75±0.19 | 3.72±0.28 | 3.78±0.19
    LSTM | 4.90±0.18 | 3.65±0.17 | 3.65±0.42 | 3.61±0.26
    GRU | 4.83±0.19 | 3.54±0.06 | 3.64±0.31 | 3.56±0.26
    GRU, $\lambda_S$ = 0.1, $\lambda_T^{\rm{MAX}}$ = 0, $\lambda_T^{\rm{MIN}}$ = 0 | 4.71±0.16 | 3.49±0.13 | 3.53±0.28 | 3.53±0.15
    GRU, $\lambda_S$ = 0.05, $\lambda_T^{\rm{MAX}}$ = 0, $\lambda_T^{\rm{MIN}}$ = 0 | 4.74±0.11 | 3.45±0.18 | 3.53±0.23 | 3.48±0.26
    GRU, $\lambda_S$ = 0, $\lambda_T^{\rm{MAX}}$ = 0.1, $\lambda_T^{\rm{MIN}}$ = 0.1 | 4.85±0.16 | 3.57±0.20 | 3.41±0.26 | 3.41±0.18
    GRU, $\lambda_S$ = 0, $\lambda_T^{\rm{MAX}}$ = 0.05, $\lambda_T^{\rm{MIN}}$ = 0.05 | 4.83±0.11 | 3.54±0.08 | 3.39±0.18 | 3.42±0.15
    GRU, $\lambda_S$ = 0.1, $\lambda_T^{\rm{MAX}}$ = 0.1, $\lambda_T^{\rm{MIN}}$ = 0.1 | 4.68±0.08 | 3.45±0.03 | 3.35±0.13 | 3.33±0.12
    GRU, $\lambda_S$ = 0.05, $\lambda_T^{\rm{MAX}}$ = 0.05, $\lambda_T^{\rm{MIN}}$ = 0.05 | 4.60±0.15 | 3.34±0.12 | 3.38±0.13 | 3.27±0.11

    The results show that for a complex forecasting problem such as electricity prices, ARIMA performs poorly; SVR, KRR, and RNN perform similarly and far better than ARIMA; LSTM and GRU outperform SVR and KRR; and the plain GRU is the best of the baselines. Adding the seasonal loss clearly helps the GRU, lowering its prediction error. The most direct effect of the trend losses is a marked reduction of the maximum- and minimum-trend errors, which shows that the model captures the trend information within the time window (the extrema, mean, and rate of change of the price) and strongly complements its ability to capture the characteristics of the data. It is also worth noting that the seasonal loss alone already lowers the maximum- and minimum-trend errors, suggesting that it helps the network capture some essential characteristics of the data. Overall, introducing the seasonal and trend losses together brings the GRU to its best performance.

    The predictions of the GRU with seasonal and trend losses under the best setting ($ \lambda _S = \lambda _T^{\rm{MAX}} = \lambda _T^{\rm{MIN}} = 0.05 $) are shown in Fig. 7.

    Fig. 7  Electricity price forecasting results of the GRU with seasonal and trend losses

    The GRU with seasonal and trend losses forecasts the electricity price well: the predicted curve tracks the movement and variation of the true values accurately. The prediction exhibits very good periodicity, reflecting the periodic constraint imposed by the seasonal loss, and it leans toward smooth predictions, which to some extent avoids the performance loss caused by noise. The daily maxima and minima are predicted well under normal conditions, showing that the trend losses provide effective supervision. However, the model performs poorly on some extreme values (abnormal surges and drops of the price); predicting such abrupt changes is difficult within the current modeling framework.

    To analyze the effect of the seasonal loss further, the hidden states $ h_t $ of the GRU with and without the seasonal loss are extracted after a period of training and the means of the vectors $ h_t $ are compared; part of the result is shown in Fig. 8.

    Fig. 8  Mean hidden states of the GRU with and without the seasonal loss

    Because the time series itself is periodic, both hidden-state sequences show some periodicity. However, the hidden states of the GRU trained with the seasonal loss are clearly more stationary and smoother within each period, whereas those of the GRU without it fluctuate more, both across and within periods. These larger fluctuations can be regarded as capturing more noise, so the seasonal loss helps the network capture the essential characteristics of the data rather than the noise.

    Recurrent neural networks can extract deep information from time series and are highly effective forecasting tools. By defining reasonable forms of the seasonal loss and the trend loss, this paper models the periodicity and trend of a time series inside the recurrent network to improve its performance, and builds a time series forecasting model based on periodicity modeling and multi-task learning. Simulations on electricity price forecasting in the energy market show that the proposed seasonal loss effectively guides the recurrent network to learn the periodicity of the data, improves prediction, and reduces the influence of noise; it also helps the model capture the essential characteristics of the data. The trend losses in turn improve performance on trend-related prediction targets.

  • Fig. 1  Structural diagram of batch normalization
    Fig. 2  Structural diagram of batch normalization in hidden layers
    Fig. 3  Related structures and algorithms of batch normalization
    Fig. 4  Structural diagram of normalization propagation
    Fig. 5  Structural diagram of batch renormalization
    Fig. 6  Structural diagram of diminishing batch normalization
    Fig. 7  Comparison of batch normalization and layer normalization
    Fig. 8  Structural diagram of layer normalization
    Fig. 9  Structural diagram of weight normalization
    Fig. 10  Structural diagram of self-normalizing neural networks
    Fig. 11  Application areas of batch normalization
    Fig. 12  Applying BN in CNNs
    Fig. 13  Structural diagram of RNN
    Fig. 14  Structural diagram of LSTM
    Fig. 15  Domain adaptation process of AdaBN

    Table 1  Comparison of the classification performance of BN-Inception models
    Model | Iterations to reach 72.2 % accuracy | Best accuracy (%)
    Inception | $ 31.0 \times 10^6 $ | 72.2
    BN-Inception | $ 13.3 \times 10^6 $ | 72.7
    BN-x5 | $ 2.1 \times 10^6 $ | 73.0
    BN-x30 | $ 2.7 \times 10^6 $ | 74.8
    BN-x5-sigmoid | — | 69.8

    Table 2  Comparison of classification results of NIN + NP and related models (%)
    Model | CIFAR-10 | CIFAR-100 | SVHN
    NIN | 10.47 | 35.68 | 2.35
    NIN + NP | 9.11 | 32.19 | 1.88
    NIN + BN | 9.41 | 35.32 | 2.25
    Maxout | 11.68 | 38.57 | 2.47

    Table 3  Comparison of classification results using different values of $ \alpha ^{(j)} $ in the model
    $ \alpha ^{(j)} $ | MNIST epochs | MNIST error (%) | NI epochs | NI error (%) | CIFAR-10 epochs | CIFAR-10 error (%)
    1 | 52 | 2.70 | 58 | 7.69 | 45 | 17.31
    0.75 | 69 | 1.91 | 67 | 7.37 | 49 | 17.03
    0.5 | 69 | 1.84 | 80 | 7.46 | 44 | 17.11
    0.25 | 46 | 1.91 | 38 | 7.32 | 43 | 17.00
    0.1 | 48 | 1.90 | 66 | 7.36 | 48 | 17.10
    0.01 | 51 | 1.94 | 74 | 7.47 | 43 | 16.82
    0.001 | 48 | 1.95 | 98 | 7.43 | 46 | 16.28
    $ 1/j $ | 59 | 2.10 | 78 | 7.45 | 37 | 17.26
    $ 1/j^2 $ | 53 | 2.00 | 74 | 7.59 | 44 | 17.23
    0 | 199 | 24.27 | 53 | 26.09 | 2 | 79.34

    Table 4  Comparison of experimental results of DQN + WN and DQN
    Game | DQN | DQN + WN
    Breakout | 410 | 403
    Enduro | 1250 | 1448
    Seaquest | 7188 | 7357
    Space Invaders | 1779 | 2179

    Table 5  Comparison of experimental results of FNN + SNN and related models (1)
    Model | Average rank difference
    FNN + SNN | −6.7
    SVM | −6.4
    Random forest | −5.9
    FNN + LN | −5.3

    Table 6  Comparison of experimental results of FNN + SNN and related models (2) (%)
    Method | 2 layers | 4 layers | 6 layers | 8 layers | 16 layers | 32 layers
    FNN + SNN | 83.7 | 84.2 | 83.9 | 84.5 | 83.5 | 82.5
    FNN + BN | 80.0 | 77.2 | 77.0 | 75.0 | 73.7 | 76.0
    FNN + WN | 83.7 | 82.2 | 82.5 | 81.9 | 78.1 | 56.6
    FNN + LN | 84.3 | 84.0 | 82.5 | 80.9 | 78.7 | 78.8
    FNN + ResNet | 82.2 | 80.5 | 81.2 | 81.8 | 81.2 | 80.4

    Table 7  Comparison of classification results of CNN + BN and CNN
    Dataset | Activation | Model | Learning rate | Error rate (%)
    wm50 | ReLU | CNN + BN | 0.08 | 33.4
    wm50 | ReLU | CNN | 0.008 | 35.32
    wm50 | Sigmoid | CNN + BN | 0.08 | 35.52
    wm50 | Sigmoid | CNN | 0.008 | 42.80
    wm100 | ReLU | CNN + BN | 0.08 | 32.90
    wm100 | ReLU | CNN | 0.008 | 33.10
    wm100 | Sigmoid | CNN + BN | 0.08 | 33.77
    wm100 | Sigmoid | CNN | 0.008 | 38.50

    Table 8  Comparison of experimental results of LSTM + BN and related models
    Model | PPL
    Small LSTM | 78.5
    Small LSTM + BN | 62.5
    Medium LSTM | 49.1
    Medium LSTM + BN | 41.0
    Large LSTM | 49.3
    Large LSTM + BN | 35.0

    Table 9  Comparison of experimental results of MIM and related models (%)
    Model | CIFAR-10 | MNIST
    Maxout | 11.68 | 0.47
    NIN | 10.41 | 0.45
    RCNN-160[67] | 8.69 | 0.35
    MIM | 8.52 | 0.31

    Table 10  Comparison of experimental results of AdaBN and related models (%)
    Model | A→W | D→W | W→D | A→D
    AlexNet[73] | 61.6 | 95.4 | 99.0 | 63.8
    DDC[74] | 61.8 | 95.0 | 98.5 | 64.4
    DAN[75] | 68.5 | 68.5 | 99.0 | 67.0
    Inception BN | 70.3 | 94.3 | 100 | 70.5
    GFK[76] | 66.7 | 97.0 | 99.4 | 70.1
    AdaBN | 74.2 | 95.7 | 99.8 | 73.1

    Table 11  Functional comparison of batch normalization and its related algorithms

    Unnormalized network: baseline for the comparisons below.

    Batch normalization (BN)
      Convergence: more than 10 times faster than the unnormalized network. Computational cost: moderate.
      Advantages: reduces internal covariate shift during training, makes training insensitive to parameter initialization, allows higher learning rates, and speeds up convergence.
      Drawbacks: depends on the mini-batch size; the computation differs between training and inference.
      Applications: works well in CNNs and other piecewise-linear feedforward networks; the benefit for RNNs is comparatively small.

    Normalization propagation (NormProp)
      Convergence: more stable than BN and clearly faster. Computational cost: less than BN.
      Advantages: reduces internal covariate shift, does not depend on the mini-batch size, keeps the output of every layer normally distributed, and computes identically in training and inference.
      Drawbacks: has no regularization effect and cannot be combined with other regularizers such as Dropout.
      Applications: in principle applicable to networks with any activation function, objective function, and gradient-based training algorithm, but the practical effect still needs further verification.

    Batch renormalization (BR)
      Convergence: more stable and faster than BN when mini-batches are small or contain non-i.i.d. samples. Computational cost: slightly more than BN.
      Advantages: reduces internal covariate shift, makes training insensitive to parameter initialization, and allows higher learning rates; with small or non-i.i.d. mini-batches, networks using BR clearly outperform those using BN, converging faster and reaching higher training accuracy.
      Drawbacks: slightly higher computational cost than BN.
      Applications: preferable to BN when mini-batches are small or contain non-i.i.d. samples.

    Diminishing batch normalization (DBN)
      Convergence: more stable than BN, with similar speed. Computational cost: more than BN.
      Advantages: reduces internal covariate shift and links training with inference, so that training considers not only the current mini-batch but also the mini-batches used earlier.
      Drawbacks: still depends on the mini-batch to some extent.
      Applications: in principle any network that can use BN can use DBN, but since it does not fundamentally overcome BN's problems its use is similarly limited.

    Layer normalization (LN)
      Convergence: more robust and faster than BN. Computational cost: less than BN.
      Advantages: normalizes the neurons of a layer using a single example, computes identically in training and inference, and clearly outperforms other normalization methods in online learning tasks and RNNs, reducing training time and improving performance.
      Drawbacks: less effective than BN in CNNs and similar networks.
      Applications: very effective for stabilizing the hidden states of RNNs and can be generalized further, but weaker than BN in feedforward networks such as CNNs.

    Weight normalization (WN)
      Convergence: faster than BN. Computational cost: less than BN.
      Advantages: independent of the mini-batch, needs no memory of previously processed data, has low computational complexity, computes identically in training and inference, and does not inject as much noise as BN.
      Drawbacks: provides no regularization.
      Applications: better suited than BN to RNNs and to noise-sensitive models such as deep reinforcement learning and deep generative models, where BN performs poorly.

    Self-normalizing neural networks (SNN)
      Computational cost: moderate.
      Advantages: built with the SELU activation; after several layers of mapping, the mean and variance of each layer's output converge to a fixed point, so the network normalizes itself and is robust.
      Drawbacks: requires the specific activation function SELU; techniques such as dropout break the structure and destroy the self-normalizing property.
      Applications: in principle any feedforward or recurrent network can be built this way, provided it uses SELU and does not break the layer-wise mapping of the data's mean and variance.

    Table 12  Methods for accelerating the training of deep neural networks
    Method | Purpose | References
    Dropout | prevents overfitting; the most commonly used regularization technique | [81]–[91]
    Regularization | prevents overfitting | [92]–[106]
    Data augmentation | increases the number of training samples through data transformations | [107]–[118]
    Improved gradient descent algorithms | a suitable gradient descent algorithm makes training easier | [119]–[133]
    Activation function selection | an appropriate activation function makes training easier | [134]–[145]
    Learning rate selection | an appropriate learning rate accelerates training | [146]–[154]
    Parameter initialization | good initialization makes training easier | [155]–[158]
    Pre-training | pre-training the network and injecting prior information makes training easier | [159]–[162]
    Binarized neural networks | reduce the storage and training time required | [163]–[167]
    Stochastic-depth networks | alleviate the difficulty of training very deep networks | [168]–[171]
    Deep neural network compression | reduces the storage required for training without affecting accuracy | [172]–[178]
  • [1] Raina R, Madhavan A, Ng A Y. Large-scale deep unsupervised learning using graphics processors. In: Proceedings of the 26th Annual International Conference on Machine Learning. Montreal, Canada: ACM, 2009. 873−880
    [2] Lecun Y A, Bottou L, Orr G B, Müller K R. Efficient BackProp. Neural Networks: Tricks of the Trade (2nd edition). New York, America: Springer, 1998.
    [3] Wiesler S, Richard A, Schlüter R, Ney H. Mean-normalized stochastic gradient for large-scale deep learning. In: Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing. Florence, Italy: IEEE, 2014. 180−184
    [4] Raiko T, Valpola H, Lecun Y. Deep learning made easier by linear transformations in perceptrons. In: Proceedings of the 15th International Conference on Artificial Intelligence and Statistics. La Palma, Canary Islands: JMLR, 2012. 924−932
    [5] Povey D, Zhang X H, Khudanpur S. Parallel training of deep neural networks with natural gradient and parameter averaging. arXiv preprint arXiv: 1410.7455, 2014.
    [6] Desjardins G, Simonyan K, Pascanu R, Kavukcuoglu K. Natural neural networks. arXiv preprint arXiv: 1507.00210, 2015.
    [7] Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd International Conference on Machine Learning. Lille, France: JMLR.org, 2015. 448−456
    [8] Silver D, Schrittwieser J, Simonyan K, Antonoglou I, Huang A, Guez A, et al. Mastering the game of Go without human knowledge. Nature, 2017, 550(7676): 354−359 doi: 10.1038/nature24270
    [9] Shimodaira H. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 2000, 90(2): 227−244 doi: 10.1016/S0378-3758(00)00115-4
    [10] Daume III H, Marcu D. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 2006, 26: 101−126 doi: 10.1613/jair.1872
    [11] Liu Jian-Wei, Sun Zheng-Kang, Luo Xiong-Lin. Review and research development on domain adaptation learning. Acta Automatica Sinica, 2014, 40(8): 1576−1600 (in Chinese)
    [12] Nair V, Hinton G E. Rectified linear units improve restricted Boltzmann machines. In: Proceedings of the 27th International Conference on Machine Learning. Madison, USA: ACM, 2010. 807−814
    [13] Saxe A M, McClelland J L, Ganguli S. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv: 1312.6120, 2013.
    [14] Devarakonda A, Naumov M, Garland M. AdaBatch: Adaptive batch sizes for training deep neural networks. arXiv preprint arXiv: 1712.02029, 2017.
    [15] Rifai S, Vincent P, Muller X, Glorot X, Bengio Y. Contractive auto-encoders: Explicit invariance during feature extraction. In: Proceedings of the 28th International Conference on Machine Learning. Bellevue, USA: ACM, 2011. 833−840
    [16] Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 2014, 15(56): 1929−1958
    [17] Mazilu S, Iria J. L1 vs. L2 regularization in text classification when learning from labeled features. In: Proceedings of the 10th International Conference on Machine Learning and Applications and Workshops. Honolulu, Hawaii, USA: ACM, 2011. 166−171
    [18] Szegedy C, Liu W, Jia Y Q, Sermanet P, Reed S, Anguelov S, et al. Going deeper with convolutions. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015. 1−9
    [19] Sutskever I, Martens J, Dahl G, Hinton G. On the importance of initialization and momentum in deep learning. In: Proceedings of the 30th International Conference on Machine Learning. Atlanta, USA: ACM, 2013. 1139−1147
    [20] Jain L P, Scheirer W J, Boult T E. Multi-class open set recognition using probability of inclusion. In: Proceedings of the 13th European Conference. Zurich, Switzerland: Springer, 2014. 393−409
    [21] Arpit D, Zhou Y B, Kota B U, Govindaraju V. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. In: Proceedings of the 33rd International Conference on Machine Learning. New York, USA: ACM, 2016. 1168−1176
    [22] Lin M, Chen Q, Yan S C. Network in network. arXiv preprint arXiv: 1312.4400, 2013.
    [23] Krizhevsky A. Learning Multiple Layers of Features from Tiny Images. Technical Report. University of Toronto, Canada, 2009.
    [24] Netzer Y, Wang T, Coates A, Bissacco A, Wu B, Ng A Y. Reading digits in natural images with unsupervised feature learning. NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011, 2(5)
    [25] Goodfellow I J, Warde-Farley D, Mirza M, Courville A, Bengio Y. Maxout networks. arXiv preprint arXiv: 1302.4389, 2013.
    [26] Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the 13th International Conference on Artificial Intelligence and Statistics. Sardinia, Italy: AISTATS, 2010. 249−256
    [27] Ioffe S. Batch renormalization: Towards reducing minibatch dependence in batch-normalized models. arXiv preprint arXiv: 1702.03275, 2017.
    [28] Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S A, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 2015, 115(3): 211−252 doi: 10.1007/s11263-015-0816-y
    [29] Goldberger J, Roweis S, Hinton G, Salakhutdinov R. Neighbourhood components analysis. In: Proceedings of the 17th International Conference on Neural Information Processing Systems. Cambridge, USA: ACM, 2004. 513−520
    [30] Ma Y, Klabjan D. Convergence analysis of batch normalization for deep neural nets. arXiv preprint arXiv: 1705.08011, 2017.
    [31] Bottou L, Curtis F E, Nocedal J. Optimization methods for large-scale machine learning. arXiv preprint arXiv: 1606.04838, 2016.
    [32] Lecun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11): 2278−2324 doi: 10.1109/5.726791
    [33] KDD Cup 1999 Data [Online], available: http://www.kdd.org/kdd-cup/view/kdd-cup-1999/Data, 1999.
    [34] Graves A, Mohamed A R, Hinton G. Speech recognition with deep recurrent neural networks. In: Proceedings of the 2013 IEEE International Conference on Acoustics, Speech, and Signal Processing. Vancouver, Canada: IEEE, 2013. 6645−6649
    [35] Ba J L, Kiros J R, Hinton G E. Layer normalization. arXiv preprint arXiv: 1607.06450, 2016.
    [36] Vendrov I, Kiros R, Fidler S, Urtasun R. Order-embeddings of images and language. arXiv preprint arXiv: 1511.06361, 2015.
    [37] Graves A, Wayne G, Reynolds M, Harley T, Danihelka I, Grabska-Barwińska, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016, 538(7626): 471−476
    [38] Kiros R, Zhu Y K, Salakhutdinov R, Zemel R S, Torralba A, Urtasun R, Fidler S. Skip-thought vectors. In: Proceedings of the 28th International Conference on Neural Information Processing Systems. Quebec, Canada: ACM, 2015. 3294−3302
    [39] Hermann K M, Kočiský T, Grefenstette E, Espeholt L, Kay W, Suleyman M, et al. Teaching machines to read and comprehend. In: Proceedings of the 28th International Conference on Neural Information Processing Systems. Quebec, Canada: ACM, 2015. 1693−1701
    [40] Sundermeyer M, Schlüter R, Ney H. LSTM neural networks for language modeling. In: Proceedings of the 13th Annual Conference of the International Speech Communication Association. Oregon, USA: International Speech Communication Association, 2012. 194−197
    [41] Gregor K, Danihelka I, Graves A, Rezende D J, Wierstra D. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv: 1502.04623, 2015.
    [42] Salimans T, Kingma D P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In: Proceedings of the 30th International Conference on Neural Information Processing Systems. Barcelona, Spain: ACM, 2016. 901−909
    [43] Kingma D P, Welling M. Auto-encoding variational Bayes. arXiv preprint arXiv: 1312.6114, 2013.
    [44] Mnih V, Kavukcuoglu K, Silver D, Rusu A A, Veness J, Bellemare M G, et al. Human-level control through deep reinforcement learning. Nature, 2015, 518(7540): 529−533 doi: 10.1038/nature14236
    [45] Hendrycks D, Gimpel K. Adjusting for dropout variance in batch normalization and weight initialization. arXiv preprint arXiv: 1607.02488, 2016.
    [46] Narayanan A, Shi E, Rubinstein B I P. Link prediction by de-anonymization: How we won the Kaggle social network challenge. In: Proceedings of the 2011 International Joint Conference on Neural Networks. San Jose, USA: IEEE, 2011. 1825−1834
    [47] Cutler D R, Edwards Jr T C, Beard K H, Cutler A, Hess K T, Gibson J, Lawler J J. Random forests for classification in ecology. Ecology, 2007, 88(11): 2783−2792 doi: 10.1890/07-0539.1
    [48] Joachims T. Making large-scale SVM learning practical. Advances in Kernel Methods—Support Vector Learning, 1999, 8(3): 499−526
    [49] Klambauer G, Unterthiner T, Mayr A, Hochreiter S. Self-normalizing neural networks. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, CA, USA: ACM, 2017. 972−981
    [50] Fernández-Delgado M, Cernadas E, Barro S, Amorim D. Do we need hundreds of classifiers to solve real world classification problems? The Journal of Machine Learning Research, 2014, 15(1): 3133-3181
    [51] Hsieh J H, Sedykh A, Huang R, Xia M H, Tice R R. A data analysis pipeline accounting for artifacts in Tox21 quantitative high-throughput screening assays. SLAS DISCOVERY: Advancing the Science of Drug Discovery, 2015, 20(7): 887−897 doi: 10.1177/1087057115581317
    [52] Littwin E, Wolf L. The loss surface of residual networks: Ensembles and the role of batch normalization. arXiv preprint arXiv: 1611.02525, 2016.
    [53] Luo C J, Zhan J F, Wang L, Yang Q. Cosine normalization: Using cosine similarity instead of dot product in neural networks. arXiv preprint arXiv: 1702.05870, 2017.
    [54] Cho M, Lee J. Riemannian approach to batch normalization. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, CA, USA: ACM, 2017. 5231−5241
    [55] Wu S, Li G Q, Deng L, Liu L, Xie Y, Shi L P. L1-norm batch normalization for efficient training of deep neural networks. arXiv preprint arXiv: 1802.09769, 2018.
    [56] Huang L, Liu X L, Liu Y, Lang B, Tao D C. Centered weight normalization in accelerating training of deep neural networks. In: Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 2822−2830
    [57] Abdel-Hamid O, Mohamed A R, Jiang H, Penn G. Applying convolutional neural networks concepts to hybrid NN-HMM model for speech recognition. In: Proceedings of the 2012 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Kyoto, Japan: IEEE, 2012. 4277−4280
    [58] Zhao Z Q, Bian H M, Hu D H, Cheng W J, Glotin H. Pedestrian detection based on fast R-CNN and batch normalization. In: Proceedings of the 13th International Conference on Intelligent Computing Theories and Application. Liverpool, UK: Springer, 2017. 735−746
    [59] Girshick R. Fast R-CNN. In: Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV). Santiago, Chile: IEEE, 2015.
    [60] Darwish A, Nakhmani A. Internal covariate shift reduction in encoder-decoder convolutional neural networks. In: Proceedings of the 2017 ACM Southeast Conference. Kennesaw, USA: ACM, 2017. 179−182
    [61] Laurent C, Pereyra G, Brakel P, Zhang Y, Bengio Y. Batch normalized recurrent neural networks. In: Proceedings of the 2016 IEEE International Conference on Acoustics, Speech, and signal processing (ICASSP). Shanghai, China: IEEE, 2016. 2657−2661
    [62] Cooijmans T, Ballas N, Laurent C, Gülçehre Ç, Courville A. Recurrent batch normalization. arXiv preprint arXiv: 1603.09025, 2016.
    [63] Laurent C, Pereyra G, Brakel P, Zhang Y, Bengio Y. Batch normalized recurrent neural networks. arXiv preprint arXiv: 1510.01378, 2015.
    [64] Amodei D, Ananthanarayanan S, Anubhai R, Bai J L, Battenberg E, Case C, et al. Deep speech 2: End-to-end speech recognition in English and Mandarin. In: Proceedings of the 33rd International Conference on Machine Learning. New York, USA: JMLR.org, 2016. 173−182
    [65] Zhang Y, Chan W, Jaitly N. Very deep convolutional networks for end-to-end speech recognition. In: Proceedings of the 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). New Orleans, USA: IEEE, 2017. 4845−4849
    [66] Liao Z B, Carneiro G. On the importance of normalisation layers in deep learning with piecewise linear activation units. In: Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). Lake Placid, USA: IEEE, 2016. 1−8
    [67] Liang M, Hu X L. Recurrent convolutional neural network for object recognition. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, USA: IEEE, 2015. 3367−3375
    [68] Long M C, Zhu H, Wang J M, Jordan M I. Unsupervised domain adaptation with residual transfer networks. arXiv preprint arXiv: 1602. 04433, 2017.
    [69] Tzeng E, Hoffman J, Darrell T, Saenko K. Simultaneous deep transfer across domains and tasks. arXiv preprint arXiv: 1510. 02192, 2015.
    [70] Ganin Y, Lempitsky V. Unsupervised domain adaptation by backpropagation. In: Proceedings of the 32nd International Conference on Machine Learning. Lille, France: JMLR.org, 2015. 1180−1189
    [71] Li Y H, Wang N Y, Shi J P, Liu J Y, Hou X D. Revisiting batch normalization for practical domain adaptation. arXiv preprint arXiv: 1603.04779, 2016.
    [72] Saenko K, Kulis B, Fritz M, Darrell T. Adapting visual category models to new domains. In: Proceedings of the 11th European Conference on Computer Vision. Heraklion, Greece: Springer, 2010. 213−226
    [73] Gretton A, Borgwardt K M, Rasch M J, Schölkopf B, Smola A. A kernel two-sample test. The Journal of Machine Learning Research, 2012, 13: 723−773
    [74] Tzeng E, Hoffman J, Zhang N, Saenko K, Darrell T. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv: 1412.3474, 2014.
    [75] Long M S, Cao Y, Wang J M, Jordan M I. Learning transferable features with deep adaptation networks. In: Proceedings of the 32nd International Conference on Machine Learning. Lille, France: JMLR.org, 2015. 97−105
    [76] Gong B Q, Shi Y, Sha F, Grauman K. Geodesic flow kernel for unsupervised domain adaptation. In: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, USA: IEEE, 2012. 2066−2073
    [77] He K M, Zhang X Y, Ren S Q, Sun J. Deep residual learning for image recognition. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016. 770−778
    [78] Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv: 1511.06434, 2015.
    [79] Donahue C, McAuley J, Puckette M. Synthesizing audio with generative adversarial networks. arXiv preprint arXiv: 1802.04208, 2018.
    [80] Lillicrap T P, Hunt J J, Pritzel A, Heess N, Erez T, Tassa Y, et al. Continuous control with deep reinforcement learning. arXiv preprint arXiv: 1509. 02971, 2015.
    [81] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems. Nevada, USA: ACM, 2012. 1106−1114
    [82] Hinton G E, Srivastava N, Krizhevsky A, Sutskever I, Salakhutdinov R R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv: 1207.0580, 2012.
    [83] Mendenhall J, Meiler J. Improving quantitative structure-activity relationship models using artificial neural networks trained with dropout. Journal of Computer-Aided Molecular Design, 2016, 30(2): 177−189 doi: 10.1007/s10822-016-9895-2
    [84] Dahl G E, Sainath T N, Hinton G E. Improving deep neural networks for LVCSR using rectified linear units and dropout. In: Proceedings of the 2013 IEEE International Conference on Acoustics, Speech, and Signal Processing. Vancouver, BC, Canada: IEEE, 2013. 8609−8613
    [85] Baldi P, Sadowski P J. Understanding dropout. In: Proceedings of the 26th International Conference on Neural Information Processing Systems. Lake Tahoe, Nevada, USA: ACM, 2013. 2814−2822
    [86] Wang S I, Manning C D. Fast dropout training. In: Proceedings of the 30th International Conference on Machine Learning. Atlanta, GA, USA: ACM, 2013. 118−126
    [87] Wager S, Wang S I, Liang P. Dropout training as adaptive regularization. In: Proceedings of the 26th International Conference on Neural Information Processing Systems. Lake Tahoe, Nevada, United States: ACM, 2013. 351−359
    [88] Li X, Chen S, Hu X L, Yang J. Understanding the disharmony between dropout and batch normalization by variance shift. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA: IEEE, 2019. 2682−2690
    [89] Pham V, Bluche T, Kermorvant C, Louradour J. Dropout improves recurrent neural networks for handwriting recognition. In: Proceedings of the 14th International Conference on Frontiers in Handwriting Recognition. Heraklion, Greece: IEEE, 2014. 285−290
    [90] Gal Y, Ghahramani Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In: Proceedings of the 33rd International Conference on Machine Learning. New York, USA: JMLR.org, 2016. 1050−1059
    [91] Gal Y, Hron J, Kendall A. Concrete dropout. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, CA, USA, 2017. 3584−3593
    [92] Hofmann B. Regularization for Applied Inverse and III-Posed Problems: A Numerical Approach. Wiesbaden: Springer-Verlag, 1991. 380−394
    [93] Girosi F, Jones M, Poggio T. Regularization theory and neural networks architectures. Neural Computation, 1995, 7(2): 219−269 doi: 10.1162/neco.1995.7.2.219
    [94] Evgeniou T, Pontil M, Poggio T. Regularization networks and support vector machines. Advances in Computational Mathematics, 2000, 13(1): 1−1 doi: 10.1023/A:1018946025316
    [95] Guo P, Lyu M R, Chen C L P. Regularization parameter estimation for feedforward neural networks. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2003, 33(1): 35−44 doi: 10.1109/TSMCB.2003.808176
    [96] Ng A Y. Feature selection, L1 vs. L2 regularization, and rotational invariance. In: Proceedings of the 21st International Conference on Machine Learning. New York, USA: ACM, 2004. Article No.78
    [97] Schmidt M, Fung G, Rosales R. Fast optimization methods for l1 regularization: A comparative study and two new approaches. In: Proceedings of the 18th European Conference on Machine Learning. Berlin, Germany: Springer, 2007. 286−297
    [98] Bryer A R. Understanding regulation: Theory, strategy, and practice. Accounting in Europe, 2013, 10(2): 279−282 doi: 10.1080/17449480.2013.834747
    [99] Xu Z B, Chang X Y, Xu F M, Zhang H. L1/2 regularization: A thresholding representation theory and a fast solver. IEEE Transactions on Neural Networks and Learning Systems, 2012, 23(7): 1013−1027 doi: 10.1109/TNNLS.2012.2197412
    [100] Wan L, Zeiler M, Zhang S X, LeCun Y, Fergus R. Regularization of neural networks using dropconnect. In: Proceedings of the 30th International Conference on Machine Learning. Atlanta, GA, USA: ACM, 2013. 1058−1066
    [101] Zaremba W, Sutskever I, Vinyals O. Recurrent neural network regularization. arXiv preprint arXiv: 1409.2329, 2014.
    [102] Lamb A, Dumoulin V, Courville A. Discriminative regularization for generative models. arXiv preprint arXiv: 1602.03220, 2016.
    [103] Scardapane S, Comminiello D, Hussain A, Uncini A. Group sparse regularization for deep neural networks. Neurocomputing, 2017, 241: 81−89 doi: 10.1016/j.neucom.2017.02.029
    [104] Gu B, Sheng V S. A robust regularization path algorithm for ν-support vector classification. IEEE Transactions on Neural Networks and Learning Systems, 2017, 28(5): 1241−1248 doi: 10.1109/TNNLS.2016.2527796
    [105] Dasgupta S, Yoshizumi T, Osogami T. Regularized dynamic Boltzmann machine with delay pruning for unsupervised learning of temporal sequences. arXiv preprint arXiv: 1610. 01989, 2016.
    [106] Luo M N, Nie F P, Chang X J, Yang Y, Hauptmann A G, Zheng Q H. Adaptive unsupervised feature selection with structure regularization. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29(4): 944−956 doi: 10.1109/TNNLS.2017.2650978
    [107] Tanner M A, Wong W H. The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 1987, 82(398): 528−540 doi: 10.1080/01621459.1987.10478458
    [108] Frühwirth-Schnatter S. Data augmentation and dynamic linear models. Journal of Time Series Analysis, 1994, 15(2): 183−202 doi: 10.1111/j.1467-9892.1994.tb00184.x
    [109] Van Dyk D A, Meng X L. The art of data augmentation. Journal of Computational and Graphical Statistics, 2001, 10(1): 1−50 doi: 10.1198/10618600152418584
    [110] Hobert J P, Marchev D. A theoretical comparison of the data augmentation, marginal augmentation and PX-DA algorithms. The Annals of Statistics, 2008, 36(2): 532−554 doi: 10.1214/009053607000000569
    [111] Frühwirth-Schnatter S, Frühwirth R. Data augmentation and MCMC for binary and multinomial logit models. Statistical Modelling and Regression Structures: Festschrift in Honour of Ludwig Fahrmeir. Physica-Verlag HD, 2010. 111−132
    [112] Royle J A, Dorazio R M. Parameter-expanded data augmentation for Bayesian analysis of capture-recapture models. Journal of Ornithology, 2012, 152(2): 521−537
    [113] Westgate B S, Woodard D B, Matteson D S, Henderson S G. Travel time estimation for ambulances using Bayesian data augmentation. The Annals of Applied Statistics, 2013, 7(2): 1139−1161 doi: 10.1214/13-AOAS626
    [114] Cui X D, Goel V, Kingsbury B. Data augmentation for deep neural network acoustic modeling. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2015, 23(9): 1469−1477 doi: 10.1109/TASLP.2015.2438544
    [115] McFee B, Humphrey E J, Bello J P. A software framework for musical data augmentation. In: Proceedings of the 16th International Society for Music Information Retrieval Conference. Málaga, Spain: Molecular Oncology, 2015. 248−254
    [116] Xu Y, Jia R, Mou L L, Li G, Chen Y C, Lu Y Y, Jin Z. Improved relation classification by deep recurrent neural networks with data augmentation. arXiv preprint arXiv: 1601.03651, 2016.
    [117] Rogez G, Schmid C. MoCap-guided data augmentation for 3D pose estimation in the wild. In: Proceedings of the 30th International Conference on Neural Information Processing Systems. Barcelona, Spain: ACM, 2016. 3108−3116
    [118] Touloupou P, Alzahrani N, Neal P J, Spencer S, McKinley T. Efficient model comparison techniques for models requiring large scale data augmentation. Bayesian Analysis, 2018, 13(2): 437−459 doi: 10.1214/17-BA1057
    [119] Moreau L, Bachmayer R, Leonard N E. Coordinated gradient descent: A case study of Lagrangian dynamics with projected gradient information. IFAC Proceedings Volumes, 2003, 36(2): 57−62 doi: 10.1016/S1474-6670(17)38867-5
    [120] Wilson D R, Martinez T R. The general inefficiency of batch training for gradient descent learning. Neural Networks, 2003, 16(10): 1429−1451 doi: 10.1016/S0893-6080(03)00138-2
    [121] Yang J. Newton-conjugate-gradient methods for solitary wave computations. Journal of Computational Physics, 2009, 228(18): 7007−7024 doi: 10.1016/j.jcp.2009.06.012
    [122] Bottou L. Large-scale machine learning with stochastic gradient descent. Proceedings of COMPSTAT'2010. Physica-Verlag HD, 2010. 177–186
    [123] Zinkevich M A, Weimer M, Smola A, Li L H. Parallelized stochastic gradient descent. In: Proceedings of the 23rd International Conference on Neural Information Processing Systems. British Columbia, Canada: ACM, 2010. 2595−2603
    [124] Bengio Y, Boulanger-Lewandowski N, Pascanu R. Advances in optimizing recurrent networks. In: Proceedings of the 2013 IEEE International Conference on Acoustics, Speech, and Signal Processing. Vancouver, USA: IEEE, 2013. 8624−8628
    [125] Valentino Z, Gianmario S, Daniel S, Peter R. Python Deep Learning: Next Generation Techniques to Revolutionize Computer Vision, AI, Speech and Data Analysis. Birmingham, UK: Packt Publishing Ltd, 2017.
    [126] Lucas J, Sun S Y, Zemel R, Grosse R. Aggregated momentum: Stability through passive damping. arXiv preprint arXiv: 1804. 00325, 2018.
    [127] Kingma D P, Ba J. Adam: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980, 2014.
    [128] Chen Y D, Wainwright M J. Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees. arXiv preprint arXiv: 1509.03025, 2015.
    [129] Ruder S. An overview of gradient descent optimization algorithms. arXiv preprint arXiv: 1609.04747, 2016.
    [130] Konečný J, Liu J, Richtárik P, Takáč M. Mini-batch semi-stochastic gradient descent in the proximal setting. IEEE Journal of Selected Topics in Signal Processing, 2016, 10(2): 242−255 doi: 10.1109/JSTSP.2015.2505682
    [131] Andrychowicz M, Denil M, Colmenarejo S G, Hoffman M W, Pfau D, Schaul T, et al. Learning to learn by gradient descent by gradient descent. In: Proceedings of the 30th International Conference on Neural Information Processing Systems. Barcelona, Spain: ACM, 2016. 3988−3996
    [132] Khirirat S, Feyzmahdavian H R, Johansson M. Mini-batch gradient descent: Faster convergence under data sparsity. In: Proceedings of the 56th IEEE Annual Conference on Decision and Control (CDC). Melbourne, Australia: IEEE, 2017. 2880−2887
    [133] Murugan P, Durairaj S. Regularization and optimization strategies in deep convolutional neural network. arXiv preprint arXiv: 1712.04711, 2017.
    [134] Han J, Moraga C. The influence of the sigmoid function parameters on the speed of backpropagation learning. In: Proceedings of the 1995 International Workshop on Artificial Neural Networks: from Natural to Artificial Neural Computation. Malaga-Torremolinos, Spain: ACM, 1995. 195−201
    [135] Yin X Y, Goudriaan J, Lantinga E A, Vos J, Spiertz H J. A flexible sigmoid function of determinate growth. Annals of Botany, 2003, 91(3): 361−371 doi: 10.1093/aob/mcg029
    [136] Malfliet W. The tanh method: A tool for solving certain classes of non-linear PDEs. Mathematical Methods in the Applied Sciences, 2005, 28(17): 2031−2035 doi: 10.1002/mma.650
    [137] Knowles D A, Minka T P. Non-conjugate variational message passing for multinomial and binary regression. In: Proceedings of the 24th International Conference on Neural Information Processing Systems. Granada, Spain: ACM, 2011. 1701−1709
    [138] Gimpel K, Smith N A. Softmax-margin CRFs: Training log-linear models with cost functions. In: Proceedings of the 2010 Human Language Technologies: the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Los Angeles, California, USA: ACM, 2010. 733−736
    [139] Glorot X, Bordes A, Bengio Y. Deep sparse rectifier neural networks. In: Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS). Fort Lauderdale, USA: PMLR, 2011. 315−323
    [140] Krizhevsky A. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014.
    [141] Tòth L. Phone recognition with deep sparse rectifier neural networks. In: Proceedings of the 2013 IEEE International Conference on Acoustics, Speech, and Signal Processing. Vancouver, Canada: IEEE, 2013. 6985−6989
    [142] Zeiler M D, Ranzato M, Monga R, Mao M, Yang K, Le Q V, et al. On rectified linear units for speech processing. In: Proceedings of the 2013 IEEE International Conference on Acoustics, Speech, and Signal Processing. Vancouver, Canada: IEEE, 2013. 3517−3521
    [143] Xu B, Wang N Y, Chen T Q, Li M. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv: 1505.00853, 2015.
    [144] Clevert D A, Unterthiner T, Hochreiter S. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv: 1511.07289, 2015.
    [145] Martins A F T, Astudillo R F. From softmax to sparsemax: A sparse model of attention and multi-label classification. In: Proceedings of the 33rd International Conference on Machine Learning. New York City, USA: ACM, 2016. 1614−1623
    [146] Darken C, Moody J. Note on learning rate schedules for stochastic optimization. In: Proceedings of the 1990 Conference on advances in Neural Information Processing Systems. Colorado, USA: ACM, 1990. 832−838
    [147] Jacobs R A. Increased rates of convergence through learning rate adaptation. Neural Networks, 1988, 1(4): 295−307 doi: 10.1016/0893-6080(88)90003-2
    [148] Bowling M, Veloso M. Multiagent learning using a variable learning rate. Artificial Intelligence, 2002, 136(2): 215−250 doi: 10.1016/S0004-3702(02)00121-2
    [149] Cireșan D C, Meier U, Gambardella L M, Schmidhuber J. Deep, big, simple neural nets for handwritten digit recognition. Neural Computation, 2010, 22(12): 3207−3220 doi: 10.1162/NECO_a_00052
    [150] Zeiler M D. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv: 1212.5701, 2012.
    [151] Chin W S, Zhuang Y, Juan Y C, Lin C J. A learning-rate schedule for stochastic gradient methods to matrix factorization. In: Proceedings of the 19th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining. Ho Chi Minh City, Vietnam: Springer, 2015. 442−455
    [152] Dauphin Y N, de Vries H, Bengio Y. Equilibrated adaptive learning rates for non-convex optimization. In: Proceedings of the 28th International Conference on Neural Information Processing Systems. Quebec, Canada: ACM, 2015. 1504−1512
    [153] Liang J H, Ganesh V, Poupart P, Czarnecki K. Learning rate based branching heuristic for SAT solvers. In: Proceedings of the 19th International Conference on Theory and Applications of Satisfiability Testing. Bordeaux, France: Springer, 2016. 123−140
    [154] Smith L N. Cyclical learning rates for training neural networks. In: Proceedings of the 2017 IEEE Winter Conference on Applications of Computer Vision (WACV). Santa Rosa, USA: IEEE, 2017. 464−472
    [155] Yam J Y F, Chow T W S. A weight initialization method for improving training speed in feedforward neural network. Neurocomputing, 2000, 30(1-4): 219−232 doi: 10.1016/S0925-2312(99)00127-7
    [156] Boedecker J, Obst O, Mayer N M, Asada M. Initialization and self-organized optimization of recurrent neural network connectivity. HFSP Journal, 2009, 3(5): 340−349 doi: 10.2976/1.3240502
    [157] Bengio Y. Practical recommendations for gradient-based training of deep architectures. Neural Networks: Tricks of the Trade (2nd Edition). Berlin, Germany: Springer, 2012.
    [158] Mishkin D, Matas J. All you need is a good init. arXiv preprint arXiv: 1511.06422, 2015.
    [159] Erhan D, Manzagol P A, Bengio Y, Bengio S, Vincent P. The difficulty of training deep architectures and the effect of unsupervised pre-training. In: Proceedings of the 12th International Conference on Artificial Intelligence and Statistics. Florida, USA: JMLR, 2009. 153−160
    [160] Erhan D, Bengio Y, Courville A, Manzagol P A, Vincent P, Bengio S. Why does unsupervised pre-training help deep learning? The Journal of Machine Learning Research, 2010, 11: 625-660
    [161] Hinton G, Deng L, Yu D, Dahl G E, Mohamed A R, Jaitly N, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012, 29(6): 82−97 doi: 10.1109/MSP.2012.2205597
    [162] Knyazev B, Shvetsov R, Efremova N, Kuharenko A. Convolutional neural networks pretrained on large face recognition datasets for emotion classification from video. arXiv preprint arXiv: 1711.04598, 2017.
    [163] Courbariaux M, Bengio Y, David J P. Binaryconnect: Training deep neural networks with binary weights during propagations. In: Proceedings of the 28th International Conference on Neural Information Processing Systems. Quebec, Canada: ACM, 2015. 3123−3131
    [164] Umuroglu Y, Fraser N J, Gambardella G, Blott M, Leong P, Jahre M, Vissers K A. FINN: A framework for fast, scalable binarized neural network inference. In: Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. Monterey, USA: ACM, 2017. 65−74
    [165] Hubara I, Courbariaux M, Soudry D, El-Yaniv R, Bengio Y. Binarized neural networks. In: Proceedings of the 2016 Advances in Neural Information Processing Systems. Barcelona, Spain: ACM, 2016. 4107−4115
    [166] Courbariaux M, Hubara I, Soudry D, El-Yaniv R, Bengio Y. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or −1. arXiv preprint arXiv: 1602.02830, 2016.
    [167] Fraser N J, Umuroglu Y, Gambardella G, Blott M, Leong P, Jahre M, Vissers K A. Scaling binarized neural networks on reconfigurable logic. In: Proceedings of the 8th Workshop and 6th Workshop on Parallel Programming and Run-Time Management Techniques for Many-core Architectures and Design Tools and Architectures for Multicore Embedded Computing Platforms. Stockholm, Sweden: IEEE, 2017. 25−30
    [168] Huang G, Sun Y, Liu Z, Sedra D, Weinberger K Q. Deep networks with stochastic depth. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016. 646–661
    [169] Yamada Y, Iwamura M, Kise K. Deep pyramidal residual networks with separated stochastic depth. arXiv preprint arXiv: 1612.01230, 2016.
    [170] Chen D S, Zhang W B, Xu X M, Xing X F. Deep networks with stochastic depth for acoustic modelling. In: Proceedings of the 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). Jeju, South Korea: IEEE, 2016. 1−4
    [171] Huang G, Liu Z, Van Der Maaten L, Weinberger K Q. Densely connected convolutional networks. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 2261−2269
    [172] Buciluǎ C, Caruana R, Niculescu-Mizil A. Model compression. In: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Philadelphia, USA: ACM, 2006. 535−541
    [173] Gong Y C, Liu L, Yang M, Bourdev L. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv: 1412.6115, 2014.
    [174] Han S, Mao H Z, Dally W J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv: 1510.00149, 2015.
    [175] Chen W L, Wilson J T, Tyree S, Weinberger K Q, Chen Y X. Compressing neural networks with the hashing trick. In: Proceedings of the 32nd International Conference on Machine Learning. Lille, France: JMLR.org, 2015. 2285−2294
    [176] Luo P, Zhu Z Y, Liu Z W, Wang X G, Tang X O. Face model compression by distilling knowledge from neurons. In: Proceedings of the 30th AAAI Conference on Artificial Intelligence. Arizona, USA: AAAI, 2016. 3560−3566
    [177] Cheng Y, Wang D, Zhou P, Zhang T. A survey of model compression and acceleration for deep neural networks. arXiv preprint arXiv: 1710.09282, 2017.
    [178] Ullrich K, Meeds E, Welling M. Soft weight-sharing for neural network compression. arXiv preprint arXiv: 1702.04008, 2017.