Design and Application of Continuous Deep Belief Network

QIAO Jun-Fei, PAN Guang-Yuan, HAN Hong-Gui

Citation: QIAO Jun-Fei, PAN Guang-Yuan, HAN Hong-Gui. Design and Application of Continuous Deep Belief Network. ACTA AUTOMATICA SINICA, 2015, 41(12): 2138-2146. doi: 10.16383/j.aas.2015.c150239

doi: 10.16383/j.aas.2015.c150239

Details
    Author biography:

    QIAO Jun-Fei  Professor at Beijing University of Technology. His main research interests include intelligent control and neural network analysis and design. E-mail: junfeq@bjut.edu.cn

    Corresponding author:

    PAN Guang-Yuan  Ph.D. candidate at Beijing University of Technology. His main research interests include intelligent information processing, deep learning, and neural network structure design and optimization. Corresponding author of this paper.


Funds: 

Supported by National Natural Science Foundation of China (61203099, 61225016, 61533002), Beijing Science and Technology Project (Z141100001414005, Z141101004414058), Specialized Research Fund for the Doctoral Program of Higher Education (20131103110016), Beijing Nova Program (Z131104000413007), and Beijing Municipal Education Commission Science and Technology Development Program (KZ201410005002, km201410005001)

  • Abstract: To address the poor prediction accuracy of deep belief networks (DBNs) on continuous data, a continuous deep belief network with two hidden layers is proposed. The network first trains on the input data without supervision, using a continuous transfer function to extract data features. A weight-training method based on the contrastive divergence algorithm is designed, the hidden-layer weights are then locally optimized through error back-propagation, and a stability analysis is given to guarantee that the training output stays within a prescribed region. The network is tested on the Lorenz chaotic series, the CATS benchmark series, and atmospheric CO2 prediction. The results show that the continuous deep belief network offers a compact structure, fast convergence, and high prediction accuracy.
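
To make the recipe in the abstract concrete, below is a minimal, illustrative Python sketch of a continuous restricted Boltzmann machine pre-trained layer by layer with one-step contrastive divergence. This is not the authors' implementation: the sigmoid-based transfer function, the Gaussian unit noise, the layer sizes, the learning rate, and the toy sine-wave series are all assumptions made for demonstration, and the paper's stability analysis and BP fine-tuning stage are only indicated by a closing comment.

import numpy as np

rng = np.random.default_rng(0)

def transfer(x, lo=-1.0, hi=1.0):
    # Continuous transfer function: squash activations into [lo, hi].
    return lo + (hi - lo) / (1.0 + np.exp(-x))

class CRBM:
    """Continuous restricted Boltzmann machine (weights only, no biases)."""
    def __init__(self, n_vis, n_hid, sigma=0.2, lr=0.05):
        self.W = rng.normal(0.0, 0.01, size=(n_vis, n_hid))
        self.sigma = sigma  # noise level of the continuous stochastic units
        self.lr = lr

    def hidden(self, v):
        # Continuous stochastic units: transfer of input plus Gaussian noise.
        noise = rng.normal(0.0, self.sigma, size=(v.shape[0], self.W.shape[1]))
        return transfer(v @ self.W + noise)

    def visible(self, h):
        noise = rng.normal(0.0, self.sigma, size=(h.shape[0], self.W.shape[0]))
        return transfer(h @ self.W.T + noise)

    def cd1_step(self, v0):
        # One-step contrastive divergence: dW ~ <v0 h0> - <v1 h1>.
        h0 = self.hidden(v0)
        v1 = self.visible(h0)
        h1 = self.hidden(v1)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / v0.shape[0]
        return float(np.mean((v0 - v1) ** 2))  # reconstruction error

# Layer-wise unsupervised pre-training of two hidden layers on sliding
# windows of a toy continuous series (a sine wave stands in for real data).
t = np.linspace(0.0, 8.0 * np.pi, 500)
series = np.sin(t)
X = np.array([series[i:i + 10] for i in range(len(series) - 10)])

rbm1, rbm2 = CRBM(10, 8), CRBM(8, 4)
for epoch in range(50):
    err = rbm1.cd1_step(X)          # train the first hidden layer
H1 = rbm1.hidden(X)                 # propagate data upward
for epoch in range(50):
    rbm2.cd1_step(H1)               # train the second hidden layer
print("layer-1 reconstruction MSE:", err)
# A supervised output layer fine-tuned by error back-propagation,
# as described in the abstract, would be stacked on top of rbm2.

In this sketch the continuous stochastic units follow the general construction of Chen and Murray's continuous RBM (noise injected before a bounded transfer function), which matches the abstract's description at a high level; the two stacked CRBMs mirror the proposed double-hidden-layer structure.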
Publication history
  • Received:  2015-04-21
  • Revised:  2015-09-23
  • Published:  2015-12-20
