

Research on RBM Networks Training Based on Improved Parallel Tempering Algorithm

LI Fei, GAO Xiao-Guang, WAN Kai-Fang

Citation: LI Fei, GAO Xiao-Guang, WAN Kai-Fang. Research on RBM Networks Training Based on Improved Parallel Tempering Algorithm. ACTA AUTOMATICA SINICA, 2017, 43(5): 753-764. doi: 10.16383/j.aas.2017.c160326


doi: 10.16383/j.aas.2017.c160326
Funding:

National Natural Science Foundation of China 61573285

Fundamental Research Funds for the Central Universities 3102015BJ(Ⅱ)GH01

National Natural Science Foundation of China 61305133

More Information
    Author Bios:

    LI Fei  Ph.D. candidate at the School of Electronics and Information, Northwestern Polytechnical University. He received his bachelor degree in systems engineering from Northwestern Polytechnical University in 2011. His research interest covers machine learning and deep learning. E-mail: nwpulf@mail.nwpu.edu.cn

    WAN Kai-Fang  Ph.D. candidate at the School of Electronics and Information, Northwestern Polytechnical University. He received his bachelor degree in systems engineering from Northwestern Polytechnical University in 2010. His main research interest is airborne fire control. E-mail: yibai2003@126.com

    Corresponding author: GAO Xiao-Guang  Professor at the School of Electronics and Information, Northwestern Polytechnical University. She received her Ph.D. degree in aircraft navigation and control systems from Northwestern Polytechnical University in 1989. Her research interest covers Bayesian methods and airborne fire control. E-mail: cxg2012@nwpu.edu.cn

  • Abstract: Current training algorithms for restricted Boltzmann machine (RBM) networks are mainly sampling-based. When a sampling algorithm is used to compute the gradient, the sampled gradient is only an approximation of the true gradient, and the large error between them severely degrades training performance. To address this problem, this paper first analyzes the numerical error and the directional error between the sampled gradient and the true gradient, and their influence on network training performance. These errors are then analyzed theoretically from the perspective of Markov-chain sampling, and a gradient-fixing model is established, which adjusts both the magnitude and the direction of the sampled gradient. On this basis, a training algorithm based on an improved parallel tempering algorithm, the GFPT (gradient fixing parallel tempering) algorithm, is proposed. Finally, comparative experiments between GFPT and existing algorithms are presented. Simulation results show that GFPT greatly reduces the error between the sampled gradient and the true gradient and substantially improves the training performance of RBM networks.
    1)  Associate editor for this paper: QIAO Jun-Fei
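The sampling-based gradient estimation the abstract refers to is, in its most common form, one-step contrastive divergence (CD-1). Below is a minimal pure-Python sketch for a small binary RBM (function names and dimensions are illustrative, not the paper's implementation). It shows why the sampled gradient only approximates the true gradient: the negative phase is estimated from a single Gibbs step rather than from the model's true expectation, which introduces exactly the numerical and directional error the paper's gradient-fixing model targets.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_hidden(v, W, c):
    """Sample binary hidden units: p(h_j = 1 | v) = sigma(c_j + sum_i v_i W_ij)."""
    probs = [sigmoid(c[j] + sum(v[i] * W[i][j] for i in range(len(v))))
             for j in range(len(c))]
    return [1 if random.random() < p else 0 for p in probs], probs

def sample_visible(h, W, b):
    """Sample binary visible units: p(v_i = 1 | h) = sigma(b_i + sum_j W_ij h_j)."""
    probs = [sigmoid(b[i] + sum(W[i][j] * h[j] for j in range(len(h))))
             for i in range(len(b))]
    return [1 if random.random() < p else 0 for p in probs], probs

def cd1_gradient(v0, W, b, c):
    """CD-1 estimate of d(log p)/dW.

    Positive phase uses the data v0; negative phase uses a single Gibbs
    step, so the result is only an approximation of the true gradient.
    """
    h0, h0_probs = sample_hidden(v0, W, c)
    v1, _ = sample_visible(h0, W, b)
    _, h1_probs = sample_hidden(v1, W, c)
    return [[v0[i] * h0_probs[j] - v1[i] * h1_probs[j]
             for j in range(len(c))] for i in range(len(b))]

# Tiny example: 4 visible units, 3 hidden units.
W = [[0.01 * (i + j) for j in range(3)] for i in range(4)]
b, c = [0.0] * 4, [0.0] * 3
grad = cd1_gradient([1, 0, 1, 1], W, b, c)
print(len(grad), len(grad[0]))  # 4 3
```

Running more Gibbs steps (CD-k with larger k) or tempered chains reduces the approximation error at higher computational cost, which is the trade-off the CD1/CD5/CD10 and PT comparisons in the paper explore.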
  • Figure captions:

    Fig. 1  Configuration of RBM
    Fig. 2  Diagram of gradient
    Fig. 3  Contrast of reconstruction errors
    Fig. 4  Contrast of convergence accuracy
    Fig. 5  Contrast of reconstruction errors
    Fig. 6  Contrast of convergence accuracy
    Fig. 7  Contrast of runtime
    Fig. 8  Contrast of operating efficiency
    Fig. 9  Contrast of operating efficiency between GFPT and PT
    Fig. 10  Original image
    Fig. 11  Reconstructed image by CD1
    Fig. 12  Reconstructed image by CD5
    Fig. 13  Reconstructed image by CD10
    Fig. 14  Reconstructed image by PT5
    Fig. 15  Reconstructed image by PT10
    Fig. 16  Reconstructed image by GFPT5
    Fig. 17  Reconstructed image by GFPT10
    Fig. 18  Diagram of hidden layer features learned by GFPT5
    Fig. 19  Diagram of hidden layer features learned by GFPT10

Table 1  Training algorithms and parameters

             CD1    CD5    CD10   PT5    PT10   GFPT5  GFPT10
    η        0.1    0.1    0.1    0.1    0.1    0.1    0.1
    k        1      5      10     1      1      1      1
    T        -      -      -      20     20     20     20
    M        -      -      -      5      10     5      10
    λ        -      -      -      -      -      0.1    0.1
    batch    60     60     60     60     60     60     60
    iter     1000   1000   1000   1000   1000   1000   1000
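The PT and GFPT variants in Table 1 rely on parallel tempering, which runs several Gibbs chains at different temperatures and occasionally exchanges the states of neighbouring chains. A sketch of the standard replica-swap acceptance step follows (illustrative only; the function names and the linearly spaced temperature ladder are assumptions, not taken from the paper):

```python
import math
import random

random.seed(1)

def swap_accept_prob(energy_a, energy_b, beta_a, beta_b):
    """Metropolis acceptance probability for swapping the states of two
    tempered chains: min(1, exp((beta_a - beta_b) * (E_a - E_b)))."""
    arg = (beta_a - beta_b) * (energy_a - energy_b)
    return 1.0 if arg >= 0 else math.exp(arg)

def try_swaps(states, energies, betas):
    """Attempt a swap between each pair of neighbouring temperatures.

    states/energies are indexed by chain; betas are inverse temperatures in
    increasing order (beta = 1 is the target distribution). Returns the
    number of accepted swaps.
    """
    accepted = 0
    for k in range(len(betas) - 1):
        if random.random() < swap_accept_prob(energies[k], energies[k + 1],
                                              betas[k], betas[k + 1]):
            states[k], states[k + 1] = states[k + 1], states[k]
            energies[k], energies[k + 1] = energies[k + 1], energies[k]
            accepted += 1
    return accepted

# Example: 5 chains (as in the PT5/GFPT5 configurations) with a linearly
# spaced inverse-temperature ladder ending at beta = 1.
betas = [0.2 * (k + 1) for k in range(5)]        # 0.2 ... 1.0
states = [[0, 1, 0, 1] for _ in range(5)]        # placeholder RBM states
energies = [random.uniform(-2.0, 0.0) for _ in range(5)]
n = try_swaps(states, energies, betas)
print(0 <= n <= 4)  # True
```

High-temperature chains mix quickly and feed diverse states down the ladder, which is why PT-based negative-phase sampling tracks the model distribution better than plain CD at the cost of M parallel chains.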
Publication history
  • Received: 2016-04-11
  • Accepted: 2016-08-31
  • Published: 2017-05-01
