Abstract:
To overcome the slow convergence of the back-propagation (BP) learning algorithm for recurrent neural networks, a new fast parallel learning algorithm for recurrent neural networks is proposed. First, the recursive prediction error (RPE) learning algorithm is introduced and its stability is proved. Then, to remove the centralized computation that the RPE algorithm requires, a fully parallel structure algorithm is designed. The new algorithm distributes the computation to every neuron in the network, which is consistent with the massively parallel structure of neural networks and is also convenient for hardware implementation. Simulation results show that the proposed algorithm converges better than the traditional recurrent BP learning algorithm. Both theoretical analysis and simulation results show that the new parallel algorithm saves substantial computation time compared with the centralized RPE algorithm.
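For orientation, the following is a minimal sketch of one centralized RPE update in the standard Gauss-Newton style recursion (in the spirit of Ljung's formulation). The abstract gives no equations, so the model, the variable names theta, P, psi, and the forgetting factor lam are all illustrative assumptions, not the paper's exact method.

```python
import numpy as np

# Minimal sketch of one centralized RPE update, assuming the standard
# Gauss-Newton style recursion; all names here are illustrative, since
# the abstract does not state the paper's exact equations.

def rpe_step(theta, P, psi, y, y_hat, lam=0.99):
    """theta : (n,) parameter vector (all network weights, stacked)
    P      : (n, n) covariance-like matrix (approximate inverse Hessian)
    psi    : (n,) gradient of the network output with respect to theta
    y, y_hat : target and current network output (scalars)
    lam    : forgetting factor, 0 < lam <= 1
    """
    eps = y - y_hat                          # prediction error
    gain = P @ psi / (lam + psi @ P @ psi)   # Kalman-like gain vector
    theta = theta + gain * eps               # correct the parameters
    P = (P - np.outer(gain, psi @ P)) / lam  # rank-1 covariance update
    return theta, P
```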
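The abstract likewise does not give the parallel recursion itself. One common way to decentralize such an update, assumed here purely for illustration, is to keep one small covariance block per neuron, so that each neuron adjusts only its own incoming weights independently of the others:

```python
# Hypothetical per-neuron decomposition (block-diagonal assumption),
# reusing rpe_step from the sketch above: thetas[i], Ps[i], psis[i]
# cover only neuron i's incoming weights, so each loop iteration is
# independent and could run on one processing element per neuron.

def parallel_rpe_step(thetas, Ps, psis, y, y_hat, lam=0.99):
    for i in range(len(thetas)):             # independent -> parallelizable
        thetas[i], Ps[i] = rpe_step(thetas[i], Ps[i], psis[i], y, y_hat, lam)
    return thetas, Ps
```

Under this block-diagonal assumption, the cost per step drops from O(N^2) for one global N-by-N matrix P to the sum of squared per-neuron block sizes, which is one plausible source of the computation-time savings the abstract reports.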