On the Training of Neural Network for Associative Memory
Abstract: An optimized training scheme for associative-memory neural networks is proposed. We show that the basins of attraction of the sample attractors can be controlled to some extent by a well-depth parameter, so that the fault tolerance of the network can be made as good as possible. Numerical simulations show that with this scheme the trained network can reach a capacity of α < 1 (α = M/N, where N is the number of neurons and M is the number of stored samples) while still retaining good fault tolerance, clearly outperforming popular schemes such as the outer-product, orthogonalized outer-product, and pseudo-inverse matrix schemes. The symmetry and convergence of the trained networks are also discussed.
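The abstract compares the proposed training scheme against the standard outer-product (Hebbian) rule for Hopfield-type associative memories. The paper's own scheme, including the well-depth parameter, is not specified in this abstract; the sketch below illustrates only the outer-product baseline it is compared with, where the network size, noise level, and function names are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def outer_product_weights(patterns):
    """Outer-product (Hebbian) rule: W = (1/N) * sum_mu xi^mu (xi^mu)^T, zero diagonal."""
    M, N = patterns.shape
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, max_iters=100):
    """Asynchronous recall: update units in random order until a fixed point is reached."""
    state = state.copy()
    N = len(state)
    for _ in range(max_iters):
        changed = False
        for i in np.random.permutation(N):
            s = 1 if W[i] @ state >= 0 else -1
            if s != state[i]:
                state[i] = s
                changed = True
        if not changed:          # fixed point: the state is an attractor
            break
    return state

# Usage: store M random +/-1 patterns in an N-unit network and recall from a noisy probe.
rng = np.random.default_rng(0)
N, M = 100, 10                   # alpha = M/N = 0.1, below the Hebbian limit of roughly 0.14
patterns = rng.choice([-1, 1], size=(M, N))
W = outer_product_weights(patterns)
probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)   # corrupt 10% of the bits
probe[flip] *= -1
print(np.array_equal(recall(W, probe), patterns[0]))
```

The paper's point is that a trained weight matrix, rather than this fixed Hebbian prescription, lets the basin of attraction around each stored sample be enlarged, so that recall from such corrupted probes remains reliable even as α approaches 1.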
Key words:
- Neural network
- associative memory
- fault-tolerance
- attractor
- basin of attraction