Discriminative Model Combination Using Decision Tree Based Phonetic Context Modeling
Abstract: One limitation of context-dependent discriminative model combination is that it introduces a large set of model weight parameters, which makes discriminative weight training prone to overfitting when training data are limited. To address this, we propose modeling phonetic context with decision trees in lattice-based discriminative model combination. The question at each tree node is chosen to optimize the minimum phone error (MPE) criterion, yielding an optimal set of context-dependent weight parameters; a first-order approximation of the objective-function increment is evaluated to speed up question selection, and a refined question set is used to obtain better acoustic discrimination. Speech recognition experiments on multi-model combination show that the method improves the robustness of weight training against overfitting, reduces the error rate with far fewer parameters, and outperforms combination in the feature space.
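The combination rule and the split-selection criterion outlined above can be sketched as follows. This is a hedged reconstruction based on standard lattice-based discriminative model combination and MPE formulations, not the paper's own equations; the symbols $\lambda_{j,c}$, $c(t)$, $F_{\mathrm{MPE}}$, and the exact form of the score are assumptions made for illustration.

```latex
% Hedged reconstruction; notation is assumed, not taken from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Log-linear model combination with context-dependent weights:
% model j contributes its acoustic log-score, weighted by lambda_{j,c(t)},
% where c(t) is the phonetic context (decision-tree leaf) active at time t.
\begin{align}
  \log p_\Lambda(W \mid O)
    \;\propto\; \sum_{t} \sum_{j} \lambda_{j,\,c(t)} \, \log p_j(o_t \mid s_t)
       \;+\; \lambda_{\mathrm{LM}} \, \log p(W) .
\end{align}

% First-order (Taylor) approximation of the MPE objective increment used to
% rank a candidate question q that splits leaf c into (c_yes, c_no):
\begin{align}
  \Delta F_{\mathrm{MPE}}(q)
    \;\approx\; \sum_{c' \in \{c_{\mathrm{yes}},\, c_{\mathrm{no}}\}} \sum_{j}
      \left.\frac{\partial F_{\mathrm{MPE}}}{\partial \lambda_{j,c'}}
      \right|_{\lambda_{j,c}}
      \bigl(\lambda_{j,c'} - \lambda_{j,c}\bigr) .
\end{align}
% Each candidate question is thus scored without re-optimizing the full
% weight set, which is what makes exhaustive question search tractable.

\end{document}
```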
Key words:
- Discriminative model combination
- context
- decision tree
- minimum phone error
- speech recognition