Improvements to N-gram Language Model Using Text Generated from Neural Language Model
Masayuki Suzuki, Nobuyasu Itoh, et al.
ICASSP 2019
Although neural language models have emerged, n-gram language models are still used for many speech recognition tasks. This paper proposes four methods for improving n-gram language models with text generated by a recurrent neural network language model (RNNLM). First, we use multiple RNNLMs trained on different domains instead of a single RNNLM; the final n-gram language model is obtained by interpolating the n-gram models built from each domain's generated text. Second, we use subwords instead of words in the RNNLM to reduce the out-of-vocabulary rate. Third, we use an RNNLM to generate text templates for template-based data augmentation of named entities. Fourth, we use both a forward and a backward RNNLM to generate text. These four methods improved speech recognition performance by up to 4% relative across various tasks.
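As a rough illustration of the first method, here is a minimal sketch in Python of interpolating per-domain n-gram models built from generated text. It is not the paper's implementation: the `sample_sentences` stub stands in for actual RNNLM sampling, and the bigram order, the maximum-likelihood estimates, and the interpolation weights are illustrative assumptions; all helper names are hypothetical.

```python
from collections import Counter, defaultdict
from typing import Dict, List, Tuple

Bigram = Tuple[str, str]

def bigram_model(sentences: List[List[str]]) -> Dict[Bigram, float]:
    """Maximum-likelihood bigram probabilities p(b | a) from tokenized text."""
    bigrams: Counter = Counter()
    histories: Counter = Counter()
    for toks in sentences:
        padded = ["<s>"] + toks + ["</s>"]
        for a, b in zip(padded, padded[1:]):
            bigrams[(a, b)] += 1
            histories[a] += 1
    return {(a, b): c / histories[a] for (a, b), c in bigrams.items()}

def interpolate(models: List[Dict[Bigram, float]],
                weights: List[float]) -> Dict[Bigram, float]:
    """Linear interpolation: p(b | a) = sum_d lambda_d * p_d(b | a)."""
    mixed: Dict[Bigram, float] = defaultdict(float)
    for model, lam in zip(models, weights):
        for ngram, p in model.items():
            mixed[ngram] += lam * p
    return dict(mixed)

def sample_sentences(domain: str) -> List[List[str]]:
    """Hypothetical stub: in the paper's setup this would sample text
    from the RNNLM trained on the given domain."""
    canned = {
        "news": [["stocks", "rose", "today"], ["rates", "fell", "today"]],
        "medical": [["the", "patient", "recovered"],
                    ["the", "dose", "was", "low"]],
    }
    return canned[domain]

domains = ["news", "medical"]
models = [bigram_model(sample_sentences(d)) for d in domains]
final_lm = interpolate(models, weights=[0.6, 0.4])  # assumed weights
print(final_lm[("the", "patient")])  # 0.4 * 0.5 = 0.2
```

A real pipeline would train smoothed ARPA-format models on the generated text and interpolate those with an LM toolkit (SRILM supports static interpolation, for example) rather than mixing raw maximum-likelihood bigrams as above.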
Ensembles of multi-scale VGG acoustic models
Michael Heck, Masayuki Suzuki, et al.
INTERSPEECH 2017
David Haws, Xiaodong Cui
ICASSP 2019
Nobuaki Minematsu, Ibuki Nakamura, et al.
IEICE Transactions on Information and Systems
Yinghui Huang, Samuel Thomas, et al.
ASRU 2019