George Saon, Tom Sercu, et al.
INTERSPEECH 2016
In recent years, server-based automatic speech recognition (ASR) systems have become ubiquitous, and unprecedented amounts of speech data are now available for system training. The availability of such training data has greatly improved ASR accuracy, but how to maximize ASR performance in new domains, or in domains where ASR systems currently fail and training data is therefore scarce, remains an important open question. In this paper, we propose a framework for mapping large speech corpora to different acoustic environments, so that existing data can be transformed to build high-quality acoustic models for other acoustic domains. In experiments on a large corpus, the proposed method reduced errors by 18.6%.
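The abstract does not spell out the mapping framework itself, so the following is only a minimal sketch of one common way to "map" clean speech into a target acoustic environment: convolving with a room impulse response and mixing in environment noise at a chosen SNR. The function name, the RIR/noise inputs, and the SNR value are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch (not the paper's framework): simulate a target acoustic
# environment by adding reverberation and noise to clean speech.
import numpy as np
from scipy.signal import fftconvolve


def transform_to_environment(speech, rir, noise, snr_db=10.0):
    """Transform a clean speech signal toward a target acoustic environment.

    speech, rir, noise: 1-D float arrays at the same sample rate (assumed).
    snr_db: desired speech-to-noise ratio of the transformed signal.
    """
    # Add the reverberation of the target room via its impulse response.
    reverbed = fftconvolve(speech, rir, mode="full")[: len(speech)]

    # Loop or trim the noise recording to match the speech length.
    reps = int(np.ceil(len(reverbed) / len(noise)))
    noise_seg = np.tile(noise, reps)[: len(reverbed)]

    # Scale the noise so the mixture reaches the requested SNR.
    speech_power = np.mean(reverbed ** 2) + 1e-12
    noise_power = np.mean(noise_seg ** 2) + 1e-12
    scale = np.sqrt(speech_power / (noise_power * 10.0 ** (snr_db / 10.0)))
    return reverbed + scale * noise_seg
```

Transformed copies of an existing corpus produced this way are typically added to (or substituted for) the original data when training an acoustic model for the new domain.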
Sriram Ganapathy, Samuel Thomas, et al.
INTERSPEECH 2015
Tohru Nagano, Takashi Fukuda, et al.
ASRU 2019
Sashi Novitasari, Takashi Fukuda, et al.
INTERSPEECH 2022