Asset Modeling using Serverless Computing
Srideepika Jayaraman, Chandra Reddy, et al.
Big Data 2021
Recently proposed methods for discriminative language modeling require alternate hypotheses in the form of lattices or N-best lists. These are usually generated by an Automatic Speech Recognition (ASR) system on the same speech data used to train the system. This requirement restricts the scope of these methods to corpora where both the acoustic material and the corresponding true transcripts are available. Typically, the text data available for language model (LM) training is an order of magnitude larger than manually transcribed speech. This paper provides a general framework to take advantage of this volume of textual data in the discriminative training of language models. We propose to generate probable N-best lists directly from the text material, which resemble the N-best lists produced by an ASR system by incorporating phonetic confusability estimated from the acoustic model of the ASR system. We present experiments with Japanese spontaneous lecture speech data, which demonstrate that discriminative LM training with the proposed framework is effective and provides modest gains in ASR accuracy.
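The abstract above sketches the core idea: simulate the N-best lists an ASR system would produce by corrupting reference text with phonetic confusions estimated from the acoustic model. As a rough illustration only (not the paper's implementation, which targets Japanese lecture speech and real acoustic-model statistics), the Python sketch below generates confusable word hypotheses from a toy lexicon and a toy phone confusion matrix; LEXICON, CONFUSION, and pseudo_nbest are all invented names for this example.

```python
import itertools
import math

# Hypothetical toy lexicon: word -> phone sequence.
LEXICON = {
    "right": ["R", "AY", "T"],
    "write": ["R", "AY", "T"],
    "light": ["L", "AY", "T"],
    "night": ["N", "AY", "T"],
}

# Hypothetical phone confusion matrix, standing in for confusability
# estimated from an acoustic model: P(recognized phone | true phone).
CONFUSION = {
    "R": {"R": 0.90, "L": 0.08, "N": 0.02},
    "L": {"L": 0.92, "R": 0.06, "N": 0.02},
    "N": {"N": 0.95, "L": 0.03, "R": 0.02},
    "AY": {"AY": 1.0},
    "T": {"T": 1.0},
}

def phones_to_words(phones):
    """Return all lexicon words whose pronunciation matches `phones`."""
    return [w for w, p in LEXICON.items() if p == phones]

def pseudo_nbest(word, n=5):
    """Generate a pseudo N-best list of confusable words for one reference word.

    Enumerates phone sequences reachable through the confusion matrix,
    scores each by its total confusion log-probability, and maps the
    surviving sequences back to words via the lexicon.
    """
    true_phones = LEXICON[word]
    hypotheses = {}
    # One (phone, probability) choice set per phone of the true pronunciation.
    choices = [CONFUSION[p].items() for p in true_phones]
    for combo in itertools.product(*choices):
        phones = [p for p, _ in combo]
        logp = sum(math.log(prob) for _, prob in combo)
        for hyp_word in phones_to_words(phones):
            # Keep the best-scoring path to each hypothesis word.
            hypotheses[hyp_word] = max(hypotheses.get(hyp_word, float("-inf")), logp)
    return sorted(hypotheses.items(), key=lambda kv: -kv[1])[:n]

if __name__ == "__main__":
    # "right" yields competitors such as "write" (homophone) and
    # "light"/"night" (phonetically close), mimicking ASR confusions.
    for hyp, logp in pseudo_nbest("right"):
        print(f"{hyp}\t{logp:.3f}")
```

Ranking hypotheses by accumulated confusion log-probability is what makes the synthetic lists resemble ASR output: acoustically close alternatives score near the reference, while implausible ones fall off the list, giving the discriminative trainer realistic competitors without any audio.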
Bhuvana Ramabhadran, Jing Huang, et al.
INTERSPEECH - Eurospeech 2003
Benedikt Blumenstiel, Johannes Jakubik, et al.
NeurIPS 2023
Sudeep Sarkar, Kim L. Boyer
Computer Vision and Image Understanding