Discourse segmentation in aid of document summarization
B.K. Boguraev, Mary S. Neff
HICSS 2000
In this paper, we propose a bilevel joint unsupervised and supervised training (BL-JUST) framework for automatic speech recognition. Unlike the conventional pre-training and fine-tuning strategy, which is a disconnected two-stage process, BL-JUST optimizes an acoustic model so that it simultaneously minimizes both the unsupervised and supervised loss functions. Because BL-JUST seeks matched local optima of both losses, the acoustic representations it learns strike a good balance between being generic and task-specific. We solve the BL-JUST problem using penalty-based bilevel gradient descent and evaluate the trained deep neural network acoustic models on various datasets with a variety of architectures and loss functions. We show that BL-JUST can outperform the widely used pre-training and fine-tuning strategy as well as several popular semi-supervised techniques.
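Read literally, the abstract describes a bilevel program in which the supervised loss is minimized at the upper level subject to the model parameters solving the unsupervised problem at the lower level. The following is a schematic of that formulation and its standard penalty relaxation; the loss symbols \ell_{\mathrm{sup}} and \ell_{\mathrm{unsup}} and the penalty weight \lambda are introduced here for illustration, and the paper's exact formulation may differ:

    \min_{\theta} \; \ell_{\mathrm{sup}}(\theta)
    \quad \text{s.t.} \quad
    \theta \in \arg\min_{\theta'} \; \ell_{\mathrm{unsup}}(\theta'),

    \text{penalty relaxation:} \quad
    \min_{\theta} \; \ell_{\mathrm{sup}}(\theta) + \lambda \, \ell_{\mathrm{unsup}}(\theta),
    \qquad \lambda > 0.

A minimal PyTorch-style sketch of the resulting joint gradient step follows. It assumes a toy encoder with a cross-entropy classification head as the supervised loss and a feature-reconstruction head as the unsupervised stand-in; all names, dimensions, and loss choices here are illustrative assumptions, not the paper's (which likely uses ASR-specific losses such as CTC and a self-supervised criterion):

import torch
import torch.nn as nn

class ToyAcousticModel(nn.Module):
    """Toy stand-in for an acoustic model with two heads (illustrative only)."""
    def __init__(self, feat_dim=40, hidden=128, num_classes=30):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, num_classes)  # supervised head
        self.decoder = nn.Linear(hidden, feat_dim)        # unsupervised head

    def forward(self, x):
        h = self.encoder(x)
        return self.classifier(h), self.decoder(h)

model = ToyAcousticModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.5  # penalty weight lambda coupling the two loss terms (hyperparameter)

labeled_x = torch.randn(8, 40)            # synthetic "labeled" features
labels = torch.randint(0, 30, (8,))       # synthetic labels
unlabeled_x = torch.randn(32, 40)         # synthetic "unlabeled" features

for step in range(10):
    opt.zero_grad()
    logits, _ = model(labeled_x)          # supervised branch on labeled data
    _, recon = model(unlabeled_x)         # unsupervised branch on unlabeled data
    sup = nn.functional.cross_entropy(logits, labels)   # upper-level (supervised) loss
    unsup = nn.functional.mse_loss(recon, unlabeled_x)  # lower-level (unsupervised) loss
    loss = sup + lam * unsup              # penalty relaxation: one joint objective
    loss.backward()                       # both losses shape the shared encoder
    opt.step()

The design point the abstract emphasizes is visible in the last few lines: a single joint update replaces the disconnected pre-train-then-fine-tune pipeline, so the shared encoder is pulled toward parameters that are good for both objectives at once.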