Combining metrics for mesh simplification and parameterization
Jordan Smith, Ioana Boier-Martin
SIGGRAPH 2005
In this paper we define two alternatives to the familiar perplexity statistic (hereafter lexical perplexity), which is widely applied both as a figure of merit and as an objective function for training language models. These alternatives, respectively acoustic perplexity and the synthetic acoustic word error rate, fuse information from both the language model and the acoustic model. We show how to compute these statistics by effectively synthesizing a large acoustic corpus, demonstrate their superiority (on a modest collection of models and test sets) to lexical perplexity as predictors of language model performance, and investigate their use as objective functions for training language models. We develop an efficient algorithm for training such models, and present results from a simple speech recognition experiment, in which we achieved a small reduction in word error rate by interpolating a language model trained by synthetic acoustic word error rate with a unigram model.
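For readers unfamiliar with the baseline quantities the abstract contrasts, the short Python sketch below illustrates standard lexical perplexity and the kind of linear interpolation with a unigram model mentioned in the final sentence. It is only an illustrative sketch of these textbook definitions: the function names, toy probabilities, and interpolation weight lam=0.8 are assumptions, and the paper's acoustic perplexity, synthetic acoustic word error rate, and training algorithm are not reproduced here.

```python
import math

def perplexity(words, prob):
    """Lexical perplexity: exp(-(1/N) * sum_i log P(w_i | w_1..w_{i-1}))."""
    log_sum = sum(math.log(prob(w, words[:i])) for i, w in enumerate(words))
    return math.exp(-log_sum / len(words))

def interpolate(p_main, p_unigram, lam=0.8):
    """Return a model P(w | h) = lam * P_main(w | h) + (1 - lam) * P_unigram(w)."""
    return lambda w, h: lam * p_main(w, h) + (1.0 - lam) * p_unigram(w)

# Toy unigram distribution over a three-word vocabulary (illustrative numbers).
unigram = {"a": 0.5, "b": 0.3, "c": 0.2}
p_uni = lambda w: unigram[w]
p_main = lambda w, h: unigram[w]  # stand-in for a separately trained model

model = interpolate(p_main, p_uni, lam=0.8)
print(perplexity(["a", "b", "a", "c"], model))  # about 2.86 for this toy input
```

Lower perplexity on held-out text is conventionally read as a better language model; the paper's point is that this purely lexical figure of merit ignores the acoustic model, which motivates the acoustic alternatives it proposes.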