Peter L. Williams, Nelson L. Max, et al.
IEEE TVCG
For large-vocabulary handwriting-recognition applications, such as note-taking, word-level language modeling is of key importance, to constrain the recognizer's search and to contribute to the scoring of hypothesized texts. We discuss the creation of a word-unigram language model, which associates probabilities with individual words. Typically, such models are derived from a large, diverse text corpus. We describe a three-stage algorithm for determining a word unigram from such a corpus. First is tokenization, the segmenting of a corpus into words. Second, we select for the model a subset of the set of distinct words found during tokenization. Complexities of these stages are discussed. Finally, we create recognizer-specific data structures for the word set and unigram. Applying our method to a 600-million-word corpus, we generate a 50,000-word model which eliminates 45% of word-recognition errors made by a baseline system employing only a character-level language model. © 2001 IEEE.
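To make the three-stage pipeline in the abstract concrete, here is a minimal sketch of building a word-unigram model: tokenize a corpus, select a fixed-size subset of the distinct words, and map each selected word to a probability. The function name build_word_unigram, the regex tokenizer, the frequency-based selection, and the vocab_size parameter are illustrative assumptions, not the paper's actual method; the paper discusses far more involved tokenization and recognizer-specific data structures.

```python
import re
from collections import Counter

def build_word_unigram(corpus_text, vocab_size=50000):
    """Illustrative sketch of the three stages described in the abstract.

    1. Tokenization: segment the corpus into words.
    2. Selection: keep a subset of the distinct words found.
    3. Unigram: associate a probability with each selected word.
    """
    # Stage 1: naive tokenization (the paper treats the real
    # complexities of segmenting a large, diverse corpus).
    tokens = re.findall(r"[A-Za-z']+", corpus_text.lower())

    # Stage 2: select a fixed-size vocabulary, here simply the
    # most frequent distinct words (an assumed selection criterion).
    counts = Counter(tokens)
    selected = counts.most_common(vocab_size)

    # Stage 3: build the unigram table (word -> probability).
    total = sum(count for _, count in selected)
    return {word: count / total for word, count in selected}

if __name__ == "__main__":
    corpus = "the cat sat on the mat the dog sat on the log"
    unigram = build_word_unigram(corpus, vocab_size=5)
    for word, prob in sorted(unigram.items(), key=lambda kv: -kv[1]):
        print(f"{word}\t{prob:.3f}")
```

In a real recognizer, the resulting word set and unigram would then be compiled into recognizer-specific data structures, as the abstract's third stage describes; that step is omitted here.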