Abstract
Model M, a novel class-based exponential language model, has been shown to significantly outperform word n-gram models in state-of-the-art machine translation and speech recognition systems. The model was motivated by the observation that shrinking the sum of the parameter magnitudes in an exponential language model leads to better performance on unseen data. As a class-based language model, Model M relies on word classes that are induced automatically from the training data. In this paper, we extend Model M to allow different clusterings to be used at different word positions, motivated by the fact that words play different roles depending on their position in an n-gram. Experiments on standard NIST and GALE Arabic-to-English development and test sets show improvements in machine translation quality as measured by automatic evaluation metrics. © 2011 IEEE.
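The position-dependent extension can be illustrated with the standard class-based decomposition that Model M builds on; the notation below (a single prediction clustering $c(\cdot)$ and hypothetical per-position history clusterings $c_1(\cdot), c_2(\cdot)$) is an assumed sketch, not taken from the abstract:

```latex
% Sketch for the trigram case: c(w) classes the predicted word, while
% c_k(w) is a (hypothetical) separate clustering for the word k
% positions back in the history. With a single shared clustering,
% c = c_1 = c_2 recovers the original Model M setup.
p(w_j \mid w_{j-2} w_{j-1})
  = p\bigl(c(w_j) \mid c_2(w_{j-2})\, c_1(w_{j-1}),\, w_{j-2} w_{j-1}\bigr)
  \cdot p\bigl(w_j \mid w_{j-2} w_{j-1},\, c(w_j)\bigr)
```

Both factors are exponential (log-linear) models; allowing $c_1$ and $c_2$ to differ from $c$ lets the history positions use clusterings tuned to their distinct roles.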