SPEAKER-INDEPENDENT UPFRONT DIALECT ADAPTATION IN A LARGE VOCABULARY CONTINUOUS SPEECH RECOGNIZER
Abstract
Large vocabulary continuous speech recognition systems show a significant decrease in performance if a user's pronunciation differs considerably from the pronunciations observed during system training. This can be considered the main reason why most commercially available systems recommend, if not require, that the individual end user read an enrollment script for the speaker-dependent re-estimation of acoustic model parameters. Improving recognition rates for dialect speakers is therefore an important issue, both for broader acceptance and for more convenient and natural use of such systems. This paper compares different techniques that aim at better speaker-independent recognition of dialect speech in a large vocabulary continuous speech recognizer. The methods discussed comprise Bayesian adaptation and speaker clustering techniques and address both the availability and the absence of dialect training material. Results are given for a case study aimed at improving a German speech recognizer for Austrian speakers.
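For reference, a standard form of Bayesian (MAP) adaptation updates each Gaussian mean as an interpolation between the speaker-independent prior mean and the statistics collected from adaptation data; the sketch below shows this general formulation and is not necessarily the exact variant used in this work:

\[
\hat{\mu}_k = \frac{\tau\,\mu_k + \sum_{t} \gamma_k(t)\,x_t}{\tau + \sum_{t} \gamma_k(t)}
\]

where \(\mu_k\) is the prior (speaker-independent) mean of mixture component \(k\), \(x_t\) are the adaptation feature vectors, \(\gamma_k(t)\) is the occupation probability of component \(k\) at time \(t\), and \(\tau\) is a prior weight controlling how strongly the adapted mean \(\hat{\mu}_k\) is pulled toward the adaptation data.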