Publication
ICME 2004
Conference paper

Multistage information fusion for audio-visual speech recognition

Abstract

This paper addresses the information fusion problem in the context of audio-visual speech recognition. Existing approaches to audio-visual fusion typically operate in either the feature domain or the decision domain. In this work, we consider a hybrid approach that aims to take advantage of both the feature fusion and the decision fusion methodologies. We introduce a general formulation that facilitates information fusion at multiple stages, followed by an experimental study of a set of fusion schemes allowed by the framework. The proposed method is implemented on a real-time audio-visual speech recognition system and evaluated on connected digit recognition tasks under varying acoustic conditions. The results show that the multistage fusion system consistently achieves lower word error rates than the reference feature fusion and decision fusion systems. It is further shown that removing the audio-only channel from the multistage system leads to only minimal degradation in recognition performance while noticeably reducing the computational load.
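
Since the abstract contrasts feature-level and decision-level fusion, the following minimal Python sketch illustrates the distinction and one way a multistage scheme could chain the two. The function names, the concatenation-based feature fusion, the weighted log-likelihood decision rule, and the idea of treating the feature-fused audio-visual stream as an extra channel at the decision stage are all illustrative assumptions, not the paper's actual formulation.

import numpy as np

# Hypothetical per-stream feature extractors; placeholders only, not the
# features used in the paper.
def extract_audio_features(audio_frame):
    # e.g. MFCCs computed from one audio frame
    return np.asarray(audio_frame, dtype=float)

def extract_visual_features(video_frame):
    # e.g. lip-region appearance features from one video frame
    return np.asarray(video_frame, dtype=float)

def feature_fusion(audio_feat, visual_feat):
    # Feature-level fusion: concatenate the two streams into a single
    # observation vector that feeds one audio-visual model.
    return np.concatenate([audio_feat, visual_feat])

def decision_fusion(stream_log_likelihoods, stream_weights):
    # Decision-level fusion: weighted combination of per-stream
    # log-likelihoods (weights might reflect estimated acoustic SNR).
    return sum(w * ll for w, ll in zip(stream_weights, stream_log_likelihoods))

def multistage_fusion(audio_ll, visual_ll, audio_visual_ll, stream_weights):
    # One possible multistage scheme: the feature-fused audio-visual
    # stream is scored as a third channel and then combined with the
    # single-modality streams at the decision stage.
    return decision_fusion([audio_ll, visual_ll, audio_visual_ll],
                           stream_weights)

As a usage note, dropping the audio-only channel in this sketch would simply mean combining the visual and audio-visual streams with two weights instead of three, which mirrors the reduced-channel configuration discussed in the abstract.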
