Daniel M. Bikel, Vittorio Castelli
ACL 2008
A multimodal conversational system is developed to provide an intuitive and flexible means of controlling vehicle systems, giving the user the option to operate the system with speech, touch, or any combination of the two. The speech recognition engine uses dynamic semantic models that track current and past contextual information and dynamically adapt the language model to increase the accuracy of the speech recognizer. The interaction is controlled by a dialogue manager that responds to input signals with output actions. A general multimodal dialogue-manager architecture is developed that allows for a complete separation between the interaction logic and the input signals.
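A minimal sketch of the architectural idea described above: interaction logic written against modality-independent semantic events, with a context store that downstream recognizers could use to bias their language models. All names here (SemanticEvent, DialogueContext, DialogueManager) are hypothetical illustrations, not interfaces from the paper.

```python
# Hypothetical sketch: the dialogue manager sees only SemanticEvents,
# so speech and touch front-ends can be swapped or combined freely.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class SemanticEvent:
    """Modality-independent meaning extracted from a raw input signal."""
    intent: str                      # e.g. "set_temperature" (hypothetical)
    slots: Dict[str, str] = field(default_factory=dict)
    modality: str = "unknown"        # "speech", "touch", ... (logging only)


class DialogueContext:
    """Tracks current and past events; a recognizer could use this to bias its model."""
    def __init__(self) -> None:
        self.history: List[SemanticEvent] = []

    def update(self, event: SemanticEvent) -> None:
        self.history.append(event)

    def active_intents(self) -> List[str]:
        # Toy heuristic: recently seen intents are treated as more likely next.
        return [e.intent for e in self.history[-5:]]


class DialogueManager:
    """Maps semantic events to output actions; knows nothing about input modalities."""
    def __init__(self) -> None:
        self.context = DialogueContext()
        self.handlers: Dict[str, Callable[[SemanticEvent], str]] = {}

    def register(self, intent: str, handler: Callable[[SemanticEvent], str]) -> None:
        self.handlers[intent] = handler

    def handle(self, event: SemanticEvent) -> str:
        self.context.update(event)
        handler = self.handlers.get(event.intent,
                                    lambda e: "Sorry, I did not understand that.")
        return handler(event)


# Usage: speech and touch front-ends both emit the same kind of event.
dm = DialogueManager()
dm.register("set_temperature",
            lambda e: f"Setting temperature to {e.slots.get('value', '?')} degrees.")

print(dm.handle(SemanticEvent("set_temperature", {"value": "21"}, modality="speech")))
print(dm.handle(SemanticEvent("set_temperature", {"value": "19"}, modality="touch")))
```

The design choice being illustrated is the separation the abstract claims: the handlers encode the interaction logic once, while each modality only needs an adapter that produces SemanticEvents.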
Michael C. McCord, Violetta Cavalli-Sforza
ACL 2007
Liqun Chen, Matthias Enzmann, et al.
FC 2005
Oliver Bodemer
IBM J. Res. Dev.