Hybrid reinforcement learning with expert state sequences
Xiaoxiao Guo, Shiyu Chang, et al.
AAAI 2019
Development of a robust two-way real-time speech translation system exposes researchers and system developers to various challenges of machine translation (MT) and spoken language dialogue. The need to communicate in at least two languages poses problems not present in a monolingual spoken language dialogue system, where no MT engine is embedded in the processing flow. Integration of the various component modules for real-time operation poses challenges not present in text translation. In this paper, we present the CCLINC (Common Coalition Language System at Lincoln Laboratory) English-Korean two-way speech translation system prototype, trained on doctor-patient dialogues, which integrates various techniques to tackle the challenges of automatic real-time speech translation. Key features of the system include (i) a language-independent meaning representation that preserves the hierarchical predicate-argument structure of an input utterance, providing a powerful mechanism for discourse understanding of utterances originating from different languages, word-sense disambiguation, and generation of the varied word orders of different languages; (ii) adoption of the DARPA Communicator architecture, a plug-and-play distributed system architecture that facilitates integration of component modules and real-time system operation; and (iii) automatic acquisition of grammar rules and lexicons for easy porting of the system to different languages and domains. We describe these features in detail and present experimental results.
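The abstract does not spell out the concrete form of the interlingua, so the following is only a minimal sketch, assuming a frame-like predicate-argument structure; the frame layout, role names, and example utterance ("Do you have a headache?") are illustrative assumptions, not the paper's actual representation.

```python
# Illustrative sketch only: the field names, roles, and example below are
# assumptions for exposition, not the CCLINC interlingua itself.
from dataclasses import dataclass, field
from typing import Dict, List, Union


@dataclass
class Frame:
    """One node of a hierarchical predicate-argument meaning representation."""
    predicate: str                                   # language-independent concept
    arguments: Dict[str, Union["Frame", str]] = field(default_factory=dict)


# "Do you have a headache?" (doctor-patient domain) as a nested frame.
utterance = Frame(
    predicate="ask",
    arguments={
        "content": Frame(
            predicate="have_symptom",
            arguments={"experiencer": "patient", "symptom": "headache"},
        )
    },
)


def linearize(frame: Frame, role_order: List[str], verb_final: bool) -> List[str]:
    """Walk one frame with a language-specific argument order and verb position."""
    tokens: List[str] = [] if verb_final else [frame.predicate]
    for role in role_order:
        arg = frame.arguments.get(role)
        if isinstance(arg, Frame):
            tokens.extend(linearize(arg, role_order, verb_final))
        elif arg is not None:
            tokens.append(arg)
    if verb_final:
        tokens.append(frame.predicate)
    return tokens


# The same frame yields an English-like predicate-initial order and a
# Korean-like predicate-final order, illustrating generation of different
# word orders from one language-independent structure.
print(linearize(utterance, ["experiencer", "symptom", "content"], verb_final=False))
print(linearize(utterance, ["experiencer", "symptom", "content"], verb_final=True))
```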