Pre-training BERT on domain resources for short answer grading
Chul Sung, Tengfei Ma, et al.
EMNLP-IJCNLP 2019
Learning Crosslingual Word Embeddings without Bilingual Corpora
Long Duong, Hiroshi Kanayama, et al.
EMNLP 2016
Cross-lingual word embeddings represent lexical items from different languages in the same vector space, enabling the transfer of NLP tools. However, previous attempts either had expensive resource requirements, had difficulty incorporating monolingual data, or were unable to handle polysemy. We address these drawbacks with a method that takes advantage of a high-coverage dictionary in an EM-style training algorithm over monolingual corpora in two languages. Our model achieves state-of-the-art performance on the bilingual lexicon induction task, exceeding models trained on large bilingual corpora, and competitive results on monolingual word similarity and cross-lingual document classification tasks.
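Since the abstract only sketches the EM-style procedure, the following is a minimal, illustrative Python sketch of the general idea: a bilingual dictionary supplies candidate translations, an E-step picks the translation sense whose current embedding best fits the context, and an M-step updates a shared embedding space. The toy corpora, the toy dictionary, and the simplified update rule are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code) of EM-style crosslingual
# embedding training over two monolingual corpora with a dictionary.
import numpy as np

DIM, LR = 50, 0.05
# Hypothetical toy dictionary and corpora, chosen only for illustration.
DICTIONARY = {"dog": ["inu"], "cat": ["neko"], "inu": ["dog"], "neko": ["cat"]}
CORPUS_EN = [["the", "dog", "barks"], ["the", "cat", "sleeps"]]
CORPUS_JA = [["inu", "ga", "hoeru"], ["neko", "ga", "neru"]]

vocab = sorted({w for s in CORPUS_EN + CORPUS_JA for w in s} | set(DICTIONARY))
idx = {w: i for i, w in enumerate(vocab)}
rng = np.random.default_rng(0)
emb = rng.normal(scale=0.1, size=(len(vocab), DIM))   # shared vector space

def context_vec(sent, pos):
    # Average the embeddings of the surrounding words, CBOW-style.
    return np.mean([emb[idx[w]] for j, w in enumerate(sent) if j != pos], axis=0)

for epoch in range(10):
    for sent in CORPUS_EN + CORPUS_JA:
        for pos, word in enumerate(sent):
            h = context_vec(sent, pos)
            targets = [word]
            translations = DICTIONARY.get(word, [])
            if translations:
                # E-step: pick the dictionary translation whose current
                # embedding best matches this context (handles polysemy).
                targets.append(max(translations, key=lambda t: h @ emb[idx[t]]))
            # M-step: pull the word and its chosen translation toward the
            # context vector; a crude stand-in for the CBOW gradient update.
            for t in targets:
                emb[idx[t]] += LR * (h - emb[idx[t]])

# Translation pairs drift toward each other in the shared space.
print(round(float(emb[idx["dog"]] @ emb[idx["inu"]]), 3))
```

The key design point the abstract hints at is that no parallel text is needed: the dictionary alone ties the two monolingual training signals together, and the context-driven E-step lets a polysemous word select different translations in different sentences.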
Yang Zhao, Hiroshi Kanayama, et al.
LREC 2022
Takaaki Tanaka, Yusuke Miyao, et al.
LREC 2016
Aldo Pareja, Giacomo Domeniconi, et al.
AAAI 2020