Universal Dependencies for Japanese
Takaaki Tanaka, Yusuke Miyao, et al.
LREC 2016
Deletion-based sentence compression for English has made significant progress over the past few decades. For Chinese, however, there is no large-scale, high-quality parallel corpus of (sentence, compression) pairs on which to train an effective compression system. To remedy this shortcoming, we present a dependency-tree-based method that exploits characteristics specific to Chinese to construct a corpus of 151k sentence-compression pairs. We then train both extractive and generative neural compression models on the constructed corpus. Experimental results show that, compared with the baselines, our models generate higher-quality compressed sentences under both automatic and human evaluation. A faithfulness evaluation further indicates that compression models trained on our corpus produce more faithful compressions. Finally, we manually created a dataset of 1,000 sentences paired with ground-truth compressions for automatic evaluation, which we believe will benefit future research on Chinese sentence compression.
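The abstract frames deletion-based compression as pruning a sentence's dependency tree. The sketch below is a minimal illustration of that idea, not the paper's actual corpus-construction rules: it deletes every subtree headed by a small set of optional-modifier relations and keeps the remaining tokens in order. The `DELETABLE` relation set, the `Token` layout, and the toy parse are illustrative assumptions.

```python
# Minimal sketch: deletion-based compression as dependency-subtree pruning.
# The relation set and the toy parse are assumptions for illustration only.

from dataclasses import dataclass
from typing import List

@dataclass
class Token:
    idx: int      # position in the sentence
    form: str     # surface form
    head: int     # index of the head token (-1 for the root)
    deprel: str   # dependency relation to the head

# Relations whose whole subtree we treat as optional and delete (assumed set).
DELETABLE = {"amod", "advmod", "nmod", "appos", "acl"}

def compress(tokens: List[Token]) -> str:
    """Drop every token dominated by a deletable relation; keep the rest in order."""
    children = {t.idx: [] for t in tokens}
    for t in tokens:
        if t.head >= 0:
            children[t.head].append(t.idx)

    removed = set()

    def mark(idx: int) -> None:  # mark idx and its whole subtree for deletion
        removed.add(idx)
        for child in children[idx]:
            mark(child)

    for t in tokens:
        if t.deprel in DELETABLE and t.idx not in removed:
            mark(t.idx)

    # Chinese is written without spaces, so concatenate the surviving forms.
    return "".join(t.form for t in tokens if t.idx not in removed)

# Toy parse (gloss: "The company yesterday officially released a new product"):
sent = [
    Token(0, "该", 1, "det"),
    Token(1, "公司", 4, "nsubj"),
    Token(2, "昨天", 4, "advmod"),  # optional time adverbial -> deleted
    Token(3, "正式", 4, "advmod"),  # optional manner adverbial -> deleted
    Token(4, "发布", -1, "root"),
    Token(5, "新", 6, "amod"),      # optional modifier -> deleted
    Token(6, "产品", 4, "obj"),
]
print(compress(sent))  # -> 该公司发布产品
```

Pruning whole subtrees, rather than individual tokens, keeps the output grammatical whenever the kept tokens form a connected piece of the tree, which is why dependency trees are a natural scaffold for building (sentence, compression) pairs.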
Daniel Zeman, Martin Popel, et al.
CoNLL 2017
Hiroshi Kanayama, Yusuke Miyao, et al.
COLING 2012
Hiroshi Kanayama, Ran Iwamoto
LREC 2020