Shivashankar Subramanian, Ioana Baldini, et al.
IAAI 2020
Transition-based parsers for Abstract Meaning Representation (AMR) rely on node-to-word alignments. These alignments are learned separately from parser training and require a complex pipeline of rule-based components, pre-processing, and post-processing to satisfy domain-specific constraints. Parsers also train on a point estimate of the alignment pipeline's output, neglecting the uncertainty that arises from the inherent ambiguity of alignment. In this work we explore two avenues for overcoming these limitations. First, we propose a neural aligner for AMR that learns node-to-word alignments without relying on complex pipelines. We then explore a tighter integration of aligner and parser training by considering a distribution over oracle action sequences arising from aligner uncertainty. Empirical results show that this approach yields more accurate alignments and generalizes better from the AMR2.0 to the AMR3.0 corpus. We attain a new state of the art for gold-only trained models, matching silver-trained performance without the need for beam search on AMR3.0.
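The second avenue described in this abstract, training the parser against a distribution over oracle action sequences rather than a single fixed alignment, can be illustrated with a short sketch. The Python below is not the paper's implementation: the alignment posterior, the toy oracle, and the placeholder loss are all hypothetical, and it shows only the general idea of Monte Carlo marginalization over sampled alignments.

import random

# Hedged sketch (not the authors' code): estimate a parser loss marginalized
# over node-to-word alignment uncertainty by sampling alignments from a
# (hypothetical) learned posterior and deriving oracle actions per sample.

def sample_alignment(alignment_posterior):
    """Sample one word position per AMR node from its categorical posterior."""
    alignment = {}
    for node, probs in alignment_posterior.items():
        positions = list(probs)
        weights = list(probs.values())
        alignment[node] = random.choices(positions, weights=weights, k=1)[0]
    return alignment

def oracle_actions(alignment, nodes):
    """Toy oracle: emit node-generation actions in aligned word order."""
    return [f"GEN({n})" for n in sorted(nodes, key=lambda n: alignment[n])]

def parser_loss(actions):
    """Stand-in for the parser's negative log-likelihood of an action
    sequence; here a position-weighted length so that different oracle
    orders produce different losses."""
    return float(sum(i * len(a) for i, a in enumerate(actions)))

def expected_loss(alignment_posterior, nodes, num_samples=8):
    """Monte Carlo estimate of the loss marginalized over alignments."""
    total = 0.0
    for _ in range(num_samples):
        a = sample_alignment(alignment_posterior)
        total += parser_loss(oracle_actions(a, nodes))
    return total / num_samples

# Example: a 3-node AMR where "want-01" has an ambiguous alignment that
# changes the oracle action order between samples.
posterior = {
    "boy":     {0: 1.0},
    "want-01": {1: 0.6, 4: 0.4},  # word 1 or word 4
    "go-02":   {3: 1.0},
}
print(expected_loss(posterior, list(posterior)))

In a real parser, parser_loss would be the model's negative log-likelihood of the sampled oracle sequence, so averaging over samples trains against the alignment distribution rather than a single point estimate.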
Kevin Gu, Eva Tuecke, et al.
ICML 2024
Gabriele Picco, Lam Thanh Hoang, et al.
EMNLP 2021
Daiki Kimura, Tsunehiko Tanaka, et al.
NAACL 2022