
Adversarial Training for Cross-Domain Universal Dependency Parsing

Motoki Sato, Hitoshi Manabe, Hiroshi Noji, Yuji Matsumoto


Abstract
We describe our submission to the CoNLL 2017 shared task, which exploits the knowledge shared by a language across different domains via a domain adaptation technique. Our approach extends the recently proposed adversarial training technique for domain adaptation, which we apply on top of a graph-based neural dependency parsing model based on bidirectional LSTMs. In our experiments, we find that our baseline graph-based parser already outperforms the official baseline model (UDPipe) by a large margin. Furthermore, by applying our technique to treebanks of the same language from different domains, we observe additional performance gains, in particular for domains with less training data.
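
The core idea described in the abstract, adversarial training for domain adaptation, can be sketched as a shared BiLSTM encoder that feeds both the parser and a domain classifier, with a gradient reversal layer pushing the encoder toward domain-invariant representations (in the spirit of Ganin and Lempitsky, 2015). The Python/PyTorch sketch below is illustrative only; the class names (GradientReversal, AdversarialParser), the mean pooling, and all hyperparameters are our assumptions, not the authors' implementation.

# Illustrative sketch of adversarial domain adaptation via gradient reversal.
# All names and hyperparameters are hypothetical, not the paper's actual code.
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) the gradient on the
    backward pass, so the shared encoder is trained to fool the domain
    classifier and thus learns domain-invariant features."""
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None

class AdversarialParser(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=200, n_domains=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared BiLSTM encoder over the sentence, as in graph-based parsers.
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        # The domain classifier predicts the source treebank/domain; its
        # gradients reach the encoder through the reversal layer.
        self.domain_clf = nn.Linear(2 * hidden_dim, n_domains)

    def forward(self, tokens, lambda_=1.0):
        h, _ = self.encoder(self.embed(tokens))   # (batch, len, 2*hidden)
        sent = h.mean(dim=1)                      # pooled sentence feature
        domain_logits = self.domain_clf(GradientReversal.apply(sent, lambda_))
        # h would feed the arc/label scorer of the graph-based parser.
        return h, domain_logits

In training, the total objective would combine the standard parsing losses computed from h with the cross-entropy domain loss on domain_logits; because of the reversal, minimizing the domain loss for the classifier simultaneously maximizes it for the encoder, encouraging representations that transfer across domains.
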
Anthology ID:
K17-3007
Volume:
Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies
Month:
August
Year:
2017
Address:
Vancouver, Canada
Editors:
Jan Hajič, Dan Zeman
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
71–79
URL:
https://aclanthology.org/K17-3007
DOI:
10.18653/v1/K17-3007
Cite (ACL):
Motoki Sato, Hitoshi Manabe, Hiroshi Noji, and Yuji Matsumoto. 2017. Adversarial Training for Cross-Domain Universal Dependency Parsing. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 71–79, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Adversarial Training for Cross-Domain Universal Dependency Parsing (Sato et al., CoNLL 2017)
PDF:
https://aclanthology.org/K17-3007.pdf