
Positional Encoding to Control Output Sequence Length

Sho Takase, Naoaki Okazaki


Abstract
Neural encoder-decoder models have been successful in natural language generation tasks. However, real applications of abstractive summarization must consider an additional constraint: a generated summary should not exceed a desired length. In this paper, we propose a simple but effective extension of the sinusoidal positional encoding (Vaswani et al., 2017) so that a neural encoder-decoder model preserves the length constraint. Unlike previous studies that learn length embeddings, the proposed method can generate a text of any length, even if the target length is unseen in the training data. The experimental results show that the proposed method is able not only to control the generation length but also to improve ROUGE scores.
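
The core idea is compact enough to sketch. Below is a minimal, illustrative Python/NumPy implementation (not the authors' fairseq code) of the length-difference flavor of the method: the standard sinusoidal encoding is computed over the remaining length, desired_len - pos, instead of the absolute position pos, so the decoder can always tell how far it is from the desired end. The function name and NumPy setup are our own choices for illustration; the paper also describes a length-ratio variant, and the official implementation lives in the takase/control-length repository linked below.

import numpy as np

def length_difference_pe(desired_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal encoding of the *remaining* output length at each step.

    The standard sinusoidal positional encoding of Vaswani et al. (2017)
    encodes the absolute position `pos`; here we feed it the remaining
    length `desired_len - pos` instead, so the model can track how many
    tokens are left before the target length is reached.
    """
    pe = np.zeros((desired_len, d_model))
    for pos in range(desired_len):
        remaining = desired_len - pos  # distance to the desired end
        for i in range(0, d_model, 2):
            angle = remaining / (10000 ** (i / d_model))
            pe[pos, i] = np.sin(angle)
            if i + 1 < d_model:
                pe[pos, i + 1] = np.cos(angle)
    return pe

# Example: encodings for a 10-token summary with a 512-dim model.
# Because the encoding depends only on the remaining length, any target
# length can be requested at test time, including lengths never seen
# during training -- no length embedding has to be learned.
print(length_difference_pe(10, 512).shape)  # (10, 512)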
Anthology ID: N19-1401
Volume: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month: June
Year: 2019
Address: Minneapolis, Minnesota
Editors: Jill Burstein, Christy Doran, Thamar Solorio
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 3999–4004
URL: https://aclanthology.org/N19-1401
DOI: 10.18653/v1/N19-1401
Cite (ACL):
Sho Takase and Naoaki Okazaki. 2019. Positional Encoding to Control Output Sequence Length. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3999–4004, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
Positional Encoding to Control Output Sequence Length (Takase & Okazaki, NAACL 2019)
PDF: https://aclanthology.org/N19-1401.pdf
Code: takase/control-length
Data: DUC 2004