
Selective Token Generation for Few-shot Natural Language Generation

Daejin Jo, Taehwan Kwon, Eun-Sol Kim, Sungwoong Kim


Abstract
Natural language modeling with limited training data is a challenging problem, and many algorithms address it with large-scale pretrained language models (PLMs) because of their strong generalization ability. Among them, additive learning, which incorporates a task-specific adapter on top of a fixed large-scale PLM, has been widely used in the few-shot setting. However, the added adapter can still easily disregard the knowledge of the PLM, especially for few-shot natural language generation (NLG), since an entire sequence is usually generated by the newly trained adapter alone. Therefore, in this work, we develop a novel additive learning algorithm based on reinforcement learning (RL) that selectively outputs language tokens from either the task-general PLM or the task-specific adapter during both training and inference. This token-level selection over the two generators allows the adapter to handle only the task-relevant parts of sequence generation, which makes it more robust to overfitting and more stable in RL training. In addition, to obtain an adapter complementary to the PLM for each few-shot task, we exploit a separate selection module that is trained simultaneously using RL. Experimental results on various few-shot NLG tasks, including question answering, data-to-text generation, and text summarization, demonstrate that the proposed selective token generation significantly outperforms previous PLM-based additive learning algorithms.
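The abstract describes the mechanism at the level of a single decoding step: a selector decides, per token, whether to emit from the frozen task-general PLM or from the trainable task-specific adapter. Below is a minimal PyTorch sketch of that step. It is not the authors' implementation (see kakaobrain/stg for that); the class name SelectiveGenerator, the two-head setup, and the sigmoid selector are illustrative assumptions.

import torch
import torch.nn as nn

class SelectiveGenerator(nn.Module):
    # Illustrative sketch of selective token generation: at each step, a
    # selector chooses between the frozen PLM head and the trainable adapter.
    def __init__(self, plm_head, adapter_head, hidden_dim):
        super().__init__()
        self.plm_head = plm_head          # frozen: hidden state -> vocab logits
        self.adapter_head = adapter_head  # trainable: hidden state -> vocab logits
        # Selector outputs the probability of using the adapter at this step.
        self.selector = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())
        for p in self.plm_head.parameters():
            p.requires_grad = False       # additive learning: PLM stays fixed

    def step(self, hidden):
        # hidden: decoder state for the current position, shape (batch, hidden_dim)
        p_adapter = self.selector(hidden).squeeze(-1)    # (batch,)
        use_adapter = torch.bernoulli(p_adapter).bool()  # sampled discrete selection
        plm_logits = self.plm_head(hidden)               # (batch, vocab)
        adapter_logits = self.adapter_head(hidden)       # (batch, vocab)
        logits = torch.where(use_adapter.unsqueeze(-1), adapter_logits, plm_logits)
        token = torch.distributions.Categorical(logits=logits).sample()
        return token, use_adapter, p_adapter

# Toy usage with linear heads standing in for the real PLM and adapter:
hidden_dim, vocab = 16, 100
gen = SelectiveGenerator(nn.Linear(hidden_dim, vocab),
                         nn.Linear(hidden_dim, vocab), hidden_dim)
token, use_adapter, p_adapter = gen.step(torch.randn(4, hidden_dim))

Because the per-token selection is a discrete action, the abstract describes training both the adapter and the selection module with RL; in a policy-gradient setup, the sampled use_adapter decisions and generated tokens would serve as the actions rewarded by the downstream task metric.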
Anthology ID:
2022.coling-1.510
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5837–5856
URL:
https://aclanthology.org/2022.coling-1.510
Cite (ACL):
Daejin Jo, Taehwan Kwon, Eun-Sol Kim, and Sungwoong Kim. 2022. Selective Token Generation for Few-shot Natural Language Generation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5837–5856, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Selective Token Generation for Few-shot Natural Language Generation (Jo et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.510.pdf
Code:
kakaobrain/stg
Data:
CNN/Daily Mail, MS MARCO