Hello, It's GPT-2 - How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems
P. Budzianowski, I. Vulić - arXiv preprint arXiv:1907.05774, 2019 - arxiv.org
Data scarcity is a long-standing and crucial challenge that hinders quick development of task-oriented dialogue systems across multiple domains: task-oriented dialogue models are expected to learn grammar, syntax, dialogue reasoning, decision making, and language generation from absurdly small amounts of task-specific data. In this paper, we demonstrate that recent progress in language modeling pre-training and transfer learning shows promise to overcome this problem. We propose a task-oriented dialogue model that operates solely on text input: it effectively bypasses explicit policy and language generation modules. Building on top of the TransferTransfo framework (Wolf et al., 2019) and generative model pre-training (Radford et al., 2019), we validate the approach on complex multi-domain task-oriented dialogues from the MultiWOZ dataset. Our automatic and human evaluations show that the proposed model is on par with a strong task-specific neural baseline. In the long run, our approach holds promise to mitigate the data scarcity problem, and to support the construction of more engaging and more eloquent task-oriented conversational agents.
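The central idea of operating "solely on text input" is to flatten each dialogue turn (history, belief state, database result) into one string and fine-tune a pretrained LM such as GPT-2 to continue it with the system response, replacing explicit policy and generation modules. The sketch below illustrates only the serialization step; the special tokens and field order are illustrative assumptions, not the paper's exact format:

```python
# Sketch: flattening a task-oriented dialogue turn into a single text
# sequence that a pretrained LM could be fine-tuned on. The special
# tokens (<user>, <belief>, <db>, <system>) and field layout are
# illustrative assumptions, not the serialization used in the paper.

def serialize_turn(history, belief_state, db_result):
    """Join the dialogue context into one string; during fine-tuning,
    the LM learns to continue it with the system response, so no
    explicit policy or language-generation module is needed."""
    parts = []
    for speaker, utterance in history:
        parts.append(f"<{speaker}> {utterance}")
    parts.append("<belief> " + ", ".join(f"{k}={v}" for k, v in belief_state.items()))
    parts.append(f"<db> {db_result}")
    parts.append("<system>")  # the model generates the response after this token
    return " ".join(parts)

context = serialize_turn(
    history=[("user", "I need a cheap hotel in the centre.")],
    belief_state={"price": "cheap", "area": "centre"},
    db_result="3 matches",
)
print(context)
```

At generation time, such a string would be fed to the fine-tuned model as a prompt and decoding would continue from the final `<system>` token, so a single language model covers state tracking context, database grounding, and response generation in one pass.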