
Improving Slot Filling in Spoken Language Understanding with Joint Pointer and Attention

Lin Zhao, Zhe Feng


Abstract
We present a generative neural network model for slot filling based on a sequence-to-sequence (Seq2Seq) model together with a pointer network, in the situation where only sentence-level slot annotations are available in the spoken dialogue data. This model predicts slot values by jointly learning to copy a word which may be out-of-vocabulary (OOV) from an input utterance through a pointer network, or generate a word within the vocabulary through an attentional Seq2Seq model. Experimental results show the effectiveness of our slot filling model, especially at addressing the OOV problem. Additionally, we integrate the proposed model into a spoken language understanding system and achieve the state-of-the-art performance on the benchmark data.
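The joint "copy or generate" decision described in the abstract can be illustrated with a pointer-generator-style decoding step: the attention weights over the input utterance double as a copy distribution, and a learned gate mixes them with the softmax over the fixed vocabulary. The sketch below is a minimal, illustrative assumption of such a step (module names, dot-product attention, and the gating formulation are ours, not necessarily the authors' exact architecture).

```python
# Minimal sketch of a joint pointer/attention decoding step (assumed formulation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointPointerGenerator(nn.Module):
    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden_size * 2, vocab_size)  # generate from vocabulary
        self.gate = nn.Linear(hidden_size * 2, 1)                 # p_gen: generate vs. copy

    def forward(self, decoder_state, encoder_states, src_token_ids, extended_vocab_size):
        # decoder_state:  (batch, hidden)          current decoder hidden state
        # encoder_states: (batch, src_len, hidden) encoder outputs for the utterance
        # src_token_ids:  (batch, src_len)         source ids in an extended vocabulary,
        #                                          where OOV words get temporary ids >= vocab_size
        # Attention over the input utterance (dot-product attention, an assumption).
        scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)
        attn = F.softmax(scores, dim=1)                            # (batch, src_len)
        context = torch.bmm(attn.unsqueeze(1), encoder_states).squeeze(1)

        features = torch.cat([decoder_state, context], dim=1)
        p_vocab = F.softmax(self.vocab_proj(features), dim=1)      # generation distribution
        p_gen = torch.sigmoid(self.gate(features))                 # mixing weight in (0, 1)

        # Final distribution over the extended vocabulary: in-vocabulary words are
        # weighted by p_gen, copied source positions by (1 - p_gen).
        p_final = decoder_state.new_zeros(src_token_ids.size(0), extended_vocab_size)
        p_final[:, : p_vocab.size(1)] = p_gen * p_vocab
        p_final = p_final.scatter_add(1, src_token_ids, (1 - p_gen) * attn)
        return p_final, attn
```

The extended-vocabulary trick is what lets the decoder emit an OOV slot value verbatim from the utterance: a word absent from the output vocabulary still receives probability mass through its source position, which is the OOV behavior the abstract highlights.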
Anthology ID:
P18-2068
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
426–431
URL:
https://aclanthology.org/P18-2068
DOI:
10.18653/v1/P18-2068
Cite (ACL):
Lin Zhao and Zhe Feng. 2018. Improving Slot Filling in Spoken Language Understanding with Joint Pointer and Attention. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 426–431, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Improving Slot Filling in Spoken Language Understanding with Joint Pointer and Attention (Zhao & Feng, ACL 2018)
PDF:
https://aclanthology.org/P18-2068.pdf
Video:
https://aclanthology.org/P18-2068.mp4